The trap of elegantly stated goals


Rock cairns—also known as rock stacks—in a wilderness area. Some rock stacks in parks and wilderness areas serve a practical purpose as trail markers. But in recent decades, rock stacking has become a fad. Rock stacks aren’t permanent—they use no glue, rings, or fasteners—but their presence does degrade the experience of Nature. The practice continues, in part, because many find the elegance of the structures appealing. As with rock stacks, the elegance of elegantly stated goals has a dark side.

For organizations, an elegantly stated goal is one that anyone can understand, remember, and recite to others. An example for technical debt management is, “We’ll drive technical debt to zero over five years.” Or, “No project is finished if it increases technical debt.” But when the goal relates to solving a problem that has only messy solutions, stating that goal elegantly risks ensnaring the organization in what I call the trap of elegantly stated goals.

Because they’re so memorable and repeatable, elegantly stated goals spread rapidly, especially if they’re even a bit inspirational. But they become traps when they incorporate overly simplistic views of how to attain them.

And that often happens when technical debt is involved. Here are four guidelines that can help organizations avoid the trap of elegantly stated goals for technical debt.

Beware the halo effect

The halo effect [Thorndike 1920] is a cognitive bias [Kahneman 2011]. It systematically skews our assessments of the qualities of a person, product, brand, company, or any entity, really. If our sense of one quality of the entity is positive, we’re more likely to assess as positive other qualities of that entity. The elegance of a goal statement can cause us to regard the goal as more desirable than we would if the goal were stated less elegantly. For example, the statement, “We’ll achieve zero technical debt in five years,” can increase the chances that we’ll believe that such a goal is attainable. Indeed, some might not even question its desirability, let alone its attainability.

When devising goals for technical debt management, beware the halo effect. Always question desirability, taking costs and benefits into account.

Technical debt matters less than its metaphorical interest charges

The metaphorical interest charges (MICs) on technical debt, rather than the metaphorical principal (MPrin) of the debt itself, are what matter. A goal for total technical debt might be stated more elegantly and simply than a goal for technical debt that carries high MICs. But goals for total technical debt can lead to effort spent on debts with low MICs, and those efforts produce little benefit.

When setting goals for technical debt management, pay attention to the MICs. Distinguish between low-MIC and high-MIC technical debt. Keep in mind that MICs can fluctuate: one kind of technical debt can be a low priority at one point in time, and a high priority at another.
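To make the distinction concrete, here’s a minimal sketch in Python. The debt items and figures are invented for illustration; the point is only that ranking by MPrin and ranking by MICs can order the work very differently.

```python
# A minimal sketch: prioritize technical debt items by their metaphorical
# interest charges (MICs) rather than by their principal (MPrin).
# All item names and figures below are illustrative assumptions.

debt_items = [
    # (name, MPrin: est. cost to retire, MICs: est. carrying cost per quarter)
    ("legacy build scripts",      50_000, 40_000),
    ("outdated platform API",    400_000,  5_000),
    ("missing regression tests",  80_000, 60_000),
    ("dead code in module X",     20_000,    500),
]

# Ranking by MPrin aims effort at the biggest debts;
# ranking by MICs aims effort at the debts that are costliest to carry.
by_mprin = sorted(debt_items, key=lambda d: d[1], reverse=True)
by_mics = sorted(debt_items, key=lambda d: d[2], reverse=True)

print("Largest principal first:    ", [d[0] for d in by_mprin])
print("Highest carrying cost first:", [d[0] for d in by_mics])
```

Note that the item with the largest principal (the outdated platform API) is nearly the least urgent by carrying cost, while the two cheapest items to retire are the most urgent. And because MICs fluctuate, the second ranking must be refreshed periodically.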

Controlling technical debt is safer than trying to drive it to zero

Blind application of an elegantly stated goal can have strikingly silly unintended consequences. Keep in mind the policymaker’s definition of technical debt: any technological element that contributes, through its existence or through its absence, to lower productivity, depressed velocity, or a higher probability of defects.

Consider this example of strikingly silly unintended consequences for the goal of zero technical debt. An engineer creates an innovative and superior solution to a previously solved problem. Existing assets that incorporate the old solution are instantly outmoded by the innovative solution. Those existing assets now carry technical debt. If the enterprise directive mandates zero technical debt, some engineering managers might be tempted to do the unexpected. They might inhibit the kind of creativity that leads to innovative solutions to previously solved problems. The temptation arises because introducing those new solutions creates exogenous technical debt in existing assets. Therein lies the trap of the elegantly stated goal.

Throttling efforts to find innovative solutions to previously solved problems is one example of an unintended consequence of trying to drive technical debt to zero. Controlling technical debt is probably a safer option than trying to drive it to zero. Before adopting elegantly stated goals for technical debt, it would be wise to be aware of their possible unintended consequences.

Get control of the behaviors that lead to technical debt

Technical debt management efforts typically emphasize debt retirement or engineering process improvement. While both activities are worthwhile, the root causes of technical debt often lie beyond engineering. See, for example, the thread in this blog exploring nontechnical precursors of technical debt.

For example, across-the-board budget cuts can lead to technical debt. This happens because teams suspend efforts that have already created technical artifacts. If those teams lack resources needed for retracting partially implemented capabilities, the partial implementations remain in place. See “How budget depletion leads to technical debt” for a more detailed explanation of this technical debt formation mechanism.

Budget control tactics like across-the-board cuts can be counterproductive. When they don’t attend to technical debt implications, they add to future expenses through the MICs on the debt they generate. They can thus create future needs for budget cuts, which leads to a vicious cycle. To gain control of technical debt, we must alter these budget control tactics. We need to provide teams with the resources they need for retracting partial implementations. That would ensure that budget reductions don’t lead to technical debt formation.

Investments in technical debt retirement and engineering process improvement are worthwhile. But they can be futile unless we first address the nontechnical causes of technical debt. It’s like bailing out a sinking rowboat without first plugging its leaks. The stated goal, “We’ll drive technical debt to zero in five years,” might better be replaced with, “We’ll get control of the behaviors that lead to technical debt within two years.”

Last words

The template known as SMART goals provides one approach to setting goals with limited exposure to the risk of elegantly stated goals. See “Using SMART goals for technical debt reduction” for details.

Achieving control of technical debt—rather than attaining any particular level of technical debt—is a useful goal. Either we’ll control technical debt or technical debt will control us.

References

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.


[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology, 4:1, 25-29, 1920. doi:10.1037/h0071663

The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands. Available: here; Retrieved: December 29, 2017



Synergy between the reification error and confirmation bias


In deciding whether to undertake technical debt retirement projects, organizations risk making inappropriate decisions because of a synergy between the reification error and confirmation bias. Together, these two errors of thought create conditions that make committing appropriate levels of resources difficult. And when organizations do commit resources, they tend to underestimate costs. That underestimate can elevate the chance of failure in technical debt retirement projects.

The reification error and confirmation bias

As explained elsewhere in this blog, the reification error is an error of reasoning in which we treat an abstraction as if it were a real, concrete, physical thing. Because technical debt is an abstraction, we risk committing the reification error when we deal with it. (See “Metrics for technical debt management: the basics”)

Confirmation bias is a cognitive bias that causes us to favor and seek only information that confirms our preconceptions, or to avoid information that disconfirms them. (See “Confirmation bias and technical debt”)

How the reification error affects management

The reification error might be responsible, in part, for a widely used management practice that often appears in the exploratory stages of undertaking projects. Let’s start with an illustration from the physical world.

In the physical world, when we want cherries, we go to a market and check the price per pound or kilo. Then we decide how much we want. If the price is high, we might decide to buy fewer cherries. If the price is low, we might buy more cherries. We have in mind a total cost target, and we adjust the weight of the cherries to meet the target. In the physical world, we can often adjust what we purchase to match our willingness to pay.

Retiring technical debt doesn’t work like that, in part, because technical debt is an abstraction. But we try anyway; here’s how it goes. Management decides to retire a particular class of technical debt. They ask an engineer for an estimate of the cost. Sometimes Management reveals the target they have in mind if they have one; sometimes not. The estimate comes back as Total ± Uncertainty. Management decides that’s too high, or the Uncertainty is too great. They then ask the engineer to find a way to do it for less, or to reduce the Uncertainty.

Management—the “customer” in this scenario—makes this request, in part, based on the belief that adjusting the work is possible. Management hopes that the engineer can adjust the work to meet a (possibly unstated) target, in analogy to buying cherries. That thinking is an example of the reification error. In this dynamic, we rarely take into account the fact that retiring technical debt isn’t exactly like buying cherries.

How confirmation bias affects engineering estimates

Return now to the interaction between Management and the engineer/estimator. The engineer now suspects that Management does have a target in mind. Some engineers might ask what the target is. Some don’t. In any case, the engineer makes a lower estimate, which might still be too high. This process repeats until either Management decides against retiring the debt, or accepts the lowest Total ± Uncertainty.

In adjusting their estimates, engineers have a conflict of interest. That conflict can compromise their objectivity through the action of confirmation bias. For technical debt retirement efforts, engineers are usually highly motivated to gain Management approval of the project. The motivation arises, in part, from the frustrating productivity losses the debt imposes. And since engineers typically sense that Management approval is contingent on finding an estimate that’s low enough, they have a preconception: they have an incentive to convince themselves that Management’s adjustments to budget and schedule are reasonable. Because of confirmation bias, engineers tend to seek justifications for the adjustments, and to avoid seeking reasons to believe that the adjustments might not be feasible. That’s confirmation bias in action.

How synergy between the reification error and confirmation bias comes about

Because of the reification error, Management tends to believe that retiring technical debt is a more adjustable activity than it actually is. Because of confirmation bias, engineers tend to believe that Management’s proposed cost and schedule are feasible. Too often, the synergy between the two errors of thinking provides a foundation for disaster.

Why this synergy creates conditions for disaster in technical debt retirement projects

Management usually equates estimates with commitments. Engineers don’t. Management usually forgets or ignores the upside Uncertainty. Typically, when Management accepts an estimate, the engineering team finds that it has made a commitment to deliver the work for the cost Total, with zero upside Uncertainty. Rarely does Management make this explicit. An analogous problem occurs with schedule.
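A minimal simulation, under assumptions chosen purely for illustration, shows what ignoring the upside Uncertainty means in practice. Suppose the actual cost of the project is right-skewed (lognormal here) with its median at the point estimate Total. Holding the team to Total with zero upside allowance then amounts to accepting roughly even odds of an overrun:

```python
# A minimal sketch: the cost of committing to "Total" while ignoring the
# upside Uncertainty. The lognormal model and its spread are assumptions.

import math
import random

random.seed(1)
total = 1_000_000   # the committed point estimate ("Total")
sigma = 0.4         # spread of the uncertainty band (assumed wide,
                    # as is typical for debt retirement projects)

# Plausible actual costs: right-skewed, with median equal to the estimate
outcomes = [random.lognormvariate(math.log(total), sigma)
            for _ in range(100_000)]

overruns = [c - total for c in outcomes if c > total]
print(f"P(actual cost exceeds Total): {len(overruns) / len(outcomes):.0%}")
print(f"Mean overrun, when one occurs: {sum(overruns) / len(overruns):,.0f}")
```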

By ignoring the Uncertainty, Management (the buyer) transfers the uncertainty risk to the project team. That strategy might work to some extent with conventional development or maintenance projects, where we can adjust scope and risk before the work begins. But for technical debt retirement projects, this practice creates problems for two reasons.

Adjusting the scope of debt retirement projects is difficult

First, with technical debt retirement we’re less able to adjust scope. To retire a class of technical debt, we must retire it in toto. If we retire only some portion of a class of technical debt, we would leave the asset in a mixed state that can actually increase MICs. So it’s usually best to retire the entirety of any class of technical debt, so as to leave the asset in a uniform state.

Debt retirement efforts are notoriously unpredictable

Second, the work involved in retiring a particular class of technical debt is more difficult to predict than is the work involved in more conventional projects. (See “Useful projections of MPrin might not be attainable”) Often, we must work with older assets, or older portions of younger assets. The people who built them aren’t always available, and documentation can be sparse or unreliable. Moreover, it’s notoriously difficult to predict with accuracy when or for how long affected assets will be out of production. Revenue stream interruptions, which can comprise a significant portion of total costs, can be difficult to schedule or predict. Thus, technical debt retirement projects tend to be riskier than other kinds of projects. They have wider uncertainty bands. Ignoring the Uncertainty, or trying to transfer responsibility for it to the project team, is foolhardy.

A strategy for reducing the effects of this synergy

To intervene in the dynamic between the consequences of the reification error and the consequences of confirmation bias, we must find a way to limit how their consequences can interact. That will curtail the ability of one phenomenon to reinforce the other. This task is well suited for application of Donella Meadows’ concept of leverage points [Meadows 1999]. See “Leverage points for technical debt management.”

In that post, I summarized Meadows’ concepts of using leverage points to alter the behavior of complex systems. One can intervene at one or more of 12 categories of leverage points. These are elements in the system that govern the behavior of the people and institutions that comprise the system. In that post, I sketched the use of Leverage Point #9, Delays, to alter the levels of technical debt in an enterprise.

In what follows I sketch the use of interventions at Leverage Point #8, “The strength of negative feedback loops, relative to the impacts they are trying to correct against.”

Our strategy is this: strengthen the feedback loops that govern budget decision makers so that those loops carry information about the full consequences of retiring, or declining to retire, technical debt, rather than information about direct project costs alone.

A feedback loop that now provides budgetary control in most organizations

One feedback loop at issue in this case, illustrated in the diagram below, influences managers who might otherwise overrun their budgets. It does so by triggering some sort of organizational intervention when a manager overruns his or her budget. And the feedback loop leads to increases in the size and stature of the portfolios of managers who handle their budgets responsibly. Presumably, that’s one reason why managers compel estimators to find approaches that cost less. The feedback loop to which managers are exposed causes them to establish another feedback loop involving the engineer/estimator, and later the engineering team. That second loop causes engineers to hold down their estimates, and later to limit actual expenditures.

A diagram of effects analysis

A feedback loop that now provides budgetary control in most organizations.

We can use a diagram of effects [Weinberg 1992] to illustrate the feedback mechanism commonly used to control the performance of managers who are responsible for portfolios of project budgets. In the diagram (above), the oval blobs represent quantities indicated by their respective captions. Each of these quantities is assumed to be measurable, though their precise values and the way we measure them are unimportant for our rather qualitative argument.

What the arrows mean

Notice that arrows connect the blobs. The arrows represent the effect of changes in the value represented by one blob on the value represented by another. The blob at the base of the arrow is the effector quantity. The blob at the point of the arrow is the affected quantity. Thus, the arrow running from the blob labeled “Actual Spend” to the blob labeled “Overspend” expresses the idea that a positive (or negative) change in the amount of actual spending on projects causes a positive (or negative) change in Overspend. When a change in the effector quantity causes a like-signed change in the affected quantity, we say that their relationship is covariant.

Because increases in Budget Authority tend to decrease Overspend, all other things being equal, the relationship between Budget Authority and Overspend is contravariant. We represent a contravariant relationship between the effector quantity and the affected quantity as an arrow with a filled circle on it.

Finally, notice that the arrow from Overspend (effector) to Promotion Probability (affected) has a filled Delta on it. This represents the idea that as Overspend increases, it negatively affects the probability that the manager will be promoted at some point in the future. The Delta indicates a delayed effect; that the Delta is filled indicates a contravariant relationship. (An unfilled Delta would indicate a delayed covariant effect.)

Loops in the diagram of effects

This diagram, which contains a loop connecting Budget Authority, Overspend, and Promotion Probability, has the potential to “run away.” That is, as we go around the loop, we find self-reinforcement, because the loop has an even number of contravariant relationships. It works as follows:

As Overspend increases, after a delay, the Probability of Promotion decreases. This causes reductions in Budget Authority because, presumably, the organization has reduced faith in the manager’s performance. Reductions in Budget Authority make Overspend more likely, and round and round we go.

Similarly:

As Overspend decreases, after a delay, the Probability of Promotion increases. This causes increases in Budget Authority because, presumably, the organization has increased faith in the manager’s performance. Increases in Budget Authority make Overspend less likely, and round and round we go.

Fortunately, other effects usually intervene when these self-reinforcing phenomena get too large, but that’s beyond the scope of this argument. For now, all we need observe is that managers who manage their budgets effectively tend to rise in the organization; those who don’t, don’t.
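A minimal simulation can make the runaway behavior visible. The linear update rules and coefficients below are illustrative assumptions, not a calibrated model; they merely encode the covariant and contravariant relationships in the diagram:

```python
# A minimal sketch of the self-reinforcing loop:
#   Overspend -> (delayed, contravariant) Promotion Probability
#             -> (covariant) Budget Authority
#             -> (contravariant) Overspend
# All coefficients are illustrative assumptions.

def simulate(initial_overspend: float, steps: int = 8, delay: int = 2) -> None:
    overspend = [initial_overspend] + [0.0] * steps
    promotion = 0.5   # promotion prospects, starting neutral
    for t in range(1, steps + 1):
        # Delayed contravariant effect: past overspend hurts promotion prospects
        promotion -= 0.4 * overspend[max(0, t - delay)]
        # Covariant effect: better prospects mean more budget authority
        authority = max(0.1, 1.0 + 0.8 * (promotion - 0.5))
        # Contravariant effect: less authority makes overspend more likely
        overspend[t] = overspend[t - 1] + 0.3 * (1.0 - authority)
        print(f"t={t}: overspend={overspend[t]:.2f} "
              f"promotion={promotion:.2f} authority={authority:.2f}")

simulate(initial_overspend=0.1)   # a small initial overspend compounds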

The result is that managers limit spending to avoid overspending their budget authority. And that’s one reason why they push engineers to produce lower estimates for technical debt retirement projects.

How this feedback loop overlooks important drivers of technical debt formation

To break the connection between the managers’ reification error and the engineers’ confirmation bias, our intervention must cause the managers and the engineers to make calculations differently. We can accomplish this by requiring that they consider more than the mere cost of retiring the class of technical debt under consideration. They must estimate the consequences of not retiring that technical debt, and they must also estimate costs beyond the cost of retiring the debt. In what follows, I’ll use the shorthand TDBCR to mean the class of Technical Debt Being Considered for Retirement.

Specifically, estimates for technical debt retirement projects cover only the cost of performing the work required to retire the TDBCR. Management then decides whether, when, and to what extent to commit resources to execute the project. The primary considerations are budgetary.

Since the debt retirement project can potentially provide benefits beyond the manager’s own portfolio, failing to undertake the project can have negative consequences. Managers who decline to undertake debt retirement projects are responsible for the consequences. But accountability for these decisions is rare. That’s the heart of the problem. So let’s look at some examples of relevant considerations.

Adjustments that would support these feedback loops to gain control of technical debt

In allocating resources for a technical debt retirement project, there are considerations beyond the cost of retiring the debt. A responsible decision is possible only if other kinds of estimates are also available. Here are some examples of the estimates we need:

  • The effects of retiring TDBCR on the cost of executing any other development or maintenance efforts
  • The effects of retiring TDBCR on revenue and market share for all existing assets that directly produce revenue and which could be affected by retiring TDBCR
  • The revenue that would become available (and timing thereof) from any new products or services that become possible because of retiring TDBCR
  • The effects of retiring TDBCR on the cost of executing other technical debt retirement efforts

And these items might not be related to anything for which the decision maker is responsible. But the feedback loop we now use to influence the decision maker excludes considerations that are affected by the decision maker’s decisions. Until we install feedback loops that cause the decision maker to consider these indirect consequences, or until we make decisions at levels that include these other consequences, the effects of the decision maker’s decisions are uncontrolled, and might not lead to decisions optimal for the enterprise.
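To make the shape of such a decision concrete, here’s a minimal sketch with invented figures. A real analysis would discount future amounts and attach uncertainty bands to every term; the point here is only that the enterprise-level answer can differ from the portfolio-level answer:

```python
# A minimal sketch: evaluating a TDBCR retirement project by enterprise-level
# consequences, not direct cost alone. All figures are illustrative assumptions.

retirement_cost           = 500_000   # direct cost of retiring the TDBCR
savings_other_projects    = 200_000   # cheaper future dev/maintenance efforts
revenue_effect_existing   = -50_000   # short-term revenue interruption (a loss)
revenue_new_products      = 300_000   # new revenue enabled by the retirement
savings_other_retirements = 100_000   # cheaper retirement of related debt

net_enterprise_value = (savings_other_projects
                        + revenue_effect_existing
                        + revenue_new_products
                        + savings_other_retirements
                        - retirement_cost)

# Judged on direct cost alone, the project is pure expense (-500,000).
# Judged at enterprise level, it's net positive (+50,000).
print(f"Net enterprise value: {net_enterprise_value:,}")
```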

References

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.


[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.


[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.


[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology, 4:1, 25-29, 1920. doi:10.1037/h0071663

The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands. Available: here; Retrieved: December 29, 2017


[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” notation.


The resilience error and technical debt


I’ve mentioned the reification error in a previous post (see “Metrics for technical debt management: the basics”), but I haven’t explored its dual, the resilience error. Let me correct that oversight now.

Reification risk is the risk that an error of reasoning known as the reification error might affect decisions—in this case, decisions regarding technical debt. The reification error [Levy 2009] [Gould 1996] (also called the reification fallacy, concretism, or the fallacy of misplaced concreteness [Whitehead 1948]) is an error of reasoning in which we treat an abstraction as if it were a real, concrete, physical thing. Reification is useful in some applications, such as object-oriented programming and design.

Where reification risk is most likely

The future USS Zumwalt (DDG 1000) underway for the first time, conducting at-sea tests and trials in the Atlantic Ocean, December 7, 2015. The first of the Zumwalt class of US Navy guided missile destroyers, it was designed to be stealthy and to be supported by a minimal crew. After the program experienced explosive cost growth, the class was downsized from 32 ships to three to reduce capital costs, and the complement increased from 95 to over 140. The three vessels on order now have significantly reduced missions.

As one might expect, the causes of these troubles are much debated. But it’s possible that the resilience error, discussed below, plays a role. Before the first of a new class of ships goes to sea, it exists as an abstraction—a collection of concepts, plans, promises, and technologies, tried and untried. Many elements of this collection have never inter-operated with other elements. The first ship represents the first opportunity to see how all the elements work together. Although troubles often appear even before the ship is fully assembled, anticipating all troubles is extraordinarily difficult.

But when we reify in the domain of logical reasoning, troubles can arise. For example, we can encounter trouble when we think of “measuring” technical debt. Strictly speaking, we cannot measure technical debt. It isn’t a real, physical thing that can be measured. What we can do is estimate the cost of retiring technical debt, but estimates are only approximations. And in the case of technical debt, the approximations are usually fairly rough—they have wide uncertainty bands. That’s one way for trouble to enter the scene. When we regard the estimate as if it were a measurement, we tend to think of it as more certain than it actually is. Technical debt retirement projects then overrun their budgets and schedules, and chaos reigns.

For example, if we think we’ve measured the MPrin of a class of technical debt, rather than that we’ve estimated it, we’re more likely to believe that one measurement will suffice, and that it will be valid for a long time (or indefinitely). On the other hand, if we think we’ve estimated the MPrin of a class of technical debt, we’re more likely to believe that obtaining a second independent estimate would be wise, and that the estimate we do have might not be valid for long. These are just some of the many consequences of the reification error.
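One way to keep the distinction in view is to represent MPrin in a form that can’t be mistaken for a measurement. Here’s a minimal sketch; the field names and the 90-day validity window are assumptions for illustration:

```python
# A minimal sketch: an MPrin estimate carries an uncertainty band and an
# expiration, unlike a "measurement." Field names are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class MPrinEstimate:
    debt_class: str
    low: float               # lower bound of the uncertainty band
    high: float              # upper bound of the uncertainty band
    estimated_on: date
    valid_for: timedelta     # estimates go stale as the asset changes

    def is_stale(self, today: date) -> bool:
        return today > self.estimated_on + self.valid_for

est = MPrinEstimate("missing regression tests", 60_000, 150_000,
                    date(2021, 7, 1), timedelta(days=90))
print(est.is_stale(date(2021, 11, 1)))   # True: time for a fresh estimate
```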

The resilience error

If the reification error is risky because it entails regarding an abstraction as a real, physical thing, we might postulate the existence of a resilience error that’s risky because it entails regarding an abstraction as more resilient, pliable, adaptable, or extensible than it actually is.

When we commit the resilience error with respect to an abstraction, we adopt a belief, usually without justification and possibly outside our awareness: that if we change the abstraction without fully investigating the consequences of those changes, the familiar properties of the original will carry over, suitably modified, to the new form. Or we assume incorrectly that the abstraction will accommodate any changes we make to its environment.

Sometimes we benefit when we modify abstractions; usually we encounter unintended and unpleasant consequences. For example, unless we examine our modifications carefully, it’s possible that the implications of a modification might conflict with one or more of the fundamental assumptions of the abstraction.

A metaphorical example of the resilience error

Perhaps a (ahem) concrete example will illustrate. Consider the steel hull of an ocean liner. We can manufacture it more cheaply if we can devise a way to use less steel. One approach is to remove a small portion of the bottom of the hull, so we decide to cut a circular hole, one meter in diameter, out of the hull. We send some people into the ship to do the work, and they return with panicky reports of water coming in. But the ship seems fine, so we reject the reports. Even a day later, all seems well. But by the end of the second day, the trouble is obvious. The ship is sinking.

The problem in our example is that the circular hole in the hull violated a fundamental assumption about how ship hulls work: they work by keeping all water out of the ship. We extended the idea of a hull to make the ship lighter, but in doing so we encountered unintended consequences, because our extension violated a fundamental property of hulls.

A more realistic example of the resilience error

Now for a more realistic example. Let’s consider a fictitious business situation.

Consider the fictitious company Alpha Properties LLC. Alpha manages small condominium associations of 25 to 100 units. Things have been going swimmingly at Alpha, and they’ve decided to expand to handle large condominium associations. Alpha’s financial accounting software has worked well, and their employees have become quite expert in its use. Alpha management has heard good reports about a different software package. Because the reports are from other management companies that deal with large client associations, Alpha decides to use the same software for its larger accounts too. But things don’t work out so well.

The software is fine, but the processes used by Alpha’s staff are cumbersome and slow. For example, setting up a new association requires much manual data entry. For a 100-unit association, client setup wasn’t a burden; for a 900-unit association, it’s simply unmanageable.

This is a fine example of the resilience error. When we make this error, we fail to appreciate how an abstraction can encapsulate assumptions from one context when we apply that abstraction in another context. In this example, Alpha’s data flow processes are the abstraction. The context is signing up a new client association. When the context (signing up a large new client) changes, it violates an internal assumption of the abstraction (the data flow process for signing up a new client).

How the resilience error leads to technical debt

In many cases, the resilience error is at the heart of the causes of technical debt. It works like this. We have an asset that works perfectly well in one set of contexts. We want to apply that asset in a new way, which might (or might not) require some minor extensions. When we try it, we find that the asset incorporates some assumptions about the original context, and one or more of those assumptions are violated by the new context. Scrambling, we find some quick fixes that can get things working again. But those fixes usually aren’t well-designed or easily maintained. The result is a trail of technical debt.
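Here’s a minimal coding illustration of that mechanism. The scenario, echoing the Alpha example above, is invented: a label format built when no association exceeded 99 units, and the quick fix that follows when a 900-unit association arrives:

```python
# A minimal sketch: an asset carries a hidden assumption from its original
# context; a new context violates it; the quick fix becomes technical debt.
# The scenario is an illustrative assumption.

def unit_label(building_code: str, unit_number: int) -> str:
    # Hidden assumption from the original context: no association
    # exceeds 99 units, so two digits always suffice.
    assert unit_number < 100, "label format assumes at most 99 units"
    return f"{building_code}-{unit_number:02d}"

def unit_label_quick_fix(building_code: str, unit_number: int) -> str:
    # The patch applied under schedule pressure: special-case wide numbers.
    # It works, but now two label formats coexist, and every consumer of
    # labels (sorting, parsing, reports) must handle both. That divergence
    # is the new technical debt.
    if unit_number < 100:
        return f"{building_code}-{unit_number:02d}"
    return f"{building_code}-{unit_number}"

print(unit_label("A", 42))             # "A-42": fine in the original context
print(unit_label_quick_fix("A", 742))  # "A-742": works, but debt incurred
```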

Acquiring companies is like that. Before the acquisition, we think we’ll be able to merge the IT operations to save some expenses in operations. But when we actually try it, merging them proves to be far more expensive than we imagined. Ah, the resilience error.

What makes this situation so difficult is that often we’re unable to anticipate what assumptions we might be about to violate. That’s why we make the resilience error.

Last words

Spotting difficulties with adapting to new applications and new contexts isn’t so difficult with physical entities. For example, we can see in advance that a square peg won’t fit into a round hole. But with abstractions, we can’t always see the problems in advance. Piloting, prototypes, games, and simulations can help us avoid some trouble, but not all.

References

[Gould 1996] Stephen Jay Gould. The mismeasure of man (Revised & Expanded edition). W. W. Norton & Company, 1996.


[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.


[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.


[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.


[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.


[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology, 4:1, 25-29, 1920. doi:10.1037/h0071663

The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands. Available: here; Retrieved: December 29, 2017


[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” notation.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].



Three cognitive biases


Technical debt arises in enterprise assets through the effects of two classes of drivers: obsolescence and decision-making. When technologies advance, new technologies arise, or laws and regulations evolve, existing assets or assets under development can be left behind. That’s how obsolescence produces technical debt. Debt driven mainly by decision-making is more difficult to describe. But anything that biases decisions away from strictly rational results presents risk. Three cognitive biases likely have strong effects on technical debt formation and persistence.

Cognitive biases affect decision-making

Daniel Ellsberg, speaking at a press conference in New York City in 1972. Best known for his role in releasing the Pentagon Papers, Dr. Ellsberg made important contributions to decision theory while at the RAND Corporation. Photo by Bernard Gotfryd, courtesy U.S. Library of Congress.

Decision-making produces technical debt as the people of the enterprise make choices in design, development, and resource acquisition or allocation. Typically, both obsolescence and decision-making contribute to producing technical debt, though either might be more important than the other in any given instance.

Managing debt driven principally by obsolescence isn’t difficult, but I’ll leave that topic for another time. For now, let’s focus on decision-making. Already widely accepted is the contribution of engineering decisions to technical debt formation. Indeed, many believe (in my view, incorrectly) that all or most technical debt arises from faulty decisions by engineers. Some engineering decisions are indeed faulty. But the current scale of technical debt is so large that faulty engineering is unlikely to account for it all. Investigating how resource allocation decisions might contribute to technical debt formation is certainly worthwhile.

In this post, I offer three examples illustrating how resource allocation decisions might contribute to technical debt formation and persistence. Each example shows how people make faulty decisions while believing they’re proceeding objectively and rationally. In each case, what causes the problem is a phenomenon called cognitive bias, though each example illustrates the action of a different cognitive bias.

Loss aversion

Amos Tversky and Daniel Kahneman were the first to identify the cognitive bias known as loss aversion [Kahneman 1984]. It’s the tendency to favor options that avoid losses over options that lead to equivalent or even greater gains. A decision maker affected by loss aversion might conclude that not losing $5 is better than finding $5, or even $10. In this way, loss aversion skews decisions in favor of options that enable the enterprise to protect or enhance existing revenue streams. And it skews decisions this way even when the resulting increases in operating expenses exceed the value of the revenue the decision protects.
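A rough numerical sketch shows the asymmetry. The value function below is the standard prospect-theory form, with commonly quoted parameter estimates from that literature; both are used here only for illustration:

```python
# A minimal sketch of loss aversion via the prospect-theory value function:
#   v(x) = x**alpha for gains, -lam * (-x)**alpha for losses.
# Parameter values are commonly quoted estimates, used as assumptions here.

ALPHA = 0.88   # diminishing sensitivity
LAM = 2.25     # losses loom larger than gains

def subjective_value(x: float) -> float:
    if x >= 0:
        return x ** ALPHA
    return -LAM * (-x) ** ALPHA

# Avoiding a $5 loss spares a felt hit of about -9.3 units, while finding
# $10 yields only about +7.6, so "not losing $5" can outweigh "finding $10."
print(f"value of -$5:  {subjective_value(-5):+.1f}")
print(f"value of +$10: {subjective_value(10):+.1f}")
```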

Short term effects of loss aversion

Retiring technical debt usually entails deferring revenue in the short term, for two classes of reasons. First, we must turn the attention of some part of the engineering organization to debt retirement. Assuming that they would have been working on maintaining or enhancing existing products or services, this redirection can lead to reducing or deferring revenue. Second, during the debt retirement operation, some work might require short-term interruptions of revenue streams while the work is underway.

Thus, debt retirement efforts often do reduce revenue—or reduce revenue increases—in the short term. Some decision makers can perceive that effect as a loss.

Long term effects of loss aversion

The long-term effects of debt retirement can be gains, and those gains can be considerable. Typically, by retiring an asset’s technical debt, we reduce the difficulty (read: time required, effort, cost, and risk) of future maintenance and enhancement efforts involving the asset. We also reduce the probability of debt contagion.

Since these long-term effects of debt retirement are ongoing, their impact on the enterprise can be significant. But unless one is familiar with dealing with the consequences of technical debt, recognizing the value of retiring technical debt can be difficult. When loss aversion is in play, intuitive comparisons between (a) a short-term revenue loss or delay and (b) a long-term benefit of debt retirement favor development and maintenance over retiring technical debt.

Insulating decisions about debt retirement from the effects of loss aversion bias requires objective mathematical modeling of revenue losses and operating cost benefits for all options under consideration. Those models must also account for uncertainty, which makes them inherently ambiguous. And that leads us to consider our next cognitive bias, the ambiguity effect.

The ambiguity effect

The ambiguity effect is a cognitive bias that causes us to prefer options whose probability of a desirable outcome is better known over options whose probability of a desirable outcome is less well known. The effect operates even if the expected value of the more ambiguous option exceeds the expected value of the less ambiguous one. The effect was first described by Daniel Ellsberg [Ellsberg 1961].

Consider a choice between allocating resources to new development and allocating resources to technical debt retirement. In most enterprises, decision makers are familiar with new development projects. Likewise, project champions, project sponsors, and project managers are also familiar with new development projects. All parties are less familiar with debt retirement. It’s reasonable to suppose that when confronted with such a choice, decision makers are likely to see debt retirement as carrying with it a probability of positive outcome that is less well known than the probability of a positive outcome for the new development project.

Because of the ambiguity effect, resource allocation decisions are likely to be biased against technical debt retirement, and in favor of maintenance or new development.

But there’s more. Most projects, of any kind, encounter trouble from time to time. When that happens, the urge to reallocate organizational resources can be powerful. Troubled projects might receive more resources if they’re viewed as important to the organization. If so, those resources often come from other projects. The ambiguity effect biases these resource reallocation decisions in a way analogous to initial resource allocation decisions, as described above. In other words, because of the ambiguity effect, when projects encounter trouble, debt retirement projects are less likely to be able to retain previously allocated resources than are maintenance or new development projects.
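A small numerical sketch, with invented figures, shows how this plays out in a resource allocation choice:

```python
# A minimal sketch of the ambiguity effect. All figures are illustrative
# assumptions.

# Option A: new development. Success probability well known from experience.
p_known, payoff_a = 0.70, 1_000_000
ev_a = p_known * payoff_a                      # 700,000

# Option B: debt retirement. Success probability known only as a wide range.
p_low, p_high, payoff_b = 0.50, 0.90, 1_500_000
ev_b_worst = p_low * payoff_b                  # 750,000
ev_b_best = p_high * payoff_b                  # 1,350,000

# Even at the pessimistic end of its range, option B's expected value
# exceeds option A's. The ambiguity effect nevertheless biases the choice
# toward A, merely because A's probability is better known.
print(f"EV(new development): {ev_a:,.0f}")
print(f"EV(debt retirement): {ev_b_worst:,.0f} to {ev_b_best:,.0f}")
```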

The availability heuristic

The availability heuristic is a method humans use to evaluate the validity or effectiveness of decisions, concepts, methods, or propositions [Tversky 1973]. According to the heuristic, if we recognize the item being evaluated as familiar, or related to something with which we are familiar, we’re more likely to regard it as valid or workable. And when making comparisons between two alternative decisions, concepts, methods, or propositions, we’re likely to assess more favorably the decision, concept, method, or proposition with which we’re more familiar, all other things being equal.

In organizations where decision makers have more experience evaluating maintenance or development project proposals than they have with technical debt retirement proposals, the availability heuristic acts to reduce the relative assessed favorability of technical debt retirement proposals. It does this in three ways.

Technical debt retirement projects are less familiar

First, in most organizations, technical debt retirement projects are less familiar to decision makers than are maintenance or development projects. On that ground alone the technical debt retirement project proposals are at a disadvantage.

The effects of retiring technical debt are less obvious

But the second effect of the availability heuristic is more important. To grasp the value of working on an asset, we must understand how it will affect the asset’s users. Likewise, to grasp the value of a technical debt retirement project, we must understand how technical debt hampers the enterprise. We must also understand how retiring the technical debt might confer advantages in terms of future engineering efforts. Usually, understanding the consequences of maintenance or development projects is more “available” to decision makers than is understanding the consequences of technical debt retirement projects. Even more dramatic is the difference between understanding the consequences of not funding a maintenance or development project and the consequences of not funding a technical debt retirement project.

Much of the benefit of technical debt retirement is indirect

That is, although there is some direct benefit in terms of the assets from which the debt has been retired, the most dramatic benefits are manifested in projects that follow the debt retirement project, and which depend on the assets that have been relieved of debt. Sometimes, those follow-on projects are known at the time decision makers are considering funding the debt retirement project. Sometimes those follow-on projects have yet to be specified or even recognized. In either case, they are less “available” to decision makers because those follow-on projects are indirect beneficiaries.

These three effects of the availability heuristic cause decisions about resource allocations to tend to favor maintenance or development projects over debt retirement projects.

Mitigating the risks of these three cognitive biases

Over time, as everyone becomes more familiar with technical debt retirement projects, these effects may wane somewhat. But waiting for that to happen isn’t exactly what one might call risk mitigation. For one thing, familiarity grows only if one is motivated and pays attention. As busy as decision makers in modern organizations are, depending on them to actively enhance their own familiarity with technical debt retirement projects is probably not the safest course.

An effective program of actively mitigating the risks of these three cognitive biases probably should focus on four areas.

Familiarity

Do what you can to increase decision maker familiarity with the concept of technical debt, and with the consequences of carrying existing technical debt. Conventional presentation-based training will help, but interactive, experiential training is far more effective. Participants must actually experience the consequences of technical debt in a well-designed and professionally facilitated simulation of a problem-solving task. A faithful simulation would include estimation, changing and ambiguous requirements, and team composition volatility.

Retrospectives

Retrospectives are also known as after-action reviews, post mortems, debriefings, or lessons-learned sessions. They’re meetings that review processes that have just completed pieces of work [Kerth 2001]. Typically, only project team members attend. To maintain psychological safety and to encourage truth telling, enterprise decision makers and supervisors don’t attend, unless the organizational culture includes appropriate safeguards. In any case, a section of the retrospective dedicated to investigating the causes and consequences of technical debt can ensure capture of relevant knowledge and experience.

Mathematical modeling practice

Mathematical modeling is one path to creating a more objective foundation for decisions. It’s essential for improving estimation quality. Also helpful are high quality effort data and metrics data related to the formation and lifetime of technical debt. Reviews of estimates and projections during retrospectives can help improve their quality over time.

Metrics development

Determining the effects of risk mitigation failure provides important guidance for corrective action in risk mitigation. Developing metrics that reveal these failures is therefore essential to managing cognitive bias risk. I’ll be suggesting some valuable metrics in a future post.

Last words

These three cognitive biases are by no means the only cognitive biases that can affect the formation or persistence of technical debt. Of the more than 200 identified cognitive biases, those most likely to be relevant are those that affect decision-making. Watch this space for links to posts about additional cognitive biases and their effects on technical debt formation or persistence.


References

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961.

Available: here; Retrieved: August 17, 2018.


[Gould 1996] Stephen Jay Gould. The mismeasure of man (Revised & Expanded edition). W. W. Norton & Company, 1996.


[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

Available: here; Retrieved: August 8, 2017


[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.


[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.


[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.


[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.


[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.


[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology, 4:1, 25-29, 1920. doi:10.1037/h0071663

The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands. Available: here; Retrieved: December 29, 2017


[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973.

Available: here; Retrieved: August 9, 2018.


[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” notation.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].



Undercounting nonexistent debt items


Sherlock Holmes and Doctor Watson, in an illustration by Sidney Paget, with the caption, “Holmes gave me a sketch of the events.” In 1892 The Strand magazine published this illustration to accompany a story called “The Adventure of Silver Blaze” by Sir Arthur Conan Doyle. It’s in this story that the following dialog occurs:

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Gregory: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

From this, Holmes deduces that the dog’s master was the villain. This is an example of the significance of what isn’t there, and of how easily most of us fail to notice it. It’s an example of absence blindness.

Original book illustration, courtesy Wikimedia Commons.

People and companies are developing technologies for assessing the nature and volume of technical debt borne by enterprise assets. The key word is developing. Some tools do exist, and they can be helpful. But they can’t do it all. Most assessments also rely on surveys and interviews of engineers and their managers. But these tools have limitations, too. Among these limitations is undercounting nonexistent debt items in surveys about technical debt.

It’s well known that survey results can exhibit biases. Collectively, these biases are known as response biases [Furnham 1986]. Sources of response bias include phrasing of questions, the demeanor of the interviewer, the desires of the participants to be good experimental subjects, attempts by subjects to respond with the “right answers,” selection of subjects, and more. These sources of bias are real, and we must address them when we design surveys.

Selection bias and absence blindness

But I have in mind here a set of biases more specific to technical debt. For example, when we ask subjects for examples of technical debt, they’re more likely to recall and provide examples of artifacts that exist than they are to provide examples of artifacts that don’t exist. This happens because of a cognitive bias called selection bias. The effect isn’t intentional, and it can dramatically skew results.

Selection bias is a cognitive bias. In this case, it skews the data by interfering with proper randomization, with the result that the sample data don’t accurately represent the actual population of technical debt artifacts. Specifically, the data will tend to under-represent technical debt artifacts that don’t exist. Related phenomena are absence blindness and survivorship bias.

For example, regression testing is an essential step used in refactoring systems. When regression tests are unavailable, and we try to refactor a system to retire some of its technical debt, we face a problem. We can’t be certain that we haven’t changed something important. And so, when a survey design doesn’t mitigate the effects of selection bias, we can expect an elevated probability of failing to note any missing regression tests.

Mitigating the risk of undercounting nonexistent debt items

It’s helpful for surveys to include questions that specifically ask subjects to report technical debt items that don’t exist, but which would be helpful if they did exist—like missing regression tests. Even more helpful: conduct brainstorming sessions for engineers in which the goal is to list missing artifacts, tools, or processes that comprise technical debt precisely because they’re missing.
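Simple tooling can support those surveys and brainstorming sessions. Here’s a minimal sketch that inventories one kind of nonexistent debt item: source modules with no corresponding regression test. The directory layout it assumes (src/ for code, tests/test_<module>.py for tests) is purely illustrative:

```python
# A minimal sketch: inventory debt items that *don't* exist, in this case
# source modules lacking a regression test. Path conventions are assumptions.

from pathlib import Path

def modules_missing_tests(repo: Path) -> list[str]:
    src_dir, test_dir = repo / "src", repo / "tests"
    sources = ({p.stem for p in src_dir.rglob("*.py")}
               if src_dir.is_dir() else set())
    tests = ({p.stem.removeprefix("test_")
              for p in test_dir.rglob("test_*.py")}
             if test_dir.is_dir() else set())
    # The survey instrument should list what's absent, not only what's present.
    return sorted(sources - tests)

print(modules_missing_tests(Path(".")))
```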

References

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961.

Available: here; Retrieved: August 17, 2018.


[Furnham 1986] Adrian Furnham. “Response bias, social desirability and dissimulation,” Personality and Individual Differences 7:3, 385-400, 1986.


[Gould 1996] Stephen Jay Gould. The mismeasure of man (Revised & Expanded edition). W. W. Norton & Company, 1996.


[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

Available: here; Retrieved: August 8, 2017


[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.


[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.


[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.


[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.


[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.


[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology, 4:1, 25-29, 1920. doi:10.1037/h0071663

The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands. Available: here; Retrieved: December 29, 2017


[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973.

Available: here; Retrieved: August 9, 2018.


[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” notation.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].


How outsourcing leads to increasing technical debt


Most of the nontechnical precursors of technical debt that afflict in-house work also afflict outsourced work. For example, the planning fallacy affects internal planners. But it also afflicts the bidders for contracts that cover outsourced work. As described in “Unrealistic optimism: the planning fallacy and the n-person prisoner’s dilemma,” Boehm, et al., call this the “Conspiracy of Optimism.” [Boehm 2016] But outsourcing engineering work can introduce pathways for incurring technical debt that are specific to outsourcing.

The risks of incurring technical debt from outsourcing

Green fields. Greenfield outsourcing, also known as startup outsourcing, is the outsourcing of activity that the enterprise has never performed in-house. Greenfield outsourcing is especially tricky with respect to technical debt formation. The risk arises because much of the expertise necessary to perform the work in question is probably not resident within the enterprise. That void in enterprise expertise makes for difficulties in managing technical debt in the outsourced artifacts.
Outsourced work is inherently more likely to incur technical debt than is equivalent work performed in-house. When most enterprises contract for development of systems or software, the acceptance criteria rarely include specifications for maintainability or extensibility. This happens, in part, because such qualitative attributes are difficult to specify quantitatively. That’s why the condition of deliverables with respect to maintainability and extensibility is so variable. Outsourced deliverables are best described as bearing an unknown level of technical debt.

The root cause of the problem is likely the lack of universally accepted metrics for quantifying technical debt [Li 2015]. That’s why it’s difficult to specify an acceptability threshold for technical debt in the vendor contract. And because the consequences of technical debt in deliverables can remain hidden throughout the lifetime of the outsourcing contract, years might pass before the issue becomes evident. That delay inevitably complicates the task of tracing the problem to its root cause.
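To make the contracting difficulty concrete, here is a minimal sketch of the kind of acceptance check a contract could specify if the parties agreed on a metric. Everything in it is an assumption made for illustration: the debt register format, the MIC estimates, and the dollar threshold are all hypothetical, precisely because no standard exists.

# Hypothetical acceptance check for an outsourced deliverable.
# Assumes the vendor maintains a debt register in which each item
# carries an estimated annual MIC (metaphorical interest charge).
# The register format and the threshold are illustrative, not standard.

from dataclasses import dataclass

@dataclass
class DebtItem:
    description: str
    annual_mic_usd: float  # estimated annual carrying cost of this item

def total_annual_mic(register: list[DebtItem]) -> float:
    """Sum the estimated annual carrying cost of all known debt items."""
    return sum(item.annual_mic_usd for item in register)

def acceptable(register: list[DebtItem], threshold_usd: float) -> bool:
    """Accept the deliverable only if its estimated MIC burden is under threshold."""
    return total_annual_mic(register) <= threshold_usd

register = [
    DebtItem("Duplicated parsing logic in two modules", 12_000.0),
    DebtItem("Missing integration tests for the billing interface", 30_000.0),
]
print(acceptable(register, threshold_usd=50_000.0))  # True, under these invented numbers

The hard part, of course, isn’t the arithmetic; it’s getting both parties to agree on how to populate the register and estimate the MICs.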

Six ways outsourcing can contribute to technical debt

In what follows, I use the term vendor to denote the organization performing the outsourced work, and the term enterprise to denote the organization that has outsourced a portion of its engineering work. The retained organization is the portion of the enterprise that is directly relevant to the outsourced work and has remained in-house.

Six of the mechanisms that lead to incurring technical debt in the outsourcing context are sketched below. Gauging the size of these effects, whether by auditing deliverables or by auditing the vendor’s internal processes, could be helpful in managing the levels of technical debt arising from outsourcing.

This list isn’t exhaustive. Quite possibly other phenomena also contribute to technical debt formation in the context of outsourcing.

Progressive erosion of retained organization capabilities

Over time, after outsourcing work on a particular asset, the enterprise can expect erosion of some engineering expertise. The expertise most vulnerable is that needed to manage, evaluate, understand, or, if need be, re-insource (or backsource) the outsourced work [Willcocks 2004] [Beardsell 2010]. Typically, staff who formerly performed the outsourced work move on to other work, voluntarily or not, either within the enterprise or elsewhere. Indeed, cost savings from terminations and employee buyouts often accompany outsourcing decisions; that reallocation is part of their economic justification.

But even if the enterprise continues to employ the people who formerly performed the outsourced work, the deleterious effects remain. Those employees, filling new roles, gradually become less familiar with the current work and therefore less able to perform it. And since they’re now carrying out other assignments, their availability is limited. In the public sector, the organizations that perform the outsourced work exacerbate this phenomenon by recruiting staff from the agencies that formerly employed them [Kusnet 2007]. In manufacturing, Kinkel et al. suggest that outsourcing disturbs the formation of internal competence (to paraphrase) [Kinkel 2016].

Thus, outsourcing engineering efforts can erode the ability of the enterprise to perform the outsourced work internally. Likewise, it erodes enterprise capability to monitor or evaluate the work when vendors perform it. Consequently, the enterprise is less able to monitor, evaluate, or retire any technical debt that accumulates in the outsourced work products.

Stovepiping among vendors

Most multi-vendor efforts use a separation-of-concerns approach [Laplante 2007] to dividing the work. That is, each vendor is empowered to use any approach it chooses, consistent with its own contract and statement of work. In some cases, two or more vendors have overlapping needs that cause each of them to produce similar capabilities as elements of their respective deliverables. Sharing those results is possible in principle. But unless both contracts anticipate the need for sharing, it’s unlikely. Failure to share results that could have been shared can lead to unrecognized technical debt.

Stovepiping within vendors

Technical debt formation is possible even if there is only one vendor. Different teams or individuals working for that vendor might unwittingly create elements with overlapping capabilities. That duplication could lead to technical debt, or it could constitute technical debt in itself. For example, two teams working for the same vendor might have similar needs, and might develop duplicative tools independently. As a second example, consider a version of stovepiping combined with temporal displacement. Suppose that one team is unaware that a previous effort for the same customer had already developed a capability that it now needs. That team then might re-create that already-existing capability.
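Auditing for this kind of duplication need not be exotic. Here’s a minimal sketch, using Python’s standard difflib, that flags pairs of source files from two deliverables whose text is suspiciously similar. The directory names and the 0.8 threshold are assumptions for illustration; real clone detectors are far more sophisticated. The point is only that overlap is detectable when someone looks for it.

# Crude audit aid: flag suspiciously similar source files across two
# deliverables as a rough check for duplicated capabilities.
# Paths and the similarity threshold are illustrative assumptions.

import difflib
from pathlib import Path

def similar_files(dir_a: str, dir_b: str, threshold: float = 0.8):
    """Yield (file_a, file_b, ratio) for pairs whose text similarity
    meets the threshold. Quadratic and purely textual: an audit aid only."""
    for fa in Path(dir_a).rglob("*.py"):
        text_a = fa.read_text(errors="ignore")
        for fb in Path(dir_b).rglob("*.py"):
            text_b = fb.read_text(errors="ignore")
            ratio = difflib.SequenceMatcher(None, text_a, text_b).ratio()
            if ratio >= threshold:
                yield fa, fb, ratio

for fa, fb, ratio in similar_files("vendor_team_a/src", "vendor_team_b/src"):
    print(f"{fa} ~ {fb}: {ratio:.0%} similar")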

Stovepiping within vendors is less likely when the vendor operates under a single vendor technical lead, and the enterprise interacts with that lead through a single in-house technical lead. When either side of the relationship manages communications through multiple contacts, stovepiping is more likely, and new technical debt is more likely to form.

Loss of continuity in the outsourced engineering staff

Unless the agreement between the vendor and the enterprise specifically addresses staff volatility, an additional risk arises. Staff turnover or reassignment within the vendor organization can lead to technical debt in just the same way that in-house turnover can. The risk is most significant at transitions between one version of the product or service and the next. With outsourcing, however, the vendor has little internalized motivation to control this phenomenon. Moreover, the enterprise likely has less control over, and less awareness of, staff assignments within the vendor organization than it does within its own. Thus, loss of continuity in the outsourced engineering staff is both more likely to occur and more likely to lead to technical debt.

Reduced coordination of engineering approaches and business objectives

Lack of coordination between engineering approaches and business planning can cause technical debt formation. This happens because engineers create and deploy artifacts that they must revisit later. The need for rework arises after engineers learn of business plans they didn’t know about at the time they produced those artifacts. See “Failure to communicate long-term business strategy.” This scenario is more likely, and possibly irresolvable, when the enterprise withholds its long-term plans from the outsourcing vendor to protect its strategy.

Hiring and retention problems

In some instances, the enterprise has never before performed work like the outsourced work [Delen 2007]. This kind of outsourcing is known as startup outsourcing or greenfield outsourcing. In these cases, the enterprise typically must reassign existing employees or hire new ones to interface with the outsourcing vendor. These roles are analogous to those of remote supervisors, except that the teams they supervise aren’t enterprise employees. Hiring and retaining people for these roles can be difficult, and startup challenges arise both within the enterprise and within the vendor organization. Recruitment and especially retention problems in these roles decrease the likelihood of controlling technical debt and increase the likelihood of incurring it.

Last words

A policy that addresses these risks would facilitate accomplishing three objectives:

  • Retain organizational capability sufficient to assess the effect on technical debt of any outsourced engineering work
  • Represent the effect on technical debt faithfully in any financial models used in making the outsourcing decision (a toy version of such a model appears below)
  • Include in those financial models the effects of technical debt, the cost of carrying it, and the effects on controlling it, when deciding whether to extend outsourcing contracts with vendors
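As a drastically simplified illustration of the second and third objectives, a financial model for an outsourcing decision can carry technical debt as an explicit cashflow term. All figures below, including the discount rate and the MIC trajectory, are invented for illustration.

# Toy financial model for an outsourcing decision that carries
# technical debt explicitly. All numbers are illustrative assumptions.

def npv(cashflows: list[float], rate: float) -> float:
    """Net present value of yearly cashflows at a fixed discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

years = 5
labor_savings = [400_000.0] * years  # projected yearly savings from outsourcing
added_mic = [0.0, 50_000.0, 90_000.0, 140_000.0, 200_000.0]  # growing carrying cost
net = [s - m for s, m in zip(labor_savings, added_mic)]

print(f"Naive NPV (debt ignored): {npv(labor_savings, 0.08):,.0f}")
print(f"NPV with MICs included:   {npv(net, 0.08):,.0f}")

Even a toy model makes the point: a decision that looks favorable when technical debt is ignored can look considerably weaker once the carrying cost appears as a term.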

References

[Beardsell 2010] Julie Beardsell. “IT Backsourcing: is it the solution to innovation?” SMC Working Paper Series, Issue 02/2010, Swiss Management Center University, 2010.

[Boehm 2016] Barry Boehm, Celia Chen, Kamonphop Srisopha, Reem Alfayez, and Lin Shiy. “Avoiding Non-Technical Sources of Software Maintenance Technical Debt,” USC course notes, Fall 2016.

[Delen 2007] Guus Delen. “Decision and Control Factors for IT-sourcing,” in Handbook of Network and System Administration, Jan Bergstra and Mark Burgess, eds., Boston: Elsevier, 929-946, 2007.

[Ellsberg 1961] Daniel Ellsberg. “Risk, ambiguity, and the Savage axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961.

[Furnham 1986] Adrian Furnham. “Response bias, social desirability and dissimulation,” Personality and Individual Differences 7:3, 385-400, 1986.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Kinkel 2016] Steffen Kinkel, Angela Jäger, Djerdj Horvath, and Bernhard Rieder. “The effects of in-house manufacturing and outsourcing on companies’ profits and productivity,” 23rd International Annual EurOMA Conference, Trondheim, June 2016.

[Kusnet 2007] David Kusnet. “Highway Robbery II,” report of the National Association of State Highway and Transportation Unions (NASHTU), 2007.

[Laplante 2007] Phillip A. Laplante. What Every Engineer Should Know About Software Engineering. New York: CRC Press, 2007.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Li 2015] Zengyang Li, Paris Avgeriou, and Peng Liang. “A systematic mapping study on technical debt and its management,” Journal of Systems and Software 101, 193-220, 2015.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999.

[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology 4:1, 25-29, 1920. doi:10.1037/h0071663. The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Willcocks 2004] L. Willcocks, J. Hindle, D. Feeny, and M. Lacity. “IT and Business Process Outsourcing: The Knowledge Potential,” Information Systems Management 21:3, 7-15, 2004.


Confirmation bias and technical debt

Last updated on July 8th, 2021 at 01:35 pm

Confirmation bias is a cognitive bias [Kahneman 2011]. It’s the human tendency to seek and favor information that confirms our preconceptions, and to avoid information that disconfirms them. For example, the homogeneity of cable news channel audiences is a result of confirmation bias, as is the alignment between the preconceptions of each audience and the slant of that channel’s newscast.

How confirmation bias can lead to technical debt

Third stage ignition, sending the Mars Climate Orbiter to Mars in December, 1998
Computer-generated image of the third stage ignition, sending the Mars Climate Orbiter (MCO) to Mars in December, 1998. The spacecraft eventually broke up in the Martian atmosphere due to what is now called the “metric mix-up.” The Lockheed Martin team that constructed and programmed the MCO used Imperial units. But the team at JPL that was responsible for flying the MCO used metric units. After the loss of the MCO, an investigation led by NASA uncovered the mix-up.
One of the many changes resulting from this loss was increased use of reviews and inspections. We don’t know why reviews and inspections weren’t as thorough before the loss of the MCO as they are now. But one possibility is the effects of confirmation bias in assessing the need for reviews and inspections. Image courtesy Engineering Multimedia, Inc., and U.S. National Aeronautics and Space Administration
Confirmation bias contributes to technical debt by biasing the information on which decision makers base decisions involving technical debt. Most people in these roles have objectives they regard as having priority over eliminating technical debt. This causes them to bias their searches for information about technical debt in favor of information that directly supports achieving those primary objectives. Their decisions therefore tend, for example, to discount warnings of technical debt issues, to underfund technical debt assessments, and to set aside advice about avoiding debt formation in current projects.

An example

For example, anyone determined to find reasons to be skeptical of the need to manage technical debt need only execute a few Google searches. Searching for “there is no such thing as technical debt” yields about 300,000 results at this writing; “technical debt is a fraud” about 1.6 million; and “technical debt is a bad metaphor” about 3.7 million. Compare this to “technical debt,” which yields only 1.6 million. A skeptic wouldn’t even have to read any of these pages to come away convinced that technical debt is at best a controversial concept. This is, of course, specious reasoning, if it’s reasoning at all. But it does illustrate the potential for confirmation bias to prevent or limit rational management of technical debt.
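A toy simulation makes the mechanism vivid. Suppose work items are genuinely slowed by technical debt at some rate, and a reviewer who believes debt isn’t a problem discards most of the observations that conflict with that belief. The rate and the retention fraction below are invented for illustration.

# Toy simulation of a biased evidence search. An unbiased reviewer keeps
# every observation; a biased one keeps all confirming observations but
# only a fraction of disconfirming ones. All parameters are invented.

import random

random.seed(1)
true_issue_rate = 0.30    # fraction of work items actually slowed by debt
keep_disconfirming = 0.2  # biased reviewer keeps 20% of conflicting evidence

observations = [random.random() < true_issue_rate for _ in range(1_000)]
biased = [o for o in observations if not o or random.random() < keep_disconfirming]

print(f"True rate of debt-related slowdowns: {true_issue_rate:.0%}")
print(f"Unbiased reviewer's estimate:        {sum(observations) / len(observations):.0%}")
print(f"Biased reviewer's estimate:          {sum(biased) / len(biased):.0%}")

The biased reviewer concludes, sincerely, that debt-related slowdowns are rare. No dishonesty is required; the filter does all the work.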

Last words

Detecting confirmation bias in oneself is extraordinarily difficult, because confirmation bias causes us to (a) decide not to search for data that would reveal it; and (b) if such data somehow becomes available, disregard it or seek alternative explanations for it. Moreover, another cognitive bias, known as the bias blind spot, causes individuals to see the existence and effects of cognitive biases much more readily in others than in themselves [Pronin 2002].

Still, the enterprise would benefit from monitoring the possible existence and effects of confirmation bias in decisions about allocating resources to managing technical debt. Wherever such decisions are made, confirmation bias risk must be managed.

References

[Beardsell 2010] Julie Beardsell. “IT Backsourcing: is it the solution to innovation?” SMC Working Paper Series, Issue 02/2010, Swiss Management Center University, 2010.

[Boehm 2016] Barry Boehm, Celia Chen, Kamonphop Srisopha, Reem Alfayez, and Lin Shiy. “Avoiding Non-Technical Sources of Software Maintenance Technical Debt,” USC course notes, Fall 2016.

[Delen 2007] Guus Delen. “Decision and Control Factors for IT-sourcing,” in Handbook of Network and System Administration, Jan Bergstra and Mark Burgess, eds., Boston: Elsevier, 929-946, 2007.

[Ellsberg 1961] Daniel Ellsberg. “Risk, ambiguity, and the Savage axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961.

[Furnham 1986] Adrian Furnham. “Response bias, social desirability and dissimulation,” Personality and Individual Differences 7:3, 385-400, 1986.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Kinkel 2016] Steffen Kinkel, Angela Jäger, Djerdj Horvath, and Bernhard Rieder. “The effects of in-house manufacturing and outsourcing on companies’ profits and productivity,” 23rd International Annual EurOMA Conference, Trondheim, June 2016.

[Kusnet 2007] David Kusnet. “Highway Robbery II,” report of the National Association of State Highway and Transportation Unions (NASHTU), 2007.

[Laplante 2007] Phillip A. Laplante. What Every Engineer Should Know About Software Engineering. New York: CRC Press, 2007.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Li 2015] Zengyang Li, Paris Avgeriou, and Peng Liang. “A systematic mapping study on technical debt and its management,” Journal of Systems and Software 101, 193-220, 2015.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999.

[Pronin 2002] Emily Pronin, Daniel Y. Lin, and Lee Ross. “The bias blind spot: Perceptions of bias in self versus others,” Personality and Social Psychology Bulletin 28:3, 369-381, 2002.

[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology 4:1, 25-29, 1920. doi:10.1037/h0071663. The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Willcocks 2004] L. Willcocks, J. Hindle, D. Feeny, and M. Lacity. “IT and Business Process Outsourcing: The Knowledge Potential,” Information Systems Management 21:3, 7-15, 2004.


Unrealistic optimism: the planning fallacy and the n-person prisoner’s dilemma

Last updated on July 8th, 2021 at 01:31 pm

In a 1977 report, Daniel Kahneman and Amos Tversky identify one particular cognitive bias [Kahneman 2011], the planning fallacy, which afflicts planners [Kahneman 1977] [Kahneman 1979]. They discuss two types of evidence planners use. Singular evidence is specific to the case at hand. Distributional evidence is drawn from similar past efforts. The planning fallacy is planners’ tendency to pay too little attention to distributional evidence and too much to singular evidence, even when the singular evidence is scanty or questionable. Failing to harvest lessons from the distributional evidence, which is inherently more diverse than singular evidence, planners tend to underestimate cost and schedule. So there’s a tendency to promise lower costs, faster delivery, and greater benefits than anyone can reasonably expect.
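A bare-bones illustration of what heeding distributional evidence looks like in practice: temper the singular estimate with the distribution of overruns on comparable past efforts. The figures below are invented, and the technique is a minimal version of what is sometimes called reference class forecasting.

# Toy reference-class adjustment: temper a singular (inside-view) estimate
# with the distribution of overruns on comparable past projects.
# All numbers are invented for illustration.

import statistics

singular_estimate_months = 10.0  # the team's inside view
past_overrun_ratios = [1.3, 1.6, 1.1, 2.0, 1.4]  # actual/estimated on similar work

median_overrun = statistics.median(past_overrun_ratios)
worst_overrun = max(past_overrun_ratios)  # crude high-end anchor

print(f"Inside view:        {singular_estimate_months:.1f} months")
print(f"Distributional:     {singular_estimate_months * median_overrun:.1f} months (median overrun)")
print(f"Pessimistic anchor: {singular_estimate_months * worst_overrun:.1f} months (worst overrun)")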

Enter the n-person prisoner’s dilemma

Boehm et al. [Boehm 2016] describe a dynamic that exacerbates the problem. They observe that because organizational resources are finite, project champions compete with each other for resources. This competition compels them to be unrealistically optimistic about their objectives, costs, and schedules. Although Boehm et al. call this mechanism the “Conspiracy of Optimism,” possibly facetiously, it isn’t actually a conspiracy. Rather, it’s a variant of the N-Person Prisoner’s Dilemma [Hamburger 1973]: every champion would be better off if all estimated realistically, but any champion who does so unilaterally loses resources to more optimistic rivals.

A special property of pressure-induced debt

Hoover Dam, aerial view, September 2017
Hoover Dam, aerial view, September 2017. Built between 1931 and 1936 under a fixed-price contract, the dam cost $48.8M ($639M in 2016 dollars) and was completed two years early. Apparently the planning fallacy doesn’t act inevitably. 112 men died in incidents associated with its construction. 42 more died of a condition diagnosed as pneumonia, but which experts now believe to have been carbon monoxide poisoning due to poor ventilation in the dam’s diversion tunnels during construction. There’s little doubt that unrealistic optimism affects more than budget and schedule projections. It also affects risk projections, including deaths. Photo (cc) Mariordo (Mario Roberto Durán Ortiz), courtesy Wikimedia Commons.
Unrealistic optimism creates budget shortfalls and schedule pressures, both of which contribute to conditions favorable for creating nonstrategic technical debt. And this mechanism, like any mechanism associated with schedule or budget pressure, tends to produce subtle technical debt, the type least likely to become evident in the short term. For example, technical debt that might make a particular enhancement more difficult in the next project is more likely to appear than technical debt that creates trouble in the current effort, because debt that creates trouble in the current effort is more likely to be retired in the short term, if not in the current effort itself. Awkward architecture, by contrast, might be more difficult to identify, and it’s therefore more likely to survive into the intermediate or long term.

The bad news of schedule pressure

In other words, the technical debt most likely to be generated is the debt that’s most benign in the short term, and which is therefore most likely to escape notice. If noticed, it’s likely to be forgotten unless carefully documented, and careful documentation is unlikely under schedule and budget pressure. In this way, the nonstrategic technical debt created as a result of unrealistic optimism is more likely than most technical debt to eventually become legacy technical debt.

Last words

Policymakers can assist in addressing the consequences of unrealistic optimism by advocating for education about it. They can also advocate for changes in incentive structures and performance management systems. It’s good business to establish organizational standards with respect to realism in promised benefits, costs, and schedules.

References

[Beardsell 2010] Julie Beardsell. “IT Backsourcing: is it the solution to innovation?” SMC Working Paper Series, Issue 02/2010, Swiss Management Center University, 2010.

[Boehm 2016] Barry Boehm, Celia Chen, Kamonphop Srisopha, Reem Alfayez, and Lin Shiy. “Avoiding Non-Technical Sources of Software Maintenance Technical Debt,” USC course notes, Fall 2016.

[Delen 2007] Guus Delen. “Decision and Control Factors for IT-sourcing,” in Handbook of Network and System Administration, Jan Bergstra and Mark Burgess, eds., Boston: Elsevier, 929-946, 2007.

[Ellsberg 1961] Daniel Ellsberg. “Risk, ambiguity, and the Savage axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961.

[Furnham 1986] Adrian Furnham. “Response bias, social desirability and dissimulation,” Personality and Individual Differences 7:3, 385-400, 1986.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Hamburger 1973] Henry Hamburger. “N-person Prisoner’s Dilemma,” Journal of Mathematical Sociology 3, 27-48, 1973. doi:10.1080/0022250X.1973.9989822

[Kahneman 1977] Daniel Kahneman and Amos Tversky. “Intuitive Prediction: Biases and Corrective Procedures,” Technical Report PTR-1042-7746, Defense Advanced Research Projects Agency, June 1977.

[Kahneman 1979] Daniel Kahneman and Amos Tversky. “Intuitive Prediction: Biases and Corrective Procedures,” Management Science 12, 313-327, 1979.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Kinkel 2016] Steffen Kinkel, Angela Jäger, Djerdj Horvath, and Bernhard Rieder. “The effects of in-house manufacturing and outsourcing on companies’ profits and productivity,” 23rd International Annual EurOMA Conference, Trondheim, June 2016.

[Kusnet 2007] David Kusnet. “Highway Robbery II,” report of the National Association of State Highway and Transportation Unions (NASHTU), 2007.

[Laplante 2007] Phillip A. Laplante. What Every Engineer Should Know About Software Engineering. New York: CRC Press, 2007.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Li 2015] Zengyang Li, Paris Avgeriou, and Peng Liang. “A systematic mapping study on technical debt and its management,” Journal of Systems and Software 101, 193-220, 2015.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999.

[Pronin 2002] Emily Pronin, Daniel Y. Lin, and Lee Ross. “The bias blind spot: Perceptions of bias in self versus others,” Personality and Social Psychology Bulletin 28:3, 369-381, 2002.

[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology 4:1, 25-29, 1920. doi:10.1037/h0071663. The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Willcocks 2004] L. Willcocks, J. Hindle, D. Feeny, and M. Lacity. “IT and Business Process Outsourcing: The Knowledge Potential,” Information Systems Management 21:3, 7-15, 2004.


The fundamental attribution error

Last updated on July 8th, 2021 at 01:29 pm

When we try to understand the behavior of others, we often make a particularly human mistake. We tend to attribute too much to character and disposition and too little to situation and context. This mistake is so common that it has a name: The Fundamental Attribution Error (FAE) (See “The Fundamental Attribution Error” at my other blog). And although little experimental data is available regarding its effects on technical debt, we can plausibly argue that its effects are significant—and unwelcome.

Arapaho moccasins ca. 1880-1910.
Arapaho moccasins ca. 1880-1910. An American Indian proverb advises, “Don’t judge any man until you have walked two moons in his moccasins.” From the perspective of the FAE, the proverb is a way of mitigating FAE risks. Photo of Arapaho moccasins, ca. 1880-1910 on exhibit at the Bata Shoe Museum, in Toronto, Canada. Photo by Daderot, courtesy Wikimedia
The FAE contributes to technical debt in at least two ways. First, it distorts non-engineers’ assessments of the motivations of engineers when engineers warn of future difficulties from technical debt. Second, it distorts engineers’ assessments of the motivations of non-engineers when non-engineers oppose allocating resources to technical debt retirement. Non-engineers oppose these allocations to conserve resources for their own efforts, or to accelerate efforts in which they have more immediate interest. The two effects are symmetrical in the large, though not in detail.

Below I describe the effects of the FAE on engineers and non-engineers, some of whom are the engineers’ internal customers. I examine the effects of the FAE as they arise from three claims made by the parties to the exchange.

Claim: Technical debt depresses engineering productivity

Many engineers or their managers hold this position.

Engineers notice incidents in which some of the work they must perform on an asset would be much easier or even unnecessary were it not for the technical debt that the asset carries. They sense the burden of the extra effort because they know how much easier and faster the work would be if they could retire the debt.

The internal customers of engineers don’t see these circumstances as clearly as engineers and their managers do. Consequently, they tend to discount engineers’ claims of depressed productivity. Some experience engineers’ complaints, requests, and warnings as whining, self-serving nest feathering, or worse. They tend to attribute engineers’ complaints to faults in the character or “work ethic” of engineers.

Claim: Instead of retiring the technical debt now, just document it for later

This is a suggestion senior managers or the engineers’ internal customers often put forth.

The internal customers of engineers have pressing needs for immediate engineering results. They see new products or repairs to existing products as a means of achieving the objectives the enterprise sets. Focusing limited engineering resources on technical debt retirement conflicts with producing results that would help internal customers of engineers meet these more immediate objectives. As a compromise, non-engineers propose that engineers document instances of technical debt as they find them, so that they can be addressed more efficiently after engineers meet the immediate needs of internal customers.

Engineers discount the validity of this approach for three reasons. First, they don’t experience the pressures their internal customers do. To engineers, their customers’ reports of more pressing needs seem to be merely excuses to get what they want when they want it. Second, the proposed documentation work doesn’t advance the customer’s current project toward its objectives. Instead, it actually delays the current project, in ways invisible to non-engineers. These delays increase schedule pressure, and therefore technical debt, because the customer of the current project rarely cares enough about the technical debt documentation effort to allow for the extra time it takes. Finally, because many assets evolve continuously, such documentation has a short shelf life, which limits its value in ways non-engineers might not appreciate (the sketch below illustrates the decay).
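The shelf-life point is easy to demonstrate. The sketch below assumes a hypothetical debt register format; nothing about it is standard. It merely flags documented debt items whose recorded locations no longer exist, a check that often reveals decay within a few release cycles.

# Minimal staleness check for a technical debt register. The register
# format (file path plus note) is a hypothetical convention; the point
# is that documentation decays as the asset evolves.

from pathlib import Path

register = [
    {"file": "src/billing/rounding.py", "note": "duplicated rounding logic"},
    {"file": "src/legacy/ftp_sync.py", "note": "hand-rolled retry loop"},
]

for entry in register:
    if not Path(entry["file"]).exists():
        print(f"STALE: {entry['file']} no longer exists ({entry['note']})")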

In these ways, the FAE both motivates the documentation suggestion and limits the ability of engineers to appreciate that motivation. It also limits the ability of non-engineers to appreciate how little value the documentation ultimately provides.

Claim: Addressing technical debt is important, but not urgent

This belief is most common among senior managers and the engineers’ internal customers.

When the engineering organization presents a business case for investing in addressing technical debt, it isn’t alone. Other functions in the enterprise also make business cases of their own. Too often, these cases are evaluated against each other: investment in one entails reduced investment in another. But the benefits of technical debt retirement tend to become visible to non-engineers much later than do the benefits of some other proposals. Sometimes the benefits of technical debt retirement are wholly invisible to non-engineers. For these reasons, technical debt retirement projects tend, perhaps more often than most, to be deferred at best or, worse, rejected.

The FAE is partly responsible for the perception among non-engineers that the benefits of technical debt remediation arrive only in the distant future. Engineers notice the benefits almost immediately, because they interact with the rehabilitated assets daily. Since non-engineers don’t have these experiences, they notice the benefits only upon delivery of the results of engineering work. This mismatch between the timescales of perception prevents non-engineers from perceiving what is in daily evidence to engineers.

Last words

Both engineers and non-engineers are subject to deadlines and resource limitations beyond their control. Their ability to appreciate the challenges their counterparts face is the key to effective collaboration. Too often, neither party feels that it has the time or resources to accommodate the needs of the other.



The Dunning-Kruger effect can lead to technical debt

Last updated on July 7th, 2021 at 07:54 pm

Cropped detail from Charles Robert Darwin, a painting by John Collier
Cropped detail from Charles Robert Darwin, a painting by John Collier (1850-1934). The painting was given to the National Portrait Gallery, London, in 1896. Darwin writes, in The Descent of Man (1871): “… ignorance more frequently begets confidence than does knowledge …” which is the essence of the Dunning-Kruger effect. Image courtesy WikiQuote.

The Dunning-Kruger effect [Kruger 1999] can lead to formation or persistence of technical debt in two ways. First, it can cause technologists or their managers to overestimate their ability to maintain the resource focus needed for retiring technical debt in a timely fashion. Second, it can cause senior managers to be reluctant to accede to resource requests of technologists and their managers in support of technical debt management programs.

Kruger and Dunning conducted experiments that yielded results consistent with the following four principles (paraphrasing):

  1. Incompetent individuals, compared to their more competent peers, tend to dramatically overestimate their own ability and performance
  2. Incompetent individuals, compared to their more competent peers, tend to be less able to gain insight into their own true levels of performance
  3. Incompetent individuals can gain insight about their shortcomings, but, paradoxically, this comes about by gaining competence
  4. Incompetent individuals, compared to their more competent peers, are less able to recognize competence when they see it

The first three principles lead to distorted assessments of one’s own capabilities. The fourth principle leads to distorted assessments of the capabilities of others.

How the Dunning-Kruger Effect affects teams

As an example of distorted self-assessment, consider a team or its managers who must undertake retirement of some types of technical debt, and suppose they must do so in the course of enhancing or repairing an asset. Such a task plan seems at first to offer efficiencies: the engineers can make both kinds of changes at one go. Metaphorically, if we must go to the store for milk, we can pick up bread while we’re there, rather than making two trips.

However, modifying an existing complex technological asset is unlike shopping for bread and milk. The two kinds of modifications—debt retirement and asset enhancement or repair—might seem at first to be separable. Often they are. But if they aren’t separable, and we undertake the two tasks together, testing and debugging can become extremely complicated. The complications arise because of interactions between defects in the two kinds of modifications. Under some circumstances, an experienced team and its managers might be more likely to anticipate these difficulties. An inexperienced team and its managers might be more likely to underestimate the difficulties, as a consequence of the Dunning-Kruger effect. Budget and schedule overruns are possible consequences of underestimating the complexity of the problem.

How the Dunning-Kruger Effect affects decision makers

As an example of the fourth principle above, the Dunning-Kruger effect can cause some decision makers to discount the warnings and resource requests of engineers and their managers. Decision makers who are unsophisticated in matters related to technical debt must nevertheless assess the validity of the resource requests. In making these assessments, these decision makers may be at a disadvantage for a number of reasons. Examples:

  • Decision makers might hold mistaken beliefs about technical debt. For example, many believe that the main causes of technical debt are poor decisions by engineering managers. Others believe that technical debt results from slovenly work habits of engineers. Those who hold such beliefs might be reluctant to allocate yet more resources to engineers to address technical debt.
  • If the advocates for technical debt management resources aren’t fully informed about the strategic direction of the enterprise, their requests might be inconsistent with enterprise strategy. As a result of a cognitive bias [Kahneman 2011] known as the halo effect [Thorndike 1920], decision makers who notice that portions of a proposal don’t properly account for enterprise strategy might then discount even the valid portions of the technologists’ proposals.
  • Decision makers might be affected by unrealistic optimism [Weinstein 1996], also known as optimism bias. It’s a cognitive bias that can cause them to discount the sometimes-vivid warnings of technologists about the unfavorable consequences of failing to provide technical debt management resources.

Last words

Investigations of organizational behavior relative to technical debt and the Dunning-Kruger effect could be fruitful. For example, what is the degree of correlation between burdens of technical debt and the incidence of rejected or severely curtailed proposals for resources? Also rewarding would be a survey of the nearly 200 known cognitive biases, to determine which of them might be most likely to affect decision-making relative to technical debt.

References

[Beardsell 2010] Julie Beardsell. “IT Backsourcing: is it the solution to innovation?” SMC Working Paper Series, Issue 02/2010, Swiss Management Center University, 2010.

[Boehm 2016] Barry Boehm, Celia Chen, Kamonphop Srisopha, Reem Alfayez, and Lin Shiy. “Avoiding Non-Technical Sources of Software Maintenance Technical Debt,” USC course notes, Fall 2016.

[Delen 2007] Guus Delen. “Decision and Control Factors for IT-sourcing,” in Handbook of Network and System Administration, Jan Bergstra and Mark Burgess, eds., Boston: Elsevier, 929-946, 2007.

[Ellsberg 1961] Daniel Ellsberg. “Risk, ambiguity, and the Savage axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961.

[Furnham 1986] Adrian Furnham. “Response bias, social desirability and dissimulation,” Personality and Individual Differences 7:3, 385-400, 1986.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Hamburger 1973] Henry Hamburger. “N-person Prisoner’s Dilemma,” Journal of Mathematical Sociology 3, 27-48, 1973. doi:10.1080/0022250X.1973.9989822

[Kahneman 1977] Daniel Kahneman and Amos Tversky. “Intuitive Prediction: Biases and Corrective Procedures,” Technical Report PTR-1042-7746, Defense Advanced Research Projects Agency, June 1977.

[Kahneman 1979] Daniel Kahneman and Amos Tversky. “Intuitive Prediction: Biases and Corrective Procedures,” Management Science 12, 313-327, 1979.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Kinkel 2016] Steffen Kinkel, Angela Jäger, Djerdj Horvath, and Bernhard Rieder. “The effects of in-house manufacturing and outsourcing on companies’ profits and productivity,” 23rd International Annual EurOMA Conference, Trondheim, June 2016.

[Kruger 1999] Justin Kruger and David Dunning. “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology 77:6, 1121-1134, 1999.

[Kusnet 2007] David Kusnet. “Highway Robbery II,” report of the National Association of State Highway and Transportation Unions (NASHTU), 2007.

[Laplante 2007] Phillip A. Laplante. What Every Engineer Should Know About Software Engineering. New York: CRC Press, 2007.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Li 2015] Zengyang Li, Paris Avgeriou, and Peng Liang. “A systematic mapping study on technical debt and its management,” Journal of Systems and Software 101, 193-220, 2015.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999.

[Pronin 2002] Emily Pronin, Daniel Y. Lin, and Lee Ross. “The bias blind spot: Perceptions of bias in self versus others,” Personality and Social Psychology Bulletin 28:3, 369-381, 2002.

[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology 4:1, 25-29, 1920. doi:10.1037/h0071663. The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Weinstein 1996] Neil D. Weinstein and William M. Klein. “Unrealistic Optimism: Present and Future,” Journal of Social and Clinical Psychology 15:1, 1-8, 1996. doi:10.1521/jscp.1996.15.1.1

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Willcocks 2004] L. Willcocks, J. Hindle, D. Feeny, and M. Lacity. “IT and Business Process Outsourcing: The Knowledge Potential,” Information Systems Management 21:3, 7-15, 2004.

