The trap of elegantly stated goals


Rock cairns—also known as rock stacks—in a wilderness area. Some rock stacks in parks and wilderness areas serve a practical purpose as trail markers. But in recent decades, rock stacking has become a fad. Rock stacks aren’t permanent—they use no glue, rings, or fasteners—but their presence does degrade the experience of Nature. The practice continues, in part, because many find the elegance of the structures appealing. As with rock stacks, the elegance of elegantly stated goals has a dark side.

For organizations, an elegantly stated goal is one that anyone can understand, remember, and recite to others. An example for technical debt management is, “We’ll drive technical debt to zero over five years.” Or, “No project is finished if it increases technical debt.” But when the goal relates to a problem that has only messy solutions, stating the goal elegantly risks ensnaring the organization in what I call the trap of elegantly stated goals.

Because elegant goal statements are so memorable and repeatable, elegantly stated goals spread rapidly, especially if they’re even a bit inspirational. But elegantly stated goals become traps when they incorporate overly simplistic views of how to attain those goals.

And that often happens when technical debt is involved. Here are four guidelines that can help organizations avoid the trap of elegantly stated goals for technical debt.

Beware the halo effect

The halo effect [Thorndike 1920] is a cognitive bias [Kahneman 2011]. It systematically skews our assessments of the qualities of a person, product, brand, company, or any entity, really. If our sense of one quality of the entity is positive, we’re more likely to assess as positive other qualities of that entity. The elegance of a goal statement can cause us to regard the goal as more desirable than we would if the goal were stated less elegantly. For example, the statement, “We’ll achieve zero technical debt in five years,” can increase the chances that we’ll believe that such a goal is attainable. Indeed, some might not even question its desirability, let alone its attainability.

When devising goals for technical debt management, beware the halo effect. Always question desirability, taking costs and benefits into account.

Technical debt matters less than its metaphorical interest charges

The metaphorical interest charges (MICs) on technical debt, rather than the metaphorical principal (MPrin) of the debt itself, are what matter. A goal for total technical debt might be more elegant and more simply stated than a goal for technical debts that carry high MICs. But goals for total technical debt can lead to effort spent on debts with low MICs, and those efforts produce little benefit.

When setting goals for technical debt management, pay attention to the MICs. Distinguish between low-MIC and high-MIC technical debt. And keep in mind that MICs can fluctuate: one kind of technical debt can be a low priority at one point in time and a high priority at another.
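
To make the distinction concrete, here’s a minimal sketch in Python of a hypothetical debt register ranked by MICs rather than by MPrin. The item names and dollar figures are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class DebtItem:
        """One class of technical debt in a hypothetical debt register."""
        name: str
        mprin: float          # estimated retirement cost (MPrin), dollars
        mic_per_year: float   # estimated carrying cost (MICs), dollars/year

    def retirement_priority(items):
        """Rank items by the interest they bleed, not by the size of the principal."""
        return sorted(items, key=lambda item: item.mic_per_year, reverse=True)

    register = [
        DebtItem("legacy build scripts", mprin=250_000, mic_per_year=10_000),
        DebtItem("hand-rolled auth module", mprin=40_000, mic_per_year=180_000),
    ]

    for item in retirement_priority(register):
        print(f"{item.name}: MICs ${item.mic_per_year:,.0f}/yr, MPrin ${item.mprin:,.0f}")

Ranked by MPrin, the build scripts would top the list; ranked by MICs, the auth module does, even though it’s far cheaper to retire.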

Controlling technical debt is safer than trying to drive it to zero

Blind application of an elegantly stated goal can have strikingly silly unintended consequences. Keep in mind that the policymaker’s definition of technical debt is any technological element that contributes, through its existence or through its absence, to lower productivity, depressed velocity, or a higher probability of defects.

Consider one example of such unintended consequences for the goal of zero technical debt. An engineer creates an innovative, superior solution to a previously solved problem. Existing assets that incorporate the old solution are instantly outmoded by the innovation, and those assets now carry technical debt. If an enterprise directive mandates zero technical debt, some engineering managers might be tempted to do the unexpected: inhibit the kind of creativity that leads to innovative solutions to previously solved problems. The temptation arises because introducing those new solutions creates exogenous technical debt in existing assets. Therein lies the trap of the elegantly stated goal.

Throttling efforts to find innovative solutions to previously solved problems is one example of an unintended consequence of trying to drive technical debt to zero. Controlling technical debt is probably a safer option than trying to drive it to zero. Before adopting elegantly stated goals for technical debt, it would be wise to be aware of their possible unintended consequences.

Get control of the behaviors that lead to technical debt

Technical debt management efforts typically emphasize debt retirement or engineering process improvement. While both activities are worthwhile, the root causes of technical debt often lie beyond engineering. See, for example, the thread in this blog exploring nontechnical precursors of technical debt.

For example, across-the-board budget cuts can lead to technical debt. This happens because teams suspend efforts that have already created technical artifacts. If those teams lack resources needed for retracting partially implemented capabilities, the partial implementations remain in place. See “How budget depletion leads to technical debt” for a more detailed explanation of this technical debt formation mechanism.

Budget control tactics like across-the-board cuts can be counterproductive. Unless decision makers attend to the technical debt implications of such cuts, the cuts add to future expenses through the MICs on the debt they generate. That can create future needs for budget cuts, which leads to a vicious cycle. To gain control of technical debt, we must alter these budget control tactics and provide teams with the resources they need for retracting partial implementations. That would ensure that budget reductions don’t lead to technical debt formation.

Investments in technical debt retirement and engineering process improvement are worthwhile. But they can be futile unless we first address the nontechnical causes of technical debt. It’s like bailing out a sinking rowboat without first plugging its leaks. The stated goal, “We’ll drive technical debt to zero in five years,” might better be replaced with, “We’ll get control of the behaviors that lead to technical debt within two years.”

Last words

The template known as SMART goals provides one approach to setting goals with limited exposure to the trap of elegantly stated goals. See “Using SMART goals for technical debt reduction” for details.

Achieving control of technical debt—rather than attaining any particular level of technical debt—is a useful goal. Either we’ll control technical debt or technical debt will control us.

References

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.


[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology, 4:1, 25-29, 1920. doi:10.1037/h0071663

The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands. Available: here; Retrieved: December 29, 2017



The resilience error and technical debt


I’ve mentioned the reification error in a previous post (see “Metrics for technical debt management: the basics”), but I haven’t explored its dual, the resilience error. Let me correct that oversight now.

Reification risk is the risk that an error of reasoning known as the reification error might affect decisions—in this case, decisions regarding technical debt. The reification error [Levy 2009] [Gould 1996] (also called the reification fallacy, concretism, or the fallacy of misplaced concreteness [Whitehead 1948]) is an error of reasoning in which we treat an abstraction as if it were a real, concrete, physical thing. Reification is useful in some applications, such as object-oriented programming and design.

Where reification risk is most likely

The future USS Zumwalt (DDG 1000) underway for the first time, conducting at-sea tests and trials in the Atlantic Ocean on December 7, 2015. The first of the Zumwalt class of US Navy guided missile destroyers, it is designed to be stealthy and to be supported by a minimal crew. After the program experienced explosive cost growth, the class was downsized from 32 ships to three, and the complement was increased from 95 to over 140 to reduce capital costs. The three vessels on order now have significantly reduced missions.

As one might expect, the causes of these troubles are much debated. But it’s possible that the resilience error plays a role. Before the first of a new class of ships goes to sea, it exists as an abstraction—a collection of concepts, plans, promises, and technologies, tried and untried. Many elements of this collection have never inter-operated with other elements. The first ship represents the first opportunity to see how all the elements work together. Although troubles often appear even before the ship is fully assembled, anticipating all troubles is extraordinarily difficult.

But when we reify in the domain of logical reasoning, troubles can arise. For example, we can encounter trouble when we think of “measuring” technical debt. Strictly speaking, we cannot measure technical debt. It isn’t a real, physical thing that can be measured. What we can do is estimate the cost of retiring technical debt, but estimates are only approximations. And in the case of technical debt, the approximations are usually fairly rough—they have wide uncertainty bands. That’s one way for trouble to enter the scene. When we regard the estimate as if it were a measurement, we tend to think of it as more certain than it actually is. Technical debt retirement projects then overrun their budgets and schedules, and chaos reigns.

For example, if we think we’ve measured the MPrin of a class of technical debt, rather than that we’ve estimated it, we’re more likely to believe that one measurement will suffice, and that it will be valid for a long time (or indefinitely). On the other hand, if we think we’ve estimated the MPrin of a class of technical debt, we’re more likely to believe that obtaining a second independent estimate would be wise, and that the estimate we do have might not be valid for long. These are just some of the many consequences of the reification error.
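
One way to keep the estimate/measurement distinction in view is to carry MPrin as a range with a date attached, never as a bare number. Here’s a minimal sketch of that idea in Python, with invented figures:

    from dataclasses import dataclass

    @dataclass
    class MPrinEstimate:
        """An estimate of debt retirement cost, carried as a range, not a point."""
        low: float    # optimistic bound, dollars
        high: float   # pessimistic bound, dollars
        as_of: str    # estimates go stale; record when this one was made

        def midpoint(self):
            return (self.low + self.high) / 2.0

    # A "measurement" mindset invites a single number, such as 500_000.
    # An "estimate" mindset forces the uncertainty band into view:
    estimate = MPrinEstimate(low=300_000, high=900_000, as_of="2021-07-01")
    print(f"MPrin roughly ${estimate.midpoint():,.0f} "
          f"(${estimate.low:,.0f} to ${estimate.high:,.0f}, as of {estimate.as_of})")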

The resilience error

If the reification error is risky because it entails regarding an abstraction as a real, physical thing, we might postulate the existence of a resilience error that’s risky because it entails regarding an abstraction as more resilient, pliable, adaptable, or extensible than it actually is.

When we commit the resilience error with respect to an abstraction, we adopt a belief, usually without justification and possibly outside our awareness, that we can change the abstraction without fully investigating the consequences, confident that the familiar properties of the abstraction will carry over, suitably modified, to its new form. Or we assume, incorrectly, that the abstraction will accommodate any changes we make to its environment.

Sometimes we benefit when we modify abstractions; more often, we encounter unintended and unpleasant consequences. Unless we examine our modifications carefully, the implications of a modification might conflict with one or more of the abstraction’s fundamental assumptions.

A metaphorical example of the resilience error

Perhaps a (ahem) concrete example will illustrate. Consider the steel hull of an ocean liner. We can manufacture it more cheaply if we can devise a way to use less steel. One approach is to remove a small portion of the bottom of the hull, so we decide to cut a circular hole one meter in diameter. We send some people into the ship to do the work, and they return with panicky reports of water coming in. But the ship seems fine, so we reject the reports. Even a day later, all seems well. By the end of the second day, though, the trouble is obvious. The ship is sinking.

The problem in our example is that the circular hole violated a fundamental assumption about how ship hulls work: they keep all water out of the ship. We extended the idea of a hull to make it lighter, but our extension violated a fundamental property of hulls, and unintended consequences followed.

A more realistic example of the resilience error

Now for a more realistic example.

Consider the fictitious company Alpha Properties LLC. Alpha manages small condominium associations of 25 to 100 units. Things have been going swimmingly at Alpha, and they’ve decided to expand to handle large condominium associations. Alpha’s financial accounting software has worked well, and their employees have become quite expert in its use. Alpha’s management has also heard good reports about a different software package from other management companies, companies that deal with large client associations. Because of those reports, Alpha decides to use that same package for its larger accounts. But things don’t work out so well.

The software is fine, but the processes Alpha’s staff uses are cumbersome and slow. For example, setting up a new association requires a great deal of manual data entry. For a 100-unit association, client setup isn’t much of a burden; for a 900-unit association, it’s unmanageable.

This is a fine example of the resilience error. When we make this error, we fail to appreciate how an abstraction can encapsulate assumptions from one context when we apply that abstraction in another. In this example, Alpha’s data flow processes are the abstraction, and the context is signing up a new client association. When the context changes (signing up a much larger client), it violates an assumption internal to the abstraction (the data flow process for signing up a new client).
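
A few lines of code can show how such an assumption hides. The sketch below is entirely hypothetical, including the per-unit figure; the point is that the small-client assumption lives silently in the arithmetic:

    def setup_hours(units, minutes_per_unit=15.0):
        """Staff hours to set up one client association by manual data entry.
        The hidden assumption: unit counts will stay small."""
        return units * minutes_per_unit / 60.0

    for size in (100, 900):
        print(f"{size}-unit association: {setup_hours(size):.0f} staff-hours")
    # 100 units ->  25 hours: tolerable.
    # 900 units -> 225 hours: the assumption breaks, and so does the process.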

How the resilience error leads to technical debt

In many cases, the resilience error is at the heart of the causes of technical debt. It works like this. We have an asset that works perfectly well in one set of contexts. We want to apply that asset in a new way, which might (or might not) require some minor extensions. When we try it, we find that the asset incorporates some assumptions about the original context, and one or more of those assumptions are violated by the new context. Scrambling, we find some quick fixes that can get things working again. But those fixes usually aren’t well-designed or easily maintained. The result is a trail of technical debt.

Corporate acquisitions are like that. Before the acquisition, we think we’ll be able to merge the IT operations to save on operating expenses. But when we actually try it, merging proves far more expensive than we imagined. Ah, the resilience error.

What makes this situation so difficult is that often we’re unable to anticipate what assumptions we might be about to violate. That’s why we make the resilience error.

Last words

With physical entities, spotting difficulties in adapting to new applications and new contexts isn’t so hard. For example, we can see in advance that a square peg won’t fit into a round hole. But with abstractions, we can’t always see the problems in advance. Piloting, prototypes, games, and simulations can help us avoid some trouble, but not all of it.

References

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.



[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.



[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].



A policymaker’s definition of technical debt


Shown here are servers like the ones that made this page available to you. Cybersecurity is concerned with defending servers like these, among others.

Policymakers have in mind the best interests of the entire enterprise. They need a definition of technical debt that’s neutral relative to its causes and manifestations. Defining technical debt in terms of what caused it or where it lies in the enterprise could compromise that necessary neutrality.

That neutrality is important because it enables us to recognize technical debt in whatever form it takes. For example, suppose enterprise policy assumes that technical debt lies only in software. And suppose that the root causes of some instances of technical debt are new threats in the cybersecurity environment that render our cyberdefenses obsolete. Then enterprise policy vis-à-vis technical debt is likely to be ineffective. It might lead decision makers to focus too much attention on the software development process and too little on the cybersecurity and threat intelligence processes.

A definition that’s useful for guiding policy

Here’s a cause-neutral and manifestation-neutral definition of technical debt. It’s what I call the policymaker’s definition [Brenner 2017a]:

Technical debt is any technological element that contributes, through its existence or through its absence, to lower productivity or to a higher probability of defects during development, maintenance, or enhancement efforts, or which depresses velocity in some other way. It is therefore something we would like to revise, repair, replace, rewrite, create, or re-engineer for sound engineering reasons. It can be found in—or it can be missing from—software, hardware, processes, procedures, practices, or any associated artifact, acquired by the enterprise or created within it.

Extending the technical debt metaphor just a bit, people often talk about the principal and the interest charges associated with a technical debt. These ideas are analogous to the principal and interest charges associated with a financial debt. They’re convenient concepts, but the parallels between finance and technology aren’t real, and that’s where the trouble lies.

An important extension beyond conventional definitions

There’s one other generalization contained in this definition of technical debt that differs from most other definitions. It’s in the phrase “or missing from.” Our policymaker’s definition doesn’t require that the technical debt item actually exist. That is, the absence of something can constitute technical debt. My favorite example is one due to Ken Pugh, who defines acceptance test debt as “…the nonexistence or nonautomation of acceptance tests.” [Pugh 2010] If we want to include all sources of reduced organizational agility or unnecessary operating expense arising from technical debt, our definition must also address non-existence issues like those Pugh has identified.
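
One practical consequence of that phrase: a debt register built on this definition needs room for items that don’t exist. Here’s a hypothetical sketch in Python:

    from dataclasses import dataclass
    from enum import Enum

    class DebtKind(Enum):
        PRESENT = "existing artifact needs revision or replacement"
        ABSENT = "needed artifact does not exist"

    @dataclass
    class TechDebtItem:
        description: str
        kind: DebtKind

    register = [
        TechDebtItem("auth module predates single sign-on", DebtKind.PRESENT),
        # Pugh's acceptance test debt: the debt is the *absence* of the tests.
        TechDebtItem("no automated acceptance tests for billing", DebtKind.ABSENT),
    ]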

The definition above is workable for systems of all kinds, hardware as well as software. Indeed, it applies to almost anything that takes a technological form, including business plans, legislation, procedures, and microprocessor designs.

References

[Brenner 2017a] Richard Brenner. “A Policy Maker’s Definition of Technical Debt,” Cutter Consortium Executive Update, February 27, 2017.



[Pugh 2010] Ken Pugh. “The Risks of Acceptance Test Debt,” Cutter Business Technology Journal, October 2010, 25-29.




Technical debt in a rail system


Acela Express rounds a curve in Connecticut. Shown are the trailing power car of a southbound Acela Express and the front of a northbound Metro-North railcar.

Most definitions of technical debt require that the asset bearing the debt be software. From the policymaker’s perspective, this requirement is rather limiting. So for the purposes of this blog, I define technical debt as any property of a technological asset that we would like to revise, replace, or create, and which limits the ability of the enterprise to gain or maintain market dominance. (See “A policymaker’s definition of technical debt.”)

An example from the railroad industry

In the United States, the highest-speed rail line is Acela Express, which travels the Northeast Corridor between Boston and Washington, D.C. Parts of the right-of-way, track, and catenary it uses were built for legacy, lower-speed service. That’s why Acela trains cannot operate at their highest possible speeds [Maloney 2000]. On the 231-mile section from Boston to New York’s Penn Station, Acela achieves an average speed of only 63 mph (101 km/h), even though the trains can operate safely on straight track at 150 mph (240 km/h). Yet Acela still manages to capture a 54% share of the combined air and rail market between these two cities.

How Acela’s technical debt slows its trains

That 54% share might be higher still if not for technical debt. To compensate for centrifugal forces as Acela rounds curves, its passenger cars tilt their passenger spaces. The tilt enables the train to round curves at higher speeds than would otherwise be comfortable for passengers; in effect, the cars “lean into” the curves, just as a running athlete leans when rounding a turn. Although the cars could tilt by as much as 6.8º, the adjacent set of tracks is too close to permit this without risk of collision with trains on those tracks. The maximum permissible tilt in this system is therefore 4.2º, which limits the speed at which the trains can round curves while keeping passengers comfortable. The technical debt in the tracks Acela uses thus prevents Acela from offering a service that would be more competitive with alternative transport modes, especially airlines.
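
A rough back-of-the-envelope calculation shows what the tilt restriction costs. A passenger rounding a curve of radius r at speed v, in a car tilted by angle θ, feels lateral acceleration of about v²/r · cos θ - g · sin θ. In the sketch below, the curve radius and the passenger comfort threshold are assumed values chosen only for illustration:

    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def max_curve_speed(radius_m, tilt_deg, comfort_limit=1.0):
        """Highest speed (m/s) keeping felt lateral acceleration at or below
        comfort_limit (m/s^2), given a car-body tilt of tilt_deg degrees."""
        tilt = math.radians(tilt_deg)
        v_squared = radius_m * (comfort_limit + G * math.sin(tilt)) / math.cos(tilt)
        return math.sqrt(v_squared)

    RADIUS = 800.0  # meters; an assumed, fairly tight curve
    for tilt_deg in (6.8, 4.2):
        v = max_curve_speed(RADIUS, tilt_deg)
        print(f"tilt {tilt_deg} deg: about {v * 3.6:.0f} km/h on this curve")
    # Under these assumptions, 6.8 deg allows roughly 150 km/h,
    # while 4.2 deg allows only about 134 km/h.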

Last words

In August 2016, Amtrak announced that it would be upgrading its trainsets and tracks to exploit new technologies, including active tilt technology. All existing trainsets are due to be replaced in 2021-22.

References


[Maloney 2000] Brenna Maloney and Don Phillips. “All Aboard AMTRAK’s Acela,” The Washington Post, November 30, 2000.

Available: here; Retrieved April 18, 2017.


