Refactoring for policymakers

Last updated on July 16th, 2021 at 05:00 pm

An LED traffic light. This type of signal is more efficient and more cheaply and easily maintained than incandescent signals. But in terms of traffic control, LED signals and incandescent signals are equivalent.

Policymakers whose areas of expertise overlap minimally with software engineering might be at a slight disadvantage when the conversation turns to refactoring. That can happen when the matter at hand involves software assets that carry technical debt. The engineers are likely to argue for resources and time to be set aside for refactoring. Other “re” words are also likely to pop up: restructuring, re-architecting, rewriting, replacing, repairing, retiring, retreating, and reengineering are examples. Some of these are clear—and clearly unaffordable—but some are less clear. What’s needed is a lucid explanation of refactoring for policymakers.

To refactor [1] a software asset is to improve its internal structure without altering its external behavior [Fowler 1999]. The improvements usually relate to maintainability or extensibility, and for software, that usually requires improving the readability of its code (for engineers), though it might entail some minor changes of other kinds. Instance by instance, these improvements are usually small in scale. Even so, a refactoring effort might involve small changes throughout the entire asset or throughout an entire suite of assets.
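
For readers who want to see what a refactoring looks like in code, here's a minimal sketch of one common move, extracting a function, in the spirit of Fowler's catalog. The invoice example and its names are invented for illustration; the point is only that the two versions behave identically from the outside.

# Hypothetical "before": the discount rule is tangled into the invoice loop.
def invoice_total_before(items, customer):
    total = 0.0
    for price, qty in items:
        line = price * qty
        if customer.get("loyalty_years", 0) >= 5:
            line *= 0.95  # long-time customers get 5% off
        total += line
    return total

# Hypothetical "after": the same rule lives in a named, separately testable
# function. External behavior is unchanged; only internal structure improves.
def loyalty_discount(line, customer):
    if customer.get("loyalty_years", 0) >= 5:
        return line * 0.95
    return line

def invoice_total_after(items, customer):
    return sum(loyalty_discount(p * q, customer) for p, q in items)

An engineer maintaining the second version can test and change the discount rule in one place; a user of the system sees no difference at all.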

Although we usually regard refactoring as a software-related activity, refactoring, like technical debt, is a concept that can apply to any technological asset. To render the refactoring concept useful for assets other than software, we must be a bit more precise about the effects of the changes involved in refactoring.

A more general definition of refactoring

Refactoring an asset inherently changes that asset; what distinguishes refactoring from other kinds of changes is the observability of the changes. For the definition of refactoring used in software engineering, the changes are observable only to the software engineers who maintain or enhance the asset.

Here’s a definition of refactoring that’s somewhat more widely applicable:

To refactor a technological asset is to apply a series of small, behavior-preserving changes to improve the structure of the asset in ways that have effects that aren’t ordinarily observable externally. When effects are observable externally, they’re very specific, usually related to attributes such as quality and usability.

For example, after a municipality replaces incandescent traffic lights with LED traffic lights, there’s no effect on traffic control. To the untrained eye, the change isn’t noticeable. But those responsible for signal maintenance or for monitoring operating costs will notice significant advantages. With respect to traffic flow, we can regard the change to LED traffic lights as a refactoring of the traffic control system.

Refactoring in manufactured consumer items

Refactoring in manufactured consumer items can be more difficult to recognize, because the useful life of the item so often ends while the item is still in the hands of the consumer. For example, we might ask how to refactor a certain subassembly of an automobile that’s already in service. Some writers have identified the vehicle recall as a kind of refactoring [Shroyer 2016]. But I prefer to regard successive models of manufactured items as containing refactorings of earlier models.

For example, in robot vacuum cleaners, the iRobot Roomba is now available in a ninth-generation “series,” though the exact generation count depends on what one counts as the first generation. In laptop computers, most manufacturers’ offerings change from one model to the next version of that model. Some of these changes are more significant than what we might consider refactoring, such as Apple’s removal of the MagSafe power connector [Spence 2018]. For laptops, a change more likely to qualify as refactoring would be a switch to a slightly more efficient internal fan.

Other applications of the refactoring concept

The refactoring concept can also apply to processes. Indeed, failure to refactor business processes is sometimes a cause of needless complexity, high maintenance costs, and other difficulties. Some of these difficulties can appear in technological assets that must interact with processes that need refactoring [Distante 2014]. The refactoring concept might even find use in organizational restructuring and debt restructuring.

Endnote

[1] One might wonder why this process is called refactoring. Martin Fowler, the author of the classic 1999 book about refactoring, has investigated the etymology of the word and concludes that it likely arose in the Forth and Smalltalk communities in the 1980s [Fowler 2003].

References

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications,” Electronic Commerce Research 14:4, 2014, 497-529. Retrieved August 23, 2019.

[Fowler 1999] Martin Fowler, with Kent Beck, John Brant, William Opdyke, and Don Roberts; foreword by Erich Gamma. Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional, first edition, 1999.

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, October 1, 2003. Retrieved January 2, 2016.

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016. Retrieved August 22, 2019.

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018. Retrieved August 22, 2019.


Retiring technical debt can be a super wicked problem

Last updated on July 10th, 2021 at 10:51 am

In my last post I provided a list of attributes of wicked problems [Rittel 1973]. I included the reasons why I feel that designing technical debt retirement projects can be wicked problems. As a review, here are the attributes of wicked problems as Rittel and Webber see them, rephrased for brevity:

  1. Wicked problems have no definitive formulation
  2. Wicked problems have no stopping rule
  3. Solutions to wicked problems aren’t true-or-false; they’re good-or-bad
  4. There is no immediate ultimate test of a solution to a wicked problem
  5. Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by
    trial-and-error, every attempt counts significantly
  6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions; nor is there a
    well-described set of permissible operations that we can incorporate into the plan
  7. Every wicked problem is essentially unique
  8. We can regard every wicked problem as a symptom of another problem
  9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation
    determines the nature of the problem’s resolution
  10. The planner (or designer) has no right to be wrong

Four properties of super wicked problems

We can regard a subset of wicked problems as super wicked [Levin 2012]. Levin, et al. list the following four properties of super wicked problems. With each one, I’ve added reasons why planning a technical debt retirement project can qualify as a super wicked problem.

Time is running out

Super wicked problems have inherent timescales. For example, many believe that climate change will become irreversible within 30 years if current practices continue.

Technical debt retirement can have an inherent timescale. For example, Microsoft ended mainstream support for Windows 7 in January 2015. At that time, computers running Windows 7 incurred a technical debt. Yet by September 2018, 46.7% of computers running Windows were still running Windows 7, an increase of 0.6 percentage points over the previous month [Keizer 2018]. At this writing, extended support will end in January 2020. That confers a timescale on this kind of technical debt.

When time runs out for solving wicked problems, the consequences can be severe. With respect to Windows 7, the consequences go beyond forcing conversion to Windows 10. Some applications running on those machines might be compatible with Windows 10, but others might require updates or replacement. And all users of converted machines must learn how to use Windows 10 and any updated or replaced applications. So there’s also a training issue, a learning curve, and a period of elevated user error rates.

Letting a time-boxed technical debt remain in place can be financially dangerous. As the retirement window closes, the cost of debt retirement concentrates on a declining number of fiscal quarters. If retirement costs are high enough, the impact of debt retirement on net income can be severe and negative. For enterprises whose securities are publicly traded, this effect can be costly for shareholders. At this writing there are only about five fiscal quarters remaining for Windows 10 conversions. For other technical debts, the number of fiscal quarters available for diluting the costs of retirement might be more—or less.
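
A rough, invented calculation illustrates the concentration effect; the cost and income figures below are hypothetical, chosen only to show how the per-quarter impact grows as the window closes.

# Hypothetical figures: a fixed debt-retirement cost spread over the fiscal
# quarters remaining before the deadline.
retirement_cost = 12.0   # $M total cost to retire the debt (invented)
quarterly_income = 10.0  # $M net income per quarter (invented)
for quarters_left in (12, 8, 5, 2, 1):
    per_quarter = retirement_cost / quarters_left
    share = per_quarter / quarterly_income
    print(f"{quarters_left:2d} quarters left: ${per_quarter:4.1f}M/quarter "
          f"({share:.0%} of quarterly net income)")

The same total cost that would be a modest drag spread over twelve quarters becomes a dominant share of net income when only one or two quarters remain.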

Those who cause the problem also advocate a solution
Two pilots line up their F/A-22 Raptor behind a tanker. When Hurricane Michael made landfall on October 10, 2018, it passed over Tyndall Air Force Base. Tyndall has responsibility for air dominance training for F-22s. As Hurricane Michael approached, 33 of the 55 F-22s at Tyndall were repositioned to Wright-Patterson Air Force Base in Ohio [Phillips 2018a]. The remaining aircraft at Tyndall were undergoing maintenance and weren’t operational [Gabriel 2018]. The storm damaged some of them, but they’re believed to be repairable. U.S. Air Force photo by TSgt Ben Bloker, courtesy Wikipedia.

Because of climate change [Cook 2016], increases in storm intensity and frequency are likely. Air bases in coastal regions are at risk [Phillips 2018b]. They now constitute a technical debt. Relocating them could be a wicked problem, and possibly a super wicked problem. But if federal policies continue to fail to address climate change, they could prevent relocation. If that happens, the policies themselves represent a technical debt.

The phrase “seek to provide a solution” might be somewhat tactful. I expect that some super wicked problems have the property that those who cause the problem exert some degree of control over what kinds of solutions are acceptable, or even discussible. In many cases, this represents a conflict of interest that can prevent the organization from deploying the more effective options.

That conflict of interest is certainly present in the context of many technical debt retirement projects. Technical debt formation and persistence are due, in part, to a failure to commit resources to retiring it, or, at least, to inhibiting its formation. That failure is the responsibility of those in leadership roles in the enterprise. Typically, these are the same people who must decide to commit resources to retire technical debt in the future.

The central authority needed to address the problem is weak or nonexistent

Again, I find this description unnecessarily limiting. I would prefer a phrasing such as, “The central authority, for whatever reason, chooses to exert, or is unable to exert, its authority in furtherance of solution, or even investigation.” In other words, the central authority need not be weak for it to be a source of difficulty in addressing the super wicked problem. It need only choose not to act. This can happen when those who cause the problem are the people who constitute the central authority, or they capture the central authority, or they capture the function to which the central authority has delegated responsibility for solution.

With respect to technical debt retirement, consider this scenario. At AMUFC, A Made-Up Fictitious Corporation, the sales and marketing functions have repeatedly struggled with the engineering function for shares of budget resources. Engineering has argued repeatedly, and unsuccessfully, that it needs additional resources to address the technical debt that has accumulated in several products. But the CEO is a former VP Sales, and a close friend of the CFO. Together, they have always decided to defer technical debt retirement in favor of new products and enhancements favored by the VP Marketing, by customers, and by investors.

Scenarios like this are common. Enterprise leadership is strong, but not inclined to address the technical debt retirement issue.

Partly as a result, policy responses discount the future irrationally

Irrational discounting of future costs and benefits occurs when policies are deployed that give too much emphasis to producing short-term benefits and/or to avoiding short-term costs or inconveniences. Benefits are pulled in from the future towards the present; costs and inconveniences are pushed out toward the future and deferred. One form of this discounting scheme—one of many—is hyperbolic discounting.
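
To make the shape of the bias concrete, here's a small sketch comparing standard exponential discounting with a hyperbolic discount curve. The parameters are arbitrary; what matters is that the hyperbolic curve drops steeply for near-term delays and then flattens, which is what produces the strong preference for immediate benefits and deferred costs.

import math
def exponential_discount(value, years, rate=0.25):
    # constant-rate discounting: each additional year is discounted by the same factor
    return value * math.exp(-rate * years)
def hyperbolic_discount(value, years, k=1.0):
    # hyperbolic discounting: steep for short delays, nearly flat for long ones
    return value / (1.0 + k * years)
for years in (0, 1, 2, 5, 10, 20):
    print(f"year {years:2d}: exponential {exponential_discount(100, years):6.1f}, "
          f"hyperbolic {hyperbolic_discount(100, years):6.1f}")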

This tendency is one way of distracting attention from the actual problem. It is the principal tactic that enables the persistence of technical debt, and the means by which enterprises repeatedly defer attention to the problem of retiring technical debt.

Both the problem of managing technical debt and the problem of designing technical debt retirement projects exhibit all of these properties to some degree. It’s likely, in my view, that these problems are super wicked problems.

Intervention strategies for super wicked problems

Levin, et al., recommend four distinct strategies for resolving super wicked problems [Levin 2012]. They are all approaches to devising policies that are difficult to alter, thus committing the organization to a particular path forward.

Lock-in

Lock-in is usually regarded as dysfunctional adherence to a strategy or course of action despite the existence of superior alternatives [Brenner 2011]. It occurs when a policy confers some kind of immediate benefit on a subset of the population. If that benefit is significant, if the population subset would be harmed by alterations of the policy that remove the benefit, and if the subset has enough political power to defend the benefit, the policy will be “locked in” and thus difficult to change. Levin, et al., suggest that this phenomenon can serve a beneficial purpose by protecting a constructive policy, thus preventing its abandonment.

Most technical debt retirement efforts focus solely on retiring the debt. All (or most) of the benefit appears in the form of increased engineering productivity, decreased sources of frustration for engineers, or increased engineering agility. Benefits for non-engineering stakeholders tend to be indirect. To establish policies that exploit lock-in we must craft them so that they provide ongoing, direct benefit to the most politically powerful stakeholders. For example, addressing first the forms of technical debt that are most likely to lead to product innovations that non-engineering stakeholders would value highly could cause those stakeholders to favor further technical debt retirement efforts.

Positive feedback

Exploiting lock-in makes policies durable when people or organizations already supporting the policy derive some kind of increased benefit, leading others not yet supporting or covered by the policy to decide to support it. This mechanism is sometimes known as a “network effect.” When network effects are present, the value of a product or service increases as the size of the population using it increases [Shapiro 1998].

To exploit network effects when devising technical debt retirement efforts, focus on retiring the kinds of technical debts that confer benefits on stakeholders of platform assets. A platform asset is an asset that supports multiple other assets. Examples: an application development tool suite, a product line architecture, or an enterprise data network. Platform assets that support collaboration communities are more likely to generate network effects.

Increasing Returns

Policies and interventions that enable increasing returns to the population are more likely to be durable than those that offer steady returns. Because people adapt to steady levels of stimuli, policies that produce a change in the context only during the period immediately following initial adoption of those policies are less likely to maintain popular support than are policies that continue to provide increasing returns as long as they’re in place.

But among policies that provide increasing returns, Levin, et al., identify two types. Type I policies, which are less durable, confer their benefits on an existing population of supporters. They don’t cause others to become supporters. Type II policies also confer benefits on supporters, but they do cause others to become supporters. They are thus far more durable than are Type I, because they foster growth in the supporting population.

Framing technical debt retirement projects as individual projects with the objective of retiring a specified kind of technical debt is likely to lead to the enterprise population viewing the effort as a Type I policy at best. But framing each project as a phase of a longer-term effort could position the larger effort as a Type II policy, if the larger effort affects increasing portions of the enterprise population.

Self-reinforcing

Self-reinforcing policies create a dynamic that makes them more durable. Reinforcement can come about for two reasons. It can be a result of increases in the benefits the policy generates, or it can result from increases in the cost of rescinding the policy. In some cases, reinforcement can result from a combination of both effects. As with the strategy of Increasing Returns, there are two types of self-reinforcing policies. Type I self-reinforcing policies focus on maintaining support for the policy within the subset of the population consisting of its original supporters. In analogy with Type II Increasing Returns policies, Type II Self-reinforcing policies affect both the original supporting population and portions of the population not yet affected directly by the policy.

To exploit self-reinforcement, technical debt retirement programs must emphasize retiring debts that have curtailed organizational agility in recognizable ways, or which have prevented introduction of capabilities that the population values. Communicating these objectives is an important part of the program, because self-reinforcing popular support is possible only if the population understands the strategy and how it benefits the enterprise.

Last words

Because of the essential uniqueness of any wicked problem (Proposition 7 of Rittel and Webber), it is futile to attempt to apply as a template any retirement program that worked for some other organization, or for some other portion of a given organization at an earlier time with a different form of technical debt. But these four strategies, implemented carefully and communicated widely and effectively within the organization, can build organizational commitment to a long-term technical debt retirement program, even though retiring technical debt may be a super wicked problem.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Retrieved October 23, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Retrieved October 23, 2018.

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications,” Electronic Commerce Research 14:4, 2014, 497-529. Retrieved August 23, 2019.

[Fowler 1999] Martin Fowler, with Kent Beck, John Brant, William Opdyke, and Don Roberts; foreword by Erich Gamma. Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional, first edition, 1999.

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, October 1, 2003. Retrieved January 2, 2016.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Retrieved October 23, 2018.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Retrieved October 18, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152. Retrieved October 17, 2018.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Retrieved October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Retrieved October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Retrieved October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press, 1998.

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016. Retrieved August 22, 2019.

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018. Retrieved August 22, 2019.


Retiring technical debt can be a wicked problem

Last updated on July 16th, 2021 at 07:38 pm

Prototypes of President Trump’s “border wall.” Building the wall is an example of a wicked problem. Building prototypes of short segments of the wall is a tame problem. But these are just prototypes of short segments of the wall. They aren’t prototypes of the project.

Prototypes of the wall itself don’t demonstrate the process for taking private property, or how to build construction access roads, or the effects on wildlife, or how the government of Mexico will respond, or how to repair the wall when drug gangs destroy sections of it in isolated regions, or even the effectiveness of the wall. Prototyping works well for tame problems. It helps us project how the finished project will perform, and how difficult completing the project will be. But for wicked problems, prototyping is of limited value. As Rittel observes, prototyping can make the problem worse.

Photo by Mani Albrecht, U.S. Customs and Border Protection Office of Public Affairs, Visual Communications Division.

The theory of wicked problems originated with Horst Rittel in the mid-1960s. He was addressing “that class of problems which are ill-formulated, where the information is confusing, where there are many decision makers and clients with conflicting values, and where the ramifications in the whole system are confusing.” [Churchman 1967] The term wicked isn’t a moral judgment. It suggests the mischievous streak in these problems. Many of them have the property that proposed solutions can lead to conditions even more problematic than the original situation. Is it just me, or are you also thinking, “Ah, technical debt”? In this post, I suggest that retiring technical debt can be a wicked problem. I’ll show how wickedness explains many of the difficulties we associate with retiring forms of technical debt that involve many stakeholders, assets, revenue streams, policies, or strategies.

Introduction

Horst Rittel was a design theorist at the University of California at Berkeley. His interest in wicked problems came about because designers must deal with the interactions between architecture and politics. In today’s technology-dependent enterprises, analogous problems arise when we retire technical debt. When we do, we affect multiple sets of quasi-independent stakeholders.

Applicability to the technical debt problem

In the years since Rittel originated the wicked problem concept, others have extended it. These extensions have led some to regard the concept as inflated and less than useful. But concepts that are less than useful rarely get extended, so I take the extensions as an indicator of the concept’s worth. The focus of this post, then, is applying Rittel’s version of wicked problems to the problem of designing a complex technical debt retirement project.

The wicked problem concept has propagated mostly in the realm of public policy and social planning. Certainly wicked problems abound there. Poverty, crime control, and climate change are examples. But I know of no attempt to explore the wickedness of retiring technical debt in large enterprises. Have a look below and see what you think.

Rittel defines a problem as the discrepancy between the current state of affairs and the “state as it ought to be.” For the purposes of technical debt retirement planning, the state as it ought to be might at times be a bit ambitious. So I take the objective of a technical debt retirement project to be an attempt to resolve the discrepancy between the current state of affairs and some other state that’s more desirable. For the present purpose, then, the problem is designing a technical debt retirement project that converts the current state of an asset to a more desirable state that might still contain technical debt in some form. But in that new state, the asset is in a better configuration.

A note on super-wicked problems

Actually, there is a subset of wicked problems—super-wicked problems—that I think might include some technical debt retirement problems. I address them in the post “Retiring technical debt can be a super wicked problem.”

For now, though, let’s examine the properties of wicked problems. Let’s see how well they match up with the problem of designing technical debt retirement projects.

Attributes of wicked problems

Rittel’s summary of the attributes of wicked problems [Rittel 1973] convinced me that major technical debt retirement projects present wicked problems. Here are those attributes. In what follows, I use Rittel’s term tame problem to refer to a problem that isn’t wicked. (See also [Kreuter 2004])

1. [No Definitive Formulation] Wicked problems have no definitive formulation

For any given tame problem, it’s possible to state it in such a way that it provides the problem-solver all information necessary to solve it. That’s what definitive formulation means. For wicked problems, on the other hand, our understanding of the problem depends on the solution we’re considering. Each candidate solution might potentially require its own understanding of the problem.

When designing a technical debt retirement project, we must fully grasp the impact of the effort on all activities in the enterprise. Each proposed project plan has its own schedule and risk profile. Each proposed project plan affects enterprise activities in its own way. In principle, each candidate approach to the effort affects a different portfolio of enterprise assets in its own unique order. Because examining all possible candidate project plans is impractical, choosing a project plan by seeking an optimal set of effects is also impractical. By the time you’re ready to execute a given project plan, the data supporting your decision might be obsolete.

2. [No Stopping Rule] Wicked problems have no stopping rule

For any given tame problem, solutions have “stopping rules”: signatures that indicate clearly that a candidate is indeed a solution. For example, in a chess problem to be solved in N moves, N and checkmate provide a stopping rule. We know how to count to N, and the position of checkmate is well defined.

Wicked problems have no stopping rule.

When planning a major technical debt retirement project, we must determine the attributes of the project. The attributes include a task breakdown, a sequence for performing the tasks, a resource array including both human and non-human resources, a risk plan including risk mitigations and risk responses, a revenue stream interruption schedule, and so on. For each such plan, we can estimate the direct and indirect costs to the enterprise. We can project the effects of the plan on market share for every affected product or service. Every plan has these attributes. When we compute them for a given candidate plan, the result doesn’t reveal that we’ve found “the solution.” We will have found only an estimate for that given solution. What we learn by doing this doesn’t reveal whether or not a “better” solution exists.

There is no indicator contained in any given candidate solution that tells us we can “stop” solving the problem. Most often, we just stop when we run out of time for finding solutions. In some cases, we stop when we find just one solution.

3. [Solutions Are Good/Bad] Solutions to wicked problems aren’t true-or-false, but good-or-bad

The criteria for finding solutions to tame problems are unambiguous. For example, if a candidate function satisfies a differential equation, it’s a solution to the equation. The volume of concrete required to pave a section of roadway is a single number, determined by computing the area of roadway and multiplying by the thickness of the roadbed, and subtracting the volume of any reinforcing steel.

The solutions to wicked problems have no such clarity. When evaluating a candidate project plan for retiring a technical debt, we can estimate its cost, the time required, interruptions in revenue streams, and the timing of resource requirements. But determining how “good” that is might be difficult. Much depends on what other demands there might be for those resources or funds. Much also depends on the political power of the people making those demands. No single number measures that.

4. [No Test of Solutions] There is no immediate ultimate test of a solution to a wicked problem

To test a candidate solution to a tame problem, the problem-solving team determines whether the solution meets the requirements set in the tame problem statement. The consequences of implementing the solution are all evident to the problem-solving team. The team has everything it needs to judge the success of the solution.

Not so with wicked problems. Any candidate solution to a wicked problem generates waves of consequences. As these waves propagate, some of the problem’s stakeholders might find the solution unsatisfactory. They’ll report their objections, possibly through politically powerful people or organizations. Because the consequences can be so diverse, the team can’t anticipate all of them. In some cases, the team might have difficulty understanding how the troubles that plague some stakeholders were actually related to the implemented solution. Some undesirable consequences can be far more harmful than any intended benefits are helpful. In other cases, the undesirable consequences might remain undiscovered until long after the solution is in use and operational.

When designing a technical debt retirement project, it’s necessary to determine everything that must be changed, what resources must be assembled to do the work, and what processes might be interrupted, when and for how long. Only rarely, if ever, can we determine all of that with certainty in advance. For that reason, determining that the design of the project is “correct” isn’t possible, except perhaps in the probabilistic sense. We never really know in advance that we’ve found a solution. Most of the time, after execution begins, we must make adjustments along the way, in real time.

5. [No Trial-and-error] Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly

When solving tame problems, we can try candidate solutions without incurring significant penalties. That is, trying a solution might require some effort, and therefore incur a cost. But it doesn’t otherwise affect the ability to find other solutions. Wicked problems are different. Every attempt to “try” a solution leaves traces that can potentially make further solution attempts more difficult, costly, or risky than they would have been if we hadn’t tried that solution. These traces of past solution attempts might also impose constraints on future solutions. Those constraints can effectively transform the wicked problem into a different wicked problem. This property makes trial-and-error approaches undesirable and possibly infeasible. Indices of such undesirability are the half-lives of the traces of attempts to address the problem. A long half-life might mean that the problem solver has only one shot at addressing the problem.

When designing a technical debt retirement project, we sometimes try to “pilot” a potential approach to determine difficulty, costs, feasibility, political issues, or risk profiles. Even when we can revert the asset to its former state after a pilot is completed or suspended, the consequences for stakeholders and for stakeholder operations might not be reversible. When we next try another “pilot,” or perhaps a fully committed retirement project, these stakeholders might be significantly less willing to cooperate. Every attempted solution can thus leave political or financial traces like these, making future attempts riskier and more challenging.

6. [Solutions Are Not Describable] Wicked problems don’t have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that we can incorporate into the plan

In devising solutions to tame problems, one common approach entails first gathering the full set of possibilities. Next, we screen them according to a set of favorability criteria. Reducing the field of possibilities is a useful strategy for finding optimal or acceptable solutions to tame problems.

Wicked problems defy such strategies. Gathering the full set of possible solutions to a given wicked problem can be a wicked problem in itself. We cannot parameterize the set of possible solutions to a wicked problem. We cannot define a finite set of attributes that fully covers the solution space. For these reasons, we can never be certain that the set of candidate solutions is complete.

Candidate designs for technical debt retirement projects present this same quality. We have a dizzying array of choices. In what order should we retire different kinds of technical debts? In what order should we address the debts different assets bear? Can we “refinance” portions of the debt to intermediate forms [Zablah 2015]? What kinds of refactoring should we perform and when? Because options like these are neither denumerable nor parameterizable, we cannot know whether a given set of candidate project designs is complete.

7. [Essential Uniqueness] Every wicked problem is essentially unique

Among tame problems, we can define classes or categories of problems that share a solution method. That is, using the method associated with a given class, we can solve all problems in that class. For example, we can solve every second-order linear differential equation with constant coefficients by the same method.

Even though we can define classes of wicked problems whose members are in some sense similar, that similarity doesn’t enable us to find a unified solution strategy that works for every member of the class.

So it is with designing technical debt retirement projects. Certainly, the collection of all technical debt retirement projects is a class. But the problem of designing a given retirement project is essentially unique. What “works” for one project in one enterprise in one fiscal year probably won’t work for another project in another enterprise in another fiscal year. It might not even work for another project in that same enterprise in that same fiscal year. Elements of the solution for one project might be useful for another project. But even then, we might need to adapt them to the conditions of that next project.

This essential uniqueness property of technical debt retirement projects collides with a common pattern decision makers use when chartering major efforts. That pattern is reliance on consultants, employees, or contractors who “have demonstrated success and experience with this kind of work.” Because each technical debt retirement project is essentially unique, relying on a history of demonstrated success is a much less viable strategy than it would be with tame problems. Decision makers would do well to keep this in mind when they seek approaches, leaders, and staff for major technical debt retirement efforts: no major technical debt retirement project is like any other.

8. [Problems as Symptoms] We can regard every wicked problem as a symptom of another problem

With tame problems and wicked ones alike, we typically begin the search for solutions by inquiring as to the cause of the current condition. When we find the cause or causes and remove them, we usually find a new problem underlying them. For wicked problems, what we regarded initially as the problem thereby becomes a symptom of a newly recognized underlying problem. By repeating this process, we escalate the “level” of the problem we’re addressing. Higher-level problems do tend to be more difficult to resolve, but addressing symptoms, though easier, isn’t a path to ultimate resolution.

Rittel also observes that incremental approaches to resolving wicked problems can be self-defeating. The difficulty arises from the traces left behind by incrementalism, as described in the discussion of the unworkability of trial-and-error strategies. Rittel provides the example of the increase in difficulty of changing processes after we automate them.

To regard the wicked problem of designing a technical debt retirement project as a symptom of a higher-level wicked problem, we must be willing to regard as problems the very things that make the technical debt retirement project design effort a wicked problem. That is, the processes that lead to formation of technical debt, or that enhance its persistence, are themselves wicked problems. For example, one might inquire about how to change the enterprise culture so as to reduce the incidence of technical debt contagion. To undertake major technical debt retirement efforts without first determining what can be done to limit technical debt formation or persistence due to contagion or due to other processes, might be unwise.

9. [No Controlled Experiments] The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution

When addressing tame problems, problem-solving teams can often perform controlled experiments. The general framework of these experiments is as follows. The team forms a hypothesis H as to the cause of the problem, conjecturing a solution. Then assuming H is correct, and given a set of conditions C, they deduce the consequences E that must follow. If any elements of E don’t occur, then H is incorrect. The process repeats until an H’ is found that provides all elements of E. H’ then provides the basis of a solution. Essentially, this is the scientific method.
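
Spelled out as a procedure, the loop looks roughly like the sketch below. The sketch assumes we can hold the conditions C fixed, enumerate candidate hypotheses, and observe the consequences E objectively; all of the names are hypothetical placeholders.

# A minimal sketch of the controlled-experiment loop, under the assumptions
# named above. Each hypothesis object is assumed to expose a predict() method.
def find_surviving_hypothesis(hypotheses, conditions, observe):
    for h in hypotheses:
        predicted = h.predict(conditions)   # the consequences E implied by H under C
        observed = observe(conditions)      # run the experiment under conditions C
        if all(e in observed for e in predicted):
            return h                        # every element of E occurred; H survives
    return None                             # no candidate hypothesis survived

Each of those assumptions is exactly what Rittel says we lose with wicked problems.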

With wicked problems, the method fails in numerous ways. Foremost among these failure modes is the inability to control C. That is, the interventions that might be required to set C to a desired C₀ tend to be impossible. Moreover, even if we can establish C₀, the experiments that determine whether E is observed tend to leave the traces discussed in Proposition 5 [No Trial-and-error]. Finally, determining the presence or absence of the elements of E is usually subjective.

When planning enterprise-scale technical debt retirement projects, as with many projects of similar scale, we believe that we can benefit from running a pilot of our proposed plan, to determine its fitness. These trials are sometimes called “proof of concept” exercises. However, because we cannot control the conditions in which we execute the pilot, we cannot be confident that our interpretation of the results of the pilot will apply to the actual project. Moreover, a small-scale pilot cannot generate some of the effects we most want to observe because they occur only at full scale. These effects include staff shortages, resource contention, and revenue interruption incidents.

10. [100% Accountability] The planner (or designer) has no right to be wrong

In solving tame problems, solvers can experiment with proposed solutions. They make conjectures about what might work, and gather the results of trials to determine how to improve their conjectured solutions. There is no social or legal penalty for failed conjectures.

In solving wicked problems, experiments don’t exist. Any trial solution is a real solution, with real effects on stakeholders and later, real effects on the problem solvers themselves. Problem solvers are accountable for the undesirable consequences of each solution, whether it’s a trial or not.

In planning a technical debt retirement project, any attempt to gather data about how the approach would affect the enterprise could potentially have real, lasting, deleterious effects. The project bears the costs associated with these consequences, if not officially and financially, then politically. The politics of failure can lead to serious consequences for the problem solvers. Any approach that the team deploys, on any scale no matter how small, can potentially create financial problems for the enterprise, and political problems for anyone associated with the technical debt retirement project.

Last words

The fit between wicked problems and technical debt retirement project design looks pretty good to me. But the research on a subset of wicked problems—super wicked problems—is also intriguing. I’ll look at that in my next post. After that, we’ll be ready to examine which approaches to retiring technical debt take these matters into account.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Retrieved October 23, 2018.

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142. Retrieved October 16, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Retrieved October 23, 2018.

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications,” Electronic Commerce Research 14:4, 2014, 497-529. Retrieved August 23, 2019.

[Fowler 1999] Martin Fowler, with Kent Beck, John Brant, William Opdyke, and Don Roberts; foreword by Erich Gamma. Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional, first edition, 1999.

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, October 1, 2003. Retrieved January 2, 2016.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Retrieved October 23, 2018.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Retrieved October 18, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Retrieved October 26, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152. Retrieved October 17, 2018.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Retrieved October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Retrieved October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Retrieved October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press, 1998.

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016. Retrieved August 22, 2019.

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018. Retrieved August 22, 2019.

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt,” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015. Retrieved February 13, 2016.


The resilience error and technical debt

Last updated on July 10th, 2021 at 07:46 am

I’ve mentioned the reification error in a previous post (see “Metrics for technical debt management: the basics”), but I haven’t explored its dual, the resilience error. Let me correct that oversight now.

Reification risk is the risk that an error of reasoning known as the reification error might affect decisions—in this case, decisions regarding technical debt. The reification error [Levy 2009] [Gould 1996] (also called the reification fallacy, concretism, or the fallacy of misplaced concreteness [Whitehead 1948]) is an error of reasoning in which we treat an abstraction as if it were a real, concrete, physical thing. Reification is useful in some applications, such as object-oriented programming and design.

Where reification risk is most likely

The future USS Zumwalt (DDG 1000) is underway for the first time, conducting at-sea tests and trials in the Atlantic Ocean, December 7, 2015. The first of the Zumwalt class of US Navy guided missile destroyers, it is designed to be stealthy and to be supported by a minimal crew. After the program experienced explosive cost growth, the class was downsized from 32 ships to three, and the crew complement was increased from 95 to over 140 to reduce capital costs. The three vessels on order now have significantly reduced missions.

As one might expect, the causes of these troubles are much debated. But it’s possible that the resilience error plays a role. Before the first of a new class of ships goes to sea, it exists as an abstraction—a collection of concepts, plans, promises, and technologies, tried and untried. Many elements of this collection have never inter-operated with other elements. The first ship represents the first opportunity to see how all the elements work together. Although troubles often appear even before the ship is fully assembled, anticipating all troubles is extraordinarily difficult.

But when we reify in the domain of logical reasoning, troubles can arise. For example, we can encounter trouble when we think of “measuring” technical debt. Strictly speaking, we cannot measure technical debt. It isn’t a real, physical thing that can be measured. What we can do is estimate the cost of retiring technical debt, but estimates are only approximations. And in the case of technical debt, the approximations are usually fairly rough—they have wide uncertainty bands. That’s one way for trouble to enter the scene. When we regard the estimate as if it were a measurement, we tend to think of it as more certain than it actually is. Technical debt retirement projects then overrun their budgets and schedules, and chaos reigns.

For example, if we think we’ve measured the MPrin of a class of technical debt, rather than that we’ve estimated it, we’re more likely to believe that one measurement will suffice, and that it will be valid for a long time (or indefinitely). On the other hand, if we think we’ve estimated the MPrin of a class of technical debt, we’re more likely to believe that obtaining a second independent estimate would be wise, and that the estimate we do have might not be valid for long. These are just some of the many consequences of the reification error.
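
One practical defense is to carry estimates around with their uncertainty bands and their dates, rather than as bare numbers. The sketch below is hypothetical; the field names and figures are invented, and MPrin here is simply the estimated principal of a class of technical debt.

from dataclasses import dataclass
from datetime import date
@dataclass
class MPrinEstimate:
    low: float    # optimistic retirement cost, in $K (invented figure)
    high: float   # pessimistic retirement cost, in $K (invented figure)
    as_of: date   # estimates go stale; a true measurement wouldn't need this
# Wide bands and an as-of date are standing reminders to re-estimate, not reuse.
schema_debt = MPrinEstimate(low=250.0, high=900.0, as_of=date(2019, 3, 1))
print(f"Estimated MPrin: {schema_debt.low:.0f}-{schema_debt.high:.0f} $K "
      f"(as of {schema_debt.as_of})")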

The resilience error

If the reification error is risky because it entails regarding an abstraction as a real, physical thing, we might postulate the existence of a resilience error that’s risky because it entails regarding an abstraction as more resilient, pliable, adaptable, or extensible than it actually is.

When we commit the resilience error with respect to an abstraction, we adopt a belief, usually without justification and possibly outside our awareness: that we can change the abstraction without fully investigating the consequences, and that the familiar properties of the abstraction we modified will apply, suitably modified, to its new form. Or we assume incorrectly that the abstraction will accommodate any changes we make to its environment.

Sometimes we benefit when we modify abstractions, but often we encounter unintended and unpleasant consequences. For example, unless we examine our modifications carefully, it’s possible that the implications of a modification might conflict with one or more of the fundamental assumptions of the abstraction.

A metaphorical example of the resilience error

Perhaps a (ahem) concrete example will illustrate. Consider the steel hull of an ocean liner. We can manufacture it more cheaply if we can devise a way to use less steel. So one approach to using less steel is to remove a small portion of the bottom of the hull. We decide to cut out of the hull a circular hole one meter in diameter. We send some people into the ship to do the work, and they return with panicky reports of water coming in. But the ship seems fine, so we reject the reports. Even a day later, all seems well. But by the end of the second day, the trouble is obvious. The ship is sinking.

The problem in our example is that the circular hole in the hull violated a fundamental assumption about how ship hulls work. They work by keeping all water out of the ship. We had extended the idea of hull to make it lighter, but in doing so, we encountered some unintended consequences because our extension violated a fundamental property of hulls.

A more realistic example of the resilience error

Now for a more realistic example. Let’s consider a fictitious business situation.

Consider the fictitious company Alpha Properties LLC. Alpha manages small condominium associations of 25 to 100 units. Things have been going swimmingly at Alpha, and they’ve decided to expand to handle large condominium associations. Alpha’s financial accounting software has worked well, and their employees have become quite expert in its use. Alpha management has heard good reports about a different software package. Because the reports come from other management companies that deal with large client associations, Alpha decides to adopt that package for its larger accounts. But things don’t work out so well.

The software is fine, but the processes used by Alpha’s staff are cumbersome and slow. For example, setting up a new association requires much manual data entry. For a 100-unit association, client setup isn’t much of a burden, but for a 900-unit association the problem is simply unmanageable.

This is a fine example of the resilience error. When we make this error, we fail to appreciate how an abstraction can encapsulate assumptions from one context when we apply that abstraction in another context. In this example, Alpha’s data flow processes are the abstraction. The context is signing up a new client association. When the context (signing up a large new client) changes, it violates an internal assumption of the abstraction (the data flow process for signing up a new client).

How the resilience error leads to technical debt

In many cases, the resilience error is at the heart of the causes of technical debt. It works like this. We have an asset that works perfectly well in one set of contexts. We want to apply that asset in a new way, which might (or might not) require some minor extensions. When we try it, we find that the asset incorporates some assumptions about the original context, and one or more of those assumptions are violated by the new context. Scrambling, we find some quick fixes that can get things working again. But those fixes usually aren’t well-designed or easily maintained. The result is a trail of technical debt.
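
Here is a tiny, invented sketch of that sequence, not drawn from any real system: an asset built for one context, the hidden assumption it carries, and the quick fix that restores operation while leaving debt behind.

# Hypothetical original asset: it assumes every unit record carries a "fee"
# field, which was always true for the small clients it was built for.
def monthly_fee_report(units):
    return sum(unit["fee"] for unit in units)
# Quick fix after a large new client's data arrives without "fee" fields.
# It gets the report out the door, but the silent default is technical debt:
# nobody designed it, and it can quietly misstate revenue until it's retired.
def monthly_fee_report_quick_fix(units, default_fee=250.0):
    return sum(unit.get("fee", default_fee) for unit in units)
small_client = [{"id": 1, "fee": 300.0}, {"id": 2, "fee": 275.0}]
large_client = [{"id": n} for n in range(900)]  # the new context violates the assumption
print(monthly_fee_report(small_client))            # works as designed: 575.0
print(monthly_fee_report_quick_fix(large_client))  # "works," with debt attached: 225000.0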

Acquiring companies is like that. Before the acquisition, we think we’ll be able to merge the IT operations to save some expenses in operations. But when we actually try it, merging them proves to be far more expensive than we imagined. Ah, the resilience error.

What makes this situation so difficult is that often we’re unable to anticipate what assumptions we might be about to violate. That’s why we make the resilience error.

Last words

Spotting difficulties with adapting to new applications and new contexts isn’t so difficult with physical entities. For example, we can see in advance that a square peg won’t fit into a round hole. But with abstractions, we can’t always see the problems in advance. Piloting, prototypes, games, and simulations can help us avoid some trouble, but not all.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Retrieved October 23, 2018.

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142. Retrieved October 16, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Retrieved October 23, 2018.

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications,” Electronic Commerce Research 14:4, 2014, 497-529. Retrieved August 23, 2019.

[Fowler 1999] Martin Fowler, with Kent Beck, John Brant, William Opdyke, and Don Roberts; foreword by Erich Gamma. Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional, first edition, 1999.

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, October 1, 2003. Retrieved January 2, 2016.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Retrieved October 23, 2018.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Retrieved October 18, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Retrieved October 26, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152. Retrieved October 17, 2018.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Retrieved October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Retrieved October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Retrieved October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press, 1998.

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016. Retrieved August 22, 2019.

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018. Retrieved August 22, 2019.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt,” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015. Retrieved February 13, 2016.

Other posts in this thread

Leverage points for technical debt management

Last updated on July 10th, 2021 at 07:17 am

McMurdo Station, Antarctica, as seen from nearby Observation Hill
McMurdo Station, Antarctica, as seen from nearby Observation Hill. The United States Antarctic Program, a unit of the National Science Foundation, operates the station. It can house as many as 1258 people in Summer. Photo (cc) Gaelen Marsden courtesy McMurdo Station in Antarctica.

Adopting a technical debt management program entails significant organizational change. The problem can seem so daunting that we don’t know where to begin. The places to begin are the places where the change agents have greatest leverage—what systems analysts call leverage points. Consider this scenario:

You’re sitting in the kickoff meeting of the new Technical Debt Management Task Force. The CEO is talking about how she realized that the company had a technical debt problem. It was when the Marigold project went through delay after delay, and was finally declared done, with multiple objectives waived. She’s saying something about, “we were trying to do backflips with millstones around our necks. So I want this task force to show us how to get rid of the millstones, and then get rid of them.”

OK, you think. But how? We’re a global enterprise with thousands of engineers and operations on every continent. Except maybe Antarctica. No wait, we’re there, too. McMurdo I think. We have software we don’t even know much about, acquired long ago along with the companies that built it. And we’re building new systems or modifying old ones all the time, trying to move everything to the cloud while enhancing data security. Where do we begin to look for the millstones of technical debt?

Have you been in that meeting? If not, can you imagine being in that meeting? Meetings like that are happening around the globe. We’re all in the same soup.

Leverage points: how to get rid of the millstones

It turns out that the answers to the millstone questions are available, but the pioneers and deep thinkers who have shown the way aren’t working on technical debt. Their field is called systems analysis. They work on problems like the collapse of the North Atlantic fishery, urban deterioration, unemployment, poverty, climate change, and the causes of the Great Recession of 2008—really difficult problems. Although the technical debt problem isn’t quite that challenging, it’s challenging enough to justify taking a look at the methods of systems analysis.

And when we do that, we immediately encounter a concept many call leverage points.

What are leverage points?

Leverage points are places in complex systems where a small change in one thing can produce big changes in system behavior. In a brilliant 1997 article, Donella Meadows describes what she calls “places to intervene in a system” [Meadows 1997]. She refined the idea in 1999 [Meadows 1999] and again in 2008 [Meadows 2008]. Let me summarize Meadows’ work here.

To alter the behavior of a complex system, intervene at one or more of 12 categories of leverage points. For example, one category is called “Rules.” It consists of the incentives, punishments, and constraints that govern the behavior of the people and institutions that comprise the system. By adjusting the system’s rules, we can alter overall system behavior.

One more thing: the leverage points form a hierarchy, ordered by effectiveness. Acting at a higher-level leverage point is more effective than acting at a lower-level leverage point. And more difficult, too. The ordering of the categories is a bit fuzzy, because every situation has its own quirks, but generally, the order is as given in the list below.

The twelve leverage points

In a moment I’ll give an example of using leverage point #9, Delays, to bring about change in the way the enterprise deals with technical debt. But first, here’s a brief summary of the leverage points in increasing order of leverage, using Meadows’ own numbering, in which #12 is the weakest point of intervention and #1 the strongest. It’s not enough to truly understand what they are, but probably enough to pique your interest. As I write posts that illustrate interventions at these leverage points, I’ll link to them from here.

  12. Numbers: Constants and parameters such as subsidies, taxes, and standards
  11. Buffers: The sizes of stabilizing stocks relative to their flows
  10. Stock-and-Flow Structures: Physical systems and their nodes of intersection
  9. Delays: The lengths of delays in feedback loops
  8. Balancing Feedback Loops: The strength of the feedbacks relative to the impacts they are trying to correct
  7. Reinforcing Feedback Loops: The strength of the gain of driving loops
  6. Information Flows: The structure of who does and does not have access to information
  5. Rules: Incentives, punishments, and constraints
  4. Self-Organization: The power to add, change, or evolve system structure
  3. Goals: The purpose or function of the system
  2. Paradigms: The mind-set out of which the system—its goals, structure, rules, delays, parameters—arises
  1. Transcending Paradigms

Delays in feedback loops

When we use feedback to control systems, and there are delays in the feedback, we can potentially create destructive system behavior. And that can happen when we try to control technical debt.

Whenever we try to control a quantity in an enterprise process, we must (a) set a target value for that quantity; then (b) measure its current value; and then (c) take action as appropriate to move the current value toward the target value. Systems analysts (and control theorists) call that arrangement a feedback loop. The action taken to move the current value to the target value is sometimes called the control signal. Under certain conditions, the feedback works as expected.

For example, to control the profitability of the enterprise, we can examine its net income, say, quarterly. And at the end of each quarter we can make adjustments if net income isn’t in the target range.

Feedback loops generally work pretty well, but under some conditions, oscillations can develop. One of those troublesome situations occurs when there’s a delay in the loop that’s of the same order as (or longer than) the time the system takes to respond to adjustments. Meadows uses the example of adjusting the water temperature of a shower when there’s a long delay between making the adjustment and feeling its effects. Overcorrection is almost inevitable, and that’s what causes system oscillation.

How controlling technical debt can create feedback loops

So let’s suppose that we’re trying to control the rate of accumulation of technical debt. One approach is to set a target for TDnew, the new technical debt generated in a project. To be fair to all projects, we decide to normalize this quantity according to the project budget B. So we set targets for each project’s N = TDnew/B, and we require that projects estimate N, on an ongoing basis, with a goal of having N in some target range when the project is complete.
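To make the arithmetic concrete, here’s a minimal sketch in Python. The figures, the function name, and the target range are invented for illustration; they aren’t drawn from any real tracking system or policy.

```python
# Illustrative sketch: tracking N = TDnew / B for a single project.
# All figures and the target range are hypothetical.

def normalized_new_debt(td_new: float, budget: float) -> float:
    """Return N, the new technical debt incurred per unit of project budget."""
    if budget <= 0:
        raise ValueError("Project budget must be positive")
    return td_new / budget

# A project with a $2.0M budget that expects to create $150K of new debt:
n = normalized_new_debt(td_new=150_000, budget=2_000_000)

TARGET_RANGE = (0.0, 0.10)   # hypothetical policy: N must stay at or below 10%
within_target = TARGET_RANGE[0] <= n <= TARGET_RANGE[1]

print(f"N = {n:.3f}, within target: {within_target}")   # N = 0.075, within target: True
```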

Identifying technical debt isn’t straightforward

One problem with this approach is that we rarely identify all the technical debt we’ve incurred until some time has passed after project delivery. With time, as the newly produced assets go into production and learning accumulates, we acquire the wisdom needed to identify more of the technical debt we created. This is one source of delay in this feedback loop.

So let’s assume that this happens for several projects, and management decides that delayed recognition of incurred technical debt is a common occurrence. To account for this, management lowers the target ranges for N for future projects. This causes project managers and project sponsors to include in their project plans additional effort directed at retiring more of their incremental technical debt before their projects complete, to enable them to project lower values of N. They must therefore identify as much of the incremental technical debt as they can, and retire it, to meet the lower targets for N.

How oscillations set in

But recall that technical debt identification sometimes requires time and experience using the newly produced asset. And the reverse process also occurs. Technical artifacts that we thought were technical debt prove to be useful in unexpected ways, and actually turn out not to be debt items after all. As a result, some of the incremental technical debt that got retired before the project was completed actually should not have been retired. Eventually, people realize that this happens with uncomfortable frequency, and so the targets for N are raised once more.

Oscillations thus set in. Long delays in the feedback make them all but inevitable. To prevent oscillations, shorten the delays.
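To see how a long recognition delay drives this kind of overcorrection, here’s a toy simulation in Python. It doesn’t model any real organization; the gain, delay, and setpoint values are assumptions chosen only to make the effect visible.

```python
# Toy model: we nudge a quantity toward a setpoint each period, but we can
# only observe its value as it was `delay` periods ago. With delay = 0 the
# quantity settles smoothly; with a long delay it overshoots and oscillates.

def simulate(delay: int, periods: int = 30, gain: float = 0.6,
             setpoint: float = 1.0, start: float = 0.0) -> list[float]:
    history = [start] * (delay + 1)      # values we are able to "observe"
    values = [start]
    for _ in range(periods):
        observed = history[0]            # stale observation, `delay` periods old
        adjusted = values[-1] + gain * (setpoint - observed)
        values.append(adjusted)
        history.pop(0)
        history.append(adjusted)
    return values

print([round(v, 2) for v in simulate(delay=0)][:10])   # settles near 1.0
print([round(v, 2) for v in simulate(delay=4)][:10])   # overshoots, then oscillates
```

The same structure applies to the N targets discussed above: the “stale observation” plays the role of technical debt that’s recognized, or reclassified, only long after delivery.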

How to shorten delays in feedback controlling technical debt

When we use feedback to control a system, delays in that feedback can lead to instability. Trying to control technical debt is no exception. With technical debt we can shorten delays in several ways.

  • If the asset is meant for human use, involve representatives of the user population in the development and design process as soon as practical. Have them exercise the asset, or prototypes, early. Listen to their suggestions. Observe how they use the asset.
  • If the asset must interact with non-human assets, exercise it early and often. Don’t think of this as testing, though it might look very much like testing. What you’re actually doing is searching for shortcomings, in design and implementation, in how the asset interacts with non-human assets, even in an asset that already works.
  • Subject the asset to multiple reviews all along the development trajectory. Don’t wait for final release to review it.

These practices expose technical debt items early—potentially, during initial design—thereby reducing delays in identifying what is and what isn’t technical debt. They help to advance the date at which we uncover missing capabilities or capabilities designed or implemented in awkward ways. No surprise, I’m sure, but these practices are consistent with Agile approaches to technological development.

Indirect effects can add to delayed recognition of technical debt

Most of the argument above assumed that the incremental technical debt associated with the project was incurred within the asset undergoing development or maintenance. But technical debt can occur in other assets as well. When the development team is unaware of such “remote” or “indirect” incremental technical debt, recognition of that new incremental technical debt can be significantly delayed. The project’s N (the ratio of incremental technical debt to project budget) will appear to be smaller than it actually is, until that remote incremental technical debt is recognized.

This form of delay is likely to occur when the debt incurred is asset-exogenous. Recall the example of line extension of mobile phones. In that example, the enterprise incurs technical debt in one set of products as a result of the introduction of a different product. In some cases, the newly incurred technical debt is immediately evident. When it is not, delays can be substantial.

This effect is by no means rare. Any organizational change can potentially add to the technical debt portfolio—reorganizations, acquisitions, expansions, wholly new products, and much more.

Last words

Interventions at the leverage points of an organization can produce the changes we want with a minimum of effort. Some subtlety is involved, because Meadows’ leverage points are expressed at a high level of abstraction. But applying them to the problem of technical debt management is a promising approach.

Bookmark this post. I’ll be linking to more examples of using leverage points to manage technical debt.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Available: here; Retrieved: October 23, 2018.

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142. Available: here; Retrieved: October 16, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Available: here; Retrieved: October 23, 2018.

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications,” Electronic Commerce Research 14:4, 2014, 497-529. Available: here; Retrieved: August 23, 2019.

[Fowler 1999] Martin Fowler, Kent Beck (Contributor), John Brant (Contributor), William Opdyke, Don Roberts, Erich Gamma (Foreword). Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional; first edition (July 8, 1999).

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, 1 October 2003. Available: here; Retrieved: January 2, 2016.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Available: here; Retrieved: October 18, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Available: here; Retrieved: October 26, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Science 45, 2012, 123–152. Available: here; Retrieved: October 17, 2018.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Meadows 1997] Donella H. Meadows. “Places to Intervene in a System,” Whole Earth, Winter 1997. Available: here; Retrieved: June 28, 2018.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, VT: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Meadows 2008] Donella H. Meadows and Diana Wright. Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing, 2008.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Available: here; Retrieved: October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Available: here; Retrieved: October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press, 1998.

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016. Available: here; Retrieved: August 22, 2019.

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018. Available: here; Retrieved: August 22, 2019.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt,” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015. Available: here; Retrieved: February 13, 2016.


Exogenous technical debt

Last updated on July 9th, 2021 at 04:58 pm

Exogenous technical debt is debt that arises from causes not directly related to the asset that bears the debt. Understanding exogenous technical debt is essential to controlling technical debt formation. Exogenous technical debt is particularly troublesome to those who work on the affected assets. They can’t control its formation, and they’re rarely responsible for creating it. But their internal customers and those who control resources often fail to understand this. Indeed, those who work on the affected assets often bear the blame for exogenous technical debt even though they had no role in creating it, and could have done nothing to prevent it.

Exogenous technical debt and endogenous technical debt

Technical debt is exogenous when it’s brought about by an activity not directly related to the assets in which the debt appears. The word exogenous comes from the Greek exo– (outside) + –genous (related to producing). So exogenous technical debt is that portion of an asset’s debt that comes about from activities or decisions that don’t involve the asset directly.

Why we must track exogenous technical debt

Asbestos with muscovite.
Asbestos with muscovite. Asbestos is a family of minerals occurring naturally in fibrous form. The fibers are all known carcinogens. Until 1990, asbestos was a common ingredient of building materials, including insulation, plaster, and drywall joint compound. It’s now banned, but it’s present in existing homes and offices. The ban caused these structures to incur exogenous technical debt. Photo by Aramgutang courtesy Wikipedia.

Because so much technical debt arises indirectly, controlling its direct formation isn’t enough. To control technical debt formation, we must track which activities produce it, accounting for both direct and indirect effects. Allocating technical debt retirement costs to the activities that brought that debt about is useful, even if the allocation doesn’t affect budget authority for those activities. Knowledge about which past activities created technical debt, and how much, is helpful for long-term reduction in the rate of technical debt formation.

When we think of technical debt, we tend to think of activities that produce it relatively directly. We often imagine it as resulting solely from engineering activity, or from decisions not to undertake engineering activity. In either case the activity involved, whether undertaken or not, is activity directly involving the asset that carries—or which will be carrying—the technical debt. This kind of technical debt is endogenous technical debt. The word endogenous comes from the Greek endo– (within or inside) + –genous (related to producing). So endogenous technical debt is that portion of an asset’s debt that comes about from activities or decisions that directly involve the asset.

More about endogenous technical debt in future posts. For now, let’s look more closely at exogenous technical debt, and its policy implications.

Examples of exogenous technical debt

In “Spontaneous generation,” I examined one scenario in which technical debt formation occurs spontaneously—that is, in the absence of engineering activity. Specifically, I noted how the emergence of the HTML5 standard led to the formation of technical debt in some (if not all) existing Web sites. This happened because those sites didn’t exploit capabilities that had become available in HTML5. Moreover, some sites needed rehabilitation to remove emulations of the capabilities of the new standard; those emulations needed to be replaced with uses of the facilities in the HTML5 standard itself. All of these artifacts (the emulations that existed, and the exploitations of HTML5 that didn’t yet exist) constituted technical debt. This scenario thus led to the formation of exogenous technical debt.

In a second example, AMUFC, A Made-Up Fictitious Corporation, incurs technical debt when the vendor that supplies the operating system (OS) for AMUFC’s desktop computers announces the date of the end of extended support for the version of the OS in use at AMUFC. Because the end of extended support brings an end to security updates, AMUFC must retire that debt by migrating to the next version of that vendor’s OS before extended support actually ends.

In both examples, the forces that lead to formation of exogenous technical debt are external to the enterprise and the enterprise’s assets. But what makes technical debt exogenous is that the forces that led to its formation are unrelated to the engineering work being performed on the asset. This restriction is loose enough to also include technical debt that arises from any change or activity external to the asset, but within the enterprise.

Exogenous technical debt arising from actions within the enterprise

Exogenous technical debt can arise from activities or decisions that take place entirely within the enterprise.

For example, consider the line of mobile devices of AMUFC (A Made-Up Fictitious Corporation). Until this past year, AMUFC has been developing ever more capable devices. These efforts extended its line of offerings at the high end—the more expensive and capable members of the line. But this past quarter, AMUFC developed a low-end member of the line.

As often happens, price constraints for the low-cost device led to innovations. Those innovations could produce considerable savings in manufacturing costs if used all across the line. In effect, the designs of the previously developed higher-end models have incurred exogenous technical debt. The debt is exogenous because the activity that led to debt formation wasn’t performed on the assets that carry the debt. The debt is real, even though the activity that led to debt formation occurred within the enterprise. This kind of exogenous technical debt is asset-exogenous. Exogenous technical debt of the kind that results from activity beyond the enterprise is enterprise-exogenous.

Exogeneity versus endogeneity

For asset-exogenous technical debt, ambiguity between endogeneity and exogeneity can arise. The example above regarding the line of mobile devices produced by AMUFC provides an illustration.

For convenience, call the team that developed one of the high-end devices Team High. Call the team that developed the low-end device Team Low. From the perspective of Team High, the technical debt due to the innovations discovered by Team Low is exogenous. But from the perspective of the VP Mobile Devices, that same technical debt might be regarded as endogenous. The debt can be endogenous at VP level because it’s possible to regard the entire product line as a single asset, and that might actually be the preferred perspective of VP Mobile Devices.

This ambiguity can lead to nasty, toxic conflict. Team High and VP Mobile Devices might attack each other as each tries to defend itself proactively against claims of incurring technical debt. Avoiding this kind of conflict requires educating everyone as to the origins of technical debt.

Exogeneity and legacy technical debt

The technical debt portfolio of a given asset can contain a mix of technical debt that arose from various past incidents. In assessing the condition of the asset, it’s useful to distinguish this existing debt from debt that’s incurred as a consequence of any current activity or decisions. Call this pre-existing technical debt legacy technical debt.

The legacy technical debt an asset carries is technical debt associated with that asset that existed, in any form, before the current work on the asset was undertaken. For example, consider planning a project to renovate the hallways and common areas of a high-rise apartment building. Suppose workers discover beneath the existing carpeting a layer of asbestos floor tile. Then management might decide to remove the tile. In this context, we can regard the floor tile as legacy technical debt. It isn’t directly related to the objectives of the current renovation. But removing it will enhance the safety of future renovations. It will also enable certification of the building as asbestos-free, increase the property value, and reduce the cost of eventual demolition. In this situation asbestos removal is retirement of legacy technical debt. Accounting for it as part of the common-area renovation would be misleading.

Exogeneity is relevant when allocating resources for legacy technical debt retirement efforts. If the debt in question is enterprise-exogenous, we can justifiably budget the effort from enterprise-level accounts. For other cases, other resources become relevant, depending on what actions created the debt. For example, suppose that the technical debt arose from a change in enterprise standards. Then we can justifiably allocate retirement costs to the standard-setting initiative. If the exogenous technical debt arose from innovations in other members of the asset’s product line, we can justifiably allocate those debt retirement costs to the product line.
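As a rough illustration of that allocation logic, here’s a sketch in Python. The origin categories, account names, and dollar figures are hypothetical; the point is only that each debt item carries a record of what created it, and that record determines where its retirement cost is charged.

```python
# Hypothetical sketch: routing technical debt retirement costs to the
# activity that created the debt. Categories, accounts, and figures are
# invented for illustration; real charts of accounts will differ.

DEBT_ITEMS = [
    {"id": "TD-101", "cost": 250_000, "origin": "enterprise-exogenous",
     "source": "OS vendor ended extended support"},
    {"id": "TD-102", "cost": 80_000,  "origin": "standards-change",
     "source": "New enterprise data-retention standard"},
    {"id": "TD-103", "cost": 120_000, "origin": "product-line",
     "source": "Low-end phone innovations obsoleted high-end designs"},
]

ACCOUNT_FOR_ORIGIN = {
    "enterprise-exogenous": "Enterprise-level reserve",
    "standards-change":     "Standard-setting initiative budget",
    "product-line":         "Product line budget",
}

def allocate(items):
    """Group retirement costs by the account responsible for each item's origin."""
    totals = {}
    for item in items:
        account = ACCOUNT_FOR_ORIGIN.get(item["origin"], "Unallocated")
        totals[account] = totals.get(account, 0) + item["cost"]
    return totals

for account, total in allocate(DEBT_ITEMS).items():
    print(f"{account}: ${total:,}")
```

Nothing in this sketch requires changing budget authority; the record of origins alone supports the kind of long-term analysis described above.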

Policy insights

Understanding the properties of exogenous technical debt can be a foundation for policy innovations that enhance enterprise agility.

Culture transformation

Widespread understanding of the distinction between exogenous and endogenous technical debt is helpful in controlling interpersonal conflict. For example, it can reduce blaming behavior that targets the engineering teams responsible for developing and maintaining technological assets.

Understanding asset-exogenous technical debt helps non-engineers understand how their actions and decisions can lead to technical debt formation. The concept clarifies the import of their actions even when there is no apparent direct connection between those actions or decisions and the assets in question.

Resource allocation

Data about the technical debt creation effects of enterprise activities is helpful in allocating technical debt retirement costs. For example, suppose that we know all the implications of reorganization, including its impact on internal data about the enterprise itself. Then we can charge data-related activity to the reorganization instead of to general accounts of the Information Technology function. This helps the enterprise understand the true costs of reorganization.

Similarly, data about enterprise-exogenous technical debt helps planners understand how to deploy resources to gather external intelligence about trends that can affect internal assets. Such data is also useful for setting levels of support and participation in industrial standards organizations or in lobbying government officials.

Last words

Knowing the formation history of exogenous technical debt provides useful guidance for those charged with allocating the costs of retiring technical debt or preventing its formation.


Feature bias: unbalanced concern for capability vs. sustainability

Last updated on July 7th, 2021 at 09:56 pm

Alaska crude oil production 1990-2015
Alaska crude oil production 1990-2015. This chart [Yen 2015] displays Alaska crude oil produced and shipped through the Trans Alaska Pipeline System (TAPS) from 1990 to 2015. Production had dropped by 75% in that period, and the decline is projected to continue. In January 2018, in response to pressure from Alaskan government officials and the energy industry, the U.S. Congress passed legislation that opened the Arctic National Wildlife Refuge to oil exploration, despite the threat to ecological sustainability that exploration poses. If we regard TAPS as a feature of the U.S. energy production system, we can view its excess capacity as a source of feature bias. It creates pressure on decision makers to add features to the U.S. energy system. Alternatively, they could act to enhance the sustainability of Alaskan and global environmental systems [Wight 2017].


Enterprise decision makers affected by feature bias tend to harbor distorted views of the importance of new capability development compared to technical debt management. This tendency is likely due to the customer’s relative sensitivity to features, and relative lack of awareness of sustainability. Whatever the cause, customers tend to be more attracted to features than they are to indicators of sound technical debt management and other product sustainability practices. This tendency puts decision makers at risk of feature bias: unbalanced concern for capability vs. sustainability.

Accounting changes can help

Changes in cost accounting could mitigate the effects of feature bias by projecting total MICs more accurately, based on historical data and sound estimation. I explore possible accounting changes later in this post, and in future posts; meanwhile, let’s explore the causes and consequences of the distorted perspective I’m calling feature bias.

Causes and consequences of feature bias

For products or services offered outside the enterprise, the sales and marketing functions of the enterprise represent the voice of the customer [Gaskin 1991]. But customers are generally unaware of product or service attributes that determine maintainability, extensibility, or cybersecurity. These factors, the sustainability factors, affect the MICs for technical debt. But customers are acutely aware of capabilities—or missing or defective capabilities. Customer comments and requests are therefore unbalanced in favor of capability over sustainability. The sales and marketing functions tend to accurately transmit this unbalanced perspective to decision makers and technologists.

An analogous mechanism prevails with respect to infrastructure and its internal customers. Internal customers tend to be more concerned with capabilities than they are with sustainability of the processes and systems that deliver those capabilities. Thus, pressure from internal customers tends to emphasize capability at the expense of sustainability. The result of this imbalance is pressure to allocate excessive resources to capability enhancement, compared to activities that improve sustainability. And therefore controlling or reducing technical debt and its MICs gets less attention.

Nor is this the only consequence of feature bias. It provides unrelenting pressure for increasing numbers of features, despite the threats to architectural coherence and overall usability that such “featuritis” or “featurism” presents. Featurism leads, ultimately, to feature bloat, and to difficulties for users, who can’t find what they need among the clutter of features that are often too numerous to document. For example, in Microsoft Word, many users are unaware that Shift+F5 moves the insertion point and cursor to the point in the active document that was last edited, even if the document has just been freshly loaded into Word. Useful, but obscure.

Feature bias bias

Feature bias, it must be noted, is subject to biases itself. The existing array of features appeals to a certain subset of all potential customers, and it’s that subset that’s most likely to request repair of existing features and to suggest additional ones. The pressure for features therefore tends to be biased in favor of the needs of the most vociferous users. That is, there’s pressure to evolve the offering to better meet the needs of existing users. That pressure can push to lower priority any efforts toward meeting the needs of other stakeholders or potential stakeholders, who might be even more important to the enterprise than the existing users are. This bias within feature bias presents another risk that can affect decision makers.

Organizations can take steps to mitigate the risks of feature bias. An example of such a measure might be using focus groups to study how educating customers in sustainability issues affects their perspectives relative to feature bias. Educating decision makers about feature bias can also reduce this risk.

At the enterprise scale, awareness of feature bias would be helpful. But awareness alone is unlikely to counter its detrimental effects. These effects include underfunding technical debt management efforts. Eliminating the source of feature bias is extraordinarily difficult, because customers and potential customers aren’t subject to enterprise policy. Feature bias and feature bias bias are therefore givens. To mitigate the effects of feature bias, we must adopt policies that compel decision makers to consider the need to deal with technical debt.

A possible corrective action

One possible corrective action might be improving accounting practices for MICs, based on historical data. For example, because there’s a high probability that any project will produce new technical debt, it might be prudent to fund the retirement of that debt in the form of reserves when we fund projects. And if we know that a project has encountered some newly recognized form of technical debt, it might be prudent to reserve resources to retire that debt as soon as possible. Ideas such as these can rationalize resource allocations with respect to technical debt.

These two examples illustrate what’s necessary if we want to mitigate the effects of feature bias. They also illustrate just how difficult such a task will be.
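For the first of those ideas, here’s one hedged sketch, in Python, of how such a reserve might be sized. It assumes a historical record of N (newly recognized debt as a fraction of project budget) is available, which many organizations won’t yet have; the sample figures and the upper-quartile rule are assumptions, not recommendations.

```python
# Illustrative only: estimate a technical-debt retirement reserve for a new
# project from historical N values (new debt / project budget).

historical_n = [0.04, 0.06, 0.09, 0.05, 0.12, 0.07, 0.08]  # past projects (sample data)

def reserve_for(budget: float, n_history: list[float]) -> float:
    """Size the reserve from a conservative (roughly upper-quartile) historical N."""
    ordered = sorted(n_history)
    idx = int(0.75 * (len(ordered) - 1))   # index of an upper-quartile value
    return budget * ordered[idx]

print(f"Suggested reserve: ${reserve_for(3_000_000, historical_n):,.0f}")
# With this sample history, ordered[idx] is 0.08, so the reserve is $240,000.
```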

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Available: here; Retrieved: October 23, 2018.

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142. Available: here; Retrieved: October 16, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Available: here; Retrieved: October 23, 2018.

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications,” Electronic Commerce Research 14:4, 2014, 497-529. Available: here; Retrieved: August 23, 2019.

[Fowler 1999] Martin Fowler, Kent Beck (Contributor), John Brant (Contributor), William Opdyke, Don Roberts, Erich Gamma (Foreword). Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional; first edition (July 8, 1999).

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, 1 October 2003. Available: here; Retrieved: January 2, 2016.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Gaskin 1991] Steven P. Gaskin, Abbie Griffin, John R. Hauser, Gerald M. Katz, and Robert L. Klein. “Voice of the Customer,” Marketing Science 12:1, 1-27, 1991.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Available: here; Retrieved: October 18, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Available: here; Retrieved: October 26, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Science 45, 2012, 123–152. Available: here; Retrieved: October 17, 2018.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Meadows 1997] Donella H. Meadows. “Places to Intervene in a System,” Whole Earth, Winter 1997. Available: here; Retrieved: June 28, 2018.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, VT: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Meadows 2008] Donella H. Meadows and Diana Wright. Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing, 2008.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Available: here; Retrieved: October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Available: here; Retrieved: October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press, 1998.

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016. Available: here; Retrieved: August 22, 2019.

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018. Available: here; Retrieved: August 22, 2019.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Wight 2017] Philip Wight. “How the Alaska Pipeline Is Fueling the Push to Drill in the Arctic Refuge,” YaleE360, Yale School of Forestry & Environmental Studies, November 16, 2017. Available: here; Retrieved: February 8, 2018.

[Yen 2015] Terry Yen and Laura Singer. “Oil exploration in the U.S. Arctic continues despite current price environment,” Today in Energy blog, U.S. Energy Information Administration, June 12, 2015. Available: here; Retrieved: February 8, 2018.

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt,” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015. Available: here; Retrieved: February 13, 2016.


Unrealistic definition of done

Last updated on July 8th, 2021 at 01:20 pm

Many an enterprise culture includes, perhaps tacitly, an unrealistic definition of done for projects: a definition that fails to adequately acknowledge attributes related to sustainability. In such cultures, technical debt expands inexorably. In most organizations, the definition of done includes meeting the attributes that most internal customers understand and care about. These attributes might not include sustainability [Guo 2011]. Indeed, even among technologists, the definition of done might not enjoy precise consensus [Wake 2002].

Why retiring technical debt isn’t included in “done”

The 2009 Ford Focus SES coupe (North America) engine bay. Its design is “done” in the sense that it’s available to consumers.
The 2009 Ford Focus SES coupe (North America) engine bay. Typical owners can no longer learn how to maintain their own vehicles. Engines have become so complex that even experienced mechanics must train to maintain the engines they work on. Since these vehicles are available for sale to consumers, clearly their manufacturers regard their designs as “done.” But is technical debt a factor in the growing complexity of modern engines? It’s probably present in their software, and it would be most surprising if we found no technical debt in the mechanical design. Photo (cc) Porsche997SBS courtesy Wikimedia.
Internal customers understand less well the attributes of deliverables related to sustainability. It’s therefore perhaps unsurprising that sustainability might not receive the attention it needs. Applying scarce resources to enhance attributes the customer doesn’t understand, and cares about less, will always be difficult.

To gain control of technical debt, we must redefine done to include addressing sustainability of deliverables. Although there may be many ways to accomplish this, none will be easy. Resolution will necessarily involve educating internal customers to understand enough about sustainability to enable them to justify paying for it.

Redefining “done”

The typical definition of done for most projects ensures only that the deliverables meet the requirements. Because requirements usually omit reference to retiring newly incurred nonstrategic technical debt, we often declare projects complete with incremental technical debt still in place. A similar problem prevails with respect to legacy technical debt.

A more insidious form of this problem is intentional shifting of the definition of done. This can happen when the organization has adopted a reasonable definition of done that allows for addressing sustainability. But under severe time pressure, the definition is “temporarily” amended to allow the team to declare the effort complete, even though sustainability issues remain unaddressed.

For most projects, three conditions conspire to create steadily increasing levels of nonstrategic technical debt. First, for most tasks, the definition of done is that the deliverables meet the project objectives, or at least, they meet them well enough. Second, typical project objectives don’t restrict levels of newly incurred nonstrategic technical debt, nor do they demand retirement of incidentally discovered legacy technical debt. Third, budget authority usually terminates upon acceptance of delivery. These three conditions, taken together, restrain engineering teams from immediately retiring any debt they incur. Nor can they retire—or document or report—any legacy technical debt they encounter while fulfilling other requirements.

For example, for one kind of incremental technical debt—what Fowler calls [Fowler 2009] Inadvertent/Prudent (“Now we know how we should have done it”)—the realization that we’ve incurred new debt often occurs after the task is “done.” If budget authority has terminated, there are no resources available—financial or human—to retire that form of technical debt.
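A lightweight debt register can blunt this problem even after budget authority has lapsed. The sketch below, in Python with invented field names, shows the kind of minimal record that lets a team document debt it recognizes only after delivery, so the knowledge isn’t lost when members move on to new assignments.

```python
# Hypothetical debt-register entry; field names and values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class DebtItem:
    asset: str
    description: str
    kind: str                  # e.g., "incremental", "legacy", "strategic"
    recognized_on: date
    estimated_retirement_cost: float
    status: str = "open"       # e.g., "open", "retired", "not debt after all"

register: list[DebtItem] = []

# Recorded after delivery, when the team realizes how it should have been done:
register.append(DebtItem(
    asset="client-onboarding",
    description="Setup flow assumes manual data entry for each unit",
    kind="incremental",
    recognized_on=date(2021, 3, 15),
    estimated_retirement_cost=60_000,
))
```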

Last words

Unless team members document the technical debt they create or encounter, there is risk of lost knowledge. After team members move on to their next assignments the enterprise is likely to lose track of the location and nature of that debt. A more realistic definition of done would enable the team to continue working post-delivery to retire or document any newly incurred nonstrategic technical debt. They could also note any incidentally encountered legacy technical debt. Moreover, teams most likely leave in place any strategic technical debt—technical debt incurred intentionally for strategic reasons. Although the enterprise must eventually address such debt as well, the widespread definition of done doesn’t address it.

Policymakers are well positioned to advocate for the culture transformation needed to redefine done.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Available: here; Retrieved: October 23, 2018.

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142. Available: here; Retrieved: October 16, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Available: here; Retrieved: October 23, 2018.

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications,” Electronic Commerce Research 14:4, 2014, 497-529. Available: here; Retrieved: August 23, 2019.

[Fowler 1999] Martin Fowler, Kent Beck (Contributor), John Brant (Contributor), William Opdyke, Don Roberts, Erich Gamma (Foreword). Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional; first edition (July 8, 1999).

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, 1 October 2003. Available: here; Retrieved: January 2, 2016.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant,” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Gaskin 1991] Steven P. Gaskin, Abbie Griffin, John R. Hauser, Gerald M. Katz, and Robert L. Klein. “Voice of the Customer,” Marketing Science 12:1, 1-27, 1991.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Guo 2011] Yuepu Guo, Carolyn Seaman, Rebeka Gomes, Antonio Cavalcanti, Graziela Tonin, Fabio Q. B. Da Silva, André L. M. Santos, and Clauirton Siebra. “Tracking Technical Debt: An Exploratory Case Study,” 27th IEEE International Conference on Software Maintenance (ICSM), 2011, 528-531.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Available: here; Retrieved: October 18, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Available: here; Retrieved: October 26, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Science 45, 2012, 123–152. Available: here; Retrieved: October 17, 2018.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[Meadows 1997] Donella H. Meadows. “Places to Intervene in a System,” Whole Earth, Winter 1997. Available: here; Retrieved: June 28, 2018.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, VT: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Meadows 2008] Donella H. Meadows and Diana Wright. Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing, 2008.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Available: here; Retrieved: October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Available: here; Retrieved: October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press, 1998.

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016. Available: here; Retrieved: August 22, 2019.

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018. Available: here; Retrieved: August 22, 2019.

[Wake 2002] Bill Wake. “Coaching Drills and Exercises,” XP123 Blog, June 15, 2002. Available: here.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

[Wight 2017] Philip Wight. “How the Alaska Pipeline Is Fueling the Push to Drill in the Arctic Refuge,” YaleE360, Yale School of Forestry & Environmental Studies, November 16, 2017. Available: here; Retrieved: February 8, 2018.

[Yen 2015] Terry Yen and Laura Singer. “Oil exploration in the U.S. Arctic continues despite current price environment,” Today in Energy blog, U.S. Energy Information Administration, June 12, 2015. Available: here; Retrieved: February 8, 2018.

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt,” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015. Available: here; Retrieved: February 13, 2016.


Stovepiping can lead to technical debt

Last updated on July 8th, 2021 at 01:18 pm

Stovepiping can lead to technical debt. Actual stovepipes are the tubes that vent exhaust from stoves. They serve as a metaphor for the flow of information in “stovepiped” organizations, where information flows predominantly (or only) up and down the parallel chains of command, and only rarely (or never) across from one chain of command to another [Waters 2010]. The metaphor is imperfect: in actual stovepipes, smoke and fumes rarely flow downward, whereas in organizations some information does flow down the chains of command. Even so, the metaphor captures the central problem of limited information flow. Transferring whatever the organization learns in one metaphorical stovepipe into the other stovepipes is difficult.

Two forms of stovepiping

The stovepipes in a wood-burning stove in a farm museum
A wood-burning stove in a farm museum in Lower Bavaria (German: Niederbayern). Lower Bavaria is one of the seven administrative regions of Bavaria, Germany. The stovepipe, which is the black tube running upwards from the stove, channels smoke and fumes out of the kitchen into the chimney.
Stovepiping can occur both in organizational structures and in engineered systems. These two forms of stovepiping are intimately related, and both can lead to uncontrolled formation of new technical debt or to increased persistence of existing technical debt.

In organizational structures, stovepiping occurs when elements of different organizational units with similar capabilities act relatively independently. An example is the dispersal of some elements of the IT function out into IT’s customers. When independent organizations have similar technical needs, they’re at risk of generating new technical debt. The debt they generate results from independently implementing technological capabilities that duplicate each other.

Stovepiping occurs in engineering, for example, when the organization manages and maintains two distinct technological assets independently [McGovern 2003]. The separate engineering efforts working on those assets might happen to solve the same problem, possibly in two different ways, with each party ignorant of, or even disparaging of, the other’s efforts.

How stovepiping relates to technical debt

However duplication of technological capability comes about, it can increase the level of technical debt or prolong the persistence of existing technical debt. These effects arise because the organization might need to execute future maintenance or enhancement efforts multiple times, once for each instance of the technical artifact. That exposes the organization to additional cost, additional load on its staff, and additional risk of creating defects and incurring liability. Compared with a situation in which all units that need a particular asset share a single instance, duplication is expensive.

The problem is actually even more worrisome. First, suppose there exists a defect in one version of a technological artifact. The people who are aware of the defect might not realize that another version of the artifact exists. If that second version also has an analogous defect, its defect might go unrecognized for some time, with all the usual attendant negative consequences. Second, suppose there is a necessary extension of the artifact’s capabilities. The maintainers of one version might recognize the need for the extension and implement it. Meanwhile, the maintainers of other versions might not recognize the need for the extension. They might not take action until something bad happens or a possibly urgent need arises. It’s easy to conjure other unfavorable—and costly—scenarios.
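A small, hypothetical sketch may make the duplication scenario concrete. In it, two business units have independently implemented the same sales-tax rule. One team later fixes a rounding defect in its copy; the duplicate retains the defect, and nothing in either file signals that another copy exists. (The file and function names are invented for illustration and don’t refer to any real system.)

    # billing/orders.py (Unit A's copy; rounding defect fixed)
    import math

    def sales_tax(amount_cents: int, rate: float) -> int:
        # Round half-up, as the tax rule requires
        return math.floor(amount_cents * rate + 0.5)

    # storefront/checkout.py (Unit B's independent copy; defect persists)
    def compute_tax(amount_cents: int, rate: float) -> int:
        # Python's round() uses banker's rounding, so some amounts come out one cent low
        return round(amount_cents * rate)

Retiring this form of debt means finding every copy, reconciling the differences, and pointing all users at a single shared implementation.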

Stovepiping in technological systems

In engineering more generally, stovepiping can occur within a single system, even though only one business unit is involved and even though the stovepiped artifacts serve purposes invisible to the world outside the system. It can arise whenever communication is weak between the teams designing or maintaining the portions of the system that host the similar artifacts. For readers familiar with the Apollo XIII incident, the incompatibility of the carbon dioxide scrubbers in the command module and the lunar excursion module serves as an example of the risks of technical stovepiping.
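In software, the analogous risk is two internally developed components that represent the same quantity in incompatible ways because their teams never compared notes. A hypothetical sketch (all names invented):

    # telemetry/cabin.py (Team 1 reports CO2 concentration as a fraction, 0.0 to 1.0)
    def co2_reading() -> dict:
        return {"co2": 0.0004}

    # alarms/air_quality.py (Team 2 assumes parts per million)
    CO2_ALARM_PPM = 1000

    def check_air_quality(reading: dict) -> bool:
        # Compares a fraction against a ppm threshold, so the alarm never trips
        return reading["co2"] > CO2_ALARM_PPM

Each component behaves correctly by its own team’s definition; the incompatibility appears only when the two are connected.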

When distinct business units or functions operate their own engineering or IT organizations, the probability of duplicating technological assets is clearly elevated. But duplication can also occur when those units depend on a shared engineering function, if the organizational structure or the timing of the work leads to separate engineering efforts. In that case, the people and teams performing the work differ from one effort to the next, and communication between them can be weak, whether or not the efforts are conducted contemporaneously.

Last words

Because identifying these forms of technical debt after they appear is notoriously difficult, preventing their formation is preferable. Prevention is possible if the enterprise establishes mechanisms that facilitate consultation and sharing among elements of different, separately operated technology development or maintenance functions. In other words, the organization must “break” the stovepipes—no mean feat, politically speaking.

Another challenge, of course, is providing resources for such sharing mechanisms, because preventing technical debt is rarely recognized as a value generator. If it were so recognized, the resources would likely appear. Changes in cost accounting might make such recognition more likely.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications.” Electronic Commerce Research 14:4 (2014): 497-529.

Available: here; Retrieved: August 23, 2019

Cited in:

[Fowler 1999] Martin Fowler, Kent Beck (Contributor), John Brant (Contributor), William Opdyke, Don Roberts, Erich Gamma (Foreword). Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional; first edition (July 8, 1999).

Order from Amazon

Cited in:

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, 1 October 2003.

Available: here; Retrieved: January 2, 2016.

Cited in:

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant.” Martin Fowler (blog), October 14, 2009.

Available: here; Retrieved: January 10, 2016.

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Gaskin 1991] Steven P. Gaskin, Abbie Griffin, John R. Hauser, Gerald M. Katz, and Robert L. Klein. “Voice of the Customer,” Marketing Science 12:1, 1-27, 1991.

Cited in:

[Gould 1996] Stephen Jay Gould. The mismeasure of man (Revised & Expanded edition). W. W. Norton & Company, 1996.

Order from Amazon

Cited in:

[Guo 2011] Yuepu Guo, Carolyn Seaman, Rebeka Gomes, Antonio Cavalcanti, Graziela Tonin, Fabio Q. B. Da Silva, André L. M. Santos, and Clauirton Siebra. “Tracking Technical Debt: An Exploratory Case Study,” 27th IEEE International Conference on Software Maintenance (ICSM), 2011, 528-531.

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[Levy 2009] David A. Levy, Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

Order from Amazon

Cited in:

[McGovern 2003] James McGovern, Scott W. Ambler, Michael E. Stevens, James Linn, Vikas Sharan, and Elias K. Jo. A Practical Guide to Enterprise Architecture, Upper Saddle River, New Jersey: Prentice Hall PTR, 2003.

Order from Amazon

Cited in:

[Meadows 1997] Donella H. Meadows. “Places to Intervene in a System,” Whole Earth, Winter 1997.

Available: here; Retrieved: June 28, 2018

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Meadows 2008] Donella H. Meadows and Diana Wright. Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing, 2008.

Order from Amazon

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016.

Available: here; Retrieved: August 22, 2019

Cited in:

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018.

Available: here; Retrieved: August 22, 2019

Cited in:

[Wake 2002] Bill Wake. “Coaching Drills and Exercises,” XP123 Blog, June 15, 2002.

Available: here

Cited in:

[Waters 2010] Donald Waters. Global Logistics: New Directions In Supply Chain Management, 6th Edition, London: Kogan Page Limited, 2010.

Order from Amazon

Cited in:

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Order from Amazon

Cited in:

[Wight 2017] Philip Wight. “How the Alaska Pipeline Is Fueling the Push to Drill in the Arctic Refuge,” YaleE360, Yale School of Forestry & Environmental Studies, November 16, 2017.

Available: here; Retrieved: February 8, 2018

Cited in:

[Yen 2015] Terry Yen, Laura Singer. “Oil exploration in the U.S. Arctic continues despite current price environment,” Today in Energy blog, U.S. Energy Information Administration, June 12, 2015.

Available: here; Retrieved: February 8, 2018.

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:


The Dunning-Kruger effect can lead to technical debt

Last updated on July 7th, 2021 at 07:54 pm

Cropped detail from Charles Robert Darwin, a painting by John Collier
Cropped detail from Charles Robert Darwin, a painting by John Collier (1850-1934). The painting was given to the National Portrait Gallery, London, in 1896. Darwin writes, in The Descent of Man (1871): “… ignorance more frequently begets confidence than does knowledge …” which is the essence of the Dunning-Kruger effect. Image courtesy WikiQuote.

The Dunning-Kruger effect [Kruger 1999] can lead to formation or persistence of technical debt in two ways. First, it can cause technologists or their managers to overestimate their ability to maintain the resource focus needed for retiring technical debt in a timely fashion. Second, it can cause senior managers to be reluctant to accede to resource requests of technologists and their managers in support of technical debt management programs.

Kruger and Dunning conducted experiments that yielded results consistent with the following four principles (paraphrasing):

  1. Incompetent individuals, compared to their more competent peers, tend to dramatically overestimate their own ability and performance
  2. Incompetent individuals, compared to their more competent peers, tend to be less able to gain insight into their own true levels of performance
  3. Incompetent individuals can gain insight about their shortcomings, but, paradoxically, this comes about by gaining competence
  4. Incompetent individuals, compared to their more competent peers, are less able to recognize competence when they see it

The first three principles lead to distorted assessments of one’s own capabilities. The fourth principle leads to distorted assessments of the capabilities of others.

How the Dunning-Kruger Effect affects teams

As an example of distorted self-assessment, consider a team and its managers who must retire some types of technical debt in the course of enhancing or repairing an asset. Such a task plan seems at first to offer efficiencies: the engineers can make both kinds of changes at one go. Metaphorically, if we must go to the store for milk, we can pick up bread while we’re there, rather than making two trips.

However, modifying an existing complex technological asset is unlike shopping for bread and milk. The two kinds of modifications—debt retirement and asset enhancement or repair—might seem at first to be separable. Often they are. But if they aren’t separable, and we undertake the two tasks together, testing and debugging can become extremely complicated. The complications arise because of interactions between defects in the two kinds of modifications. Under some circumstances, an experienced team and its managers might be more likely to anticipate these difficulties. An inexperienced team and its managers might be more likely to underestimate the difficulties, as a consequence of the Dunning-Kruger effect. Budget and schedule overruns are possible consequences of underestimating the complexity of the problem.
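A brief, hypothetical illustration of why separating the two kinds of changes pays off (the function names are invented): when a behavior-preserving restructuring and a behavior-changing enhancement land in the same change, a failing test no longer points to a single cause. Landing them separately keeps the diagnosis simple.

    # Step 1: refactoring only. The structure changes; the existing tests run unchanged.
    def invoice_total(items):
        return sum(price * qty for price, qty in items)

    # Step 2: enhancement only, in a separate change, with its own new tests.
    def invoice_total_with_discount(items, discount=0.0):
        subtotal = sum(price * qty for price, qty in items)
        return subtotal * (1.0 - discount)

    assert invoice_total([(10.0, 2)]) == 20.0                      # old behavior preserved
    assert invoice_total_with_discount([(10.0, 2)], 0.5) == 10.0   # new behavior verified separately

If both changes were instead folded into one modification and a test then failed, the team would have to untangle whether the refactoring, the enhancement, or an interaction between them caused the failure.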

How the Dunning-Kruger Effect affects decision makers

As an example of the fourth principle above, the Dunning-Kruger effect can cause some decision makers to discount the warnings and resource requests of engineers and their managers. Decision makers who are unsophisticated in matters related to technical debt must nevertheless assess the validity of the resource requests. In making these assessments, these decision makers may be at a disadvantage for a number of reasons. Examples:

  • Decision makers might hold mistaken beliefs about technical debt. For example, many believe that the main causes of technical debt are poor decisions by engineering managers, or the slovenly work habits of engineers. Those who hold such beliefs might be reluctant to allocate yet more resources to engineers to address technical debt.
  • If the advocates for technical debt management resources aren’t fully informed about the strategic direction of the enterprise, their requests might be inconsistent with enterprise strategy. When decision makers notice that portions of a proposal don’t take enterprise strategy properly into account, a cognitive bias [Kahneman 2011] known as the halo effect [Thorndike 1920] can lead them to discount even the valid portions of the proposal.
  • Decision makers might be affected by unrealistic optimism [Weinstein 1996], also known as optimism bias. It’s a cognitive bias that can cause them to discount the sometimes-vivid warnings of technologists about the unfavorable consequences of failing to provide technical debt management resources.

Last words

Investigations of organizational behavior relative to technical debt and the Dunning-Kruger effect could be fruitful. For example, what is the degree of correlation between burdens of technical debt and the incidence of rejected or severely curtailed proposals for resources? Also rewarding would be a survey of the nearly 200 known cognitive biases, to determine which of them might be most likely to affect decision-making relative to technical debt.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Distante 2014] Damiano Distante, Alejandra Garrido, Julia Camelier-Carvajal, Roxana Giandini, and Gustavo Rossi. “Business processes refactoring to improve usability in E-commerce applications.” Electronic Commerce Research 14:4 (2014): 497-529.

Available: here; Retrieved: August 23, 2019

Cited in:

[Fowler 1999] Martin Fowler, Kent Beck (Contributor), John Brant (Contributor), William Opdyke, Don Roberts, Erich Gamma (Foreword). Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley Professional; first edition (July 8, 1999).

Order from Amazon

Cited in:

[Fowler 2003] Martin Fowler. “TechnicalDebt,” blog entry at MartinFowler.com, 1 October 2003.

Available: here; Retrieved: January 2, 2016.

Cited in:

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant.” Martin Fowler (blog), October 14, 2009.

Available: here; Retrieved: January 10, 2016.

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Gaskin 1991] Steven P. Gaskin, Abbie Griffin, John R. Hauser, Gerald M. Katz, and Robert L. Klein. “Voice of the Customer,” Marketing Science 12:1, 1-27, 1991.

Cited in:

[Gould 1996] Stephen Jay Gould. The mismeasure of man (Revised & Expanded edition). W. W. Norton & Company, 1996.

Order from Amazon

Cited in:

[Guo 2011] Yuepu Guo, Carolyn Seaman, Rebeka Gomes, Antonio Cavalcanti, Graziela Tonin, Fabio Q. B. Da Silva, André L. M. Santos, and Clauirton Siebra. “Tracking Technical Debt: An Exploratory Case Study,” 27th IEEE International Conference on Software Maintenance (ICSM), 2011, 528-531.

Cited in:

[Kahneman 2011] Daniel Kahneman. Thinking, Fast and Slow. New York: Macmillan, 2011.

Order from Amazon

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Kruger 1999] Justin Kruger and David Dunning. “Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology, 77:6, 1121-1134, 1999.

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[Levy 2009] David A. Levy, Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

Order from Amazon

Cited in:

[McGovern 2003] James McGovern, Scott W. Ambler, Michael E. Stevens, James Linn, Vikas Sharan, and Elias K. Jo. A Practical Guide to Enterprise Architecture, Upper Saddle River, New Jersey: Prentice Hall PTR, 2003.

Order from Amazon

Cited in:

[Meadows 1997] Donella H. Meadows. “Places to Intervene in a System,” Whole Earth, Winter 1997.

Available: here; Retrieved: June 28, 2018

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Meadows 2008] Donella H. Meadows and Diana Wright. Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing, 2008.

Order from Amazon

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Shroyer 2016] Alexander Shroyer. “Refactoring Hardware vs. Software,” Hoosier EE Blog, July 17, 2016.

Available: here; Retrieved: August 22, 2019

Cited in:

[Spence 2018] Ewan Spence. “New MacBook Pro Leak Reveals Apple's Innovative Failure,” Forbes, June 7, 2018.

Available: here; Retrieved: August 22, 2019

Cited in:

[Thorndike 1920] Edward L. Thorndike. “A constant error in psychological ratings,” Journal of Applied Psychology, 4:1, 25-29, 1920. doi:10.1037/h0071663

The first report of the halo effect. Thorndike found unexpected correlations between the ratings of various attributes of soldiers given by their commanding officers. Although the halo effect was thus defined only for rating personal attributes, it has since been observed in assessing the attributes of other entities, such as brands. Available: here; Retrieved: December 29, 2017

Cited in:

[Wake 2002] Bill Wake. “Coaching Drills and Exercises,” XP123 Blog, June 15, 2002.

Available: here

Cited in:

[Waters 2010] Donald Waters. Global Logistics: New Directions In Supply Chain Management, 6th Edition, London: Kogan Page Limited, 2010.

Order from Amazon

Cited in:

[Weinstein 1996] Neil D. Weinstein and William M. Klein. “Unrealistic Optimism: Present and Future,” Journal of Social and Clinical Psychology 15:1, 1-8, 1996. doi:10.1521/jscp.1996.15.1.1

Cited in:

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Order from Amazon

Cited in:

[Wight 2017] Philip Wight. “How the Alaska Pipeline Is Fueling the Push to Drill in the Arctic Refuge,” YaleE360, Yale School of Forestry & Environmental Studies, November 16, 2017.

Available: here; Retrieved: February 8, 2018

Cited in:

[Yen 2015] Terry Yen, Laura Singer. “Oil exploration in the U.S. Arctic continues despite current price environment,” Today in Energy blog, U.S. Energy Information Administration, June 12, 2015.

Available: here; Retrieved: February 8, 2018.

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:

