Degrees of wickedness

Last updated on July 12th, 2021 at 09:23 am

Window blinds with some slats open and some closed, a metaphor for the degrees of wickedness of a wicked problem. Think of the slats as the wicked problem criteria of Rittel and Webber. A closed slat is a criterion satisfied; a partially open slat is a criterion satisfied to some extent. A wicked problem has all slats closed; a tame problem has at least one slat at least partially open. When only a few slats are open, the problem isn’t a wicked problem, but finding a solution might be very difficult. The number of open slats corresponds to the degree of wickedness of most technical debt retirement project design problems. When even one slat is partially open, we can peek through the blinds, and use that information to pry open other slats.

In a recent post I explored conditions that tend to make designing a project to retire technical debt a wicked problem. And in another post I noted some conditions that tend to make designing a project to retire technical debt a super wicked problem. But not all technical debt retirement project design efforts are wicked problems. “Wickedness” can occur in degrees. Designing these projects can be a tame problem, especially if we incurred the technical debt recently. In this post I explore degrees of wickedness in retiring technical debt. I propose a framework for dealing with technical debt retirement project design problems that are less-than-totally wicked.

The degrees of wickedness of a problem

As a quick review, here are the attributes of wicked problems as Rittel and Webber see them [Rittel 1973], rephrased for brevity:

  1. There is no clear problem statement
  2. There’s no way to tell when you’ve “solved” it
  3. Solutions aren’t right/wrong, but good/bad
  4. There’s no ultimate test of a solution
  5. You can’t learn by trial-and-error
  6. There’s no way to describe the set of possible solutions
  7. Every problem is unique
  8. Every problem can be seen as a symptom of another problem
  9. How you explain the problem determines what solutions you investigate
  10. The planner (or designer) is accountable for the consequences of trying a solution

Rittel and Webber held that wicked problems possessed all of these characteristics, but Kreuter, et al., take a different view, which I find compelling [Kreuter 2004]. Their view is that wicked problems and tame problems lie at opposite ends of a spectrum. A problem that satisfies all ten of the criteria would lie at the wicked end of the spectrum; one that satisfies none would lie at the tame end.

The ten criteria aren’t black-and-white

A close examination of Rittel’s and Webber’s ten criteria reveals that they aren’t black-and-white. We can regard each one as occurring in various degrees. For example, consider Criterion 1: “There is no clear problem statement,” which Rittel and Webber express as, “There is no definitive formulation of a wicked problem.” Burge and co-author McCall, who was a student of Rittel, offer this interpretation [Burge 2015]:

Here by the term formulation Rittel means the set of all the information need [sic] to understand and to solve the problem. By definitive, he means exhaustive.

The original language of Rittel and Webber, with the interpretation of Burge and McCall, is black-and-white. But we can imagine problems that satisfy this criterion to varying degrees. That is, one problem formulation might have almost everything needed to understand and solve the problem, while another might have almost none of what’s needed. In some cases, the problem solver might make progress toward a solution by making reasonable assumptions to fill gaps. Or the formulation as given might be incomplete. If so, working on a solution despite the gaps might cause the missing information to reveal itself, or it might arrive as a result of other research.

A continuum hypothesis

For these reasons, I regard the degree to which a problem satisfies Criterion 1 as residing on a continuum. And I expect that we could find analogous arguments for all ten criteria. This “continuum hypothesis” doesn’t conflict with the definition of a wicked problem. Wickedness still requires that all ten criteria be satisfied absolutely. But how well the problem satisfies the criteria of Rittel and Webber determines its position on the Tame/Wicked spectrum. In other words, as we address the problem of designing a technical debt retirement project, we can consider the degree of wickedness of the problem, not merely whether a problem is wicked.

The degree of a problem’s wickedness provides useful guidance. If a problem clearly satisfies nine of the ten criteria, but not the tenth, then according to Rittel and Webber it would not be a wicked problem. But because solving it might still be extraordinarily difficult, we would treat it as wicked with respect to the nine criteria it does satisfy. We would use that information to guide our decisions about resource choice and resource allocation. The model of wicked problems provided by Rittel and Webber would be useful, even though the problem itself might not meet their definition.

And so emerges the concept of the dimensionality of wickedness.

The dimensionality of wickedness

We can regard the ten criteria of Rittel and Webber as dimensions in a ten-dimensional space. When we do, our “wickedness spectrum” becomes much richer. Maybe too rich, in the sense that its complexity presents difficulty when we try to think about it. But the concept of dimensionality of wickedness can be useful, if we consider each dimension as having a degree of wickedness. This enables us to choose problem-solving techniques that work well for wicked problems that owe their wickedness to specific dimensions. That is the approach of Kreuter, et al. [Kreuter 2004].
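One way to picture this is to score each criterion on a scale from 0 (fully tame) to 1 (fully wicked) and treat the scores as a vector. The sketch below is only an illustration of that idea; the criterion labels, the scoring scale, and the simple averaging are my own assumptions, not part of the formulations of Rittel and Webber or of Kreuter, et al.

    # Sketch: the "dimensionality of wickedness" as a vector of per-criterion
    # scores in [0, 1]. Labels and scoring scheme are illustrative assumptions.

    CRITERIA = [
        "no definitive formulation",
        "no stopping rule",
        "solutions good/bad, not true/false",
        "no ultimate test of a solution",
        "no trial-and-error",
        "solutions not enumerable",
        "essential uniqueness",
        "symptom of another problem",
        "explanation determines resolution",
        "planner has no right to be wrong",
    ]

    def wickedness_profile(scores):
        """Return (overall degree, strictly wicked?, dominant dimensions).

        scores: dict mapping criterion name -> degree in [0, 1].
        A problem is wicked in Rittel and Webber's strict sense only if
        every score is 1.0; anything less lies somewhere on the spectrum.
        """
        degrees = [scores.get(c, 0.0) for c in CRITERIA]
        overall = sum(degrees) / len(CRITERIA)      # position on the spectrum
        strictly_wicked = all(d >= 1.0 for d in degrees)
        dominant = sorted(scores, key=scores.get, reverse=True)[:3]
        return overall, strictly_wicked, dominant

    # Example: a retirement project that is hard mostly because of uniqueness
    # and entanglement with other problems.
    example = {c: 0.2 for c in CRITERIA}
    example["essential uniqueness"] = 0.9
    example["symptom of another problem"] = 0.8
    print(wickedness_profile(example))

The dominant dimensions are the interesting output: they suggest which problem-solving techniques to reach for, which is exactly the use Kreuter, et al. make of the spectrum.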

This suggests a framework for designing (or redesigning) technical debt retirement projects:

  • Deal separately (and first) with any parts of the technical debt retirement project design problem that are tame
  • Determine the importance of each of a set of “nine indicators of wickedness”
  • Use that information to determine which of the ten criteria of Rittel and Webber are most relevant to this particular technical debt retirement project design problem
  • Apply established approaches that account for the relevant criteria to formulate a project design

This program is too much for a single post. But I can make a start in my next post with descriptions of the indicators of wickedness. That post includes an examination of the implications of each of these indicators relative to the presence of each of the ten criteria of Rittel and Webber. The next step will be to suggest techniques for technical debt retirement project design problems that meet, to some degree, the criteria of Rittel and Webber.

Buckle up.

References

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326. Available: here; Retrieved: October 25, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Available: here; Retrieved: October 26, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Available: here; Retrieved: October 16, 2018.

Other posts in this thread

Retiring technical debt can be a super wicked problem

Last updated on July 10th, 2021 at 10:51 am

In my last post I provided a list of attributes of wicked problems [Rittel 1973]. I included the reasons why I feel that designing technical debt retirement projects can be wicked problems. As a review, here are the attributes of wicked problems as Rittel and Webber see them, rephrased for brevity:

  1. Wicked problems have no definitive formulation
  2. Wicked problems have no stopping rule
  3. Solutions to wicked problems aren’t true-or-false; they’re good-or-bad
  4. There is no immediate ultimate test of a solution to a wicked problem
  5. Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by
    trial-and-error, every attempt counts significantly
  6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions; nor is there a
    well-described set of permissible operations that we can incorporate into the plan
  7. Every wicked problem is essentially unique
  8. We can regard every wicked problem as a symptom of another problem
  9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation
    determines the nature of the problem’s resolution
  10. The planner (or designer) has no right to be wrong

Four properties of super wicked problems

We can regard a subset of wicked problems as super wicked [Levin 2012]. Levin, et al. list the following four properties of super wicked problems. With each one, I’ve added reasons why planning a technical debt retirement project can qualify as a super wicked problem.

Time is running out

Super wicked problems have inherent timescales. For example, many believe that climate change will become irreversible within 30 years if current practices continue.

Technical debt retirement can have an inherent time scale. For example, Microsoft ended mainstream support for Windows 7 in January 2015. At that time, computers running Windows 7 incurred a technical debt. Yet by September 2018, 46.7% of computers running Windows were still running Windows 7. That was a 0.6% increase from the previous month [Keizer 2018]. At this writing, extended support is scheduled to end in January 2020. That confers a timescale on this kind of technical debt.

When time runs out for solving wicked problems, the consequences can be severe. With respect to Windows 7, the consequences involve more than merely forcing conversion to Windows 10. Some applications running on those machines might be compatible with Windows 10, but some might require updates. And all users of converted machines must learn how to use Windows 10 and any updated or replaced applications. So there’s also a training issue, a learning curve, and a period of elevated user error rates.

Letting a time-boxed technical debt remain in place can be financially dangerous. As the retirement window closes, the cost of debt retirement concentrates on a declining number of fiscal quarters. If retirement costs are high enough, the impact of debt retirement on net income can be severe and negative. For enterprises whose securities are publicly traded, this effect can be costly for shareholders. At this writing there are only about five fiscal quarters remaining for Windows 10 conversions. For other technical debts, the number of fiscal quarters available for diluting the costs of retirement might be more—or less.
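The arithmetic of that concentration effect is simple. In the sketch below the dollar figure is invented; the point is only how a fixed retirement cost concentrates as the number of remaining fiscal quarters shrinks.

    # Sketch: per-quarter earnings impact of a fixed retirement cost as the
    # retirement window closes. The total cost is invented for illustration.

    def per_quarter_impact(total_retirement_cost, quarters_remaining):
        """Cost that must be absorbed in each remaining fiscal quarter."""
        return total_retirement_cost / quarters_remaining

    total_cost = 10_000_000  # assumed cost of converting the remaining machines
    for quarters in (12, 8, 5, 2, 1):
        print(f"{quarters:>2} quarters left: "
              f"${per_quarter_impact(total_cost, quarters):,.0f} per quarter")
    # The same total concentrates from about $833K per quarter (12 quarters)
    # to the full $10M in a single quarter.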

Those who cause the problem also seek to provide a solution
Two pilots line up their F/A-22 Raptor behind a tanker. When Hurricane Michael made landfall on October 10, 2018, it passed over Tyndall Air Force Base. Tyndall has responsibility for air dominance training for F-22s. As Hurricane Michael approached, 33 of the 55 F-22s at Tyndall were repositioned to Wright-Patterson Air Force Base in Ohio [Phillips 2018a]. The remaining aircraft at Tyndall were undergoing maintenance and weren’t operational [Gabriel 2018]. The storm damaged some of them, but they’re believed to be repairable.

Because of climate change [Cook 2016], increases in storm intensity and frequency are likely. Air bases in coastal regions are at risk [Phillips 2018b]. They now constitute a technical debt. Relocating them could be a wicked problem, and possibly a super wicked problem. But if federal policies continue to fail to address climate change, they could prevent relocation. If that happens, the policies themselves represent a technical debt. U.S. Air Force photo by TSgt Ben Bloker, courtesy Wikipedia

The phrase “seek to provide a solution” might be somewhat tactful. I expect that some super wicked problems have the property that those who cause the problem exert some degree of control over what kinds of solutions are acceptable, or even discussible. In many cases, this represents a conflict of interest that can prevent the organization from deploying the more effective options.

That conflict of interest is certainly present in the context of many technical debt retirement projects. Technical debt formation and persistence are due, in part, to a failure to commit resources to retiring it, or, at least, to inhibiting its formation. That failure is the responsibility of those in leadership roles in the enterprise. Typically, these are the same people who must decide to commit resources to retire technical debt in the future.

The central authority needed to address the problem is weak or nonexistent

Again, I find this description unnecessarily limiting. I would prefer a phrasing such as, “The central authority, for whatever reason, chooses not to exert, or is unable to exert, its authority in furtherance of a solution, or even an investigation.” In other words, the central authority need not be weak for it to be a source of difficulty in addressing the super wicked problem. It need only choose not to act. This can happen when those who cause the problem are the people who constitute the central authority, or when they capture the central authority, or when they capture the function to which the central authority has delegated responsibility for a solution.

With respect to technical debt retirement, consider this scenario. At AMUFC, A Made-Up Fictitious Corporation, the sales and marketing functions have repeatedly struggled with the engineering function for shares of budget resources. Engineering has argued repeatedly, and unsuccessfully, that it needs additional resources to address the technical debt that has accumulated in several products. But the CEO is a former VP Sales, and a close friend of the CFO. Together, they have always decided to defer technical debt retirement in favor of new products and enhancements favored by the VP Marketing, by customers, and by investors.

Scenarios like this are common. Enterprise leadership is strong, but not inclined to address the technical debt retirement issue.

Partly as a result, policy responses discount the future irrationally

Irrational discounting of future costs and benefits occurs when policies are deployed that give too much emphasis to producing short-term benefits and/or to avoiding short-term costs or inconveniences. Benefits are pulled in from the future towards the present; costs and inconveniences are pushed out toward the future and deferred. One form of this discounting scheme—one of many—is hyperbolic discounting.
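To see the shape of the curves, here’s a brief sketch comparing conventional exponential discounting with hyperbolic discounting. The parameter values are arbitrary; only the qualitative contrast matters.

    # Sketch: exponential vs. hyperbolic discounting of a future benefit.
    # Parameter values are arbitrary; the shapes of the curves are the point.

    def exponential_discount(value, delay_years, rate=0.05):
        """Standard 'rational' discounting: V / (1 + r)^t."""
        return value / (1 + rate) ** delay_years

    def hyperbolic_discount(value, delay_years, k=0.5):
        """Hyperbolic discounting: V / (1 + k*t). Near-term delays are punished
        far more heavily than equally long delays in the distant future."""
        return value / (1 + k * delay_years)

    benefit = 100.0  # value of retiring a class of technical debt, arbitrary units
    for years in (0, 1, 2, 5, 10):
        print(years,
              round(exponential_discount(benefit, years), 1),
              round(hyperbolic_discount(benefit, years), 1))
    # Hyperbolic discounting drops the benefit from 100 to 66.7 after just one
    # year, which is why "next year" always looks like a better time to retire
    # the debt than "this year."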

This tendency is one way of distracting attention from the actual problem. It is the principal tactic that enables the persistence of technical debt, and the means by which enterprises repeatedly defer attention to the problem of retiring technical debt.

Both the problem of managing technical debt and the problem of designing technical debt retirement projects exhibit all of these properties to some degree. It’s likely, in my view, that these problems are super wicked problems.

Intervention strategies for super wicked problems

Levin, et al., recommend four distinct strategies for resolving super wicked problems [Levin 2012]. They are all approaches to devising policies that are difficult to alter, thus committing the organization to a particular path forward.

Lock-in

Lock-in is usually regarded as dysfunctional adherence to a strategy or course of action despite the existence of superior alternatives [Brenner 2011]. It occurs when a policy confers some kind of immediate benefit on a subset of the population. If that benefit is significant, if the population subset would be harmed by alterations of the policy that remove the benefit, and if the subset has enough political power to defend the benefit, the policy will be “locked in” and thus difficult to change. Levin, et al., suggest that this phenomenon can serve a beneficial purpose by protecting a constructive policy, thus preventing its abandonment.

Most technical debt retirement efforts focus solely on retiring the debt. All (or most) of the benefit appears in the form of increased engineering productivity, decreased sources of frustration for engineers, or increased engineering agility. Benefits for non-engineering stakeholders tend to be indirect. To establish policies that exploit lock-in, we must craft them so that they provide ongoing, direct benefit to the most politically powerful stakeholders. For example, we could first address the forms of technical debt most likely to lead to product innovations that non-engineering stakeholders value highly. Success there could cause those stakeholders to favor further technical debt retirement efforts.

Positive feedback

Exploiting lock-in makes policies durable when people or organizations already supporting the policy derive some kind of increased benefit, leading others not yet supporting or covered by the policy to decide to support it. This mechanism is sometimes known as a “network effect.” When network effects are present, the value of a product or service increases as the size of the population using it increases [Shapiro 1998].

To exploit network effects when devising technical debt retirement efforts, focus on retiring the kinds of technical debts that confer benefits on stakeholders of platform assets. A platform asset is an asset that supports multiple other assets. Examples: an application development tool suite, a product line architecture, or an enterprise data network. Platform assets that support collaboration communities are more likely to generate network effects.
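Here’s a rough sketch of why platform assets matter for this strategy. The two value models below, independent users versus pairwise connections, are standard textbook illustrations of network effects, not measurements of any real asset.

    # Sketch: why platform assets are attractive targets for debt retirement.
    # The value models (linear vs. pairwise-connection growth) are standard
    # illustrations of network effects, not measurements.

    def standalone_value(users, value_per_user=1.0):
        """An asset whose benefit accrues to each user independently."""
        return value_per_user * users

    def platform_value(users, value_per_connection=0.1):
        """A platform supporting collaboration: value grows with the number of
        possible user pairs (a Metcalfe-style approximation)."""
        return value_per_connection * users * (users - 1) / 2

    for n in (10, 100, 1000):
        print(n, standalone_value(n), platform_value(n))
    # As the community grows, the platform's value pulls far ahead, which is
    # the mechanism a network-effect-based retirement policy tries to exploit.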

Increasing Returns

Policies and interventions that enable increasing returns to the population are more likely to be durable than those that offer steady returns. Because people adapt to steady levels of stimuli, policies that produce a change in the context only during the period immediately following initial adoption of those policies are less likely to maintain popular support than are policies that continue to provide increasing returns as long as they’re in place.

But among policies that provide increasing returns, Levin, et al., identify two types. Type I policies, which are less durable, confer their benefits on an existing population of supporters. They don’t cause others to become supporters. Type II policies also confer benefits on supporters, but they do cause others to become supporters. They are thus far more durable than are Type I, because they foster growth in the supporting population.

Framing technical debt retirement projects as individual projects with the objective of retiring a specified kind of technical debt is likely to lead to the enterprise population viewing the effort as a Type I policy at best. But framing each project as a phase of a longer-term effort could position the larger effort as a Type II policy, if the larger effort affects increasing portions of the enterprise population.

Self-reinforcing

Self-reinforcing policies create a dynamic that makes them more durable. Reinforcement can come about for two reasons. It can be a result of increases in the benefits the policy generates, or it can result from increases in the cost of rescinding the policy. In some cases, reinforcement can result from a combination of both effects. As with the strategy of Increasing Returns, there are two types of self-reinforcing policies. Type I self-reinforcing policies focus on maintaining support for the policy within the subset of the population consisting of its original supporters. In analogy with Type II Increasing Returns policies, Type II Self-reinforcing policies affect both the original supporting population and portions of the population not yet affected directly by the policy.

To exploit self-reinforcement, technical debt retirement programs must emphasize retiring debts that have curtailed organizational agility in recognizable ways, or which have prevented introduction of capabilities that the population values. Communicating these objectives is an important part of the program, because self-reinforcing popular support is possible only if the population understands the strategy and how it benefits the enterprise.

Last words

Because of the essential uniqueness of any wicked problem (Proposition 7 of Rittel and Webber), it is futile to attempt to apply as a template any retirement program that worked for some other organization, or for some other portion of a given organization at an earlier time with a different form of technical debt. But these four strategies, implemented carefully and communicated widely and effectively within the organization, can build organizational commitment to a long-term technical debt retirement program, even though retiring technical debt may be a super wicked problem.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Available: here; Retrieved: October 23, 2018.

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326. Available: here; Retrieved: October 25, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Available: here; Retrieved: October 23, 2018.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Available: here; Retrieved: October 18, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Available: here; Retrieved: October 26, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152. Available: here; Retrieved: October 17, 2018.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Available: here; Retrieved: October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Available: here; Retrieved: October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Other posts in this thread

Retiring technical debt can be a wicked problem

Last updated on July 16th, 2021 at 07:38 pm


Prototypes of President Trump’s “border wall.” Building the wall is an example of a wicked problem. Building prototypes in short segments of the wall is a tame problem. But these are just prototypes of short segments of the wall. They aren’t prototypes of the project.

Prototypes of the wall itself don’t demonstrate the process for taking private property, or how to build construction access roads, or the effects on wildlife, or how the government of Mexico will respond, or how to repair the wall when drug gangs destroy sections of it in isolated regions, or even the effectiveness of the wall. Prototyping works well for tame problems. It helps us project how the finished project will perform, and how difficult completing the project will be. But for wicked problems, prototyping is of limited value. As Rittel observes, prototyping can make the problem worse.

Photo by Mani Albrecht, U.S. Customs and Border Protection Office of Public Affairs, Visual Communications Division.

The theory of wicked problems originated with Horst Rittel in the mid-1960s. He was addressing “that class of problems which are ill-formulated, where the information is confusing, where there are many decision makers and clients with conflicting values, and where the ramifications in the whole system are confusing.” [Churchman 1967] The term wicked isn’t a moral judgment. It suggests the mischievous streak in these problems. Many of them have the property that proposed solutions can lead to conditions even more problematic than the original situation. Is it just me, or are you also thinking, “Ah, technical debt”? In this post, I suggest that retiring technical debt can be a wicked problem. I’ll show how wickedness explains many of the difficulties we associate with retiring forms of technical debt that involve many stakeholders, assets, revenue streams, policies, or strategies.

Introduction

Horst Rittel was a design theorist at the University of California at Berkeley. His interest in wicked problems came about because designers must deal with the interactions between architecture and politics. In today’s technology-dependent enterprises, analogous problems arise when we retire technical debt. When we do, we affect multiple sets of quasi-independent stakeholders.

Applicability to the technical debt problem

In the years since Rittel originated the wicked problem concept, others have extended it. These extensions have led some to regard the concept as inflated and less than useful. But extension of a less-than-useful concept rarely occurs, so I take the extensions as an indicator of the concept’s worth. The focus of this post, then, is applying Rittel’s version of wicked problems to the problem of designing a complex technical debt retirement project.

The wicked problem concept has propagated mostly in the realm of public policy and social planning. Certainly wicked problems abound there. Poverty, crime control, and climate change are examples. But I know of no attempt to explore the wickedness of retiring technical debt in large enterprises. Have a look below and see what you think.

Rittel defines a problem as the discrepancy between the current state of affairs and the “state as it ought to be.” For the purposes of technical debt retirement planning, the state as it ought to be might at times be a bit ambitious. So I take the objective of a technical debt retirement project to be an attempt to resolve the discrepancy between the current state of affairs and some other state that’s more desirable. For the present purpose, then, the problem is designing a technical debt retirement project that converts the current state of an asset to a more desirable state that might still contain technical debt in some form. But in that new state, the asset is in a better configuration.

A note on super-wicked problems

Actually, there is a subset of wicked problems—super-wicked problems—that I think might include some technical debt retirement problems. I address them in the post “Retiring technical debt can be a super wicked problem.”

For now, though, let’s examine the properties of wicked problems. Let’s see how well they match up with the problem of designing technical debt retirement projects.

Attributes of wicked problems

Rittel’s summary of the attributes of wicked problems [Rittel 1973] convinced me that major technical debt retirement projects present wicked problems. Here are those attributes. In what follows, I use Rittel’s term tame problem to refer to a problem that isn’t wicked. (See also [Kreuter 2004])

1. [No Definitive Formulation] Wicked problems have no definitive formulation

For any given tame problem, it’s possible to state it in such a way that it provides the problem-solver all information necessary to solve it. That’s what definitive formulation means. For wicked problems, on the other hand, our understanding of the problem depends on the solution we’re considering. Each candidate solution might potentially require its own understanding of the problem.

When designing a technical debt retirement project, we must fully grasp the impact of the effort on all activities in the enterprise. Each proposed project plan has its own schedule and risk profile. Each proposed project plan affects enterprise activities in its own way. In principle, each candidate approach to the effort affects a different portfolio of enterprise assets in its own unique order. Because examining all possible candidate project plans is impractical, choosing a project plan by seeking an optimal set of effects is also impractical. By the time you’re ready to execute a given project plan, the data supporting your decision might be obsolete.

2. [No Stopping Rule] Wicked problems have no stopping rule

For any given tame problem, solutions have “stopping rules”: signatures that indicate clearly that a candidate is indeed a solution. For example, in a chess problem to be solved in N moves, N and checkmate provide a stopping rule. We know how to count to N, and the position of checkmate is well defined.

Wicked problems have no stopping rule.

When planning a major technical debt retirement project, we must determine the attributes of the project. The attributes include a task breakdown, a sequence for performing the tasks, a resource array including both human and non-human resources, a risk plan including risk mitigations and risk responses, a revenue stream interruption schedule, and so on. For each such plan, we can estimate the direct and indirect costs to the enterprise. We can project the effects of the plan on market share for every affected product or service. Every plan has these attributes. When we compute them for a given candidate plan, the result doesn’t reveal that we’ve found “the solution.” We will have found only an estimate for that given solution. What we learn by doing this doesn’t reveal whether or not a “better” solution exists.

There is no indicator contained in any given candidate solution that tells us we can “stop” solving the problem. Most often, we just stop when we run out of time for finding solutions. In some cases, we stop when we find just one solution.

3. [Solutions Are Good/Bad] Solutions to wicked problems aren’t true-or-false, but good-or-bad

The criteria for finding solutions to tame problems are unambiguous. For example, if a candidate function satisfies a differential equation, it’s a solution to the equation. The volume of concrete required to pave a section of roadway is a single number, determined by computing the area of roadway and multiplying by the thickness of the roadbed, and subtracting the volume of any reinforcing steel.
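To underline how unambiguous that calculation is, here’s the roadway example worked out with invented dimensions.

    # Sketch: the roadway example as unambiguous arithmetic.
    # Dimensions are invented for illustration.

    def concrete_volume(length_m, width_m, thickness_m, rebar_volume_m3=0.0):
        """Volume of concrete = roadway area x thickness - reinforcing steel."""
        return length_m * width_m * thickness_m - rebar_volume_m3

    # A 100 m x 7 m section, 0.25 m thick, with 2 cubic meters of rebar:
    print(concrete_volume(100, 7, 0.25, 2))  # 173.0 cubic meters, a single number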

The solutions to wicked problems have no such clarity. When evaluating a candidate project plan for retiring a technical debt, we can estimate its cost, the time required, interruptions in revenue streams, and the timing of resource requirements. But determining how “good” that is might be difficult. Much depends on what other demands there might be for those resources or funds. Much also depends on the political power of the people making those demands. No single number measures that.

4. [No Test of Solutions] There is no immediate ultimate test of a solution to a wicked problem

To test a candidate solution to a tame problem, the problem-solving team determines whether the solution meets the requirements set in the tame problem statement. The consequences of implementing the solution are all evident to the problem-solving team. The team has everything it needs to judge the success of the solution.

Not so with wicked problems. Any candidate solution to a wicked problem generates waves of consequences. As these waves propagate, some of the problem’s stakeholders might find the solution unsatisfactory. They’ll report their objections, possibly through politically powerful people or organizations. Because the consequences can be so diverse, the team can’t anticipate all of them. In some cases, the team might have difficulty understanding how the troubles that plague some stakeholders were actually related to the implemented solution. Some undesirable consequences can be far more harmful than any intended benefits are helpful. In other cases, the undesirable consequences might remain undiscovered until long after the solution is in use and operational.

When designing a technical debt retirement project, it’s necessary to determine everything that must be changed, what resources must be assembled to do the work, and what processes might be interrupted, when and for how long. Only rarely, if ever, can we determine all of that with certainty in advance. For that reason, determining that the design of the project is “correct” isn’t possible, except perhaps in the probabilistic sense. We never really know in advance that we’ve found a solution. Most of the time, after execution begins, we must make adjustments along the way, in real time.

5. [No Trial-and-error] Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly

When solving tame problems, we can try candidate solutions without incurring significant penalties. That is, trying a solution might require some effort, and therefore incur a cost. But it doesn’t otherwise affect the ability to find other solutions. Wicked problems are different. Every attempt to “try” a solution leaves traces that can potentially make further solution attempts more difficult, costly, or risky than they would have been if we hadn’t tried that solution. These traces of past solution attempts might also impose constraints on future solutions. Those constraints can effectively transform the wicked problem into a different wicked problem. This property makes trial-and-error approaches undesirable and possibly infeasible. Indices of such undesirability are the half-lives of the traces of attempts to address the problem. A long half-life might mean that the problem solver has only one shot at addressing the problem.

When designing a technical debt retirement project, we sometimes try to “pilot” a potential approach to determine difficulty, costs, feasibility, political issues, or risk profiles. Even when we can revert the asset to its former state after a pilot is completed or suspended, the consequences for stakeholders and for stakeholder operations might not be reversible. When we next try another “pilot,” or perhaps a fully committed retirement project, these stakeholders might be significantly less willing to cooperate. Every attempted solution can thus leave political or financial traces like these, making future attempts riskier and more challenging.

6. [Solutions Are Not Describable] Wicked problems don’t have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that we can incorporate into the plan

In devising solutions to tame problems, one common approach entails first gathering the full set of possibilities. Next, we screen them according to a set of favorability criteria. Reducing the field of possibilities is a useful strategy for finding optimal or acceptable solutions to tame problems.

Wicked problems defy such strategies. Gathering the full set of possible solutions to a given wicked problem can be a wicked problem in itself. We cannot parameterize the set of possible solutions to a wicked problem. We cannot define a finite set of attributes that fully covers the solution space. For these reasons, we can never be certain that the set of candidate solutions is complete.

Candidate designs for technical debt retirement projects present this same quality. We have a dizzying array of choices. In what order should we retire different kinds of technical debts? In what order should we address the debts different assets bear? Can we “refinance” portions of the debt to intermediate forms [Zablah 2015]? What kinds of refactoring should we perform and when? Because options like these are neither denumerable nor parameterizable, we cannot know whether a given set of candidate project designs is complete.

7. [Essential Uniqueness] Every wicked problem is essentially unique

Among tame problems, we can define classes or categories of problems that share a solution method. That is, using the method associated with a given class, we can solve all problems in that class. For example, we can solve all second order linear differential equations with the same method.

Even though we can define classes of wicked problems whose members are in some sense similar, that similarity doesn’t enable us to find a unified solution strategy that works for every member of the class.

So it is with designing technical debt retirement projects. Certainly, the collection of all technical debt retirement projects is a class. But the problem of designing a given retirement project is essentially unique. What “works” for one project in one enterprise in one fiscal year probably won’t work for another project in another enterprise in another fiscal year. It might not even work for another project in that same enterprise in that same fiscal year. Elements of the solution for one project might be useful for another project. But even then, we might need to adapt them to the conditions of that next project.

This essential uniqueness property of technical debt retirement projects collides with a common pattern decision makers use when chartering major efforts. That pattern is reliance on consultants, employees, or contractors who “have demonstrated success and experience with this kind of work.” Because each technical debt retirement project is essentially unique, relying on a history of demonstrated success is a much less viable strategy than it would be with tame problems. Decision makers would do well to keep this in mind when they seek approaches, leaders, and staff for major technical debt retirement efforts: no major technical debt retirement project is like any other.

8. [Problems as Symptoms] We can regard every wicked problem as a symptom of another problem

With tame problems or wicked ones, we typically begin the search for solutions by inquiring as to the cause of the current condition. When we find the cause or causes, and remove them, we usually find a new problem underlying them. Thus, for wicked problems, what we regarded initially as the problem is thereby converted into a symptom of a newly recognized underlying problem. By repeating this process, we escalate the “level” of the problem we’re addressing. Higher-level problems do tend to be more difficult to resolve, but addressing symptoms, though easier, isn’t a path to ultimate resolution.

Rittel also observes that incremental approaches to resolving wicked problems can be self-defeating. The difficulty arises from the traces left behind by incrementalism, as described in the discussion of the unworkability of trial-and-error strategies. Rittel provides the example of the increase in difficulty of changing processes after we automate them.

To regard the wicked problem of designing a technical debt retirement project as a symptom of a higher-level wicked problem, we must be willing to regard as problems the very things that make the technical debt retirement project design effort a wicked problem. That is, the processes that lead to formation of technical debt, or that enhance its persistence, are themselves wicked problems. For example, one might inquire about how to change the enterprise culture so as to reduce the incidence of technical debt contagion. To undertake major technical debt retirement efforts without first determining what can be done to limit technical debt formation or persistence due to contagion or due to other processes, might be unwise.

9. [No Controlled Experiments] The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution

When addressing tame problems, problem-solving teams can often perform controlled experiments. The general framework of these experiments is as follows. The team forms a hypothesis H as to the cause of the problem, conjecturing a solution. Then assuming H is correct, and given a set of conditions C, they deduce the consequences E that must follow. If any elements of E don’t occur, then H is incorrect. The process repeats until an H’ is found that provides all elements of E. H’ then provides the basis of a solution. Essentially, this is the scientific method.
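Written as a loop, the scheme looks like the sketch below. The hypothesis, condition, and observation objects are placeholders, and the comments flag the steps that, as discussed next, break down for wicked problems.

    # Sketch: the controlled-experiment scheme described above, as a loop.
    # The callables are placeholders; only the structure is the point.

    def run_controlled_experiments(hypotheses, set_conditions, observe, predict):
        """Return the first hypothesis whose predicted consequences all occur.

        hypotheses:     candidate explanations H, H', H'', ...
        set_conditions: callable that establishes the controlled conditions C
        predict:        callable mapping (H, C) -> the set of consequences E
        observe:        callable returning the set of consequences actually seen
        """
        for h in hypotheses:
            conditions = set_conditions()      # wicked problems: C can't be controlled
            expected = predict(h, conditions)
            observed = observe(conditions)     # each trial leaves traces (Proposition 5)
            if expected.issubset(observed):    # judging the presence of E is subjective
                return h
        return None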

With wicked problems, the method fails in numerous ways. Foremost among these failure modes is the inability to control C. That is, interventions that might be required to set C to a desired C0 tend to be impossible. Moreover, even if we can establish C0, the experiments that determine whether E is observed tend to leave the traces discussed in Proposition 5 [No Trial-and-error]. Finally, determining the presence or absence of the elements of E is usually subjective.

When planning enterprise-scale technical debt retirement projects, as with many projects of similar scale, we believe that we can benefit from running a pilot of our proposed plan, to determine its fitness. These trials are sometimes called “proof of concept” exercises. However, because we cannot control the conditions in which we execute the pilot, we cannot be confident that our interpretation of the results of the pilot will apply to the actual project. Moreover, a small-scale pilot cannot generate some of the effects we most want to observe because they occur only at full scale. These effects include staff shortages, resource contention, and revenue interruption incidents.

10. [100% Accountability] The planner (or designer) has no right to be wrong

In solving tame problems, solvers can experiment with proposed solutions. They make conjectures about what might work, and gather the results of trials to determine how to improve their conjectured solutions. There is no social or legal penalty for failed conjectures.

In solving wicked problems, there are no true experiments. Any trial solution is a real solution, with real effects on stakeholders and, later, real effects on the problem solvers themselves. Problem solvers are accountable for the undesirable consequences of each solution, whether it’s a trial or not.

In planning a technical debt retirement project, any attempt to gather data about how the approach would affect the enterprise could potentially have real, lasting, deleterious effects. The project bears the costs associated with these consequences, if not officially and financially, then politically. The politics of failure can lead to serious consequences for the problem solvers. Any approach that the team deploys, on any scale no matter how small, can potentially create financial problems for the enterprise, and political problems for anyone associated with the technical debt retirement project.

Last words

The fit between wicked problems and technical debt retirement project design looks pretty good to me. But the research on a subset of wicked problems—super wicked problems—is also intriguing. I’ll look at that in my next post. After that, we’ll be ready to examine which approaches to retiring technical debt take these matters into account.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011. Available: here; Retrieved: October 23, 2018.

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326. Available: here; Retrieved: October 25, 2018.

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142. Available: here; Retrieved: October 16, 2018.

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002. Available: here; Retrieved: October 23, 2018.

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018. Available: here; Retrieved: October 18, 2018.

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion,” Health Education and Behavior 31:4, 2004, 441-454. Available: here; Retrieved: October 26, 2018.

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152. Available: here; Retrieved: October 17, 2018.

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018. Available: here; Retrieved: October 23, 2018.

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018. Available: here; Retrieved: October 23, 2018.

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning,” Policy Sciences 4, 1973, 155-169. Available: here; Retrieved: October 16, 2018.

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt,” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD), IEEE, 2015. Available: here; Retrieved: February 13, 2016.

Other posts in this thread

Synergy between the reification error and confirmation bias

Last updated on July 10th, 2021 at 08:53 am

In deciding whether to undertake technical debt retirement projects, organizations risk making inappropriate decisions because of a synergy between the reification error and confirmation bias. Together, these two errors of thought create conditions that make committing appropriate levels of resources difficult. And when organizations do commit resources, they tend to underestimate costs. That underestimate can elevate the chance of failure in technical debt retirement projects.

The reification error and confirmation bias

As explained elsewhere in this blog, the reification error is an error of reasoning in which we treat an abstraction as if it were a real, concrete, physical thing. Because technical debt is an abstraction, we risk committing the reification error when we deal with it. (See “Metrics for technical debt management: the basics”)

Confirmation bias is a cognitive bias that causes us to favor and seek only information that confirms our preconceptions, or to avoid information that disconfirms them. (See “Confirmation bias and technical debt”)

How the reification error affects management

The reification error might be responsible, in part, for a widely used management practice that often appears in the exploratory stages of undertaking projects. Let’s start with an illustration from the physical world.

In the physical world, when we want cherries, we go to a market and check the price per pound or kilo. Then we decide how much we want. If the price is high, we might decide to buy fewer cherries. If the price is low, we might buy more cherries. We have in mind a total cost target, and we adjust the weight of the cherries to meet the target. In the physical world, we can often adjust what we purchase to match our willingness to pay.

Retiring technical debt doesn’t work like that, in part, because technical debt is an abstraction. But we try anyway; here’s how it goes. Management decides to retire a particular class of technical debt. They ask an engineer for an estimate of the cost. Sometimes Management reveals the target they have in mind if they have one; sometimes not. The estimate comes back as Total ± Uncertainty. Management decides that’s too high, or the Uncertainty is too great. They then ask the engineer to find a way to do it for less, or to reduce the Uncertainty.

Management—the “customer” in this scenario—makes this request, in part, based on the belief that adjusting the work is possible. Management hopes that the engineer can adjust the work to meet a (possibly unstated) target, in analogy to buying cherries. That thinking is an example of the reification error. In this dynamic, we rarely take into account the fact that retiring technical debt isn’t exactly like buying cherries.

How confirmation bias affects engineering estimates

Return now to the interaction between Management and the engineer/estimator. The engineer now suspects that Management does have a target in mind. Some engineers might ask what the target is. Some don’t. In any case, the engineer makes a lower estimate, which might still be too high. This process repeats until either Management decides against retiring the debt, or accepts the lowest Total ± Uncertainty.
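The negotiation loop just described can be sketched in a few lines. The dollar amounts and the per-round concession rate are invented; only the structure of the loop, in which the estimate shrinks while the underlying work does not, is the point.

    # Sketch: the estimate-negotiation loop described above. Numbers and the
    # 15%-per-round concession are invented; the loop structure is the point.

    def negotiate(initial_total, initial_uncertainty, management_target,
                  concession=0.15, max_rounds=5):
        """Repeat until Management's (possibly unstated) target is met,
        or the rounds run out and Management abandons or accepts the lowest offer."""
        total, uncertainty = initial_total, initial_uncertainty
        for _ in range(max_rounds):
            if total <= management_target:
                return ("approved", round(total), round(uncertainty))
            # Under pressure, the engineer trims the estimate. Nothing about the
            # underlying work has changed; confirmation bias supplies the rationale.
            total *= (1 - concession)
            uncertainty *= (1 - concession)
        return ("abandoned or accepted at lowest offer", round(total), round(uncertainty))

    print(negotiate(initial_total=1_200_000, initial_uncertainty=400_000,
                    management_target=900_000))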

In adjusting their estimates, engineers have a conflict of interest. That conflict of interest can compromise their objectivity through the action of confirmation bias. For technical debt retirement efforts, engineers are usually highly motivated to gain Management approval of the project. The motivation arises, in part, from the frustrating loss of engineering productivity. And since engineers typically sense that Management approval of the project is contingent on finding an estimate that’s low enough, the engineers have a preconception. That is, engineers have an incentive to convince themselves that Management’s adjustments to budget and schedule are reasonable. Because of the confirmation bias, engineers tend to seek justifications for the adjustments. And they tend to avoid seeking justifications for believing that their adjustments might not be feasible. That’s the confirmation bias in action.

How synergy between the reification error and confirmation bias comes about

Because of the reification error, Management tends to believe that retiring technical debt is a more adjustable activity than it actually is. Because of confirmation bias, engineers tend to believe that Management’s proposed cost and schedule are feasible. Too often, the synergy between the two errors of thinking provides a foundation for disaster.

Why this synergy creates conditions for disaster in technical debt retirement projects

Management usually equates estimates with commitments. Engineers don’t. Management usually forgets or ignores the upside Uncertainty. Typically, when Management accepts an estimate, the engineering team finds that it has made a commitment to deliver the work for the cost Total, with zero upside Uncertainty. Rarely does Management make this explicit. An analogous problem occurs with schedule.

By ignoring the Uncertainty, Management (the buyer) transfers the uncertainty risk to the project team. That strategy might work to some extent with conventional development or maintenance projects, where we can adjust scope and risk before the work begins. But for technical debt retirement projects, this practice creates problems for two reasons.

Adjusting the scope of debt retirement projects is difficult

First, with technical debt retirement we’re less able to adjust scope. To retire a class of technical debt, we must retire it in toto. If we retire only some portion of a class of technical debt, we would leave the asset in a mixed state that can actually increase MICs. So it’s usually best to retire the entirety of any class of technical debt, so as to leave the asset in a uniform state.

Debt retirement efforts are notoriously unpredictable

Second, the work involved in retiring a particular class of technical debt is more difficult to predict than is the work involved in more conventional projects. (See “Useful projections of MPrin might not be attainable”) Often, we must work with older assets, or older portions of younger assets. The people who built them aren’t always available, and documentation can be sparse or unreliable. Moreover, it’s notoriously difficult to predict with accuracy when or for how long affected assets will be out of production. Revenue stream interruptions, which can comprise a significant portion of total costs, can be difficult to schedule or predict. Thus, technical debt retirement projects tend to be riskier than other kinds of projects. They have wider uncertainty bands. Ignoring the Uncertainty, or trying to transfer responsibility for it to the project team, is foolhardy.
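One way to see what ignoring the upside Uncertainty costs is a small simulation. It assumes, purely for illustration, that actual cost is lognormally distributed with its median at the estimate, and that debt retirement projects have a wider spread than conventional projects.

    # Sketch: probability of a serious overrun when the point estimate is treated
    # as a commitment. The lognormal model and spreads are assumptions.

    import math
    import random

    def prob_exceeds(estimate, sigma, margin=0.25, trials=100_000):
        """Probability that actual cost exceeds the committed Total by more than
        `margin`, when cost ~ lognormal with median equal to the estimate."""
        threshold = estimate * (1 + margin)
        hits = sum(
            1 for _ in range(trials)
            if random.lognormvariate(math.log(estimate), sigma) > threshold
        )
        return hits / trials

    # Narrower spread (conventional project) vs. wider spread (debt retirement).
    print("conventional   :", prob_exceeds(1_000_000, sigma=0.2))  # roughly 0.13
    print("debt retirement:", prob_exceeds(1_000_000, sigma=0.6))  # roughly 0.36

With these assumed spreads, the chance of blowing past the commitment by 25% or more nearly triples, which is the risk Management silently transfers to the team by ignoring the Uncertainty.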

A strategy for reducing the effects of this synergy

To intervene in the dynamic between the consequences of the reification error and the consequences of confirmation bias, we must find a way to limit how their consequences can interact. That will curtail the ability of one phenomenon to reinforce the other. This task is well suited for application of Donella Meadows’ concept of leverage points [Meadows 1999]. See “Leverage points for technical debt management.”

In that post, I summarized Meadows’ concepts of using leverage points to alter the behavior of complex systems. One can intervene at one or more of 12 categories of leverage points. These are elements in the system that govern the behavior of the people and institutions that comprise the system. In that post, I sketched the use of Leverage Point #9, Delays, to alter the levels of technical debt in an enterprise.

In what follows I sketch the use of interventions at Leverage Point #8, “The strength of negative feedback loops, relative to the impacts they are trying to correct against.”

Our strategy is to intervene in a feedback loop that already provides budgetary control in most organizations. Let’s examine that loop first.

A feedback loop that now provides budgetary control in most organizations

One feedback loop at issue in this case, illustrated in the diagram below, influences managers who might otherwise overrun their budgets. It does so by triggering some sort of organizational intervention when a manager overruns his or her budget. And the feedback loop leads to increases in the size and stature of the portfolios of managers who handle their budgets responsibly. Presumably, that’s one reason why managers compel estimators to find approaches that cost less. The feedback loop to which managers are exposed causes them to establish another feedback loop involving the engineer/estimator, and later the engineering team. That second loop causes engineers to hold down their estimates, and later to limit actual expenditures.

A diagram of effects analysis

A feedback loop that now provides budgetary control in most organizations.

We can use a diagram of effects [Weinberg 1992] to illustrate the feedback mechanism commonly used to control the performance of managers who are responsible for portfolios of project budgets. In the diagram (above), the oval blobs represent quantities indicated by their respective captions. Each of these quantities is assumed to be measurable, though their precise values and the way we measure them are unimportant for our rather qualitative argument.

What the arrows mean

Notice that arrows connect the blobs. The arrows represent the effect of changes in the value represented by one blob on the value represented by another. The blob at the base of the arrow is the effector quantity. The blob at the point of the arrow is the affected quantity. Thus, the arrow running from the blob labeled “Actual Spend” to the blob labeled “Overspend” expresses the idea that a positive (or negative) change in the amount of actual spending on projects causes a positive (or negative) change in Overspend. When a change in the effector quantity causes a like-signed change in the affected quantity, we say that their relationship is covariant.

Because increases in Budget Authority tend to decrease Overspend, all other things being equal, the relationship between Budget Authority and Overspend is contravariant. We represent a contravariant relationship between the effector quantity and the affected quantity as an arrow with a filled circle on it.

Finally, notice that the arrow from Overspend (effector) to Promotion Probability (affected) has a filled Delta on it. This represents the idea that as Overspend increases, it negatively affects the probability that the manager will be promoted at some point in the future. The Delta indicates a delayed effect; that the Delta is filled indicates a contravariant relationship. (An unfilled Delta would indicate a delayed covariant effect.)

Loops in the diagram of effects

This diagram, which contains a loop connecting Budget Authority, Overspend, and Promotion Probability, has the potential to “run away.” That is, as we go around the loop, we find self-reinforcement, because the loop has an even number of contravariant relationships. It works as follows:

As Overspend increases, after a delay, the Probability of Promotion decreases. This causes reductions in Budget Authority because, presumably, the organization has reduced faith in the manager’s performance. Reductions in Budget Authority make Overspend more likely, and round and round we go.

Similarly:

As Overspend decreases, after a delay, the Probability of Promotion increases. This causes increases in Budget Authority because, presumably, the organization has increased faith in the manager’s performance. Increases in Budget Authority make Overspend less likely, and round and round we go.

Fortunately, other effects usually intervene when these self-reinforcing phenomena grow too large, but that’s beyond the scope of this argument. For now, all we need observe is that managers who manage their budgets effectively tend to rise in the organization; those who don’t, don’t.
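To make the loop’s behavior concrete, here’s a minimal simulation sketch in Python. The quantities, coefficients, and update rules are all hypothetical assumptions, chosen only to show how an even number of contravariant links, combined with a delay, can produce self-reinforcing drift.

```python
# Minimal, hypothetical sketch of the budgetary feedback loop described above.
# All quantities, coefficients, and update rules are illustrative assumptions,
# not a model of any real organization.

def simulate(periods=12, delay=2, demand=105.0):
    budget_authority = 100.0
    promotion_probability = 0.5
    overspend_history = []                 # needed to represent the delayed effect

    for t in range(periods):
        actual_spend = demand                                   # fixed spending pressure
        overspend = max(0.0, actual_spend - budget_authority)   # contravariant in Budget Authority
        overspend_history.append(overspend)

        # Delayed, contravariant effect of Overspend on Promotion Probability
        if t >= delay:
            promotion_probability -= 0.01 * overspend_history[t - delay]
            promotion_probability = min(max(promotion_probability, 0.0), 1.0)

        # Covariant effect of Promotion Probability on next period's Budget Authority
        budget_authority = 80.0 + 40.0 * promotion_probability

        print(f"t={t:2d}  budget={budget_authority:6.1f}  "
              f"overspend={overspend:5.1f}  p_promotion={promotion_probability:4.2f}")

simulate()
```

With spending pressure slightly above the initial budget authority, the printed trace shows overspend and eroding budget authority reinforcing each other, which is the runaway behavior described above.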

The result is that managers limit spending to avoid overspending their budget authority. And that’s one reason why they push engineers to produce lower estimates for technical debt retirement projects.

How this feedback loop overlooks important drivers of technical debt formation

To break the connection between the managers’ reification error and the engineers’ confirmation bias, our intervention must cause the managers and the engineers to make calculations differently. We can accomplish this by requiring that they consider more than the mere cost of retiring the class of technical debt under consideration. They must estimate the consequences of not retiring that technical debt, and they must also estimate costs beyond the cost of retiring the debt. In what follows, I’ll use the shorthand TDBCR to mean the class of Technical Debt Being Considered for Retirement.

Today, estimates for technical debt retirement projects typically cover only the cost of performing the work required to retire the TDBCR. Management then decides whether, when, and to what extent to commit resources to execute the project. The primary considerations are budgetary.

But the debt retirement project can provide benefits beyond the manager’s own portfolio, so declining to undertake it can have negative consequences elsewhere in the enterprise. Managers who decline to undertake debt retirement projects are responsible for those consequences. Yet accountability for these decisions is rare. That’s the heart of the problem. So let’s look at some examples of relevant considerations.

Adjustments that would help these feedback loops gain control of technical debt

In allocating resources for a technical debt retirement project, there are considerations beyond the cost of retiring the debt. A responsible decision is possible only if other kinds of estimates are also available. Here are some examples of the estimates we need (a sketch of a proposal record that captures them follows below):

  • The effects of retiring TDBCR on the cost of executing any other development or maintenance efforts
  • The effects of retiring TDBCR on revenue and market share for all existing assets that directly produce revenue and which could be affected by retiring TDBCR
  • The revenue that would become available (and timing thereof) from any new products or services that become possible because of retiring TDBCR
  • The effects of retiring TDBCR on the cost of executing other technical debt retirement efforts

These items might not relate to anything for which the decision maker is responsible. Yet the feedback loop we now use to influence the decision maker excludes these considerations, even though the decision maker’s choices affect them. Until we install feedback loops that cause the decision maker to weigh these indirect consequences, or until we make such decisions at levels of the organization that encompass them, the effects of those decisions remain uncontrolled and might not be optimal for the enterprise.
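As a sketch of what these feedback adjustments might require in practice, the fragment below defines a hypothetical proposal record that forces estimators to supply the additional figures listed above alongside the bare retirement cost. The field names and the simple net-benefit roll-up are assumptions made for illustration, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DebtRetirementProposal:
    """Hypothetical record for a proposal to retire one class of technical debt (TDBCR)."""
    tdbcr_name: str
    retirement_cost: float                       # the only figure most estimates contain today
    savings_other_efforts: float = 0.0           # effect on other development or maintenance efforts
    revenue_effect_existing_assets: float = 0.0  # effect on revenue/market share of existing assets
    revenue_from_new_offerings: float = 0.0      # revenue enabled by retiring TDBCR
    savings_other_debt_retirements: float = 0.0  # effect on other debt retirement efforts
    notes: List[str] = field(default_factory=list)

    def estimated_net_benefit(self) -> float:
        """Crude roll-up: all estimated benefits minus the bare retirement cost."""
        return (self.savings_other_efforts
                + self.revenue_effect_existing_assets
                + self.revenue_from_new_offerings
                + self.savings_other_debt_retirements
                - self.retirement_cost)

proposal = DebtRetirementProposal(
    tdbcr_name="Legacy report generator",   # hypothetical TDBCR
    retirement_cost=250_000,
    savings_other_efforts=120_000,
    revenue_from_new_offerings=300_000,
)
print(proposal.estimated_net_benefit())      # 170000.0
```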

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326.

Available: here; Retrieved: October 25, 2018

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Science 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

Order from Amazon

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict. Order from Amazon

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:

Other posts in this thread

Three cognitive biases

Last updated on July 10th, 2021 at 08:49 am

Technical debt arises in enterprise assets through the effects of two classes of drivers: obsolescence and decision-making. When technologies advance, new technologies arise, or laws or regulations evolve, existing assets or assets under development can be left behind. That’s how obsolescence produces technical debt. Debt driven mainly by decision-making is more difficult to describe. But anything that biases decisions away from strictly rational results presents risk. Three cognitive biases likely have strong effects on technical debt formation and persistence.

Cognitive biases affect decision-making

Photo of Daniel Ellsberg, speaking at a press conference in New York City in 1972
Photo of Daniel Ellsberg, speaking at a press conference in New York City in 1972. Best known for his role in releasing the Pentagon Papers, Dr. Ellsberg made important contributions to decision theory while at the RAND Corporation. Photo by Bernard Gotfryd, courtesy U.S. Library of Congress.
Decision-making produces technical debt as the people of the enterprise make choices in design, development, and resource acquisition or allocation. Typically, both obsolescence and decision-making contribute to producing technical debt, though either obsolescence or decision-making might be more important than the other in any given instance.

Managing debt driven principally by obsolescence isn’t difficult, but I’ll leave that topic for another time. For now, let’s focus on decision-making. Already widely accepted is the contribution of engineering decisions to technical debt formation. Indeed, many believe (in my view, incorrectly) that all or most technical debt arises from faulty decisions by engineers. Some engineering decisions are indeed faulty. But the current scale of technical debt is so large that faulty engineering is unlikely to account for it all. Investigating how resource allocation decisions might contribute to technical debt formation is certainly worthwhile.

In this post, I offer three examples illustrating how resource allocation decisions might contribute to technical debt formation and persistence. Each example shows how people can make faulty decisions while believing they’re proceeding objectively and rationally. In each case, the problem stems from a phenomenon called cognitive bias, though each example in this post illustrates the action of a different cognitive bias.

Loss aversion

Amos Tversky and Daniel Kahneman were the first to identify the cognitive bias known as loss aversion [Kahneman 1984]. It’s the tendency to favor options that avoid losses over options that lead to equivalent or even greater gains. A decision maker affected by loss aversion might conclude that not losing $5 is better than finding $5, or even $10. In this way, loss aversion skews decisions toward options that protect or enhance existing revenue streams, even when those options increase operating expenses, and even when the added expenses exceed the value of the revenue being protected.

Short term effects of loss aversion

Retiring technical debt usually entails deferring revenue in the short term, for two classes of reasons. First, we must turn the attention of some part of the engineering organization to debt retirement. Assuming that they would have been working on maintaining or enhancing existing products or services, this redirection can lead to reducing or deferring revenue. Second, during the debt retirement operation, some work might require short-term interruptions of revenue streams while the work is underway.

Thus, debt retirement efforts often do reduce revenue—or reduce revenue increases—in the short term. Some decision makers can perceive that effect as a loss.

Long term effects of loss aversion

The long-term effects of debt retirement can be gains, and those gains can be considerable. Typically, by retiring an asset’s technical debt, we reduce the difficulty (read: time required, effort, cost, and risk) of future maintenance and enhancement efforts involving the asset. We also reduce the probability of debt contagion.

Since these long-term effects of debt retirement are ongoing, their impact on the enterprise can be significant. But unless one is familiar with the consequences of carrying technical debt, recognizing the value of retiring it can be difficult. When loss aversion is in play, intuitive comparisons between (a) a short-term revenue loss or delay and (b) a long-term benefit of debt retirement favor development and maintenance over retiring technical debt.

Insulating decisions about debt retirement from the effects of loss aversion bias requires objective mathematical modeling of revenue losses and operating cost benefits for all options under consideration. Those models must also account for uncertainty, which makes them inherently ambiguous. And that leads us to consider our next cognitive bias, the ambiguity effect.
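Here’s a minimal sketch of such a model, assuming a simple loss-averse value function in which losses weigh roughly twice as heavily as equal gains. The coefficient and the cash-flow figures are illustrative assumptions, not measured values; the point is only to show how the bias can reverse a comparison that unbiased totals would decide the other way.

```python
# Hypothetical comparison of two options as seen by a loss-averse decision maker.
# The loss-aversion coefficient and all cash flows are illustrative assumptions.

LOSS_AVERSION = 2.0   # assumed: losses weigh about twice as heavily as equal gains

def perceived_value(cash_flows):
    """Sum cash flows, weighting losses more heavily than gains."""
    return sum(cf if cf >= 0 else LOSS_AVERSION * cf for cf in cash_flows)

# Option A: retire debt -- short-term revenue loss, then larger long-term gains
retire_debt = [-100_000, 70_000, 70_000, 70_000]
# Option B: feature work -- modest gains only, no short-term loss
new_feature = [40_000, 30_000, 20_000, 10_000]

print("Unbiased totals:   ", sum(retire_debt), sum(new_feature))   # 110000 vs 100000
print("Loss-averse totals:", perceived_value(retire_debt),
      perceived_value(new_feature))                                # 10000.0 vs 100000
```

The unbiased totals favor debt retirement; the loss-averse totals favor the feature work, which is the skew described above.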

The ambiguity effect

The ambiguity effect is a cognitive bias that causes us to prefer options for which the probability of a desirable outcome is relatively better known, over options for which the probability of a desirable outcome is less well known, even if the expected value of that more ambiguous outcome exceeds the expected value of the less ambiguous outcome. The effect was first described by Daniel Ellsberg [Ellsberg 1961].

Consider a choice between allocating resources to new development and allocating resources to technical debt retirement. In most enterprises, decision makers are familiar with new development projects. Likewise, project champions, project sponsors, and project managers are also familiar with new development projects. All parties are less familiar with debt retirement. It’s reasonable to suppose that when confronted with such a choice, decision makers are likely to see debt retirement as carrying with it a probability of positive outcome that is less well known than the probability of a positive outcome for the new development project.

Because of the ambiguity effect, resource allocation decisions are likely to be biased against technical debt retirement, and in favor of maintenance or new development.

But there’s more. Most projects, of any kind, encounter trouble from time to time. When that happens, the urge to reallocate organizational resources can be powerful. Troubled projects might receive more resources if they’re viewed as important to the organization. If so, those resources often come from other projects. The ambiguity effect biases these resource reallocation decisions in a way analogous to initial resource allocation decisions, as described above. In other words, because of the ambiguity effect, when projects encounter trouble, debt retirement projects are less likely to be able to retain previously allocated resources than are maintenance or new development projects.
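A toy expected-value calculation, sketched below, shows how this skew can operate. The payoffs, probabilities, and the idea of anchoring on the low end of an ambiguous range are assumptions made purely for illustration; real decision makers don’t apply an explicit penalty, but their choices often behave as if they did.

```python
# Toy illustration of the ambiguity effect. All figures are illustrative assumptions.

def expected_value(payoff, p_success):
    return payoff * p_success

# Familiar development project: success probability is well known.
dev_ev = expected_value(payoff=500_000, p_success=0.70)        # 350,000

# Debt retirement project: higher expected value, but success probability
# is known only as a vague range.
debt_ev_low  = expected_value(payoff=800_000, p_success=0.40)  # 320,000
debt_ev_high = expected_value(payoff=800_000, p_success=0.60)  # 480,000
debt_ev_mid  = (debt_ev_low + debt_ev_high) / 2                # 400,000 > 350,000

# A decision maker subject to the ambiguity effect behaves as if anchoring on
# the low end of the ambiguous range, which flips the comparison.
print(dev_ev, debt_ev_mid, debt_ev_low)   # 350000.0 400000.0 320000.0
```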

The availability heuristic

The availability heuristic is a method humans use to evaluate the validity or effectiveness of decisions, concepts, methods, or propositions [Tversky 1973]. According to the heuristic, if we recognize the item being evaluated as familiar, or related to something with which we are familiar, we’re more likely to regard it as valid or workable. And when making comparisons between two alternative decisions, concepts, methods, or propositions, we’re likely to assess more favorably the decision, concept, method, or proposition with which we’re more familiar, all other things being equal.

In organizations where decision makers have more experience evaluating maintenance or development project proposals than they have with technical debt retirement proposals, the availability heuristic acts to reduce the relative assessed favorability of technical debt retirement proposals. It does this in three ways.

Technical debt retirement projects are less familiar

First, in most organizations, technical debt retirement projects are less familiar to decision makers than are maintenance or development projects. On that ground alone the technical debt retirement project proposals are at a disadvantage.

The effects of retiring technical debt are less obvious

But the second effect of the availability heuristic is more important. To grasp the value of working on an asset, we must understand how it will affect the asset’s users. Likewise, to grasp the value of a technical debt retirement project, we must understand how technical debt hampers the enterprise. We must also understand how retiring the technical debt might confer advantages in terms of future engineering efforts. Usually, understanding the consequences of maintenance or development projects is more “available” to decision makers than is understanding the consequences of technical debt retirement projects. Even more dramatic is the difference between understanding the consequences of not funding a maintenance or development project and the consequences of not funding a technical debt retirement project.

Much of the benefit of technical debt retirement is indirect

That is, although there is some direct benefit in terms of the assets from which the debt has been retired, the most dramatic benefits are manifested in projects that follow the debt retirement project, and which depend on the assets that have been relieved of debt. Sometimes, those follow-on projects are known at the time decision makers are considering funding the debt retirement project. Sometimes those follow-on projects have yet to be specified or even recognized. In either case, they are less “available” to decision makers because those follow-on projects are indirect beneficiaries.

These three effects of the availability heuristic cause decisions about resource allocations to tend to favor maintenance or development projects over debt retirement projects.

Mitigating the risks of these three cognitive biases

Over time, as everyone becomes more familiar with technical debt retirement projects, these effects may wane somewhat. But waiting for that to happen isn’t exactly what one might call risk mitigation. For one thing, familiarity grows only if one is motivated and pays attention. As busy as decision makers in modern organizations are, depending on them to actively enhance their own familiarity with technical debt retirement projects is probably not the safest course.

An effective program of actively mitigating the risks of these three cognitive biases probably should focus on four areas.

Familiarity

Do what you can to increase decision maker familiarity with the concept of technical debt, and with the consequences of carrying existing technical debt. Conventional presentation-based training will help, but interactive, experiential training is far more effective. Participants must actually experience the consequences of technical debt in a well-designed and professionally facilitated simulation of a problem-solving task. A faithful simulation would include estimation, changing and ambiguous requirements, and team composition volatility.

Retrospectives

Retrospectives are also known as after-action reviews, post mortems, debriefings, or lessons-learned sessions. They’re meetings in which teams review work they’ve just completed and the processes they used to complete it [Kerth 2001]. Typically, only project team members attend. To maintain psychological safety and to encourage truth telling, enterprise decision makers and supervisors don’t attend, unless the organizational culture includes appropriate safeguards. In any case, a section of the retrospective dedicated to investigating the causes and consequences of technical debt can ensure capture of relevant knowledge and experience.

Mathematical modeling practice

Mathematical modeling is one path to creating a more objective foundation for decisions. It’s essential for improving estimation quality. Also helpful are high quality effort data and metrics data related to the formation and lifetime of technical debt. Reviews of estimates and projections during retrospectives can help improve their quality over time.

Metrics development

Determining the effects of risk mitigation failure provides important guidance for corrective action in risk mitigation. Developing metrics that reveal these failures is therefore essential to managing cognitive bias risk. I’ll be suggesting some valuable metrics in a future post.

Last words

These three cognitive biases are by no means the only cognitive biases that can affect the formation or persistence of technical debt. Of the more than 200 identified cognitive biases, those most likely to be relevant are those that affect decision-making. Watch this space for links to posts about additional cognitive biases and their effects on technical debt formation or persistence.

Other posts relating to cognitive biases

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326.

Available: here; Retrieved: October 25, 2018

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Ellsberg 1961] Daniel Ellsberg. "Risk, ambiguity, and the Savage axioms." The quarterly journal of economics, 643-669, 1961.

Available: here; Retrieved: August 17, 2018.

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

Available: here; Retrieved: August 8, 2017

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

Order from Amazon

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Science 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

Order from Amazon

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Tversky 1973] Amos Tversky and Daniel Kahneman. "Availability: A heuristic for judging frequency and probability." Cognitive Psychology 5:2, 207-232, 1973.

Available: here; Retrieved: August 9, 2018.

Cited in:

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict. Order from Amazon

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:

Other posts in this thread

Accounting for technical debt

Last updated on July 11th, 2021 at 04:57 pm

Accounting for technical debt isn’t the same as measuring it
Accounting for technical debt isn’t the same as measuring it. We usually regard our accounting system as a way of measuring and tracking enterprise financial attributes. We think of those financial attributes as representations of money. Technical debt is different. It isn’t real, and it isn’t a representation of money. It’s a representation of resources. Money is just one of those resources. Money is required to retire technical debt. We use money when we carry technical debt, and when we retire it. But we also use other kinds of resources when we do these things. Sometimes we forget this when we account for technical debt.

We need a high-caliber discussion of accounting for technical debt [Conroy 2012]. It’s a bit puzzling why there’s so little talk of it in the financial community. Perhaps one reason for this is the social gulf between the financial community and the technologist community. But another possibility is the set of pressures compelling technologists to leave technical debt in place and move on to other tasks.

Here’s an example. One common form of technical debt is the kind first described by Cunningham [Cunningham 1992]. Essentially, when we complete a project, we often find that we’ve advanced our understanding of what we actually needed to reach our goals. Because of our advanced understanding, we recognize that we should have taken a different approach. Fowler described this kind of technical debt as, “Now we know how we should have done it.” [Fowler 2009] At this point, typically, we disband the team and move on to other things, leaving the technical debt outstanding, and often, undocumented and soon to be forgotten.

Echo releases and management decision-making

A (potentially) lower-cost approach involves immediately retiring the debt and re-releasing the improved asset. I call this an “echo release.” An echo release is one in which the asset no longer carries the technical debt we just incurred and immediately retired. But echo releases usually offer no immediate, evident advantage to the people and assets that interact with the asset in question. That’s why decision makers have difficulty allocating resources to echo releases.

This problem arises, in part, from what I regard as a shortcoming in management accounting systems. Most enterprise management accounting systems effectively track the immediate costs associated with technical debt retirement projects. They do a much less effective job of representing the effects of failing to execute echo releases, or of failing to execute debt retirement projects in general. The probable cause of this deficiency is the distributed nature of the MICs (the metaphorical interest charges associated with carrying a particular technical debt). MICs appear in multiple forms: lower productivity, increased time-to-market, lost market share, elevated turnover of technologists, and more (see “MICs on technical debt can be difficult to measure”). Enterprise accounting systems don’t generally represent these phenomena very well.

The cost of not accounting for the cost of not retiring technical debt

Decision makers then adopt the same bias that afflicts the accounting system. In their deliberations regarding resource allocation, they emphasize only the cost of debt retirement. These discussions usually omit from consideration altogether any mention of the cost of not retiring the debt. That cost can be enormous, because it is a continuously recurring periodic charge with no end date. Those costs are the costs of not accounting for the cost of not retiring technical debt.

If we do make long-term or intermediate-term projections of MICs or costs related to echo releases, we do so to evaluate proposals for retiring the debt. Methods vary from proposal to proposal. Few organizations have standard methods for making these projections. And lacking a standard method, comparing the benefits of different debt retirement proposals is difficult. This ambiguity and variability further encourages decision makers to base decisions solely on current costs, omitting consideration of projected future benefits.

Dealing with accounting for technical debt

Relative to technical debt, the accounting practice perhaps most notable for its absence is accounting for outstanding technical debt as a liability. We do recognize outstanding financial debt, but few balance sheets mention outstanding technical debt. Ignoring the liabilities that outstanding technical debt represents creates an impression that the enterprise has capacity it doesn’t actually have. That’s why tracking technical debts as liabilities would alleviate many of the problems associated with high levels of technical debt.

But other shortcomings in accounting practices can create additional problems almost as severe.

Addressing the technical-debt-related shortcomings of accounting systems requires adopting enterprise-standard patterns for debt retirement proposals. Such standards would make possible meaningful comparisons between different technical debt retirement options and between technical debt retirement options and development or maintenance options. One area merits focused and immediate attention: estimating MPrin and estimating MICs.

Standards for estimating MPrin are essential for estimating the cost of retiring technical debt. Likewise, standards for estimating MICs, at least in the short term, are essential for estimating the cost of not retiring technical debt. Because both MPrin and MICs can include contributions from almost any enterprise component, merely determining where to look for contributions to MPrin or MICs can be a complex task. So developing a checklist of potential contributions can help proposal writers develop a more complete and consistent picture of the MICs or MPrin associated with a technical debt. Below are three suggestions of broad areas worthy of close examination.
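A minimal sketch of such a checklist appears below. The category names are hypothetical; the point is standardization, so that every proposal walks the same list and records an explicit figure (even zero) for each entry.

```python
# Hypothetical checklist for assembling a MICs (or MPrin) estimate.
# Category names and figures are illustrative assumptions.

MICS_CHECKLIST = [
    "revenue_stream_disruption",
    "extended_time_to_market",
    "data_flow_disruption",
    "engineering_productivity_loss",
    "other",
]

def total_mics(estimates: dict) -> float:
    """Sum per-category estimates, requiring an explicit entry for every category."""
    missing = [c for c in MICS_CHECKLIST if c not in estimates]
    if missing:
        raise ValueError(f"Estimate is missing checklist categories: {missing}")
    return float(sum(estimates[c] for c in MICS_CHECKLIST))

quarterly_mics = total_mics({
    "revenue_stream_disruption": 40_000,
    "extended_time_to_market": 150_000,
    "data_flow_disruption": 10_000,
    "engineering_productivity_loss": 60_000,
    "other": 0,
})
print(quarterly_mics)   # 260000.0
```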

Revenue stream disruption

Technical debt can disrupt revenue streams either in the course of retirement projects, or when defects in production systems need attention. When those systems are out of production for repairs or testing, revenue capture might undergo short disruptions. Technical debt can extend those disruptions or increase their frequency.

For example, a technical debt consisting of the absence of an automated test can lengthen a disruption while the system undergoes manual tests. Technical debt consisting of misalignment between the testing and production environments can allow defects to slip through. Undetected defects can create new disruptions. Even a short disruption of a high-volume revenue stream can be expensive.

In advance of any debt retirement effort, we can identify some associations between classes of technical debt and certain revenue streams. This knowledge is helpful in estimating the contributions to MICs or MPrin from revenue stream disruption.

Extended time-to-market

Although technologists are keenly aware of productivity effects of technical debt, these effects can be small compared to the costs of extended time-to-market. In the presence of outstanding technical debt, time-to-market expands not only as a result of productivity reduction, but also from resource shortages and resource contention. Extended time-to-market can lead to delays in realizing revenue potential. And it can cause persistent and irreparable reductions in market share. To facilitate comparisons between different technical debt retirement proposals, estimates of these effects are invaluable.

Data flow disruption

Not all data flow disruptions are created equal. Some data flow processes can detect their own disruptions and backfill as needed. For these flows, the main contribution to MICs or MPrin is delay. The most expensive delays are those in receiving or processing orders. Less significant, but still important, are delays in responding to anomalous conditions. Data flows that cannot detect disruptions are usually less critical, but they nevertheless have costs too. All of these consequences can be modeled and estimated. We can develop standard packages for doing so, and apply them repeatedly to MICs or MPrin estimates for different kinds of technical debt.
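As a sketch of the kind of standard package this might become, the fragment below estimates the cost of a single disruption from its duration, a per-hour delay cost, and any cleanup cost for flows that cannot detect their own disruptions. All parameters and rates are assumptions made for illustration.

```python
# Hypothetical model of the cost of one data flow disruption.
# All rates and figures are illustrative assumptions.

def disruption_cost(hours_delayed: float,
                    delay_cost_per_hour: float,
                    cleanup_cost: float = 0.0) -> float:
    """Delay cost, plus any cleanup cost for flows that can't detect their own disruptions."""
    return hours_delayed * delay_cost_per_hour + cleanup_cost

# Order-processing flow (self-detecting, backfills): 3 hours at an assumed $20,000/hour
print(disruption_cost(3, 20_000))                       # 60000.0
# Non-detecting flow: smaller delay cost, but an assumed cleanup cost once detected
print(disruption_cost(8, 2_000, cleanup_cost=15_000))   # 31000.0
```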

Last words

Estimates of MICs or MPrin are helpful in estimating the costs of retiring technical debt. They’re also helpful in estimating the costs of not retiring technical debt. In either case, they’re only estimates. They have error bars and confidence limits. The accounting systems we now use have no error bars. That, too, is a shortcoming that must be addressed.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326.

Available: here; Retrieved: October 25, 2018

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders' Interests?,” IEEE Software, 29, 2012, p. 88.

Available: here; Retrieved: August 15, 2018.

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System.” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

Cited in:

[Ellsberg 1961] Daniel Ellsberg. "Risk, ambiguity, and the Savage axioms." The quarterly journal of economics, 643-669, 1961.

Available: here; Retrieved: August 17, 2018.

Cited in:

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant.” Martin Fowler (blog), October 14, 2009.

Available here; Retrieved January 10, 2016.

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

Available: here; Retrieved: August 8, 2017

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

Order from Amazon

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Science 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

Order from Amazon

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Tversky 1973] Amos Tversky and Daniel Kahneman. "Availability: A heuristic for judging frequency and probability." Cognitive Psychology 5:2, 207-232, 1973.

Available: here; Retrieved: August 9, 2018.

Cited in:

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict. Order from Amazon

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:

Other posts in this thread

Metrics for technical debt management: the basics

Last updated on July 11th, 2021 at 02:19 pm

Measuring tools needed to follow a recipe in the kitchen
Measuring tools needed to follow a recipe in the kitchen. The word “measurement” evokes images that relate to our earliest understanding of the word as children. For most of us, that involves determining the attributes of a physical thing. In most cases, the physical thing of most concern is our own body—its height and weight. But we also measure everyday things, as when following recipes. So the strongest associations we have with the word “measurement” involve physical things. “Measuring” attributes of an abstract construct like technical debt can be perilous. We must make allowances for its lack of physicality. Those allowances must guide us as we develop metrics for technical debt management.

Whether it’s wise to use metrics for technical debt management is an open question. But whether it will become a widespread practice does seem to be a settled question. Using metrics for technical debt management now does appear to be inevitable. So let’s explore just what we mean by metrics, and what traps might lie ahead when we use metrics for technical debt management.

The reification error

Skepticism about the effectiveness of using metrics for technical debt management is reasonable. Technical debt isn’t a physically measurable thing. “Measuring” technical debt is therefore susceptible to what psychologists call the reification error [Levy 2009]. Philosophers call it the Fallacy of Misplaced Concreteness [Whitehead 1948].

The logical fallacy of reification occurs when we treat an abstract construct as if it were a concrete thing. Although reification can provide helpful mental shorthand, it can produce costly cognitive errors. For example, advising someone who’s depressed to get more self-esteem is unlikely to work, because self-esteem isn’t something one can order from Amazon, or anywhere else. (I checked; all I could find were books and ebooks.) One can enhance self-esteem through counseling, reflection, or many other means, but it isn’t a concrete object one can “get.” Self-esteem is an abstract construct.

Likewise, technical debt is an abstract construct. We can discuss “measuring” it, but attempts to specify measurement procedures will eventually confront the inherently abstract nature of technical debt. Those attempts lead to debates about both the definitions of technical debt and the measurement process.

Metrics inherently require some kind of collection of numeric data. That’s why skepticism about using metrics for technical debt management is a reasonable position. Reasonable or not, though, metrics will be used. We’d best be prepared to use them responsibly. That’s the focus of this post—and a few to come. If we use metrics for technical debt management, how can we do it responsibly? How can we manage the risks of reification?

This post is about metrics in general. In coming posts I’ll apply this line of thinking to specific examples of metrics for managing technical debt, and suggest approaches that could mitigate reification risk.

Foundations for behavior guidance decisions

The objective of metrics for technical debt management is to support behavior guidance decisions. Behavior guidance decisions are decisions that guide the behavior and choices of employees. In this case, the goal is controlling the volume of technical debt. Although many frameworks exist for supporting behavior guidance decisions, they generally consist of four elements:

Quantifiers

A quantifier is a specification for a measurement process designed to yield a numeric representation of some attribute of an asset or process. With respect to technical debt, we use quantifiers to prescribe how we produce data that represents the state of technical debt of an asset. We also use quantifiers to generate data that captures other related items, such as budgets, the cost or availability of human effort, revenue flows—almost anything that interacts with the assets whose debt burden we want to control.

An example of a quantifier is the process for estimating the MPrin of a particular kind of technical debt an asset carries. The MPrin quantifier definition includes an explicit procedure for measuring it. That is, it defines a procedure for estimating the size of the MPrin in advance of actually retiring that debt. After retirement, we know its value without estimating, because the MPrin is what we actually spent to complete the retirement.

Measures

A measure is the result of determining the value of a quantifier. For example, we might use a quantifier’s definition to determine how much human effort has been expended on an asset in the past fiscal quarter. Or we might use another quantifier’s definition to determine the current size of the MPrin the asset now carries.

Metrics

A metric is an arithmetic formula expressed in terms of constants and a set of measures. One of the simpler metrics consists of a single ratio of two measures. For example, the metric that captures the average cost of acquiring a new customer in the previous fiscal quarter is the ratio of two measures, namely, the investment made in acquiring new customers, and the number of new customers acquired.
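Here’s the customer-acquisition example as a minimal sketch in code; the two measures and their values are hypothetical.

```python
# Two measures (hypothetical values for the previous fiscal quarter) ...
new_customer_investment = 180_000.0   # measure: spend on acquiring new customers
new_customers_acquired = 900          # measure: count of new customers acquired

# ... combined by a formula into a metric
cost_per_new_customer = new_customer_investment / new_customers_acquired
print(cost_per_new_customer)          # 200.0
```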

Associated with some metrics is a defined set of actors (actual people) who have authority to take steps to affect the value of the metric in some desirable way. They might also have authority to direct others to take similar steps. Metrics that have defined sets of actors are usually Key Performance Indicators (see below). If more than one individual is a designated actor for a metric, there is a process for resolving differences among the designated actors about what action to take, if any. In some cases, this process is as simple as determining which designated actor has the highest organizational rank.

An example of a technical debt metric

An example of a technical debt metric is the ratio MPrin(i)/MPrin(r). MPrin(i) is the total of incremental technical debt incurred in the given time period. MPrin(r) is the total of MPrin retired in that same time period. In periods during which this ratio exceeds 1.0, the organization is accumulating incremental technical debt faster than it is retiring technical debt. Computing it as a ratio, as opposed to a difference, has the effect of expressing the increases (or decreases) in technical debt portfolio size in units of the size of the debt retired. This enables the organization to take on some incremental technical debt responsibly.

This metric also has the virtue of displaying meaningful trends in an easily recognized way. In this case, a steady upward trend means a steadily increasing debt burden, even if in some time periods the debt doesn’t increase much. In other words, the ratio removes some of the “choppiness” that might plague a metric expressed in terms of absolute values.
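A minimal sketch of computing this metric over several periods follows; the MPrin figures are hypothetical and expressed in arbitrary currency units.

```python
# Hypothetical per-period figures: (MPrin incurred, MPrin retired), in currency units
periods = [(120_000, 100_000), (150_000, 110_000), (160_000, 100_000), (90_000, 120_000)]

for i, (mprin_incurred, mprin_retired) in enumerate(periods, start=1):
    ratio = mprin_incurred / mprin_retired
    trend = "debt growing" if ratio > 1.0 else "debt shrinking"
    print(f"Period {i}: MPrin(i)/MPrin(r) = {ratio:.2f} ({trend})")
```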

Key Performance Indicators

A Key Performance Indicator (KPI) is a metric that provides meaningful insight for guiding business decisions. All KPIs are metrics; not all metrics are KPIs.

The value of a KPI depends on one or more metrics. It represents how successful the business is in reaching a given business objective. A metric, on the other hand, represents only the degree of success in reaching a targeted value for that metric. The relationship between the target value of a metric and any given business objective can be complicated. It can also potentially involve other metrics. For these reasons, metrics that aren’t KPIs are less useful for indicating the degree of success in reaching business objectives.

MPrin(i)/MPrin(r) is a metric that could also serve as a KPI, if the business objective is steady declines in overall technical debt.

Dimensions of measure vs. dimensions of metrics

Some metrics, such as MPrin(i)/MPrin(r), are dimensionless. Their values are pure numbers. Other metrics have dimensions: units of measure. For example, let MPrin(i) be the volume of incremental technical debt a deliverable carries, and let Tdelay be the number of days delivery was late. Consider the metric MPrin(i)·Tdelay. The dimension of this metric is Currency·Days. This metric is particularly interesting, because a common assertion about technical debt is that we incur it as a means of advancing project delivery.

The evidence for this assertion is mostly anecdotal. But actually determining the value of this metric over a number of projects might reveal useful information about the effectiveness of a common strategy: accepting incremental technical debt as a means of advancing project delivery. If the strategy works, we would expect to see small delivery delays associated with increased incremental technical debt. In other words, projects of similar scale should lie along roughly the same hyperbola in a plot in which one axis is incremental debt and the other is days of delayed delivery, because the product MPrin(i)·Tdelay would be roughly constant.
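Below is a small sketch of computing MPrin(i)·Tdelay for a handful of hypothetical projects. If incurring incremental debt really does buy schedule, projects of similar scale should show roughly similar values of this product.

```python
# Hypothetical projects: (name, incremental MPrin in currency units, delivery delay in days)
projects = [
    ("A", 200_000, 5),
    ("B", 100_000, 12),
    ("C", 300_000, 3),
    ("D", 150_000, 9),
]

for name, mprin_i, t_delay in projects:
    metric = mprin_i * t_delay            # dimension: Currency * Days
    print(f"Project {name}: MPrin(i)*Tdelay = {metric:,} currency-days")
```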

Units of measures are often different from the units of the metrics that those measures support. For example, measures of technical debt in code include test coverage, documentation asynchrony, documentation omissions, code duplication, code complexity, dependency cycles, rule violations, or interface violations. The units of these measures are all different.

Last words

To those who must make strategic technical debt management decisions by comparing the costs of retiring different kinds of debt, these detailed measures are awkward to use. MPrin is more directly related to the issues they must address. MPrin provides a unit of comparison among debt retirement options, and between retirement options and available resources. Beyond the level of the particular debt we are retiring, MPrin is the dimension of greatest utility. [Brown 2010]

In a future post I’ll describe the properties of metrics that are needed for technical debt management.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51.

Available: here; Retrieved: July 30, 2018

Cited in:

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326.

Available: here; Retrieved: October 25, 2018

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders' Interests?,” IEEE Software, 29, 2012, p. 88.

Available: here; Retrieved: August 15, 2018.

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System.” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

Cited in:

[Ellsberg 1961] Daniel Ellsberg. "Risk, ambiguity, and the Savage axioms." The quarterly journal of economics, 643-669, 1961.

Available: here; Retrieved: August 17, 2018.

Cited in:

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant.” Martin Fowler (blog), October 14, 2009.

Available here; Retrieved January 10, 2016.

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

Available: here; Retrieved: August 8, 2017

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

Order from Amazon

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Science 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[Levy 2009] David A. Levy, Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

Order from Amazon

Cited in:

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

Order from Amazon

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Tversky 1973] Amos Tversky and Daniel Kahneman. "Availability: A heuristic for judging frequency and probability." Cognitive Psychology 5:2, 207-232, 1973.

Available: here; Retrieved: August 9, 2018.

Cited in:

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict. Order from Amazon

Cited in:

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Order from Amazon

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:

Other posts in this thread

Legacy debt incurred intentionally

Last updated on July 11th, 2021 at 11:07 am

“The General,” a Civil-War-era locomotive, in Union Station, Chattanooga, Tennessee
“The General,” a Civil-War-era locomotive, in Union Station, Chattanooga, Tennessee. Built in 1855 in Paterson, New Jersey, for the Western & Atlantic Railroad, it is the engine stolen by Union spies in the Great Locomotive Chase. The theft was part of a plan to cripple the Confederate rail network during the American Civil War. The General is now at the Southern Museum of Civil War and Locomotive History in Kennesaw, Georgia. It originally conformed to the southern rail gauge of 5 feet (1,524 mm). But it was converted to the U.S. Standard Gauge of 4 feet 8 1⁄2 inches (1,435 mm) after 1886. Its original construction was therefore legacy debt. If it had been built after the war, it would have comprised legacy debt incurred intentionally. Photo “The General, Union Station, Chattanooga, Tenn.,” Detroit Publishing Co., publisher, ca. 1907. Courtesy U.S. Library of Congress.

Throughout this blog, I’ve been using the terms legacy technical debt and incremental technical debt. Legacy technical debt is debt that existed before we undertook the current project; incremental technical debt is debt we incurred in the course of executing the current project. But there is some incremental technical debt that’s actually legacy debt incurred intentionally.

Reviewing some terminology

As I’ve defined it, incremental technical debt is any debt we incur in the course of the current work. That definition works well for most incremental technical debt. For example, if we recognize at the end of a project that we should have done something a bit differently, then we’ve incurred incremental technical debt. This is one of the four forms of technical debt Fowler identifies in his 2x2 technical debt quadrant [Fowler 2009].

But we must be a bit more careful, because some incremental technical debt is actually legacy debt incurred intentionally.

Legacy technical debt is debt that we incurred earlier and have inherited as part of the asset. Sometimes we’re aware of legacy technical debt; sometimes we haven’t yet realized that it is indeed technical debt. In either case, the technical artifacts that comprise the legacy technical debt can impose constraints on new development. Unless we retire the legacy debt, whatever modifications we make to an asset must be compatible with the unmodified parts of the asset as they are.

Sometimes technical debt can be both legacy and incremental

Although the two kinds of technical debt—legacy and incremental—might seem at first to be mutually exclusive, there’s a subset of legacy technical debt that we incur in the course of executing the current project.

Here’s a physical example:

After the United States Civil War, the state of the U.S. rail system was a bit chaotic. Most of the rail lines in the northeast and western regions of the country used standard gauge rail beds: rails that are 1,435 mm (4 feet 8 1⁄2 inches) apart. Most of the South was using a broader gauge: 1,524 mm (5 feet). These conflicting gauges comprised a legacy technical debt. The combined rail system retired that debt over a two-day period beginning on Monday, May 31, 1886, when all the southern railroads coordinated to convert from a 5-foot gauge to 4 feet 9 inches [Southern Railfan 1966].

In the years immediately before the U.S. rail system retired its legacy debt, any expansion or repair of the southern rail network had to be compatible with the broader gauge. But the broader gauge was itself legacy technical debt. New expansion or repair work therefore comprised newly incurred technical debt that also added to the legacy technical debt. In some situations, then, newly incurred technical debt can be legacy technical debt.

Here’s a software example:

A software development team is executing a project to enhance the capabilities of the Marigold product. Marigold is one product in the Garden Flowers personal productivity suite. Unfortunately, the original architecture of Garden Flowers didn’t anticipate the course that the suite has since taken. That architecture now comprises legacy technical debt. However, changing the suite architecture isn’t in the charter of the Marigold team. So they’ll be creating new technical artifacts that are compatible with the current architecture. Someday, some other team will modify or replace what the Marigold team is building now. That will happen when the company revamps or replaces the Garden Flowers architecture. Thus, some of the new technical debt the Marigold team is now incurring will join the legacy technical debt associated with the Garden Flowers architecture.

Moreover, the Marigold team might incur other technical debt in the course of its activities. That might happen if, for example, it fails to complete its task. Or it could happen if the team completes its task in some suboptimal way. In that case it will be incurring incremental technical debt that it probably should retire soon. Thus, in the same project, it would be incurring both (a) purely incremental technical debt, and (b) incremental technical debt that’s also legacy technical debt.
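To make the Marigold example concrete, here’s a minimal sketch in Python. All of the class and function names below are hypothetical, invented purely for illustration; the point is only to show, side by side, debt that’s both incremental and legacy (conforming to the debt-ridden suite architecture) and debt that’s purely incremental (a project-specific shortcut).

```python
# Hypothetical sketch of the Marigold example. All names and interfaces are
# invented for illustration; they don't describe any real product.

class GardenFlowersRecordStore:
    """Legacy suite-wide storage interface. Its design is itself legacy
    technical debt, but every product in the suite depends on it."""
    def write_record(self, key: str, blob: bytes) -> None:
        ...  # legacy implementation elided

class MarigoldReminderService:
    """New Marigold capability built during the current project."""
    def __init__(self, store: GardenFlowersRecordStore) -> None:
        # Conforming to the legacy storage interface is incremental debt
        # that is also legacy debt: it can be reworked only when the
        # Garden Flowers architecture itself is revamped or replaced.
        self._store = store

    def save_reminder(self, reminder_id: str, text: str) -> None:
        # Hand-rolled serialization, because the legacy store accepts only blobs.
        self._store.write_record(f"marigold/reminder/{reminder_id}",
                                 text.encode("utf-8"))
        # TODO: no retries or error handling yet -- a shortcut taken to meet
        # the schedule. That part is purely incremental debt the team should
        # retire soon, independent of any architecture rework.
```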

Why legacy debt incurred intentionally matters

Any program of rational technical debt management entails measuring—or at least estimating—the volume of technical debt incurred in the course of executing each project. The goal is to limit the debt incurred, so as to get control of the total technical debt outstanding.

But with legacy technical debt, as in the example above, we can’t always control the debt we incur. In some projects, it’s necessary to incur additional legacy technical debt because the work we do must be compatible with existing assets. We want to limit incremental technical debt, but we can’t always avoid incurring incremental debt that’s also legacy debt.

This distinction is important for both policy formulation and management intervention. For instance, if a team incurs purely non-legacy incremental technical debt, we might want to address it immediately. Or we might commit to addressing it immediately after delivery. Alternatively, suppose we can obtain good data about a particular kind of legacy technical debt that’s growing because of the need to keep new development compatible with existing debt-ridden assets. Then we can use that data to elevate the priority of retiring that legacy debt before it grows even larger.

Last words

So when we ask projects to report their incremental technical debt, we want them to distinguish between legacy debt incurred intentionally and incremental debt incurred for reasons specific to the project. Having data about both kinds of incremental technical debt is necessary if we want to take appropriate management action to maintain control of the technical debt portfolio.
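As a rough illustration of what such reporting might look like, here’s a minimal sketch of a debt-item record that distinguishes the two kinds of incremental debt. The field names and categories are assumptions for illustration, not an established schema.

```python
# Minimal sketch of a debt-item record a project might use when reporting
# incremental technical debt. Field names and categories are assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class DebtKind(Enum):
    PURE_INCREMENTAL = "pure incremental"        # project-specific shortcuts
    LEGACY_INCREMENTAL = "legacy incremental"    # legacy debt incurred intentionally

@dataclass
class DebtItem:
    description: str
    kind: DebtKind
    related_legacy_item: Optional[str] = None    # which legacy debt forced this item
    estimated_retirement_cost: float = 0.0       # for example, person-days

def legacy_growth(items: List[DebtItem]) -> float:
    """Estimated cost this project added to the legacy debt portfolio.
    Useful for elevating the priority of retiring the underlying legacy debt."""
    return sum(item.estimated_retirement_cost
               for item in items
               if item.kind is DebtKind.LEGACY_INCREMENTAL)
```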

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51.

Available: here; Retrieved: July 30, 2018

Cited in:

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326.

Available: here; Retrieved: October 25, 2018

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders' Interests?,” IEEE Software, 29, 2012, p. 88.

Available: here; Retrieved: August 15, 2018.

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System.” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

Cited in:

[Ellsberg 1961] Daniel Ellsberg. "Risk, ambiguity, and the Savage axioms." The quarterly journal of economics, 643-669, 1961.

Available: here; Retrieved: August 17, 2018.

Cited in:

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant.” Martin Fowler (blog), October 14, 2009.

Available: here; Retrieved: January 10, 2016.

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

Available: here; Retrieved: August 8, 2017

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

Order from Amazon

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[Levy 2009] David A. Levy, Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

Order from Amazon

Cited in:

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

Order from Amazon

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Southern Railfan 1966] Southern Railfan. “The Days They Changed the Gauge,” 1966.

Available: here; Retrieved: July 26, 2018.

Cited in:

[Tversky 1973] Amos Tversky and Daniel Kahneman. "Availability: A heuristic for judging frequency and probability." Cognitive Psychology 5:2, 207-232, 1973.

Available: here; Retrieved: August 9, 2018.

Cited in:

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict. Order from Amazon

Cited in:

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Order from Amazon

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:


Managing technical debt

Last updated on July 11th, 2021 at 02:56 am

A jumble of jigsaw puzzle pieces. Managing technical debt can be like solving a puzzle.
A jumble of jigsaw puzzle pieces. Managing technical debt can be like solving a puzzle. Where do we begin? With jigsaw puzzles, we usually begin with two assumptions. First, we assume that we have all the pieces. Second, we assume that they fit together to make a coherent whole. These assumptions might not be valid for the puzzle of technical debt in any given organization.

Managing technical debt is something few organizations now do, and fewer still do well. Several issues make managing technical debt difficult; they’re discussed elsewhere in this blog. This thread explores tactics for dealing with those issues from a variety of initial conditions. For example, tactics that work well for an organization that already has its technical debt under control, and wants to keep it that way, might not work at all for an organization that’s just beginning to address a vast portfolio of runaway technical debt. The needs of these two organizations differ, and so might the approaches they must take.

What’s in this thread

The first three posts in this thread illustrate the differences among organizations in different stages of developing technical debt management practices. In “Leverage points for technical debt management,” I begin to address the needs of strategists working in an organization just beginning to manage its technical debt. They ask the question, “Where do we begin?” In “Undercounting nonexistent debt items,” I offer an observation about a risk that accompanies most attempts to assess the volume of technical debt. Such assessments are frequently undertaken in organizations at early stages of the technical debt management effort. In “Crowdsourcing debt identification,” I discuss a method for maintaining the contents of a database of technical debt items. Data maintenance is something we might undertake in the context of a more advanced technical debt management program.

Obstacles we must address

Whatever approach is adopted, it must address factors that include technology, business objectives, politics, culture, psychology, and organizational behavior. So what you’ll find in this thread are insights, observations, and recommendations that address one or more of the issues related to these fields. “Demodularization can help control technical debt” considers mostly technical strategies. “Undercounting nonexistent debt items” is an exploration of a psychological phenomenon.  “Leverage points for technical debt management” considers the organization as a system and discusses tactics for altering it. And “Legacy debt incurred intentionally” explores how existing technical debt can grow as long as it remains outstanding.

Accounting issues also play a role. For example, “Metrics for technical debt management: the basics” is a basic discussion of measurement issues. A second example: “Accounting for technical debt” looks into the matter of accounting for technical debt financially. And “Three cognitive biases” is a study of how the way we think about technical debt affects the technical debt portfolio.


Demodularization can help control technical debt

Last updated on July 13th, 2021 at 10:02 am

Two shipping containers resting on a “spine car”
Two shipping containers resting on a “spine car.” Spine cars are a kind of rail car used for shipping containers. The container on the left is a so-called tank container, used for bulk cargo. Various types of tank containers are available for transporting different types of cargo. Bulk cargoes include wine, oils, ammonia, and even cryogenic liquids. The steel frame around the tank provides compatibility with the standard container profile. That profile makes the tank compatible with equipment built to handle standard shipping containers. It functions as an interface between the tank and the container-handling equipment. Photo (cc) Mr Snrub at the English language Wikipedia

Modularity is a standard design approach for complex systems. But because modularity can be a factor in the accumulation and persistence of technical debt, temporary demodularization can help control it. And longer-term demodularization can reduce the rate at which technical debt accumulates.

Modularization is a standard design approach for complex systems

Since the 1970s, modular design of systems has been de rigueur in both software and hardware. More than that, modularization has proven to be an essential feature of maintainable, adaptable, and extensible systems [Parnas 1979] [Sullivan 2001]. And we now understand that modularization is a foundational attribute of loose coupling in systems. Loose coupling enables system designers and maintainers to work in parallel and independently. Independent parallelism makes systems economical at levels of complexity beyond what tighter coupling can achieve [Orton 1990].

Eliminating duplication is one reason why modularization reduces maintenance and enhancement costs. Modularization enables system designers to create a single system element that provides needed functionality to many other parts of the system. Because there is then only one copy of the system element that provides that capability, adapting it in response to new needs, or to correct defects, need be done only once.

That’s a big deal. If we provided that capability through multiple system elements, and we needed to adapt it, adaptation would be necessary for each of those elements. Moreover, the multiplicity of elements opens the possibility that we might not perform that adaptation consistently. That could create further problems. Eliminating duplication is a most useful property of modularization.
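A toy sketch can make the point about duplication concrete. The functions and the discount rule below are invented for illustration; the point is only that with a single shared element, a rule change lands in exactly one place.

```python
# Toy illustration, with invented names, of why eliminating duplication
# matters: with a single shared element, a rule change lands in one place.

# Without modularization, the same discount rule is copied into several
# subsystems, and any change must be made consistently in every copy.
def invoice_total_with_copied_rule(amount: float) -> float:
    return amount * 0.9 if amount > 100 else amount   # copy #1 of the rule

def quote_total_with_copied_rule(amount: float) -> float:
    return amount * 0.9 if amount > 100 else amount   # copy #2 of the rule

# With modularization, one shared element provides the capability, so a
# new need or a defect fix is applied exactly once.
def apply_discount(amount: float) -> float:
    return amount * 0.9 if amount > 100 else amount

def invoice_total(amount: float) -> float:
    return apply_discount(amount)

def quote_total(amount: float) -> float:
    return apply_discount(amount)
```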

Modularization provides many other advantages. For example, it shortens time-to-market for new capabilities. When extending a system by adding new capability, we sometimes need access to capabilities present in existing modules. In modularized systems we can access those capabilities easily, because the modules are already in a form that makes access easy. We have no need to recreate them for the new capability we’re implementing; they exist, and they’re already tested and ready to go.

Modularization has a dark side

It takes a bit of a story to illustrate the dark side of modularization. Let’s give a name to the modular system element for which we’ve removed all duplicates. Since it’s now unique, I’ll call it “U.” Any system element that interacts with U is now indirectly coupled to every other system element that interacts with U. And that’s where the trouble comes in.

A scenario that illustrates the problem

When we implement a new capability N, and N needs access to U, we gain the advantages described above. But suppose N needs U to do something a bit differently from what U now does. Sometimes we can extend U to accommodate N without disturbing U’s existing “client base” of system elements that already depend on U. That’s no problem. But suppose that implementing the change as we would if we were starting fresh would disturb U’s client base. In that case, we would need to modify all of U’s existing clients, and then re-test everything. So let’s suppose we don’t have the time or resources to do all that work. We requested them, but we didn’t receive approval.

Instead, we find a way to extend U in a less elegant, less maintainable, but still reliable way that doesn’t disturb U’s existing clients and does meet N’s needs. We do that, promising ourselves that we’ll go back someday, when we have approval for the time and resources we need. Then we’ll “fix” U so that it serves both its existing clients and N in the “correct” way.
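Here’s a minimal sketch, in Python, of what that less elegant but non-disruptive extension of U might look like. U, N, and the flag are hypothetical; the pattern is the familiar one of grafting a special-case mode onto a shared element.

```python
# Hypothetical sketch of extending U without disturbing its client base.
def format_report(data: dict, n_style: bool = False) -> str:
    """U: a shared formatter with many existing clients.

    The n_style flag was grafted on to serve the new capability N. Existing
    clients never pass it, so they're undisturbed -- but U now entangles two
    behaviors, and that entanglement is the technical debt we promise to
    retire "someday."
    """
    if n_style:
        # Special-case path added for N.
        return "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))
    # Original behavior, unchanged for existing clients.
    return ", ".join(f"{k}={v}" for k, v in data.items())
```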

A natural question

That’s one form—exactly—of what we call technical debt. In this scenario we’ve illustrated one way in which modularization leads to technical debt formation.

So a natural question arises: would it make sense instead to create a new system element—call it U2—that meets N’s needs and also meets the needs of U’s existing client base? In many cases it would, if only the owners of U’s existing clients knew about U2 and had the time and resources to convert to it. Creating such a U2 would be demodularization—that is, a violation of modularity—and that is indeed heresy. It also creates a different technical debt: the obligation to convert U’s clients into U2 clients someday, and then to delete U. But it might be the right approach.
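Continuing the same hypothetical sketch, the demodularization alternative might look like this: U stays untouched for its existing clients, and U2 carries the redesigned behavior. The duplication between U and U2 is the new debt, retired when the last client of U converts and U is deleted.

```python
# Hypothetical sketch of the demodularization alternative.
def format_report(data: dict) -> str:
    """U: left untouched, so its existing client base is undisturbed."""
    return ", ".join(f"{k}={v}" for k, v in data.items())

def format_report_v2(data: dict, style: str = "classic") -> str:
    """U2: the redesigned element that meets N's needs and could also serve
    U's clients. Until those clients convert and U is deleted, the
    duplication between U and U2 is itself a technical debt."""
    if style == "n":
        return "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))
    return ", ".join(f"{k}={v}" for k, v in data.items())
```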

When would demodularization help?

Under what conditions would demodularization be sensible? Here are three possibilities.

When a new and necessary adaptation is incompatible with existing forms

The scenario above is one situation in which demodularization can help. Demodularization helps when adding new capability, or adapting to a new need, requires a change to a shared module, and that change is incompatible with the existing uses of that module. Demodularization is then a useful technique, provided that we retire the resulting technical debt with due dispatch.

When retiring technical debt requires an incompatible adaptation

A second situation arises during technical debt retirement operations. During technical debt retirement, it might be necessary to alter a shared module in a way that would be incompatible with the needs of its existing client base. In that case, the approach described above can be useful. First, create a successor (“U2”) to the original shared module (“U”) in a form that isn’t burdened with the technical debt being retired. Then, either all at once or over an extended period, convert all of U’s clients to use the successor U2. In the meantime, the demodularization comprises a technical debt. When the conversion is complete, you will have retired the original technical debt. Finally, delete the original shared module U, thereby retiring the demodularization debt.

This approach entails some risk. In the interim period before you retire U, when demodularization is still in place, U and U2 are both in use. If you need changes in U, you might need to replicate them in U2. When that happens, duplication of effort can occur. This approach is useful, though, provided the interim period of demodularization is short compared to the anticipated intervals between incidents that require alterations to both U and U2. There is risk, of course, that the resources committed to finally retiring U might become unavailable after U2 is in place. In that case, the technical debt portfolio will have expanded to no good end. To manage this risk, the artifice of secured technical debt can prove useful.
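One way to manage that interim period is sketched below, under the same hypothetical names: keep U intact for its remaining clients, but have it warn their owners so the conversion to U2 keeps moving and U can eventually be deleted.

```python
# Sketch of the interim arrangement under the same hypothetical names.
import warnings

def report_v2(data: dict) -> str:
    """U2: the successor element, free of the technical debt being retired."""
    return "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))

def report(data: dict) -> str:
    """U: kept intact during the interim so its remaining clients still work.
    The deprecation warning nudges their owners to convert to report_v2(),
    so that U can eventually be deleted, retiring the demodularization."""
    warnings.warn("report() will be retired; please convert to report_v2()",
                  DeprecationWarning, stacklevel=2)
    return ", ".join(f"{k}={v}" for k, v in data.items())
```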

Partial demodularization helps when adaptations are focused

In some instances, portions of a shared system element—call it “U”—evolve very rapidly, while most of the rest of U remains stable. Technical debt can accumulate rapidly if the element remains unitary—that is, in one piece. However, in some cases we can segregate the rapidly evolving portion of U into a smaller unit—call it “S.” If we provide S as a separate shared system element, those portions of the system that are experiencing rapid evolution can access S separately, without disturbing the system elements that require access only to the stable portions of U.

Such segregation might require a bit of duplication. There might be pieces of S that U needs, which must therefore appear in both U and S. Likewise, there might be pieces of U that S needs, which must also appear in both.

But the segregation might be worthwhile, because changes in S then require testing only S and S’s clients, not everything that depends on U. Testing can be expensive in time and resources. And because test coverage isn’t always 100% (read: test coverage is rarely 100%), changes in S entail some operational risk. Segregating S reduces both the cost and the risk by protecting U’s clients from changes in S.

Later, when the rapidly evolving S stabilizes, you can re-integrate it into its former residence in U. Until that point, its segregation—and the attendant duplications—might constitute a technical debt.
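Here’s a minimal sketch of such a segregation, with hypothetical names. The stable parsing stays in U; the rapidly evolving promotion logic moves into S, at the cost of a small duplication.

```python
# Hypothetical sketch of segregating the rapidly evolving portion S from the
# stable shared element U.

def parse_order(raw: str) -> dict:
    """U: stable shared parsing that most of the system depends on."""
    return dict(item.split("=", 1) for item in raw.split(";") if "=" in item)

def active_promotions(raw: str) -> list:
    """S: the rapidly evolving promotion logic, segregated so its frequent
    changes require retesting only S and S's clients, not U's."""
    # Small duplication: S repeats the bit of parsing it needs. That
    # duplication is part of the technical debt of the segregation.
    fields = dict(item.split("=", 1) for item in raw.split(";") if "=" in item)
    return [p for p in fields.get("promos", "").split(",") if p]
```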

Last words

Accepting modularization as an inviolable design principle is one cause of unnecessary accumulation of technical debt. It makes retiring legacy technical debt more difficult. Be prepared to violate modularity, but do so judiciously.

References

[Brenner 2011] Richard Brenner. “Indicators of Lock-In: I,” Point Lookout 11:12, March 23, 2011.

Available: here; Retrieved: October 23, 2018.

Cited in:

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51.

Available: here; Retrieved: July 30, 2018

Cited in:

[Burge 2015] Janet E. Burge and Raymond McCall. “Diagnosing Wicked Problems,” Design Computing and Cognition 14, 2015, 313-326.

Available: here; Retrieved: October 25, 2018

Cited in:

[Churchman 1967] C. West Churchman. “Wicked problems,” Management Science 14:4, 1967, B-141–B-142

Available: here; Retrieved: October 16, 2018

Cited in:

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders' Interests?,” IEEE Software, 29, 2012, p. 88.

Available: here; Retrieved: August 15, 2018.

Cited in:

[Cook 2016] John Cook, Naomi Oreskes, Peter T. Doran, William R.L. Anderegg, Bart Verheggen, Ed W. Maibach, J. Stuart Carlton, Stephan Lewandowsky, Andrew G. Skuce, Sarah A. Green, Dana Nuccitelli, Peter Jacobs, Mark Richardson, Bärbel Winkler, Rob Painting, and Ken Rice. “Consensus on consensus: a synthesis of consensus estimates on human-caused global warming,” Environmental Research Letters 11, 2016, 048002.

Available: here; Retrieved: October 23, 2018

Cited in:

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System.” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

Cited in:

[Ellsberg 1961] Daniel Ellsberg. "Risk, ambiguity, and the Savage axioms." The quarterly journal of economics, 643-669, 1961.

Available: here; Retrieved: August 17, 2018.

Cited in:

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant.” Martin Fowler (blog), October 14, 2009.

Available: here; Retrieved: January 10, 2016.

Cited in:

[Gabriel 2018] Melissa Gabriel. “Hurricane Michael: Fate of costly stealth fighter jets at Tyndall Air Force Base still unknown,” USA Today: Pensacola News Journal, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984.

Available: here; Retrieved: August 8, 2017

Cited in:

[Keizer 2018] Gregg Keizer. “Windows by the numbers: Windows 10 backtracks, Windows 7 remains resilient,” Computerworld, October 2, 2018.

Available: here; Retrieved: October 18, 2018

Cited in:

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

Order from Amazon

Cited in:

[Kreuter 2004] Marshall W. Kreuter, Christopher De Rosa, Elizabeth H. Howze, and Grant T. Baldwin. “Understanding wicked problems: a key to advancing environmental health promotion.” Health Education and Behavior 31:4, 2004, 441-454.

Available: here; Retrieved: October 26, 2018

Cited in:

[Levin 2012] Kelly Levin, Benjamin Cashore, Steven Bernstein, and Graeme Auld. “Overcoming the tragedy of super wicked problems: constraining our future selves to ameliorate global climate change,” Policy Sciences 45, 2012, 123–152.

Available: here; Retrieved: October 17, 2018

Cited in:

[Levy 2009] David A. Levy, Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

Order from Amazon

Cited in:

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

Order from Amazon

Cited in:

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland VT: The Sustainability Institute, 1999.

Available: here; Retrieved: June 2, 2018.

Cited in:

[Orton 1990] J. Douglas Orton and Karl E. Weick. “Loosely Coupled Systems: A Reconceptualization,” The Academy of Management Review, 15:2, 203-223, 1990.

Available: here; Retrieved: July 11, 2018.

Cited in:

[Parnas 1979] David L. Parnas. “Designing Software for Ease of Extension and Contraction,” IEEE Transactions on Software Engineering, vol. SE-5, no. 2, March 1979, 128-138.

Available: here; Retrieved: July 13, 2017

Cited in:

[Phillips 2018a] Dave Phillips. “Tyndall Air Force Base a ‘Complete Loss’ Amid Questions About Stealth Fighters,” The New York Times, October 11, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Phillips 2018b] Dave Phillips. “Exposed by Michael: Climate Threat to Warplanes at Coastal Bases,” The New York Times, October 17, 2018.

Available: here; Retrieved: October 23, 2018

Cited in:

[Rittel 1973] Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory of Planning”, Policy Sciences 4, 1973, 155-169.

Available: here; Retrieved: October 16, 2018

Cited in:

[Shapiro 1998] Carl Shapiro and Hal R. Varian. Information rules: a strategic guide to the network economy. Harvard Business Press, 1998.

Cited in:

[Southern Railfan 1966] Southern Railfan. “The Days They Changed the Gauge,” 1966.

Available: here; Retrieved: July 26, 2018.

Cited in:

[Sullivan 2001] Kevin J. Sullivan, William G. Griswold, Yuanfang Cai, and Ben Hallen. “The structure and value of modularity in software design,” in ACM SIGSOFT Software Engineering Notes, 26:5, 99-108, 2001.

Available: here; Retrieved: July 11, 2018.

Cited in:

[Tversky 1973] Amos Tversky and Daniel Kahneman. "Availability: A heuristic for judging frequency and probability." Cognitive Psychology 5:2, 207-232, 1973.

Available: here; Retrieved: August 9, 2018.

Cited in:

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management Volume 1: Systems Thinking. New York: Dorset House, 1992.

This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict. Order from Amazon

Cited in:

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Order from Amazon

Cited in:

[Zablah 2015] Raul Zablah and Christian Murphy. “Restructuring and Refinancing Technical Debt.” Proceedings of the IEEE 7th International Workshop on Managing Technical Debt (MTD). IEEE, 2015.

Available: here; Retrieved: February 13, 2016

Cited in:
