Synergy between the reification error and confirmation bias

In deciding whether or not to undertake technical debt retirement projects, organizations are at risk of making inappropriate decisions because of a synergy between the reification error and confirmation bias. Together, these two errors of thought create conditions that make committing appropriate levels of resources difficult. And in those cases in which resources are committed, there is a tendency to underestimate costs, which can lead to an elevated incidence of failures of technical debt retirement projects.

The reification error and confirmation bias

As explained elsewhere in this blog, the reification error is an error of reasoning in which we treat an abstraction such as technical debt as if it were a real, concrete, physical thing, which, of course, it is not. (See “Metrics for technical debt management: the basics”)

And confirmation bias is a cognitive bias that causes us to favor and seek only information that confirms our preconceptions, or to avoid information that disconfirms them. (See “Confirmation bias and technical debt”)

How the reification error affects management

The reification error might be responsible, in part, for a widely used management practice that often appears in the exploratory stages of undertaking projects. Let’s start with an illustration from the physical world.

A feedback loop that now provides budgetary control in most organizations.

In the physical world, when we want cherries, we go to the produce section of the market and check the price per pound or kilo. Then we decide how many pounds or kilos we want. If the price is high, we might decide that fewer cherries will suffice. If the price is low, we might purchase more cherries. We have in mind a total cost target, and we adjust the weight of the cherries to meet the target, given the price. In the physical world, we can often adjust what we purchase to match our ability to pay for it.

Retiring technical debt doesn’t work like that, in part, because technical debt is an abstraction. But we try anyway; here’s how it goes. Management decides to retire a particular class of technical debt, and asks an engineer to work up an estimate of the cost. Sometimes Management reveals the target they have in mind if they have one; sometimes not. The estimate comes back as Total ± Uncertainty. Management decides that’s too high—or the Uncertainty is too great—and asks the engineer to find a way to do it for less, with less Uncertainty, maybe by being clever or doing less.

Management—the “customer” in this scenario—makes this request, in part, based on the belief that it’s possible to adjust the work to meet a (possibly unstated) target, in analogy to buying cherries in the produce department. That thinking is an example of the reification error. In this dynamic, we rarely take into account the fact that retiring technical debt isn’t exactly like buying cherries.

How confirmation bias affects engineering estimates

Now back to the interaction between Management and the estimator. The engineer now suspects that Management does have a target in mind. Some engineers ask what the target is; some don't. In any case, the engineer comes back with a lower estimate, which might still be too high. This process repeats until either Management decides against retiring the debt, or accepts the lowest Total ± Uncertainty, hoping for a final cost that isn't too high.

In adjusting their estimates, engineers have a conflict of interest that can compromise their objectivity through the action of confirmation bias. In the case of technical debt retirement efforts, engineers are usually highly motivated to gain Management approval of the project, because the technical debt in question depresses engineering productivity and induces frustration. And since engineers typically sense that Management approval is contingent on producing an estimate that's low enough, they acquire a preconception: the adjustments they're making must be reasonable, because the project must go forward. Confirmation bias then does its work. The engineers seek justifications for the belief that lowering costs and tightening schedules is reasonable, while avoiding any serious search for reasons to believe that their adjustments might not be feasible.

How synergy between the reification error and confirmation bias comes about

So, because of the reification error, Management tends to believe that the work needed to retire technical debt of a particular kind is more adjustable than it actually is. And because of confirmation bias, engineers tend to believe that they can do the work for a cost and within a schedule that Management is willing to permit. Too often, the synergy between the two errors of thinking provides a foundation for disaster.

Why this synergy creates conditions for disaster in technical debt retirement projects

Management usually interprets estimates as commitments; engineers do not. Management also usually forgets or ignores the upside Uncertainty. So when Management finally accepts an estimate, the engineering team typically finds that it has implicitly committed to deliver the work for the cost Total, with zero upside Uncertainty, even though few engineering teams are ever asked to make that commitment explicitly. An analogous problem occurs with schedule.

By ignoring the Uncertainty, Management (the buyer) transfers the uncertainty risk to the project team. That strategy might work to some extent with conventional development or maintenance projects, where we can adjust scope and risk before the work begins. But for technical debt retirement projects, this practice creates problems for two reasons.

Adjusting the scope of debt retirement projects is difficult

First, with technical debt retirement we're less able to adjust scope. To retire a class of technical debt, we generally must retire it in toto. Retiring only a portion of a class of technical debt leaves the asset in a mixed state that can actually increase MICs, the metaphorical interest charges on the debt. So it's usually best to retire the entirety of any class of technical debt, leaving the asset in a uniform state.

Debt retirement efforts are notoriously unpredictable

Second, the work involved in retiring a particular class of technical debt is more difficult to predict than is the work involved in more conventional projects. (See “Useful projections of MPrin might not be attainable.” MPrin is the metaphorical principal of the debt: what it costs to retire it.) Often, we must work with older assets, or older portions of younger assets. The people who built them aren't always available, and documentation can be sparse or unreliable. Moreover, it's notoriously difficult to predict with accuracy when affected assets must be temporarily withdrawn from production, and for how long, to support the technical debt retirement effort. Revenue stream interruptions, which can be significant portions of total costs, can be difficult to schedule or predict. Thus, technical debt retirement projects tend to be riskier than other kinds of projects. They have wider uncertainty bands. Ignoring the Uncertainty, or trying to transfer responsibility for it to the project team, is foolhardy.
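
To see what those wider uncertainty bands imply, here's a minimal Python sketch. It assumes, purely for illustration, that actual cost is normally distributed around Total with a standard deviation of half the quoted Uncertainty; all figures are invented.

```python
# Minimal sketch: the average overrun a team absorbs when held to Total
# with zero upside Uncertainty. The cost model (normal, sigma equal to
# half the quoted Uncertainty) is an arbitrary illustrative assumption.
import random

def expected_overrun(total, uncertainty, trials=100_000):
    """Mean amount by which actual cost exceeds Total."""
    sigma = uncertainty / 2
    overruns = [max(0.0, random.gauss(total, sigma) - total)
                for _ in range(trials)]
    return sum(overruns) / trials

# Conventional project: Total = 100, Uncertainty = 10 (narrow band)
print(f"narrow band: {expected_overrun(100, 10):.1f}")  # about 2.0
# Debt retirement project: Total = 100, Uncertainty = 40 (wide band)
print(f"wide band:   {expected_overrun(100, 40):.1f}")  # about 8.0
```

The chance of some overrun is the same in both cases; what grows with the band width is the size of the overrun the team is expected to absorb.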

A strategy for reducing the effects of this synergy

To intervene in the dynamic between the consequences of the reification error and the consequences of confirmation bias, we must find a way to limit how their consequences can interact. That will curtail the ability of one phenomenon to reinforce the other. This task is well suited for application of Donella Meadows’ concept of leverage points [Meadows 1999], which I discussed in an earlier post, “Leverage points for technical debt management.”

In that post, I summarized Meadows’ idea that to alter the behavior of a complex system, one can intervene at one or more of 12 categories of leverage points. These are elements in the system that govern the behavior of the people and institutions that comprise the system. In that post, I sketched the use of Leverage Point #9, Delays, to alter the levels of technical debt in an enterprise.

In this post, I’ll sketch the use of interventions at Leverage Point #8, which Meadows calls, “The strength of negative feedback loops, relative to the impacts they are trying to correct against.”

Our strategy is to strengthen and extend the feedback loops that govern these decisions, so that they account for more of the consequences of carrying, or retiring, technical debt. To see how, we must first examine the feedback loop that's already in place.

A feedback loop that now provides budgetary control in most organizations

One feedback loop at issue in this case, illustrated above, provides budgetary control. It influences managers who might otherwise overrun their budgets by triggering organizational intervention when they do, and it expands the portfolios of managers who handle their budgets responsibly. Presumably, that's why managers compel estimators to find approaches that cost less. The feedback loop to which managers are exposed causes them to establish a second feedback loop, involving the engineer/estimator, and later the engineering team, to hold down estimates and, eventually, actual expenditures.

We can use a diagram of effects [Weinberg 1992] to illustrate the feedback mechanism commonly used to control the performance of managers who are responsible for portfolios of project budgets. In the diagram, the oval blobs represent quantities indicated by their respective captions. Each of these quantities is assumed to be measurable, though their precise values and the way we measure them are unimportant for our rather qualitative argument.

Notice that arrows connect the blobs. The arrows represent the effect of changes in the value represented by one blob on the value represented by another. The blob at the base of the arrow is the effector quantity. The blob at the point of the arrow is the affected quantity. Thus, the arrow running from the blob labeled “Actual Spend” to the blob labeled “Overspend” expresses the idea that a positive (or negative) change in the amount of actual spending on projects causes a positive (or negative) change in Overspend. When a change in the effector quantity causes a like-signed change in the affected quantity, we say that their relationship is covariant.

Because increases in Budget Authority tend to decrease Overspend, all other things being equal, the relationship between Budget Authority and Overspend is contravariant. We represent a contravariant relationship between the effector quantity and the affected quantity as an arrow with a filled circle on it.

Finally, notice that the arrow from Overspend (effector) to Promotion Probability (affected) has a filled Delta on it. This represents the idea that as Overspend increases, it negatively affects the probability that the manager will be promoted at some point in the future. The Delta indicates a delayed effect; that the Delta is filled indicates a contravariant relationship. (An unfilled Delta would indicate a delayed covariant effect.)

This diagram, which contains a loop connecting Budget Authority, Overspend, and Promotion Probability, has the potential to “run away.” That is, as we go around the loop, we find self-reinforcement, because the loop has an even number of contravariant relationships. It works as follows:

As Overspend increases, after a delay, the Probability of Promotion decreases. This causes reductions in Budget Authority because, presumably, the organization has reduced faith in the manager’s performance. Reductions in Budget Authority make Overspend more likely, and round and round we go.

Similarly:

As Overspend decreases, after a delay, the Probability of Promotion increases. This causes increases in Budget Authority because, presumably, the organization has increased faith in the manager’s performance. Increases in Budget Authority make Overspend less likely, and round and round we go.

Fortunately, other effects usually intervene when these self-reinforcing phenomena get too large, but that's beyond the scope of this argument. For now, all we need observe is that managers who manage their budgets effectively tend to rise in the organization; those who don't, don't.
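
To make the polarity rule concrete, here's a minimal Python sketch that represents the loop as a list of signed edges. The signs follow the diagram described above; the representation itself is just an illustrative choice.

```python
# Polarity of the budgetary-control loop. Edge signs follow the diagram:
# +1 for covariant, -1 for contravariant. A loop with an even number of
# contravariant edges is self-reinforcing; an odd number is balancing.
loop_edges = [
    ("Budget Authority", "Overspend", -1),              # contravariant
    ("Overspend", "Promotion Probability", -1),         # contravariant, delayed
    ("Promotion Probability", "Budget Authority", +1),  # covariant
]

polarity = 1
for _source, _target, sign in loop_edges:
    polarity *= sign

# Delay affects how quickly the loop acts, not its polarity.
print("self-reinforcing" if polarity > 0 else "balancing")  # self-reinforcing
```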

The result is that managers seek to limit spending so as to avoid overspending their budget authority. And that’s one reason why they push engineers to produce lower estimates for technical debt retirement projects.

How this feedback loop overlooks important drivers of technical debt formation

To break the connection between the managers' reification error and the engineers' confirmation bias, our intervention must cause the managers and the engineers to make their calculations differently. We can accomplish this by requiring that they consider more than the mere cost of retiring the class of technical debt under consideration. They must estimate the consequences of not retiring that technical debt, and they must also estimate costs beyond the cost of retiring the debt. In what follows, I'll use the shorthand TDBCR to mean the class of Technical Debt Being Considered for Retirement.

Specifically, the estimates that are now typically generated for such projects cover only the cost of performing the work required to retire the TDBCR. It’s then left to Management to decide whether, when, and to what extent to commit resources to execute the project. The primary consideration is the effect on the decision-maker’s budget, and the consequences for achieving the goals for which the decision-maker is responsible.

Since the retirement project can potentially provide benefits beyond the manager's own portfolio, failing to undertake the project can have negative consequences for which the manager ought to be held accountable, but typically is not. That's the heart of the problem. So let's look at some examples of considerations that must be taken into account.

Adjustments that would be needed in these feedback loops to gain control of technical debt

In making a resource allocation decision for a technical debt retirement project, there are considerations beyond the cost of retiring the debt. A responsible decision regarding undertaking technical debt retirement projects is possible only if other kinds of estimates are also generated and available. Here are some examples:

  • The effects of retiring TDBCR on the cost of executing any other development or maintenance efforts contemplated or already underway
  • The effects of retiring TDBCR on revenue and market share for all existing assets that directly produce revenue and which could be affected by retiring TDBCR
  • The revenue that would be generated (and timing thereof) by any new products or services that would be enabled by retiring TDBCR
  • The effects of retiring TDBCR on the cost of executing other technical debt retirement efforts

And these items might not be related to anything for which the decision-maker is responsible. That’s the core of the problem we now face: the feedback loop we now use to influence the decision-maker excludes considerations that are affected by the decision-maker’s decisions. Until we install feedback loops that cause the decision-maker to consider these consequences, or until we make decisions at levels that include these other consequences, the effects of the decision-maker’s decisions are uncontrolled, and might not lead to decisions optimal for the enterprise.
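
As a sketch of what a fuller decision basis might look like, the fragment below combines the kinds of estimates listed above into a single net-benefit figure. The function and every number in it are hypothetical; a real model would need uncertainty bands on each term.

```python
# Minimal sketch: net enterprise benefit of retiring TDBCR, not merely
# its local retirement cost. All names and figures are hypothetical.

def net_benefit(retirement_cost,
                savings_other_projects,      # other dev/maintenance efforts
                revenue_effect_existing,     # revenue from existing assets
                revenue_new_products,        # revenue newly enabled
                savings_other_retirements):  # other debt retirement efforts
    return (savings_other_projects
            + revenue_effect_existing
            + revenue_new_products
            + savings_other_retirements
            - retirement_cost)

# Viewed only as a cost (-400), the project looks unattractive.
# Viewed at enterprise scope, it's net positive.
print(net_benefit(400, 150, 50, 300, 80))  # -> 180
```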

References

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System.” Hartland, VT: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects.”

The resilience error and technical debt

I’ve mentioned the reification error in a previous post (see “Metrics for technical debt management: the basics”), but I haven’t explored its dual, the resilience error. Let me correct that oversight now.

The future USS Zumwalt (DDG 1000) is underway for the first time conducting at-sea tests and trials in the Atlantic Ocean Dec. 7, 2015. The first of the Zumwalt class of US Navy guided missile destroyers, it is designed to be stealthy, and to be supported by a minimal crew. After the program experienced explosive cost growth, the class has been downsized from 32 ships to three, and complement increased from 95 to over 140 to reduce capital costs. The three vessels on order now have significantly reduced missions. As one might expect, the causes of these troubles are much debated. But it’s possible that the resilience error plays a role. Before the first of a new class of ships goes to sea, it exists as an abstraction—a collection of concepts, plans, promises, and technologies, tried and untried. Many elements of this collection have never inter-operated with other elements. The first ship represents the first opportunity to see how all the elements work together. Although troubles often appear even before the ship is fully assembled, anticipating all troubles is extraordinarily difficult.

Reification risk is the risk that an error of reasoning known as the reification error might affect decisions—in this case, decisions regarding technical debt. The reification error [Levy 2009] [Gould 1996] (also called the reification fallacy, concretism, or the fallacy of misplaced concreteness [Whitehead 1948]) is an error of reasoning in which we treat an abstraction as if it were a real, concrete, physical thing. Reification is useful in some applications, such as object-oriented programming and design.

But when we reify in the domain of logical reasoning, troubles can arise. For example, we can encounter trouble when we think of “measuring” technical debt. Strictly speaking, we cannot measure technical debt. It isn’t a real, physical thing that can be measured. What we can do is estimate the cost of retiring technical debt, but estimates are only approximations. And in the case of technical debt, the approximations are usually fairly rough—they have wide uncertainty bands. That’s one way for trouble to enter the scene. When we regard the estimate as if it were a measurement, we tend to think of it as more certain than it actually is. Technical debt retirement projects then overrun their budgets and schedules, and chaos reigns.

For example, if we think we’ve measured the MPrin of a class of technical debt, rather than that we’ve estimated it, we’re more likely to believe that one measurement will suffice, and that it will be valid for a long time (or indefinitely). On the other hand, if we think we’ve estimated the MPrin of a class of technical debt, we’re more likely to believe that obtaining a second independent estimate would be wise, and that the estimate we do have might not be valid for long. These are just some of the consequences of the reification error.

The resilience error

If the reification error is risky because it entails regarding an abstraction as a real, physical thing, we might postulate the existence of a resilience error that’s risky because it entails regarding an abstraction as more resilient, pliable, adaptable, or extensible than it actually is.

When we commit the resilience error with respect to an abstraction, we adopt the belief—usually without justification, and possibly outside our awareness—that if we make changes in the abstraction without fully investigating the consequences of those changes, we can be certain that the familiar properties of the abstraction we modified will apply, suitably modified, to the new form of the abstraction.  Or we assume incorrectly that the abstraction will accommodate any changes we make to its environment.

Sometimes we benefit when we modify abstractions; usually we encounter unintended and unpleasant consequences. For example, unless we examine our modifications carefully, it’s possible that the implications of a modification might conflict with one or more of the fundamental assumptions of the abstraction.

Examples of the resilience error

Perhaps a (ahem) concrete example will illustrate. Consider the steel hull of an ocean liner. We can manufacture it more cheaply if we can devise a way to use less steel. So one approach to that goal is to remove a small portion of the bottom of the hull, say, a circular hole one meter in diameter. We send some people into the ship to do the work, and they return with panicky reports of water coming in. But the ship seems fine, so we reject the reports. Even a day later, all seems well. But by the end of the second day, the trouble is obvious. The ship is sinking.

The problem in our example is that the circular hole in the hull violated a fundamental assumption about how ship hulls work: they work by keeping all water out of the ship. We had extended the idea of hull to make it lighter, but in doing so, we encountered some unintended consequences because our extension violated a fundamental property of hulls.

Now for a less fanciful example.

Consider the fictitious company Alpha Properties LLC, which manages small condominium associations (from 25 to 100 units). Things have been going swimmingly at Alpha Properties, and they've decided to expand to handle large condominium associations. Their financial accounting software has worked well, and their employees have become quite expert in its use. Alpha management has heard good reports about the software from other management companies that deal with large client associations. So Alpha decides to use the same software for its larger accounts too. But things don't work out so well.

The software is fine, but the processes used by the staff are cumbersome and slow. For example, setting up a new association requires too much manual data entry. For a 100-unit association, client setup wasn’t a burden, but for a 900-unit association the problem is just unmanageable.

This is a fine example of the resilience error. When we make this error, we fail to appreciate how an abstraction can encapsulate assumptions that make for difficulties when we try to extend it or apply it in a new or altered context. In this example, Alpha's data flow processes are the abstraction, and the context is signing up a new client association. When the context changes (signing up a much larger client), it violates an internal assumption of the abstraction (the data flow process for signing up a new client).

How the resilience error leads to technical debt

In many cases, the resilience error is at the heart of the causes of technical debt. It works like this. We have an asset that works perfectly well for one set of applications or in one set of contexts. We want to apply that asset in a new way, which might (or might not) require some minor extensions. When we try it, we find that the asset incorporates some assumptions about the application or the context, and one or more of those assumptions are violated by the new application or the new context. Scrambling, we find some quick fixes that can get things working again, but those fixes usually aren’t well designed or easily maintained. The result is a trail of technical debt.

Acquiring companies is like that. Before the acquisition, we think we’ll be able to merge the IT operations to save some expenses in operations. When we actually try it, though, merging them proves to be far more expensive than we imagined. Ah, the resilience error.

What makes this situation so difficult is that often we’re unable to anticipate what assumptions we might be about to violate. That’s why we make the resilience error.

Spotting difficulties with adapting to new applications and new contexts isn’t so difficult with physical entities. For example, we can see in advance that a square peg won’t fit into a round hole. But with abstractions, we can’t always see the problems in advance. Piloting, prototypes, games, and simulations can help us avoid some trouble, but not all.

References

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised and Expanded edition). W. W. Norton & Company, 1996.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System.” Hartland, VT: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects.”

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Three cognitive biases

Technical debt arises in enterprise assets through the effects of two classes of drivers: obsolescence and decision-making. When technologies advance, or new technologies arise, or laws or regulations evolve or are introduced, existing assets or assets under development can sometimes be left behind. That’s how obsolescence produces technical debt. Debt driven mainly by decision-making is more difficult to describe, but anything that biases decisions away from strictly rational results presents risk. Three cognitive biases likely have strong effects on technical debt formation and persistence.

Photo of Daniel Ellsberg, speaking at a press conference in New York City in 1972. Best known for his role in releasing the Pentagon Papers, Dr. Ellsberg made important contributions to decision theory while at the RAND Corporation. Photo by Bernard Gotfryd, courtesy U.S. Library of Congress.
Decision-making produces technical debt as the people of the enterprise make choices in design, development, and resource acquisition or allocation. Typically, both obsolescence and decision-making contribute to producing any particular instance of technical debt, though either obsolescence or decision-making might be more important than the other in any given instance.

Managing debt driven principally by obsolescence isn’t difficult, but I’ll leave that topic for another time. For now, let’s focus on decision-making. Already widely accepted is the contribution of engineering decisions to technical debt formation. Indeed, many believe that all — or most — technical debt arises as a result of faulty decisions by engineers. While some engineering decisions are indeed faulty, the current scale of technical debt is so large as to create doubt about the idea that the only decisions contributing to technical debt formation are engineering decisions. Investigating how resource allocation decisions might contribute to technical debt formation is certainly worthwhile.

In this post, I propose three examples illustrating how resource allocation decisions might contribute to technical debt formation and persistence. Each example illustrates how people make faulty decisions while believing they’re proceeding objectively and rationally. In each case, what causes the problem is a phenomenon called cognitive bias, though each example in this post illustrates the action of a different cognitive bias.

Loss aversion

The cognitive bias known as loss aversion, first identified by Amos Tversky and Daniel Kahneman [Kahneman 1984], is the tendency to prefer options that avoid losses to options that lead to gains that are equivalent or even greater. A decision-maker affected by loss aversion bias might conclude that it's better to not lose $5 than to find $5, or even $10. In this way, loss aversion skews decisions so as to favor options that enable the enterprise to protect or enhance existing revenue streams, even if those options increase operating expenses. And the bias operates even when the increases in operating expenses exceed the value of whatever revenue the decision protected.

Retiring technical debt usually entails deferring revenue in the short term, for two classes of reasons. First, we must turn the attention of some part of the engineering organization to debt retirement, instead of whatever they were doing. Assuming that they would have been working on maintaining or enhancing existing products or services, this redirection of their attention can lead to reducing or deferring revenue. Second, during the debt retirement operation, some work might require short-term interruptions of revenue streams, for the purpose of installing or testing the assets that are being revised for debt retirement purposes.

Thus, debt retirement efforts often do reduce revenue — or reduce revenue increases — in the short term, and some decision-makers can perceive that effect as a loss.

However, the long-term effects of debt retirement can be gains, and those gains can be considerable. Typically, by retiring an asset’s technical debt, we reduce the difficulty (read: time required, effort, cost, and risk) of future maintenance and enhancement efforts involving the asset. We also reduce the probability of debt contagion.

Since these long-term effects of debt retirement are ongoing, their impact on the enterprise can be significant. But unless one is experienced with dealing with the consequences of technical debt, recognizing the value of retiring technical debt can be difficult. When loss aversion is in play, intuitive comparisons of the effects of (a) a short-term revenue loss or delay to (b) a long-term benefit of debt retirement are biased in favor of not retiring technical debt.
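
To see how the intuitive comparison gets skewed, here's a minimal Python sketch. The loss-weighting factor of 2.25 is the value Tversky and Kahneman later estimated in their work on prospect theory; the revenue and benefit figures are invented for illustration.

```python
# Minimal sketch of a loss-averse comparison. LOSS_WEIGHT = 2.25 follows
# Tversky and Kahneman's later prospect-theory estimate; the dollar
# figures are invented.

LOSS_WEIGHT = 2.25  # losses loom larger than equivalent gains

def perceived_value(amount):
    """Subjective value: gains at face value, losses over-weighted."""
    return amount if amount >= 0 else LOSS_WEIGHT * amount

short_term_revenue_loss = -100   # revenue deferred during debt retirement
long_term_benefit = 180          # reduced maintenance cost, less contagion

rational = short_term_revenue_loss + long_term_benefit           # +80
perceived = (perceived_value(short_term_revenue_loss)
             + perceived_value(long_term_benefit))               # -45

# A net-positive project (+80) is perceived as a net loss (-45),
# biasing the decision against retiring the debt.
print(rational, perceived)
```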

Insulating decisions about debt retirement from the effects of loss aversion bias requires objective mathematical modeling of revenue losses and operating cost benefits for all options under consideration. Those models must also account for uncertainty, which makes them inherently ambiguous. And that leads us to consider our next cognitive bias, the ambiguity effect.

The ambiguity effect

The cognitive bias known as the ambiguity effect causes us to prefer options for which the probability of a desirable outcome is relatively better known, over options for which the probability of a desirable outcome is less well known, even if the expected value of that more ambiguous outcome exceeds the expected value of the less ambiguous outcome. The effect was first described by Daniel Ellsberg [Ellsberg 1961].
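
A small numeric sketch may help; the payoffs and probability range below are invented. The point is only that the ambiguity effect can override a higher expected value.

```python
# Minimal sketch of the ambiguity effect. Option A's success probability
# is known; Option B's is known only to lie in a range, yet B has the
# higher expected value. All figures are illustrative.
payoff = 100.0

p_a = 0.50                        # known probability of a desirable outcome
p_b_low, p_b_high = 0.40, 0.90    # ambiguous probability

ev_a = p_a * payoff                        # 50.0
ev_b = (p_b_low + p_b_high) / 2 * payoff   # 65.0, taking the midpoint belief

# A decision-maker subject to the ambiguity effect tends to choose A,
# even though ev_b > ev_a.
print(ev_a, ev_b)
```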

Consider a choice between allocating resources to a new development project and allocating resources to a technical debt retirement project. In most enterprises, decision-makers are familiar with new development projects. Likewise, project champions, project sponsors, and project managers are also familiar with new development projects. All parties are less familiar with debt retirement. It’s reasonable to suppose that when confronted with such a choice, decision-makers are likely to see debt retirement as carrying with it a probability of positive outcome that is less well known than the probability of a positive outcome for the new development project.

Because of the ambiguity effect, resource allocation decisions are likely to be biased against technical debt retirement, and in favor of maintenance or new development.

But there’s more. Most projects, of any kind, encounter trouble from time to time. When that happens, the urge to reallocate organizational resources can be powerful. Troubled projects might receive more resources if they’re viewed as important to the organization. If so, those resources often come from other projects. The ambiguity effect biases these resource reallocation decisions in a way analogous to initial resource allocation decisions, as described above. In other words, because of the ambiguity effect, when projects encounter trouble, debt retirement projects are less likely to be able to retain previously allocated resources than are maintenance or new development projects.

The availability heuristic

The availability heuristic is a method humans use to evaluate the validity or effectiveness of decisions, concepts, methods, or propositions [Tversky 1973]. According to the heuristic, if we recognize the item being evaluated as familiar, or related to something with which we are familiar, we’re more likely to regard it as valid or workable. And when making comparisons between two alternative decisions, concepts, methods, or propositions, we’re likely to assess more favorably the decision, concept, method, or proposition with which we’re more familiar, all other things being equal.

In organizations where decision-makers have more experience evaluating maintenance or development project proposals than they have with technical debt retirement proposals, the availability heuristic acts to reduce the relative assessed favorability of technical debt retirement proposals. It does this in three ways.

First, in most organizations, technical debt retirement projects are less familiar to decision-makers than are maintenance or development projects. On that ground alone the technical debt retirement project proposals are at a disadvantage.

But the second effect of the availability heuristic is more important, because the effect extends to the consequences of the decision, concept, method, or proposition under consideration. To grasp the value of a maintenance or development project, one must understand how it will affect the users of the assets being developed or maintained. Likewise, to grasp the value of a technical debt retirement project, one must understand how the presence of the technical debt hampers the enterprise in its attempts to achieve its objectives. One must also understand how retiring the technical debt might confer advantages in terms of future engineering efforts. Usually, understanding the consequences of maintenance or development projects is more “available” to decision-makers than is understanding the consequences of technical debt retirement projects. Even more dramatic is the difference between understanding the consequences of not funding a maintenance or development project and the consequences of not funding a technical debt retirement project.

Finally, much of the benefit of a technical debt retirement project is indirect. That is, although there is some direct benefit in terms of the assets from which the debt has been retired, the most dramatic benefits are manifested in projects that follow the debt retirement project, and which depend on the assets that have been relieved of debt. Sometimes, those follow-on projects are known at the time decision-makers are considering funding the debt retirement project. Sometimes those follow-on projects have yet to be specified or even recognized. In either case, they are less “available” to decision-makers because those follow-on projects are indirect beneficiaries.

These three effects of the availability heuristic cause decisions about resource allocations to tend to favor maintenance or development projects over debt retirement projects.

Mitigating the risks of these three cognitive biases

Over time, as everyone becomes more familiar with technical debt retirement projects, these effects may wane somewhat. But waiting for that to happen isn't exactly what one might call risk mitigation. For one thing, familiarity grows only if one is motivated and pays attention. As busy as decision-makers in modern organizations are, depending on them to actively enhance their own familiarity with technical debt retirement projects is probably not the safest course.

An effective program of actively mitigating the risks of these three cognitive biases probably should focus on four areas.

Familiarity

Do what you can to increase decision-maker familiarity with the concept of technical debt, and with the consequences of carrying existing technical debt. Conventional presentation-based training will help, but interactive, experiential training is far more effective. Participants must actually experience the consequences of technical debt in a well-designed and professionally facilitated simulation of a problem-solving task. A faithful simulation should include estimation, changing and ambiguous requirements, and team composition volatility.

Retrospectives

Retrospectives (also known as after-action reviews, post mortems, debriefings, or lessons-learned sessions) are meetings convened to review the process used in a just-completed piece of work [Kerth 2001]. Typically, attendance is restricted to the project team members. To maintain psychological safety and to encourage truth telling, attendance by enterprise decision-makers is not recommended, unless the organizational culture includes appropriate safeguards. In any case, a section of the retrospective dedicated to investigating the causes and consequences of technical debt in the context of the current project can ensure capture of relevant knowledge and experience.

Mathematical modeling practice

Mathematical modeling is one path to creating a more objective foundation for decisions. It’s essential for improving estimation quality. Also helpful are high quality effort data and metrics data related to the formation and lifetime of technical debt. Reviews of estimates and projections during retrospectives can help improve their quality over time.

Metrics development

Determining the effects of risk mitigation failure provides important guidance for corrective action in risk mitigation. Developing metrics that reveal these failures is therefore essential to managing cognitive bias risk. I’ll be suggesting some valuable metrics in a future post.

Last words

These three cognitive biases are by no means the only cognitive biases that can affect the formation or persistence of technical debt. Of the more than 200 identified cognitive biases, those most likely to be relevant are those that affect decision-making. Watch this space for links to posts about additional cognitive biases and their effects on technical debt formation or persistence.

References

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms.” The Quarterly Journal of Economics (1961): 643-669. Available: here; Retrieved: August 17, 2018.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised and Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, Values, and Frames.” American Psychologist 39(4), 341-350 (1984). Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System.” Hartland, VT: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A Heuristic for Judging Frequency and Probability.” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects.”

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Accounting for technical debt

With all the talk of technical debt these days, it’s a bit puzzling why there’s so little talk in the financial community about how to go about accounting for technical debt [Conroy 2012]. Perhaps one reason for this is the social gulf that exists between the financial community and the community most keenly aware of the effects of technical debt — the technologists. But another possibility is the variety of mechanisms compelling technologists to leave technical debt in place and move on to other tasks.

Accounting for technical debt isn’t the same as measuring it
Accounting for technical debt isn’t the same as measuring it. We usually regard our accounting system as a way of measuring and tracking the financial attributes of the enterprise. As such, we think of those financial attributes as representations of money. Technical debt is different. It isn’t real, and it isn’t a representation of money — it’s a representation of resources. Money is just one of those resources. Money is required to retire technical debt, and money is consumed when we carry technical debt, but other kinds of resources are also required. Sometimes we forget that when we account for technical debt.

Here’s an example. One common form of technical debt is the kind first described by Cunningham [Cunningham 1992]. Essentially, when we complete a project, we often find that we’ve advanced our understanding of what was actually needed to accomplish our goals. And we’ve advanced our understanding to such an extent that we recognize that we should have taken a different approach. Fowler described this kind of technical debt as, “Now we know how we should have done it.” [Fowler 2009] At this point, typically, we disband the team and move on to other things, leaving the technical debt outstanding, and often, undocumented and soon to be forgotten.

A (potentially) lower-cost approach involves immediate retirement of the debt and a re-release of the asset — an “echo release” — in which the asset no longer carries the technical debt we just incurred and immediately retired. But because echo releases usually offer no immediate, evident advantage to the people and assets that interact with the asset in question, decision-makers have difficulty allocating resources to echo releases.

This problem is actually due, in part, to the effects of a shortcoming in management accounting systems. Most enterprise management accounting systems track effectively the immediate costs associated with technical debt retirement projects. They do a much less effective job of representing the effects of failing to execute echo releases, or failing to execute debt retirement projects in general. The probable cause of this deficiency is the distributed nature of the MICs — the metaphorical interest charges associated with carrying a particular technical debt. The MICs appear in multiple forms: lower productivity, increased time-to-market, loss of market share, elevated voluntary turnover in the ranks of technologists, and more (see “MICs on technical debt can be difficult to measure”). These phenomena are poorly represented in enterprise accounting systems.

Decision-makers then adopt the same bias that afflicts the accounting system. In their deliberations regarding resource allocation, they emphasize only the cost of debt retirement, often omitting from consideration altogether any mention of the cost of not retiring the debt, which can be ongoing.

If we do make long-term or intermediate-term projections of costs related to carrying technical debt or costs related to debt retirement releases, we do so in the context of a cost/benefit computation as part of the proposal to retire the debt. Methods vary from proposal to proposal. Many organizations lack a standard method for making these projections. And because there’s no standard method for estimating costs, comparing the benefits of different debt retirement proposals is difficult. This ambiguity and variability further encourages decision-makers to base their decisions solely on current costs, omitting consideration of projected future benefits.

Dealing with accounting for technical debt

Relative to technical debt, the accounting practice perhaps most notable for its absence is accounting for outstanding technical debts as liabilities. We do recognize outstanding financial debt as such, but few balance sheets — even those for internal use only — mention outstanding technical debt. Ignorance of the liabilities imposed by outstanding technical debt can cause decision-makers to believe that the enterprise has capacity and resources that it doesn’t actually have. Many of the problems associated with high levels of technical debt would be alleviated more readily if we began to track our technical debts as liabilities — even if we did so for internal purposes only.

But other shortcomings in accounting practices can create additional problems almost as severe.

Addressing the technical-debt-related shortcomings of accounting systems requires adopting enterprise-wide patterns for proposals, which give decision-makers meaningful comparisons between different technical debt retirement options, and between technical debt retirement options and development or maintenance options. One area merits focused and immediate attention: estimating MPrin and estimating MICs.

Standards for estimating MPrin are essential for estimating the cost of retiring technical debt. Likewise, standards for estimating MICs, at least in the short term, are essential for estimating the cost of not retiring technical debt. Because both MPrin and MICs can include contributions from almost any enterprise component, merely determining where to look for contributions to MPrin or MICs can be a complex task. So developing a checklist of potential contributions can help proposal writers develop a more complete and consistent picture of the MICs or MPrin associated with a technical debt. Below are three suggestions of broad areas worthy of close examination.

Revenue stream disruption

Technical debt can disrupt revenue streams either in the course of being retired, or when defects in production systems need attention. When those systems are taken out of production for repairs or testing, revenue capture might undergo short disruptions. Burdens of technical debt can extend those disruptions, or increase their frequency.

For example, a technical debt consisting of the absence of an automated test can lengthen a disruption while the system undergoes manual tests. A technical debt consisting of a misalignment between a testing environment and the production environment can allow defects in a repair or enhancement to slip through, creating new disruptions as those new defects get attended to. Even a short disruption of a high-volume revenue stream can be expensive.

Some associations between classes of technical debt and certain revenue streams can be discovered and defined in advance of any debt retirement effort. This knowledge is helpful in estimating the contributions to MICs or MPrin from revenue stream disruption.

Extended time-to-market

Although technologists are keenly aware of productivity effects of technical debt, these effects can be small compared to the costs of extended time-to-market. In the presence of outstanding technical debt, time-to-market expands not only as a result of productivity reduction, but also from resource shortages and resource contention. Extended time-to-market can lead to delays in realizing revenue potential, and persistent and  irreparable reductions in market share. To facilitate comparisons between different technical debt retirement proposals, estimates of these effects should follow standard patterns.

Data flow disruption

Not all data flow disruptions are created equal. Some data flow processes can detect their own disruptions and backfill when necessary. For these flows, the main consequence that might contribute to MICs or MPrin is delay. And the most expensive of these are delays in receipt of orders, delays in processing orders, and delays in responding to anomalous conditions. Data flows that cannot detect disruptions are usually less critical, but they nevertheless have costs too. All of these consequences can be modeled and estimated, and we can develop standard packages for doing so that we can apply repeatedly to MICs or MPrin estimates for different kinds of technical debt.
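
As a sketch of what such a standard checklist might look like, the fragment below aggregates hypothetical contributions from the three areas just described. The categories, line items, and figures are all illustrative assumptions.

```python
# Minimal sketch of a MICs estimation checklist covering the three broad
# areas above. Every line item and figure is hypothetical.
mics_checklist = {
    "revenue stream disruption": {
        "manual-test downtime": 120_000,
        "defects escaping a mismatched test environment": 45_000,
    },
    "extended time-to-market": {
        "delayed revenue realization": 200_000,
        "lost market share": 350_000,
    },
    "data flow disruption": {
        "delayed order receipt and processing": 60_000,
    },
}

total_mics = sum(sum(area.values()) for area in mics_checklist.values())
print(f"Estimated annual MICs: {total_mics:,}")  # -> 775,000
```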

Last words

Estimates of MICs or MPrin are helpful in estimating the costs of retiring technical debt. They’re also helpful in estimating the costs of not retiring technical debt. In either case, they’re only estimates, and as such, they have error bars and confidence limits. The accounting systems we now use have no error bars. That, too, is a shortcoming that must be addressed.

References

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders' Interests?” IEEE Software 29, 2012, p. 88. Available: here; Retrieved: August 15, 2018.

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System.” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms.” The Quarterly Journal of Economics (1961): 643-669. Available: here; Retrieved: August 17, 2018.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant.” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised and Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, Values, and Frames.” American Psychologist 39(4), 341-350 (1984). Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System.” Hartland, VT: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A Heuristic for Judging Frequency and Probability.” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects.”

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Metrics for technical debt management: the basics

Whether it's wise to use metrics for technical debt management is an open question; whether it will become widespread practice seems settled. Using metrics for technical debt management now appears to be inevitable. So let's explore just what we mean by “metrics,” and what traps might lie ahead when we use them for technical debt management.

Measuring tools needed to follow a recipe in the kitchen. The word “measurement” evokes images that relate to our earliest understanding of the word as children. For most of us, that involves determining the attributes of a physical thing. In most cases, the physical thing of most concern is our own body — its height and weight. But we also measure everyday things, as when following recipes. So the strongest associations with the word “measurement” involve physical things. “Measuring” attributes of an abstract construct like technical debt can be perilous, unless we make allowances for its lack of physicality.

Skepticism about the effectiveness of using metrics for technical debt management is reasonable, because technical debt isn't a physically measurable thing. “Measuring” technical debt is therefore susceptible to what psychologists call the reification error [Levy 2009] or what philosophers call the Fallacy of Misplaced Concreteness [Whitehead 1948].

The logical fallacy of reification occurs when we treat an abstract construct as if it were a concrete thing. Although reification can provide helpful mental shorthand, it can produce costly cognitive errors. For example, advising someone who’s depressed to get more self-esteem is unlikely to work, because self-esteem isn’t something one can order from Amazon, or anywhere else. One can enhance self-esteem through counseling, reflection, or many other means, but it isn’t a concrete object one can “get.” Self-esteem is an abstract construct.

Technical debt is likewise an abstract construct. We can discuss “measuring” it, but attempts to specify measurement procedures will eventually confront the inherently abstract nature of technical debt, leading to debates about both definitions and the measurement process.

Metrics inherently require some kind of measurement. That’s why skepticism about using metrics for technical debt management is a reasonable position. Reasonable or not, though, metrics will be used. We’d best be prepared to use them responsibly. That’s the focus of this post — and a few to come. If we use metrics for technical debt management, how can we do it responsibly? How can we manage the risks of reification?

This post is about metrics in general. In coming posts I’ll apply this line of thinking to specific examples of metrics for managing technical debt, and suggest approaches that could mitigate reification risk.

Foundations for behavior guidance decisions

The objective of technical debt management is support for behavior guidance decisions — decisions that guide the behavior and choices of employees so as to control the volume of technical debt. Although many frameworks exist for supporting behavior guidance decisions, they generally consist of four elements:

Quantifiers

A quantifier is a specification for a measurement process designed to yield a numeric representation of some attribute of an asset or process. With respect to technical debt, we use quantifiers to prescribe how we produce data that represents the state of technical debt of an asset. We also use quantifiers to generate data that captures other related items, such as budgets, the cost or availability of human effort, revenue flows — almost anything that interacts with the assets whose debt burden we want to control.

An example of a quantifier is the process for estimating the MPrin of a particular kind of technical debt borne by an asset. The MPrin quantifier definition includes an explicit procedure for measuring it — that is, for estimating the size of the MPrin in advance of actually retiring that debt. After retirement, we know its value without estimating, because the MPrin is what we spent to complete the retirement.
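
Here's a minimal sketch of a quantifier rendered as code: a name, a unit, and an explicit measurement procedure. The MPrin estimation procedure shown is a deliberately simplified placeholder, not a real estimation method.

```python
# Minimal sketch of a quantifier: a specification bundling a name, a
# unit, and an explicit measurement procedure. The procedure here is a
# placeholder; real MPrin estimation is far more elaborate.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Quantifier:
    name: str
    unit: str
    procedure: Callable[..., float]  # the prescribed measurement process

def estimate_mprin(loc_affected: int, cost_per_loc: float) -> float:
    """Placeholder: pre-retirement estimate of MPrin for one debt class."""
    return loc_affected * cost_per_loc

mprin_quantifier = Quantifier("MPrin (debt class X)", "USD", estimate_mprin)

# Applying the quantifier's procedure yields a measure (see next section).
measure = mprin_quantifier.procedure(12_000, 2.50)
print(measure)  # -> 30000.0
```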

Measures

A measure is the result of determining the value of a quantifier. For example, we might use a quantifier’s definition to determine how much human effort has been expended on an asset in the past fiscal quarter. Or we might use another quantifier’s definition to determine the current size of the MPrin carried by the asset.

Metrics

A metric is an arithmetic formula expressed in terms of constants and a set of measures. One of the simpler metrics consists of a single ratio of two measures. For example, the metric that captures the average cost of acquiring a new customer in the previous fiscal quarter is the ratio of two measures, namely, the investment made in acquiring new customers, and the number of new customers acquired.

Associated with some metrics is a defined set of actors (actual people) who are authorized to take steps — or who are authorized to direct others to take steps — designed to affect the value of the metric in some desirable way. Metrics that have defined sets of actors are usually Key Performance Indicators (see below). If more than one individual is a designated actor for a metric, a process is defined to resolve differences among the designated actors about what action to take, if any. In some cases, this process is as simple as determining which designated actor has the highest organizational rank.

An example of a technical debt metric is the ratio MPrin(i)/MPrin(r): the total MPrin of incremental technical debt incurred in a given time period, divided by the total MPrin retired in that same period. In periods during which this ratio exceeds 1.0, the organization is accumulating incremental technical debt faster than it is retiring technical debt. Computing it as a ratio, rather than a difference, expresses increases (or decreases) in the technical debt portfolio in units of the debt retired, which lets the organization budget for a responsible amount of new incremental debt.

This metric also has the virtue of displaying meaningful trends in an easily recognized way. In this case, a steady upward trend means a steadily increasing debt portfolio, even if in some time periods the debt doesn’t increase much. In other words, the ratio removes some of the “choppiness” that might plague a metric expressed in terms of absolute values.
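
As a sketch of how this might look in practice, here’s a small Python example computing MPrin(i)/MPrin(r) per period. The data and field names are hypothetical; they illustrate the metric’s definition, not any particular tool.

```python
from dataclasses import dataclass

@dataclass
class PeriodDebtMeasures:
    period: str            # e.g., "2018-Q1"
    mprin_incurred: float  # measure: MPrin of incremental debt incurred (currency)
    mprin_retired: float   # measure: MPrin retired in the same period (currency)

def debt_ratio(m: PeriodDebtMeasures) -> float:
    """MPrin(i)/MPrin(r); values above 1.0 mean the portfolio is growing."""
    if m.mprin_retired == 0:
        raise ValueError("ratio is undefined when no debt was retired")
    return m.mprin_incurred / m.mprin_retired

history = [
    PeriodDebtMeasures("2018-Q1", 120_000.0, 100_000.0),  # ratio 1.20: growing
    PeriodDebtMeasures("2018-Q2", 90_000.0, 110_000.0),   # ratio 0.82: shrinking
]
for m in history:
    print(m.period, round(debt_ratio(m), 2))
```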

Key Performance Indicators

A Key Performance Indicator (KPI) is a metric that provides meaningful insight that’s used to guide business decisions. All KPIs are metrics; not all metrics are KPIs.

A KPI is derived from one or more metrics. It represents how successful the business is in accomplishing a given business objective. A metric, on the other hand, represents only the degree of success in reaching a targeted value for that metric. Because the relationship between the target value of a metric and any given business objective can be complicated, and can involve other metrics, metrics that aren’t KPIs are less valuable as indicators of success in achieving business objectives.

MPrin(i)/MPrin(r) is a metric that could also serve as a KPI, if the business objective is to achieve steady declines in overall technical debt.

Dimensions of measure vs. dimensions of metrics

Some metrics, such as MPrin(i)/MPrin(r), are dimensionless; their values are pure numbers. Other metrics have dimensions, that is, units of measure. For example, consider the metric MPrin(i)*Tdelay, where MPrin(i) is the volume of incremental technical debt borne by the deliverable, and Tdelay is the number of days after the target date that the project was delivered. The dimension of this metric is Currency*Days. This metric is particularly interesting, because a common assertion about technical debt is that we incur it as a means of advancing project delivery. The evidence for this assertion is mostly anecdotal, so determining the value of this metric over a number of projects might reveal useful information about the effectiveness of the strategy. If the assertion holds, we would expect larger incremental debt to be associated with smaller delays. In other words, in a plot whose axes are incremental debt and days of delay, all projects of similar scale should lie along the same hyperbola.
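
Here’s a brief sketch, with invented project records, of how one might examine this metric. If incurring debt really buys schedule, the products should cluster around a common value for projects of similar scale.

```python
# Hypothetical records: (project name, incremental MPrin in currency, days late)
projects = [
    ("A", 50_000.0, 20),
    ("B", 100_000.0, 11),
    ("C", 25_000.0, 41),
]

# Compute MPrin(i) * Tdelay for each project.
for name, mprin_i, t_delay in projects:
    print(name, mprin_i * t_delay)

# Tight clustering of the products is consistent with the hyperbola
# described above; wide scatter casts doubt on the strategy of incurring
# debt to advance delivery.
```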

The units of measures are often different from the units of the metrics those measures support. For example, measures of technical debt in code include test coverage, documentation asynchrony, documentation omissions, code duplication, code complexity, dependency cycles, rule violations, and interface violations. The units of these measures are all different.

To those who must make strategic technical debt management decisions by comparing the costs of retiring different kinds of debt, these detailed measures are awkward to use. MPrin is more directly related to the issues they must address. MPrin provides a unit of comparison among debt retirement options, and between retirement options and available resources. Beyond the level of the particular debt being considered for retirement, MPrin is the dimension of greatest utility. [Brown 2010]

In a future post I’ll describe the properties of metrics that are needed for technical debt management.

References

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51. Available: here; Retrieved: July 30, 2018.

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders’ Interests?,” IEEE Software 29, 2012, p. 88. Available: here; Retrieved: August 15, 2018.

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System,” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961. Available: here; Retrieved: August 17, 2018.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant,” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984. Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Legacy debt incurred intentionally

Throughout this blog, I’ve been using the terms legacy technical debt and incremental technical debt. Legacy technical debt is debt that existed before we undertook the current project; incremental technical debt is debt we incurred in the course of executing the current project. But there is some incremental technical debt that’s actually legacy debt incurred intentionally.

The locomotive known as “The General,” in Union Station, Chattanooga, Tennessee. Built in 1855 in Paterson, New Jersey, for the Western & Atlantic Railroad, it’s best known as the engine stolen by Union spies in the Great Locomotive Chase, as part of a plan to cripple the Confederate rail network during the American Civil War. The General is preserved at the Southern Museum of Civil War and Locomotive History in Kennesaw, Georgia. It was originally built to conform to the southern rail gauge of 5 ft (1,524 mm), but it was converted to the U.S. Standard Gauge of 4 ft 8 1⁄2 in (1,435 mm) after 1886. Its original construction amounted to legacy debt. If it had been built after the war, it would have comprised legacy debt incurred intentionally. Photo “The General, Union Station, Chattanooga, Tenn.,” Detroit Publishing Co., publisher, ca. 1907. Courtesy U.S. Library of Congress.
As I’ve defined incremental technical debt, it’s any debt we incur in the course of the current work. That definition works well for most incremental technical debt. For example, if we recognize at the end of the project that we should have done something a bit differently, we’ve incurred incremental technical debt. This is one of the four forms of technical debt identified by Fowler in his 2x2 technical debt matrix [Fowler 2009].

But we must be a bit more careful, because some incremental technical debt is actually legacy debt incurred intentionally.

Legacy technical debt is debt that was incurred earlier and that we’ve inherited as part of the asset. Sometimes we’re aware of legacy technical debt; sometimes we haven’t yet realized that it is indeed technical debt. In any case, the technical artifacts that comprise the legacy technical debt can impose constraints on any new development. Unless we retire the legacy debt, whatever modifications we make to an asset must be compatible with the assets as they are.

Sometimes technical debt can be both legacy and incremental

Although the two kinds of technical debt — legacy and incremental — might seem at first to be mutually exclusive, there’s a subset of legacy technical debt that can be incurred in the course of executing the current project.

Here’s a physical example:

After the United States Civil War, the state of the U.S. rail system was a bit chaotic. Most of the rail lines in the northeast and western regions of the country used what is called standard gauge rail beds: rails separated by 1,435 mm (4 ft 8 1⁄2 in). Most of the South was using a broader gauge: 1,524 mm (5 ft). These conflicting gauges comprised a legacy technical debt. The debt was finally retired over a two-day period beginning on Monday, May 31, 1886, when all the southern railroads coordinated to convert from the 5-foot gauge to 4 feet 9 inches [Southern Railfan 1966].

In the years immediately before that conversion, any expansion or repair of the southern rail network that conformed to the broader gauge, which was about to be retired, added to, or at least maintained, the legacy technical debt. Such work was newly incurred technical debt that was simultaneously legacy technical debt. Thus, in some situations, newly incurred technical debt can also be legacy technical debt.

Here’s a software example:

A software development team is engaged in a project to enhance the capabilities of the Marigold product, which is one product in the Garden Flowers personal productivity suite. Unfortunately, the original architecture of the suite didn’t anticipate the course that the suite has since taken, and it now comprises legacy technical debt. However, because changing the suite architecture isn’t in the charter of the Marigold enhancement team, they’ll be creating new technical artifacts that are compatible with the current architecture, but which will someday be modified or replaced when the Garden Flowers architecture is revamped or replaced. Thus, some of the new technical debt now being incurred by the Marigold team will be added to the legacy technical debt associated with the Garden Flowers architecture.

Moreover, the Marigold team might incur other technical debt in the course of its activities, if, for example, it fails to complete its task, or completes it in some suboptimal way. In that case it will be incurring incremental technical debt that it probably should retire soon after (if not before) delivery of the Marigold enhancements. Thus, in the same project, it would be incurring both (a) purely incremental technical debt, and (b) incremental technical debt that’s also legacy technical debt.

Why legacy debt incurred intentionally matters

Any program of rational technical debt management entails measuring — or at least estimating — the volume of technical debt incurred in the course of executing each project. The goal is to limit the debt incurred, so as to get control of the total technical debt outstanding.

But with legacy technical debt, as in the example above, we can’t always control the debt we incur. In some projects, it’s necessary to incur additional legacy technical debt because the work we do must be compatible with existing assets. We want to limit incremental technical debt, but we can’t always avoid incurring incremental debt that’s also legacy debt.

This distinction is important for both policy formation and management intervention. For instance, when purely incremental (non-legacy) technical debt is incurred, we might want to address it immediately, or commit to addressing it immediately after delivery. Alternatively, if we can obtain good data about a particular kind of legacy technical debt that’s growing because new development must remain compatible with existing debt-ridden assets, we can use that data to elevate the priority of retiring the legacy debt before it grows even larger.

So when we ask projects to report their incremental technical debt, we want them to distinguish between legacy debt incurred intentionally and incremental debt incurred for reasons specific to the project. Data about both kinds of incremental technical debt is necessary if we want to take appropriate management action to maintain control of the technical debt portfolio.
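
A minimal sketch of how a project might tag its reported incremental debt so the two kinds can be separated in later analysis. All names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DebtKind(Enum):
    PURELY_INCREMENTAL = auto()              # project-specific shortcuts
    LEGACY_INCURRED_INTENTIONALLY = auto()   # new debt forced by compatibility

@dataclass
class IncrementalDebtReport:
    project: str
    description: str
    mprin_estimate: float
    kind: DebtKind

def legacy_growth(reports: list[IncrementalDebtReport]) -> float:
    """Total new MPrin that enlarges existing legacy debt; useful for
    elevating the priority of retiring that legacy debt."""
    return sum(r.mprin_estimate for r in reports
               if r.kind is DebtKind.LEGACY_INCURRED_INTENTIONALLY)
```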

References

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51. Available: here; Retrieved: July 30, 2018.

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders’ Interests?,” IEEE Software 29, 2012, p. 88. Available: here; Retrieved: August 15, 2018.

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System,” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961. Available: here; Retrieved: August 17, 2018.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant,” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984. Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Southern Railfan 1966] Southern Railfan. “The Days They Changed the Gauge,” 1966. Available: here; Retrieved: July 26, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Managing technical debt

Managing technical debt is something few organizations now do, and fewer do well. Several issues, discussed elsewhere in this blog, make managing technical debt difficult. This thread explores tactics for dealing with those issues from a variety of initial conditions. For example, tactics that work well for an organization that already has its technical debt under control, and wants to keep it that way, might not work at all for an organization that’s just beginning to address a vast portfolio of runaway technical debt. The needs of these two organizations differ, and so might the approaches they must take.

A jumble of jigsaw puzzle pieces. Where do we begin? With these puzzles, we usually begin with two assumptions: (a) we have all the pieces, and (b) they fit together to make a coherent whole. These assumptions might not be valid for the puzzle of technical debt in any given organization.

The first three posts in this thread illustrate the differences among organizations at different stages of developing technical debt management practices. In “Leverage points for technical debt management,” I begin to address the needs of strategists working in an organization just beginning to manage its technical debt, and asking the question, “Where do we begin?” In “Undercounting nonexistent debt items,” I offer an observation about a risk that accompanies most attempts to assess the volume of outstanding technical debt. Such assessments are frequently undertaken at early stages of the technical debt management effort. In “Crowdsourcing debt identification,” I discuss a method for maintaining the contents of a database of technical debt items. Such data maintenance might be undertaken in the context of a more advanced technical debt management program.

Whatever approach is adopted, it must address factors that include technology, business objectives, politics, culture, psychology, and organizational behavior. So what you’ll find in this thread are insights, observations, and recommendations that address one or more of the issues related to these fields. “Demodularization can help control technical debt” considers mostly technical strategies. “Undercounting nonexistent debt items” is an exploration of a psychological phenomenon. “Leverage points for technical debt management” considers the organization as a system and discusses tactics for altering it. And “Legacy debt incurred intentionally” explores how existing technical debt can grow as long as it remains outstanding.

Accounting issues also play a role. “Metrics for technical debt management: the basics” is a basic discussion of measurement issues. “Accounting for technical debt” looks into the matter of accounting for technical debt financially. And “Three cognitive biases” is a study of how technical debt is affected by the way we think about it.

References

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51. Available: here; Retrieved: July 30, 2018.

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders’ Interests?,” IEEE Software 29, 2012, p. 88. Available: here; Retrieved: August 15, 2018.

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System,” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961. Available: here; Retrieved: August 17, 2018.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant,” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984. Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Southern Railfan 1966] Southern Railfan. “The Days They Changed the Gauge,” 1966. Available: here; Retrieved: July 26, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Demodularization can help control technical debt

Modularity is a widely accepted design approach for complex systems. But because modularity can be implicated in the accumulation and persistence of technical debt, temporary demodularization can help control it, and longer-term demodularization can reduce its rate of accumulation.

Modularization is a widely used approach to complex system design

Two shipping containers resting on a “spine car,” a kind of rail car used for shipping containers. The container on the left is a so-called tank container, used for bulk cargo. Various types of tank containers are available for transporting different types of cargo, such as wine, oils, ammonia, and even cryogenic liquids. The steel frame around the tank provides compatibility with the standard container profile, which makes the tank compatible with equipment built to handle standard shipping containers. The frame thus functions as an interface between the tank and the container-handling equipment. Photo (cc) Mr Snrub at the English language Wikipedia

Since the 1970s, modular design of systems has been de rigueur in both software and hardware. More than that, modularization has been demonstrated to be an essential feature of maintainable, adaptable, and extensible systems [Parnas 1979] [Sullivan 2001]. And we now understand that modularization is a foundational attribute of loose coupling in systems, which enables system designers and maintainers to work in parallel, with independence, on system elements, rendering systems economical at levels of complexity beyond what is achievable with tighter coupling [Orton 1990].

Eliminating duplication is one reason why modularization reduces maintenance and enhancement costs. Modularization enables system designers to create a single system element that provides needed functionality to other parts of the system. Because there is then only one system element that provides that capability, adapting it in response to a new need, or to correct a defect, need be done only once.

That’s a big deal. If that capability were provided in multiple system elements, adaptation would be necessary for each of those elements. Moreover, the multiplicity of elements opens the possibility that adaptation might not be performed consistently, which could create further problems. Eliminating duplication is a most useful property of modularization.

Modularization provides many other advantages. For example, it shortens time-to-market for new capabilities. When extending the system by adding new capability, we sometimes need access to capabilities present in existing modules. In modularized systems, those modules are already in a form that permits invocation by other system components, so we can access them easily. We have no need to recreate them for the new capability we’re implementing; they exist, and they’re already tested and ready to go.

Modularization has a dark side

Showing how the dark side of modularization works takes a bit of a story, so let’s give a name to the modular system element whose duplicates have been excised. Since it’s now unique, I’ll call it “U.” Any system element that interacts with U is now indirectly coupled to every other system element that interacts with U. And that’s where the trouble comes in.

When we’re implementing a new capability N, and N needs access to U, we gain the advantages described above. But suppose N needs U to do something just a little bit unusual — a little bit differently from what U now does. Sometimes we can extend U in ways that accommodate N without disturbing U’s existing “client base” — the system elements that are already interacting with U. There’s no problem then. But let’s suppose that what N now needs U to do would disturb U’s client base if we implement the changes in the “correct,” most elegant way — the way we would do it if we were starting fresh. Sadly, in that case, all of U’s existing clients would have to be modified, and then re-tested. So let’s suppose that we don’t have time or resources to do all that work. We requested them, but we were denied.

So instead, we find a way to extend U in a less elegant, less maintainable, but still reliable way that doesn’t disturb U’s existing clients, and does meet N’s needs. We do that, promising ourselves that we’ll go back someday, when we’re granted the time and resources, and “fix” U so that it serves both its existing clients and N in the “correct” way.

That’s one form — exactly — of what we call technical debt. In this scenario we’ve illustrated one way in which modularization leads to technical debt formation.

So a natural question arises: would it make sense instead to create a new system element — call it U2 — that meets N’s needs, and also meets the needs of U’s existing client base, if only they knew about U2 and could be altered to use U2? My proposed answer to that question is: “Yes it would, in many cases.” To create such a U2 would be demodularization — that is, a violation of modularity — and that is indeed heresy. It also creates a different technical debt: the obligation to convert U’s clients to become U2 clients someday, and then to delete U. But it might be the right approach.

When would demodularization help?

Under what conditions would demodularization be sensible? Here are three possibilities.

When a new and necessary adaptation is incompatible with existing forms

The scenario above is one situation in which demodularization can help. It helps when adding new capability, or adapting to a new need, requires a change to a shared module, and that change is incompatible with the existing uses of that module. Demodularization is then a useful technique, provided that the technical debt that results is retired with due dispatch.

When retiring technical debt requires an incompatible adaptation

A second situation arises during technical debt retirement operations. During technical debt retirement, it might be necessary to alter a shared module in a way that would be incompatible with the needs of its existing client base. In that case, the approach described above can be useful. First, create a successor (“U2”) to the original shared module (“U”) in a form that isn’t burdened with the technical debt that’s being retired. Then, at the same time or over an extended period, convert all the clients of U to use the successor U2. In the meantime, the demodularization comprises a technical debt. When the conversion is complete, the original technical debt will have been retired. Finally, delete the original shared module U, thereby retiring the technical debt that consisted of the demodularization.
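
Here’s a minimal sketch of the interim state this describes, with hypothetical names throughout. U2 provides the debt-free behavior; U remains as a thin adapter so existing clients are undisturbed while they’re converted one at a time.

```python
def format_report_v2(record: dict, layout: str = "classic") -> str:
    """U2: the successor module, built the 'correct' way."""
    body = ", ".join(f"{key}={value}" for key, value in sorted(record.items()))
    return f"[{layout}] {body}"

def format_report(record: dict) -> str:
    """U: the original entry point, now a thin adapter over U2.
    Existing clients keep calling this; once every client calls
    format_report_v2 directly, deleting this function retires the
    demodularization debt."""
    return format_report_v2(record, layout="classic")
```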

This approach entails some risk. In the interim period before U is retired, when demodularization is still in place, changes to both U and U2 might be required. When that happens, duplication of effort can occur. This approach is useful, though, provided the interim period of demodularization is short compared to the anticipated intervals between incidents that require alterations to U and U2. There is risk, of course, that the resources committed to finally retiring U might become unavailable after U2 is in place. In that case, the technical debt portfolio will have been expanded to no good end. To manage this risk, the artifice of secured technical debt can prove useful.

Partial demodularization helps when adaptations are focused

In some instances, portions of a shared system element — call it “U” — evolve very rapidly, while most of the rest of U remains stable. Technical debt can accumulate rapidly if the element remains unitary — that is, in one piece. However, in some cases we can segregate the rapidly evolving portion of U into a smaller unit — call it “S.” If we provide S as a separate shared system element, those portions of the system that are experiencing rapid evolution can access S separately, without disturbing the system elements that require access only to the stable portions of U.

Such segregation might require a bit of duplication, because there might be pieces of S that are needed by U, and which must therefore be duplicated in U. Likewise, there might be pieces of U that are needed in S, and which must therefore be duplicated in S.

But the segregation might be worthwhile, because changes in S usually require testing S and S’s clients. Testing can be expensive in time and resources, and because test coverage isn’t always 100% (read: test coverage is rarely 100%), changes in S entail some operational risk. Segregating S reduces that risk by protecting U’s clients from changes in S.

Later, when the rapidly evolving S stabilizes, it can be re-integrated into its former residence in U. Until that point, its segregation — and the attendant duplications — might constitute a technical debt.
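
As a sketch of the segregated structure, assuming hypothetical module and function names: the stable remainder of U and the volatile S become separate elements, so churn in S no longer forces retesting of U’s clients.

```python
# stable_u.py -- the stable remainder of U; it changes rarely, so its
# clients rarely need retesting.
def parse_header(raw: bytes) -> dict:
    """Stable behavior retained in U."""
    return {"length": len(raw)}

# volatile_s.py -- S, the rapidly evolving portion segregated out of U.
# The system elements driving the rapid evolution import S directly;
# changes to S trigger retesting of S's clients only.
def score_record(record: dict) -> float:
    """Volatile behavior, now isolated in S."""
    return 0.5 * float(record.get("length", 0))
```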

Conclusion

Accepting modularization as an inviolable design principle is one cause of unnecessary accumulation of technical debt. It makes retiring legacy technical debt more difficult. Be prepared to violate modularity, but do so judiciously.

References

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51. Available: here; Retrieved: July 30, 2018.

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders’ Interests?,” IEEE Software 29, 2012, p. 88. Available: here; Retrieved: August 15, 2018.

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System,” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961. Available: here; Retrieved: August 17, 2018.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant,” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984. Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Orton 1990] J. Douglas Orton and Karl E. Weick. “Loosely Coupled Systems: A Reconceptualization,” The Academy of Management Review 15:2, 203-223, 1990. Available: here; Retrieved: July 11, 2018.

[Parnas 1979] David L. Parnas. “Designing Software for Ease of Extension and Contraction,” IEEE Transactions on Software Engineering SE-5:2, March 1979, 128-138. Available: here; Retrieved: July 13, 2017.

[Southern Railfan 1966] Southern Railfan. “The Days They Changed the Gauge,” 1966. Available: here; Retrieved: July 26, 2018.

[Sullivan 2001] Kevin J. Sullivan, William G. Griswold, Yuanfang Cai, and Ben Hallen. “The structure and value of modularity in software design,” ACM SIGSOFT Software Engineering Notes 26:5, 99-108, 2001. Available: here; Retrieved: July 11, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Crowdsourcing debt identification

I have often expressed the view that the people of the organization know where much of their technical debt is, or they can find it fairly quickly. To exploit this resource, what’s needed is a systematic method for gathering what they know to produce a database that can serve as a starting point for further investigation. We might call this part of the debt identification process “crowdsourcing debt identification.”

A crowd. Crowds are powerful when they coordinate their actions.

When an organization first undertakes to manage its technical debt, one of the many initial tasks is identifying its existing technical debt. There are tools for executing some of this task, at least for software assets, and they are useful. But because they’re in an early stage of development, and because many non-software assets also carry technical debt, human assistance is required. And that’s the place where crowdsourcing can help.

For example, if you ask engineers for examples of technical debt in the assets they work on regularly, they can rattle off a few examples without hesitation. But a few days later, while working on whatever task has focus that day, they’ll realize that they could have mentioned another painful item. And they’ll want to report it. Gathering that kind of information is very helpful to the debt identification effort. That’s crowdsourcing in action.

But investment is required for crowdsourcing to be effective. We must educate the people who will be doing the reporting, and we must give them tools to make reporting quick and easy.

Reporting issues

Crowdsourcing debt identification produces a stream of “incident reports” by Debt Reporters (DRs). These reports must be interpreted by people we might call Debt Report Administrators (DRAs), who then recast the reports for later investigation by experts in the assets involved. Common difficulties that add to the workload of DRAs include:

Inconsistent definitions of technical debt

Lack of uniformity in understanding what technical debt is and isn’t can cause DRs to report as potential debt items some artifacts that aren’t manifestations of technical debt, or, worse, to fail to report items that are.

Only education of the DRs about the organizational definition of technical debt can enhance consistency.

Repeated reporting of previously reported debt items

Unaware that an item has been previously reported, DRs might file reports unnecessarily. Some of these duplications are easily identified, but if the language used in the report is different enough, identifying duplicates can take time.

We can reduce duplication by making descriptions of previously reported items available in multiple, easily searched forms.
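
One way to make previously reported items easy to check is to flag likely near-duplicates automatically. Here’s a minimal sketch using Python’s standard difflib; the similarity threshold is an assumption to be tuned in practice.

```python
from difflib import SequenceMatcher

def likely_duplicates(new_description: str,
                      known_descriptions: list[str],
                      threshold: float = 0.6) -> list[str]:
    """Return previously reported descriptions whose wording is close
    enough to new_description to warrant a duplicate check by a DRA."""
    return [known for known in known_descriptions
            if SequenceMatcher(None, new_description.lower(),
                               known.lower()).ratio() >= threshold]
```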

Inconsistent descriptions of debt items

DRAs must be able to recognize when two different DRs use different language to describe the same debt item. If they don’t, the debt report database will contain an unrecognized duplication, which the asset expert must then detect and resolve.

Failure to report known debt items

Some people, pressed by the urgency of their “own work,” might not report debt items they know about, or might hurriedly file low-quality reports. A high incidence of this behavior is an indicator of a deeper organizational issue: namely, that some people do not regard technical debt management as a worthy activity.

Tracking report quality and report frequency is one way to determine how much regard the people of the organization have for the debt management effort.

Report format and content

The act of reporting a potential technical debt item must not be burdensome — it must be easy. A Web-based form is a minimum. Users must be able to prefill some fields common to all their reports, and save the result as a template. Fields they might want to prefill include their personal identity and the asset identity. DRs might need several templates, depending upon the number of assets with which they interact. Switching from one template to another must also be easy.

Several authors have proposed report templates. Below is one due to Foganholi et al. [Foganholi 2015]. (“TD” stands for technical debt.)

ID: TD identification number
Date: Date of TD identification
Responsible: Person or role who should fix this TD item
Type: Design, documentation, defect, testing, or other type of debt
Project: Name of project or software application
Location: List of files/classes/methods or documents/pages involved
Description: Describes the anomaly and possible impacts on future maintenance
Estimated principal: How much work is required to pay off this TD item (High/Medium/Low)
Estimated interest amount: How much extra work will need to be performed in the future if this TD item is not paid off now (High/Medium/Low)
Estimated interest probability: How likely it is that this item, if not paid off, will cause extra work in the future (High/Medium/Low)
Intentional: Yes/No/Don’t Know
Fixed by: Person or role who actually fixed this TD item
Fixed date: Date of TD conclusion
Realized principal: How much work was actually required to pay off this TD item (High/Medium/Low)
Realized interest amount: How much extra work was actually performed because this TD item wasn’t paid off at the moment of detection (High/Medium/Low)

While this template might be useful for tracking the technical debt item, it contains fields that aren’t needed for crowdsourcing debt identification. A simplified template for crowdsourcing debt identification might look like this:

Identifying report title: Your identifier for this report
Date: Date of report (prefilled)
Type: Drop-down menu of debt types, including “other”
Project: Name of the project sponsoring the work that led to your observation of the debt item
Location of debt item: List of assets involved, including the specific location within complex assets
Description: Describe the debt item, including:
  • Whether your current effort has created it, and if so, how
  • Possible impact on present or future maintenance or enhancement efforts
  • Whether it has led to, or is a result of, contagion
  • How it’s affecting your work
Intentional: Yes/No/Don’t Know
Asset experts then receive these reports and take one or more of the following actions:
  • Seek further information from the DR.
  • Reject the report as not involving technical debt. (Rejection data is used to assess the effectiveness of the education program.)
  • Attach the report to a new or existing debt item, incorporating relevant information from the report into the debt item’s data.

What the asset experts produce, which contains information like that suggested by Foganholi et al., becomes the basis of further analysis and eventual retirement of the debt item.
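
As a sketch of how the simplified template and its prefill support might be represented, assuming hypothetical field names throughout:

```python
from dataclasses import dataclass, field, replace
from datetime import date
from typing import Optional

@dataclass
class DebtReport:
    title: str = ""                      # Identifying report title
    report_date: date = field(default_factory=date.today)  # prefilled
    debt_type: str = "other"             # drop-down choice
    project: str = ""
    location: str = ""
    description: str = ""
    intentional: Optional[bool] = None   # None means "Don't Know"

# A DR saves one template per asset they work with, prefilled with the
# fields common to all of that asset's reports...
marigold_template = DebtReport(project="Marigold", location="Marigold UI layer")

# ...then each report starts from the template, overriding only the
# item-specific fields.
report = replace(marigold_template,
                 title="Duplicated date-parsing logic",
                 description="Copy of the suite's parser; result of contagion",
                 intentional=True)
```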

Conclusions

Investment in ease-of-use for the reporting process is essential for at least three reasons:
  • The reporting responsibility might be seen as an additional burden beyond the current workload.
  • In many organizations, reporting on technical debt might be seen as a secondary responsibility.
  • Unless technical debt retirements rapidly become common occurrences, reporting might be seen as a waste of effort.

These phenomena all exert negative pressure on report quality and tend to suppress report frequency. Ease-of-use can mitigate these effects.

References

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51. Available: here; Retrieved: July 30, 2018.

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders’ Interests?,” IEEE Software 29, 2012, p. 88. Available: here; Retrieved: August 15, 2018.

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System,” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961. Available: here; Retrieved: August 17, 2018.

[Foganholi 2015] Lucas Borante Foganholi, Rogério Eduardo Garcia, Danilo Medeiros Eler, Ronaldo Celso Messias Correia, and Celso Olivete Junior. “Supporting technical debt cataloging with TD-Tracker tool,” Advances in Software Engineering 2015 (2015): 4. Available: here; Retrieved: July 7, 2018.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant,” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984. Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Orton 1990] J. Douglas Orton and Karl E. Weick. “Loosely Coupled Systems: A Reconceptualization,” The Academy of Management Review 15:2, 203-223, 1990. Available: here; Retrieved: July 11, 2018.

[Parnas 1979] David L. Parnas. “Designing Software for Ease of Extension and Contraction,” IEEE Transactions on Software Engineering SE-5:2, March 1979, 128-138. Available: here; Retrieved: July 13, 2017.

[Southern Railfan 1966] Southern Railfan. “The Days They Changed the Gauge,” 1966. Available: here; Retrieved: July 26, 2018.

[Sullivan 2001] Kevin J. Sullivan, William G. Griswold, Yuanfang Cai, and Ben Hallen. “The structure and value of modularity in software design,” ACM SIGSOFT Software Engineering Notes 26:5, 99-108, 2001. Available: here; Retrieved: July 11, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].

Undercounting nonexistent debt items

People and companies are developing technologies for assessing the nature and volume of technical debt borne by enterprise assets. The key word is developing. Some tools do exist, and they can be helpful, but they can’t do it all. So most assessments also rely on surveys and interviews of engineers and their managers. But these tools have limitations, too. Among these limitations is undercounting nonexistent debt items in surveys about technical debt.

Sherlock Holmes and Doctor Watson, in an illustration by Sidney Paget, captioned “Holmes gave me a sketch of the events.” The illustration was originally published in 1892 in The Strand magazine to accompany a story called “The Adventure of Silver Blaze” by Sir Arthur Conan Doyle. It’s in this story that the following dialog occurs:

Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Gregory: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

From this, Holmes deduces that the dog’s master was the villain. Holmes’s deduction is an example of looking for what is not there; failing to notice what is not there is an example of absence blindness.

Original book illustration, courtesy Wikimedia Commons.

It’s well known that survey results can exhibit biases. Collectively, these biases are known as response biases [Furnham 1986]. Sources of response bias include phrasing of questions, the demeanor of the interviewer, the desires of the participants to be good experimental subjects, attempts by subjects to respond with the “right answers,” selection of subjects, and more. These sources of bias are real, and we must address them when we design surveys.

But I have in mind here a set of biases more specific to technical debt. For example, when we ask subjects for examples of technical debt, they’re more likely to recall and provide examples of artifacts that exist than they are to provide examples of artifacts that don’t exist. This happens because of a cognitive bias called selection bias. The effect isn’t intentional, and it can dramatically skew results.

Selection bias is an example of a cognitive bias. In this case, selection bias skews the data by interfering with proper randomization, so that the sample obtained doesn’t accurately represent the actual population of technical debt artifacts. Specifically, the data will tend to under-represent technical debt artifacts that don’t exist. Related phenomena are absence blindness and survivorship bias.

For example, regression testing is an essential step in refactoring systems. When regression tests are unavailable and we try to refactor a system to retire some of its technical debt, we can’t be certain that we haven’t changed something important. Missing regression tests are therefore themselves a form of technical debt. Yet when a survey isn’t designed to mitigate the effects of selection bias, we can expect the probability of noting any missing regression tests to be depressed.

Mitigating the risk of undercounting nonexistent debt items

It’s helpful for surveys to include questions that specifically ask subjects to report technical debt items that don’t exist, but which would be helpful if they did exist — like missing regression tests. Even more helpful: conduct brainstorming sessions for engineers in which the goal is to list missing artifacts, tools, or processes that comprise technical debt precisely because they’re missing.

References

[Brown 2010] Nanette Brown, Yuanfang Cai, Yuepu Guo, Rick Kazman, Miryung Kim, Philippe Kruchten, Erin Lim, Alan MacCormack, Robert Nord, Ipek Ozkaya, Raghvinder Sangwan, Carolyn Seaman, Kevin Sullivan, and Nico Zazworka. “Managing Technical Debt in Software-Reliant Systems,” in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research 2010, New York: ACM, 2010, 47-51. Available: here; Retrieved: July 30, 2018.

[Conroy 2012] Patrick Conroy. “Technical Debt: Where Are the Shareholders’ Interests?,” IEEE Software 29, 2012, p. 88. Available: here; Retrieved: August 15, 2018.

[Cunningham 1992] Ward Cunningham. “The WyCash Portfolio Management System,” Addendum to the Proceedings of OOPSLA 1992. ACM, 1992.

[Ellsberg 1961] Daniel Ellsberg. “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75:4, 643-669, 1961. Available: here; Retrieved: August 17, 2018.

[Foganholi 2015] Lucas Borante Foganholi, Rogério Eduardo Garcia, Danilo Medeiros Eler, Ronaldo Celso Messias Correia, and Celso Olivete Junior. “Supporting technical debt cataloging with TD-Tracker tool,” Advances in Software Engineering 2015 (2015): 4. Available: here; Retrieved: July 7, 2018.

[Fowler 2009] Martin Fowler. “Technical Debt Quadrant,” Martin Fowler (blog), October 14, 2009. Available: here; Retrieved: January 10, 2016.

[Furnham 1986] Adrian Furnham. “Response bias, social desirability and dissimulation,” Personality and Individual Differences 7:3, 385-400, 1986.

[Gould 1996] Stephen Jay Gould. The Mismeasure of Man (Revised & Expanded edition). W. W. Norton & Company, 1996.

[Kahneman 1984] Daniel Kahneman, Amos Tversky, and Michael S. Pallak. “Choices, values, and frames,” American Psychologist 39:4, 341-350, 1984. Available: here; Retrieved: August 8, 2017.

[Kerth 2001] Norman L. Kerth. Project Retrospectives: A Handbook for Team Reviews. New York: Dorset House, 2001.

[Levy 2009] David A. Levy. Tools of Critical Thinking: Metathoughts for Psychology (second edition). Long Grove, Illinois: Waveland Press, Inc., 2009.

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System,” Hartland, Vermont: The Sustainability Institute, 1999. Available: here; Retrieved: June 2, 2018.

[Orton 1990] J. Douglas Orton and Karl E. Weick. “Loosely Coupled Systems: A Reconceptualization,” The Academy of Management Review 15:2, 203-223, 1990. Available: here; Retrieved: July 11, 2018.

[Parnas 1979] David L. Parnas. “Designing Software for Ease of Extension and Contraction,” IEEE Transactions on Software Engineering SE-5:2, March 1979, 128-138. Available: here; Retrieved: July 13, 2017.

[Southern Railfan 1966] Southern Railfan. “The Days They Changed the Gauge,” 1966. Available: here; Retrieved: July 26, 2018.

[Sullivan 2001] Kevin J. Sullivan, William G. Griswold, Yuanfang Cai, and Ben Hallen. “The structure and value of modularity in software design,” ACM SIGSOFT Software Engineering Notes 26:5, 99-108, 2001. Available: here; Retrieved: July 11, 2018.

[Tversky 1973] Amos Tversky and Daniel Kahneman. “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology 5:2, 207-232, 1973. Available: here; Retrieved: August 9, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects” used to explain how obstacles can induce toxic conflict.

[Whitehead 1948] Alfred North Whitehead. Science and the Modern World. New York: Pelican Mentor (MacMillan), 1948 [1925].