Thinking in Systems: A Primer

Introduction: The Systems Lens

Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes…. Managers do not solve problems, they manage messes. —Russell Ackoff, operations theorist

The answer clearly lies within the Slinky itself. The hands that manipulate it suppress or release some behavior that is latent within the structure of the spring. That is a central insight of systems theory. Once we see the relationship between structure and behavior, we can begin to understand how systems work, what makes them produce poor results, and how to shift them into better behavior patterns.

So, what is a system? A system is a set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world.

Because of feedback delays within complex systems, by the time a problem becomes apparent it may be unnecessarily difficult to solve.

According to the competitive exclusion principle, if a reinforcing feedback loop rewards the winner of a competition with the means to win further competitions, the result will be the elimination of all but a few competitors.

A diverse system with multiple pathways and redundancies is more stable and less vulnerable to external shock than a uniform system with little diversity.

Words and sentences must, by necessity, come only one at a time in linear, logical order. Systems happen all at once. They are connected not just in one direction, but in many directions simultaneously. To discuss them properly, it is necessary somehow to use a language that shares some of the same properties as the phenomena under discussion.

This book will talk about why everyone or everything in a system can act dutifully and rationally, yet all these well-meaning actions too often add up to a perfectly terrible result. And why things so often happen much faster or slower than everyone thinks they will. And why you can be doing something that has always worked and suddenly discover, to your great disappointment, that your action no longer works. And why a system might suddenly, and without warning, jump into a kind of behavior you’ve never seen before.

The behavior of a system cannot be known just by knowing the elements of which the system is made.

One. The Basics

I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated. —Poul Anderson

A system is an interconnected set of elements that is coherently organized in a way that achieves something. If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.

Is there anything that is not a system? Yes—a conglomeration without any particular interconnections or function. Sand scattered on a road by happenstance is not, itself, a system.

A system is more than the sum of its parts. It may exhibit adaptive, dynamic, goal-seeking, self-preserving, and sometimes evolutionary behavior.

Many of the interconnections in systems operate through the flow of information. Information holds systems together and plays a great role in determining how they operate.

The best way to deduce the system’s purpose is to watch for a while to see how the system behaves.

Purposes are deduced from behavior, not from rhetoric or stated goals.

The word function is generally used for a nonhuman system, the word purpose for a human one, but the distinction is not absolute, since so many systems have both human and nonhuman elements.

System purposes need not be human purposes and are not necessarily those intended by any single actor within the system. In fact, one of the most frustrating aspects of systems is that the purposes of subunits may add up to an overall behavior that no one wants.

Keeping sub-purposes and overall system purposes in harmony is an essential function of successful systems.

The least obvious part of the system, its function or purpose, is often the most crucial determinant of the system’s behavior.

A stock is the foundation of any system. Stocks are the elements of the system that you can see, feel, count, or measure at any given time. A system stock is just what it sounds like: a store, a quantity, an accumulation of material or information that has built up over time.

A stock is the memory of the history of changing flows within the system.

Stocks change over time through the actions of a flow.

If you understand the dynamics of stocks and flows—their behavior over time—you understand a good deal about the behavior of complex systems. And if you have had much experience with a bathtub, you understand the dynamics of stocks and flows.

It is in a state of dynamic equilibrium—its level does not change, although water is continuously flowing through it.

From it you can deduce several important principles that extend to more complicated systems:

• As long as the sum of all inflows exceeds the sum of all outflows, the level of the stock will rise.
• As long as the sum of all outflows exceeds the sum of all inflows, the level of the stock will fall.
• If the sum of all outflows equals the sum of all inflows, the stock level will not change; it will be held in dynamic equilibrium at whatever level it happened to be when the two sets of flows became equal.
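
These principles can be checked with a minimal stock-and-flow sketch. The snippet below is my illustration, not the book’s; the flow rates are arbitrary numbers chosen to show the three cases.

    # A bathtub: one stock (level) changed only by an inflow and an outflow.
    def simulate_bathtub(level, inflow, outflow, steps=10, dt=1.0):
        """Integrate level += (inflow - outflow) * dt and record the history."""
        history = [level]
        for _ in range(steps):
            level += (inflow - outflow) * dt
            history.append(level)
        return history

    print(simulate_bathtub(level=50, inflow=5, outflow=2))  # inflows exceed outflows: stock rises
    print(simulate_bathtub(level=50, inflow=2, outflow=5))  # outflows exceed inflows: stock falls
    print(simulate_bathtub(level=50, inflow=4, outflow=4))  # equal flows: dynamic equilibrium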

A stock can be increased by decreasing its outflow rate as well as by increasing its inflow rate.

Stocks generally change slowly, even when the flows into or out of them change suddenly. Therefore, stocks act as delays or buffers or shock absorbers in systems.

The time lags imposed by stocks allow room to maneuver, to experiment, and to revise policies that aren’t working.

Stocks allow inflows and outflows to be decoupled and to be independent and temporarily out of balance with each other.

Not all systems have feedback loops. Some systems are relatively simple open-ended chains of stocks and flows. The chain may be affected by outside factors, but the levels of the chain’s stocks don’t affect its flows.

This kind of stabilizing, goal-seeking, regulating loop is called a balancing feedback loop, so I put a B inside the loop in the diagram. Balancing feedback loops are goal-seeking or stability-seeking. Each tries to keep a stock at a given value or within a range of values. A balancing feedback loop opposes whatever direction of change is imposed on the system. If you push a stock too far up, a balancing loop will try to pull it back down. If you shove it too far down, a balancing loop will try to bring it back up.

Balancing feedback loops are equilibrating or goal-seeking structures in systems and are both sources of stability and sources of resistance to change.

The presence of a feedback mechanism doesn’t necessarily mean that the mechanism works well.

Reinforcing feedback loops are self-enhancing, leading to exponential growth or to runaway collapses over time. They are found whenever a stock has the capacity to reinforce or reproduce itself.

The second kind of feedback loop is amplifying, reinforcing, self-multiplying, snowballing—a vicious or virtuous circle that can cause healthy growth or runaway destruction. It is called a reinforcing feedback loop, and will be noted with an R in the diagrams. It generates more input to a stock the more that is already there (and less input the less that is already there). A reinforcing feedback loop enhances whatever direction of change is imposed on it.

Because we bump into reinforcing loops so often, it is handy to know this shortcut: The time it takes for an exponentially growing stock to double in size, the “doubling time,” equals approximately 70 divided by the growth rate (expressed as a percentage). Example: If you put $100 in the bank at 7% interest per year, you will double your money in 10 years (70 ÷ 7 = 10). If you get only 5% interest, your money will take 14 years to double.
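
The shortcut is easy to verify against the exact formula; under annual compounding the true doubling time is ln 2 / ln(1 + r). A quick sketch of mine, not the book’s:

    import math

    def rule_of_70(rate_percent):
        """The book's shortcut: doubling time ~ 70 / growth rate (in %)."""
        return 70.0 / rate_percent

    def exact_doubling_time(rate_percent):
        """Exact doubling time under annual compounding: ln 2 / ln(1 + r)."""
        r = rate_percent / 100.0
        return math.log(2) / math.log(1 + r)

    for rate in (5, 7, 10):
        print(rate, rule_of_70(rate), round(exact_doubling_time(rate), 2))
    # At 7%, the rule gives 10 years; the exact figure is about 10.24 years.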

You’ll be thinking not in terms of a static world, but a dynamic one. You’ll stop looking for who’s to blame; instead you’ll start asking, “What’s the system?” The concept of feedback opens up the idea that a system can cause its own behavior.

Two. A Brief Visit to the Systems Zoo

The … goal of all theory is to make the … basic elements as simple and as few as possible without having to surrender the adequate representation of … experience. —Albert Einstein, physicist

A Stock with Two Competing Balancing Loops—a Thermostat

You already have seen the “homing in” behavior of the goal-seeking balancing feedback loop—the coffee cup cooling. What happens if there are two such loops, trying to drag a single stock toward two different goals?

The information delivered by a feedback loop—even nonphysical feedback—can only affect future behavior; it can’t deliver a signal fast enough to correct behavior that drove the current feedback. Even nonphysical information takes time to feed back into the system.

Many economic models make a mistake in this matter by assuming that consumption or production can respond immediately, say, to a change in price. That’s one of the reasons why real economies tend not to behave exactly like many economic models.

A stock-maintaining balancing feedback loop must have its goal set appropriately to compensate for draining or inflowing processes that affect that stock. Otherwise, the feedback process will fall short of or exceed the target for the stock.
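
A small simulation makes the point. In this sketch (my coefficients, chosen only for illustration), one balancing loop heats the room toward the thermostat setting while a second loop leaks heat toward the cold outdoors; with the goal set exactly at the comfort level, the room settles below it, and only a higher setting compensates for the drain.

    def simulate_room(setting, outdoor=10.0, temp=18.0,
                      heat_gain=0.5, leak_rate=0.2, hours=48):
        """Two competing balancing loops pulling on one stock (room temperature)."""
        for _ in range(hours):
            heating = heat_gain * max(setting - temp, 0.0)  # loop 1: furnace toward the goal
            leak = leak_rate * (temp - outdoor)             # loop 2: drain toward the outdoors
            temp += heating - leak
        return round(temp, 1)

    print(simulate_room(setting=20.0))  # settles around 17, well below the goal
    print(simulate_room(setting=24.0))  # the higher goal offsets the drain: about 20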

What happens when a reinforcing and a balancing loop are both pulling on the same stock? This is one of the most common and important system structures. Among other things, it describes every living population and every economy.

Dominance is an important concept in systems thinking. When one loop dominates another, it has a stronger impact on behavior. Because systems often have several competing feedback loops operating simultaneously, those loops that dominate the system will determine the behavior.

Complex behaviors of systems often arise as the relative strengths of feedback loops shift, causing first one loop and then another to dominate behavior.

There are questions you need to ask that will help you decide how good a representation of reality the underlying model is.

• Are the driving factors likely to unfold this way? (What are birth rate and death rate likely to do?)
• If they did, would the system react this way? (Do birth and death rates really cause the population stock to behave as we think it will?)
• What is driving the driving factors? (What affects birth rate? What affects death rate?)

Dynamic systems studies usually are not designed to predict what will happen. Rather, they’re designed to explore what would happen, if a number of driving factors unfold in a range of different ways.

Model utility depends not on whether its driving scenarios are realistic (since no one can know that for sure), but on whether it responds with a realistic pattern of behavior.

The central question of economic development is how to keep the reinforcing loop of capital accumulation from growing more slowly than the reinforcing loop of population growth—so that people are getting richer instead of poorer.

Systems with similar feedback structures produce similar dynamic behaviors.

A delay in a balancing feedback loop makes a system likely to oscillate.

And then the well-intentioned fixer pulls the lever in the wrong direction! This is just one example of how we can be surprised by the counterintuitive behavior of systems when we start trying to change them.

Things would go better if, instead of decreasing her response delay from three days to two, she would increase the delay from three days to six, as illustrated in Figure 36.

Delays are pervasive in systems, and they are strong determinants of behavior. Changing the length of a delay may (or may not, depending on the type of delay and the relative lengths of other delays) make a large change in the behavior of a system.
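
A sketch of the car-lot story under these assumptions (my parameters, loosely following the book’s scenario): orders try to close the inventory gap, deliveries arrive five days late, and the cars already on order are ignored. Note how shortening the response delay widens the swings while lengthening it calms them.

    from collections import deque

    def simulate_lot(response_delay, days=60, delivery_delay=5):
        """Inventory corrected through a delayed delivery pipeline."""
        inventory, desired, sales = 200.0, 200.0, 20.0
        pipeline = deque([20.0] * delivery_delay)  # orders still in transit
        history = []
        for day in range(days):
            if day == 5:
                sales = 25.0                       # a permanent jump in demand
            inventory += pipeline.popleft() - sales
            gap = desired - inventory              # the supply line is ignored
            pipeline.append(max(sales + gap / response_delay, 0.0))
            history.append(round(inventory))
        return history

    for delay in (2, 3, 6):  # the book's comparison: react in 2, 3, or 6 days
        print(delay, simulate_lot(delay))
    # The 2-day reaction oscillates hard; the 6-day reaction settles gently.
    # (The sketch ignores stockouts, so inventory can swing below zero.)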

And we are aware that some delays can be powerful policy levers. Lengthening or shortening them can produce major changes in the behavior of systems.

Growth in a constrained environment is very common, so common that systems thinkers call it the “limits-to-growth” archetype.

In physical, exponentially growing systems, there must be at least one reinforcing loop driving the growth and at least one balancing loop constraining the growth, because no physical system can grow forever in a finite environment.

A quantity growing exponentially toward a constraint or limit reaches that limit in a surprisingly short time.

The larger the stock of initial resources, the more new discoveries, the longer the growth loops elude the control loops, and the higher the capital stock and its extraction rate grow, and the earlier, faster, and farther will be the economic fall on the back side of the production peak.

Renewable Stock Constrained by a Renewable Stock—a Fishing Economy

In all these cases, there is an input that keeps refilling the constraining resource stock (as shown in Figure 42).

Nonrenewable resources are stock-limited. The entire stock is available at once, and can be extracted at any rate (limited mainly by extraction capital). But since the stock is not renewed, the faster the extraction rate, the shorter the lifetime of the resource.

Renewable resources are flow-limited. They can support extraction or harvest indefinitely, but only at a finite flow rate equal to their regeneration rate. If they are extracted faster than they regenerate, they may eventually be driven below a critical threshold and become, for all practical purposes, nonrenewable.

I’ve shown three sets of possible behaviors of this renewable resource system here:

• overshoot and adjustment to a sustainable equilibrium,
• overshoot beyond that equilibrium followed by oscillation around it, and
• overshoot followed by collapse of the resource and the industry dependent on the resource.

Which outcome actually occurs depends on two things. The first is the critical threshold beyond which the resource population’s ability to regenerate itself is damaged. The second is the rapidity and effectiveness of the balancing feedback loop that slows capital growth as the resource becomes depleted.
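
Both determinants can be explored in a sketch of the fishing economy (all coefficients are mine and purely illustrative): regeneration is logistic but impaired below a critical density, and capital grows while a boat’s catch covers its costs. Varying only the strength of the capital response reproduces the three behaviors listed above.

    def simulate_fishery(capital_response, years=150):
        """A renewable stock (fish) harvested by slowly adjusting capital (boats)."""
        fish, boats, capacity, threshold = 800.0, 50.0, 1000.0, 200.0
        samples = []
        for year in range(years):
            rate = 0.5 if fish >= threshold else 0.5 * fish / threshold
            regen = rate * fish * (1 - fish / capacity)       # impaired when scarce
            catch_per_boat = 2.0 * fish / capacity            # scarcer fish, leaner catch
            harvest = min(catch_per_boat * boats, fish)
            profit = catch_per_boat - 1.0                     # break-even at 1 unit/boat
            boats *= max(1 + capital_response * profit, 0.8)  # boats exit at most 20%/yr
            fish = max(fish + regen - harvest, 0.0)
            if year % 30 == 0:
                samples.append((year, round(fish), round(boats)))
        return samples

    for response in (0.05, 0.3, 2.0):
        print(response, simulate_fishery(response))
    # Weak response: smooth settling near equilibrium; moderate: overshoot and
    # damped oscillation; aggressive: overshoot into boom-and-bust collapse.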

The trick, as with all the behavioral possibilities of complex systems, is to recognize what structures contain which latent behaviors, and what conditions release those behaviors—and, where possible, to arrange the structures and conditions to reduce the probability of destructive behaviors and to encourage the possibility of beneficial ones.

Three. Why Systems Work So Well

Why do systems work so well? Consider the properties of highly functional systems—machines or human communities or ecosystems—which are familiar to you. Chances are good that you may have observed one of three characteristics: resilience, self-organization, or hierarchy.

Placing a system in a straitjacket of constancy can cause fragility to evolve. —C. S. Holling, ecologist

Resilience is a measure of a system’s ability to survive and persist within a variable environment. The opposite of resilience is brittleness or rigidity.

There are always limits to resilience.

And, conversely, systems that are constant over time can be unresilient. This distinction between static stability and resilience is important. Static stability is something you can see; it’s measured by variation in the condition of a system week by week or year by year. Resilience is something that may be very hard to see, unless you exceed its limits, overwhelm and damage the balancing loops, and the system structure breaks down. Because resilience may not be obvious without a whole-system view, people often sacrifice resilience for stability, or for productivity, or for some other more immediately recognizable system property.

Just-in-time deliveries of products to retailers or parts to manufacturers have reduced inventory instabilities and brought down costs in many industries. The just-in-time model also has made the production system more vulnerable, however, to perturbations in fuel supply, traffic flow, computer breakdown, labor availability, and other possible glitches.

I think of resilience as a plateau upon which the system can play, performing its normal functions in safety. A resilient system has a big plateau, a lot of space over which it can wander, with gentle, elastic walls that will bounce it back, if it comes near a dangerous edge. As a system loses its resilience, its plateau shrinks, and its protective walls become lower and more rigid, until the system is operating on a knife edge, likely to fall off in one direction or another whenever it makes a move. Loss of resilience can come as a surprise, because the system usually is paying much more attention to its play than to its playing space. One day it does something it has done a hundred times before and crashes.

Systems need to be managed not only for productivity or stability, they also need to be managed for resilience—the ability to recover from perturbation, the ability to restore or repair themselves.

This capacity of a system to make its own structure more complex is called self-organization. You see self-organization in a small, mechanistic way whenever you see a snowflake, or ice feathers on a poorly insulated window, or a supersaturated solution suddenly forming a garden of crystals. You see self-organization in a more profound way whenever a seed sprouts, or a baby learns to speak, or a neighborhood decides to come together to oppose a toxic waste dump.

Like resilience, self-organization is often sacrificed for purposes of short-term productivity and stability. Productivity and stability are the usual excuses for turning creative human beings into mechanical adjuncts to production processes.

Systems often have the property of self-organization—the ability to structure themselves, to create new structure, to learn, diversify, and complexify. Even complex forms of self-organization may arise from relatively simple organizing rules—or may not.

Complex systems can evolve from simple systems only if there are stable intermediate forms. The resulting complex forms will naturally be hierarchic. That may explain why hierarchies are so common in the systems nature presents to us. Among all possible complex forms, hierarchies are the only ones that have had the time to evolve.

In hierarchical systems relationships within each subsystem are denser and stronger than relationships between subsystems.

When hierarchies break down, they usually split along their subsystem boundaries. Much can be learned by taking apart systems at different hierarchical levels—cells or organs, for example—and studying them separately. Hence, systems thinkers would say, the reductionist dissection of regular science teaches us a lot. However, one should not lose sight of the important relationships that bind each subsystem to the others and to the higher levels of the hierarchy, or one will be in for surprises.

When a subsystem’s goals dominate at the expense of the total system’s goals, the resulting behavior is called suboptimization.

Just as damaging as suboptimization, of course, is the problem of too much central control.

Hierarchical systems evolve from the bottom up. The purpose of the upper layers of the hierarchy is to serve the purposes of the lower layers.

Four. Why Systems Surprise Us

Everything we think we know about the world is a model.

Our models usually have a strong congruence with the world.

However, and conversely, our models fall far short of representing the world fully. That is why we make mistakes and why we are regularly surprised.

Systems fool us by presenting themselves—or we fool ourselves by seeing the world—as a series of events.

We are less likely to be surprised if we can see how events accumulate into dynamic patterns of behavior.

The behavior of a system is its performance over time—its growth, stagnation, decline, oscillation, randomness, or evolution.

When a systems thinker encounters a problem, the first thing he or she does is look for data, time graphs, the history of the system. That’s because long-term behavior provides clues to the underlying system structure. And structure is the key to understanding not just what is happening, but why.

System structure is the source of system behavior. System behavior reveals itself as a series of events over time.

Systems thinking goes back and forth constantly between structure (diagrams of stocks, flows, and feedback) and behavior (time graphs). Systems thinkers strive to understand the connections between the hand releasing the Slinky (event) and the resulting oscillations (behavior) and the mechanical characteristics of the Slinky’s helical coil (structure).

Nonlinearities are important not only because they confound our expectations about the relationship between action and response. They are even more important because they change the relative strengths of feedback loops. They can flip a system from one mode of behavior to another.

Many relationships in systems are nonlinear. Their relative strengths shift in disproportionate amounts as the stocks in the system shift. Nonlinearities in feedback systems produce shifting dominance of loops and many complexities in system behavior.

When we think in terms of systems, we see that a fundamental misconception is embedded in the popular term “side-effects.”… This phrase means roughly “effects which I hadn’t foreseen or don’t want to think about.”… Side-effects no more deserve the adjective “side” than does the “principal” effect. It is hard to think in terms of systems, and we eagerly warp our language to protect ourselves from the necessity of doing so. —Garrett Hardin, ecologist

Clouds stand for the beginnings and ends of flows. They are stocks—sources and sinks—that are being ignored at the moment for the purposes of simplifying the present discussion. They mark the boundary of the system diagram. They rarely mark a real boundary, because systems rarely have real boundaries.

The lesson of boundaries is hard even for systems thinkers to get. There is no single, legitimate boundary to draw around a system. We have to invent boundaries for clarity and sanity; and boundaries can produce problems when we forget that we’ve artificially created them.

There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion—the questions we want to ask.

Ideally, we would have the mental flexibility to find the appropriate boundary for thinking about each new problem. We are rarely that flexible. We get attached to the boundaries our minds happen to be accustomed to.

It was with regard to grain that Justus von Liebig came up with his famous “law of the minimum.” It doesn’t matter how much nitrogen is available to the grain, he said, if what’s short is phosphorus. It does no good to pour on more phosphorus, if the problem is low potassium.

This concept of a limiting factor is simple and widely misunderstood. Agronomists assume, for example, that they know what to put in artificial fertilizer, because they have identified many of the major and minor nutrients in good soil.

At any given time, the input that is most important to a system is the one that is most limiting.

To shift attention from the abundant factors to the next potential limiting factor is to gain real understanding of, and control over, the growth process.

Any physical entity with multiple inputs and outputs is surrounded by layers of limits.

There always will be limits to growth. They can be self-imposed. If they aren’t, they will be system-imposed.

Delays are ubiquitous in systems. Every stock is a delay. Most flows have delays—shipping delays, perception delays, processing delays, maturation delays.

When there are long delays in feedback loops, some sort of foresight is essential. To act only when a problem becomes obvious is to miss an important opportunity to solve the problem.

Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system. Fishermen don’t know how many fish there are, much less how many fish will be caught by other fishermen that same day.

Seeing how individual decisions are rational within the bounds of the information available does not provide an excuse for narrow-minded behavior. It provides an understanding of why that behavior arises. Within the bounds of what a person in that part of the system can see and know, the behavior is reasonable. Taking out one individual from a position of bounded rationality and putting in another person is not likely to make much difference. Blaming the individual rarely helps create a more desirable outcome.

The bounded rationality of each actor in a system may not lead to decisions that further the welfare of the system as a whole.

Five. System Traps … and Opportunities

We call the system structures that produce such common patterns of problematic behavior archetypes. Some of the behaviors these archetypes manifest are addiction, drift to low performance, and escalation.

The primary symptom of a balancing feedback loop structure is that not much changes, despite outside forces pushing the system. Balancing loops stabilize systems; behavior patterns persist.

This is the systemic trap of “fixes that fail” or “policy resistance.” You see this when farm programs try year after year to reduce gluts, but there is still overproduction. There are wars on drugs, after which drugs are as prevalent as ever.

Policy resistance comes from the bounded rationalities of the actors in a system, each with his or her (or “its” in the case of an institution) own goals. Each actor monitors the state of the system with regard to some important variable—income or prices or housing or drugs or investment—and compares that state with his, her, or its goal. If there is a discrepancy, each actor does something to correct the situation. Usually the greater the discrepancy between the goal and the actual situation, the more emphatic the action will be.

Such resistance to change arises when the goals of subsystems are different from, and inconsistent with, one another.

In a policy-resistant system with actors pulling in different directions, everyone has to put great effort into keeping the system where no one wants it to be. If any single actor lets up, the others will drag the system closer to their goals, and farther from the goal of the one who let go.

The alternative to overpowering policy resistance is so counterintuitive that it’s usually unthinkable. Let go. Give up ineffective policies. Let the resources and energy spent on both enforcing and resisting be used for more constructive purposes.

The most effective way of dealing with policy resistance is to find a way of aligning the various goals of the subsystems, usually by providing an overarching goal that allows all actors to break out of their bounded rationality.

THE TRAP: POLICY RESISTANCE
When various actors try to pull a system stock toward various goals, the result can be policy resistance. Any new policy, especially if it’s effective, just pulls the stock farther from the goals of other actors and produces additional resistance, with a result that no one likes, but that everyone expends considerable effort in maintaining.

THE WAY OUT
Let go. Bring in all the actors and use the energy formerly expended on resistance to seek out mutually satisfactory ways for all goals to be realized—or redefinitions of larger and more important goals that everyone can pull toward together.

But in this system, there is a distinction between the actual system state and the perceived state. The actor tends to believe bad news more than good news. As actual performance varies, the best results are dismissed as aberrations, the worst results stay in the memory. The actor thinks things are worse than they really are. And to complete this tragic archetype, the desired state of the system is influenced by the perceived state. Standards aren’t absolute. When perceived performance slips, the goal is allowed to slip. “Well, that’s about all you can expect.” “Well, we’re not doing much worse than we were last year.” “Well, look around, everybody else is having trouble too.” The balancing feedback loop that should keep the system state at an acceptable level is overwhelmed by a reinforcing feedback loop heading downhill. The lower the perceived system state, the lower the desired state.

THE TRAP: DRIFT TO LOW PERFORMANCE
Allowing performance standards to be influenced by past performance, especially if there is a negative bias in perceiving past performance, sets up a reinforcing feedback loop of eroding goals that sets a system drifting toward low performance.

THE WAY OUT
Keep performance standards absolute. Even better, let standards be enhanced by the best actual performances instead of being discouraged by the worst. Use the same structure to set up a drift toward high performance.
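
The archetype is easy to simulate. In this sketch (made-up coefficients), perception of the system state is biased toward bad news, and unless standards are held absolute, the goal itself slides toward the perceived state.

    def drift(standards_absolute, years=30):
        """Eroding-goals loop: the goal chases a biased perception of the state."""
        state, goal, perceived = 100.0, 100.0, 100.0
        for _ in range(years):
            perceived += 0.5 * (state - perceived) - 2.0  # bad-news bias
            if not standards_absolute:
                goal += 0.2 * (perceived - goal)          # standards erode
            state += 0.3 * (goal - state)                 # balancing correction
        return round(state), round(goal)

    print(drift(standards_absolute=True))   # state holds at the fixed goal
    print(drift(standards_absolute=False))  # state and goal ratchet downward together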

“I’ll raise you one” is the decision rule that leads to escalation. Escalation comes from a reinforcing loop set up by competing actors trying to get ahead of each other.

THE TRAP: ESCALATION
When the state of one stock is determined by trying to surpass the state of another stock—and vice versa—then there is a reinforcing feedback loop carrying the system into an arms race, a wealth race, a smear campaign, escalating loudness, escalating violence. The escalation is exponential and can lead to extremes surprisingly quickly. If nothing is done, the spiral will be stopped by someone’s collapse—because exponential growth cannot go on forever.

THE WAY OUT
The best way out of this trap is to avoid getting in it. If caught in an escalating system, one can refuse to compete (unilaterally disarm), thereby interrupting the reinforcing loop. Or one can negotiate a new system with balancing loops to control the escalation.

This system trap, the archetype called “success to the successful,” is found whenever the winners of a competition receive, as part of the reward, the means to compete even more effectively in the future.

THE TRAP: SUCCESS TO THE SUCCESSFUL
If the winners of a competition are systematically rewarded with the means to win again, a reinforcing feedback loop is created by which, if it is allowed to proceed uninhibited, the winners eventually take all, while the losers are eliminated.

THE WAY OUT
Diversification, which allows those who are losing the competition to get out of that game and start another one; strict limitation on the fraction of the pie any one winner may win (antitrust laws); policies that level the playing field, removing some of the advantage of the strongest players or increasing the advantage of the weakest; policies that devise rewards for success that do not bias the next round of competition.
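
A sketch of the reinforcing loop and of one leveling policy (the rules and numbers are mine, for illustration only): each round the richer player wins resources from the poorer in proportion to the gap, so a small initial edge compounds into winner-take-all unless the winner’s share is capped.

    def compete(rounds=30, winner_share_cap=None):
        """Success to the successful: the lead compounds unless a policy levels it."""
        a, b = 52.0, 48.0                      # nearly equal starting capital
        for _ in range(rounds):
            transfer = min(0.10 * (a - b), b)  # the winner's edge buys a bigger win
            a, b = a + transfer, b - transfer
            if winner_share_cap is not None:   # leveling policy, e.g., antitrust
                total = a + b
                a = min(a, winner_share_cap * total)
                b = total - a
        return round(a, 1), round(b, 1)

    print(compete())                      # (100.0, 0.0): the winner takes all
    print(compete(winner_share_cap=0.6))  # (60.0, 40.0): both stay in the game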

This trap is known by many names: addiction, dependence, shifting the burden to the intervenor. The structure includes a stock with inflows and outflows. The stock can be physical (a crop of corn) or nonphysical (a sense of well-being or self-worth). The stock is maintained by an actor adjusting a balancing feedback loop—either altering the inflows or the outflows. The actor has a goal and compares it with a perception of the actual state of the stock to determine what action to take.

THE TRAP: SHIFTING THE BURDEN TO THE INTERVENOR
Shifting the burden, dependence, and addiction arise when a solution to a systemic problem reduces (or disguises) the symptoms, but does nothing to solve the underlying problem. Whether it is a substance that dulls one’s perception or a policy that hides the underlying trouble, the drug of choice interferes with the actions that could solve the real problem. If the intervention designed to correct the problem causes the self-maintaining capacity of the original system to atrophy or erode, then a destructive reinforcing feedback loop is set in motion. The system deteriorates; more and more of the solution is then required. The system will become more and more dependent on the intervention and less and less able to maintain its own desired state.

THE WAY OUT
Again, the best way out of this trap is to avoid getting in. Beware of symptom-relieving or signal-denying policies or practices that don’t really address the problem. Take the focus off short-term relief and put it on long-term restructuring.

Rule beating is usually a response of the lower levels in a hierarchy to overrigid, deleterious, unworkable, or ill-defined rules from above. There are two generic responses to rule beating. One is to try to stamp out the self-organizing response by strengthening the rules or their enforcement—usually giving rise to still greater system distortion. That’s the way further into the trap

THE TRAP: RULE BEATING
Rules to govern a system can lead to rule beating—perverse behavior that gives the appearance of obeying the rules or achieving the goals, but that actually distorts the system.

THE WAY OUT
Design, or redesign, rules to release creativity not in the direction of beating the rules, but in the direction of achieving the purpose of the rules.

If the desired system state is good education, measuring that goal by the amount of money spent per student will ensure money spent per student. If the quality of education is measured by performance on standardized tests, the system will produce performance on standardized tests. Whether either of these measures is correlated with good education is at least worth thinking about.

In the early days of family planning in India, program goals were defined in terms of the number of IUDs implanted. So doctors, in their eagerness to meet their targets, put loops into women without patient approval. These examples confuse effort with result, one of the most common mistakes in designing systems around the wrong goal.

GNP is a measure of throughput—flows of stuff made and purchased in a year—rather than capital stocks, the houses and cars and computers and stereos that are the source of real wealth and real pleasure. It could be argued that the best society would be one in which capital stocks can be maintained and used with the lowest possible throughput, rather than the highest.

Seeking the wrong goal, satisfying the wrong indicator, is a system characteristic almost opposite from rule beating. In rule beating, the system is out to evade an unpopular or badly designed rule, while giving the appearance of obeying it. In seeking the wrong goal, the system obediently follows the rule and produces its specified result—which is not necessarily what anyone actually wants. You have the problem of wrong goals when you find something stupid happening “because it’s the rule.” You have the problem of rule beating when you find something stupid happening because it’s the way around the rule. Both of these system perversions can be going on at the same time with regard to the same rule.

THE TRAP: SEEKING THE WRONG GOAL
System behavior is particularly sensitive to the goals of feedback loops. If the goals—the indicators of satisfaction of the rules—are defined inaccurately or incompletely, the system may obediently work to produce a result that is not really intended or wanted.

THE WAY OUT
Specify indicators and goals that reflect the real welfare of the system. Be especially careful not to confuse effort with result or you will end up with a system that is producing effort, not result.

Six. Leverage Points—Places to Intervene in a System

People deeply involved in a system often know intuitively where to find leverage points, yet more often than not they push the change in the wrong direction.

The world’s leaders are correctly fixated on economic growth as the answer to virtually all problems, but they’re pushing with all their might in the wrong direction.

Counterintuitive—that’s Forrester’s word to describe complex systems. Leverage points frequently are not intuitive. Or if they are, we too often use them backward, systematically worsening whatever problems we are trying to solve.

  12. Numbers—Constants and parameters such as subsidies, taxes, standards

Numbers, the sizes of flows, are dead last on my list of powerful interventions. Diddling with the details, arranging the deck chairs on the Titanic. Probably 90—no 95, no 99 percent—of our attention goes to parameters, but there’s not a lot of leverage in them.

If the system is chronically stagnant, parameter changes rarely kick-start it. If it’s wildly variable, they usually don’t stabilize it. If it’s growing out of control, they don’t slow it down.

When I became a landlord, I spent a lot of time and energy trying to figure out what would be a “fair” rent to charge. I tried to consider all the variables, including the relative incomes of my tenants, my own income and cash-flow needs, which expenses were for upkeep and which were capital expenses, the equity versus the interest portion of the mortgage payments, how much my labor on the house was worth, etc. I got absolutely nowhere. Finally I went to someone who specializes in giving money advice. She said: “You’re acting as though there is a fine line at which the rent is fair, and at any point above that point the tenant is being screwed and at any point below that you are being screwed. In fact, there is a large gray area in which both you and the tenant are getting a good, or at least a fair, deal. Stop worrying and get on with your life.”

  11. Buffers—The sizes of stabilizing stocks relative to their flows

Stocks that are big, relative to their flows, are more stable than small ones.

There’s leverage, sometimes magical, in changing the size of buffers. But buffers are usually physical entities, not easy to change. The acid absorption capacity of eastern soils is not a leverage point for alleviating acid rain damage. The storage capacity of a dam is literally cast in concrete. So I haven’t put buffers very high on the list of leverage points.

  10. Stock-and-Flow Structures—Physical systems and their nodes of intersection

The only way to fix a system that is laid out poorly is to rebuild it, if you can.

Physical structure is crucial in a system, but is rarely a leverage point, because changing it is rarely quick or simple.

  9. Delays—The lengths of time relative to the rates of system changes

A system just can’t respond to short-term changes when it has long-term delays. That’s why a massive central-planning system, such as the Soviet Union or General Motors, necessarily functions poorly.

  8. Balancing Feedback Loops—The strength of the feedbacks relative to the impacts they are trying to correct

One of the big mistakes we make is to strip away these “emergency” response mechanisms because they aren’t often used and they appear to be costly. In the short term, we see no effect from doing this. In the long term, we drastically narrow the range of conditions over which the system can survive.

This great system was invented to put self-correcting feedback between the people and their government. The people, informed about what their elected representatives do, respond by voting those representatives in or out of office. The process depends on the free, full, unbiased flow of information back and forth between electorate and leaders. Billions of dollars are spent to limit and bias and dominate that flow of clear information. Give the people who want to distort market-price signals the power to influence government leaders, allow the distributors of information to be self-interested partners, and none of the necessary balancing feedbacks work well. Both market and democracy erode.

The strength of a balancing feedback loop is important relative to the impact it is designed to correct.

  7. Reinforcing Feedback Loops—The strength of the gain of driving loops

Reinforcing feedback loops are sources of growth, explosion, erosion, and collapse in systems. A system with an unchecked reinforcing loop ultimately will destroy itself.

Reducing the gain around a reinforcing loop—slowing the growth—is usually a more powerful leverage point in systems than strengthening balancing loops, and far preferable to letting the reinforcing loop run.

  6. Information Flows—The structure of who does and does not have access to information

Missing information flows is one of the most common causes of system malfunction. Adding or restoring information can be a powerful intervention, usually much easier and cheaper than rebuilding physical infrastructure.

There is a systematic tendency on the part of human beings to avoid accountability for their own decisions. That’s why there are so many missing feedback loops—and why this kind of leverage point is so often popular with the masses, unpopular with the powers that be, and effective, if you can get the powers that be to permit it to happen (or go around them and make it happen anyway).

  5. Rules—Incentives, punishments, constraints

  4. Self-Organization—The power to add, change, or evolve system structure

He, she, or it just has to write marvelously clever rules for self-organization.

The intervention point here is obvious, but unpopular. Encouraging variability and experimentation and diversity means “losing control.”

  3. Goals—The purpose or function of the system

I said a while back that changing the players in the system is a low-level intervention, as long as the players fit into the same old system. The exception to that rule is at the top, where a single player can have the power to change the system’s goal.

  2. Paradigms—The mind-set out of which the system—its goals, structure, rules, delays, parameters—arises

Systems modelers say that we change paradigms by building a model of the system, which takes us outside the system and forces us to see it whole. I say that because my own paradigms have been changed that way.

  1. Transcending Paradigms

That is to keep oneself unattached in the arena of paradigms, to stay flexible, to realize that no paradigm is “true,” that every one, including the one that sweetly shapes your own worldview, is a tremendously limited understanding of an immense and amazing universe that is far beyond human comprehension.

Seven. Living in a World of Systems

They are likely to assume that here, in systems analysis, in interconnection and complication, in the power of the computer, here at last, is the key to prediction and control.

Our first comeuppance came as we learned that it’s one thing to understand how to fix a system and quite another to wade in and fix it.

We ran into another problem. Our systems insights helped us understand many things we hadn’t understood before, but they didn’t help us understand everything. In fact, they raised at least as many questions as they answered.

Systems thinking makes clear even to the most committed technocrat that getting along in this world of complex systems requires more than technocracy.

Living successfully in a world of systems requires more of us than our ability to calculate. It requires our full humanity—our rationality, our ability to sort out truth from falsehood, our intuition, our compassion, our vision, and our morality.

We can’t control systems or figure them out. But we can dance with them!

Get the Beat of the System

Before you disturb the system in any way, watch how it behaves.

Watching what really happens, instead of listening to people’s theories of what happens, can explode many careless causal hypotheses.

And finally, starting with history discourages the common and distracting tendency we all have to define a problem not by the system’s actual behavior, but by the lack of our favorite solution.

Expose Your Mental Models to the Light of Day

You don’t have to put forth your mental model with diagrams and equations, although doing so is a good practice. You can do it with words or lists or pictures or arrows showing what you think is connected to what.

Remember, always, that everything you know, and everything everyone knows, is only a model. Get your model out there where it can be viewed.

Honor, Respect, and Distribute Information

If I could, I would add an eleventh commandment to the first ten: Thou shalt not distort, delay, or withhold information.

Use Language with Care and Enrich It with Systems Concepts

The first step in respecting language is keeping it as concrete, meaningful, and truthful as possible—part of the job of keeping information streams clear. The second step is to enlarge language to make it consistent with our enlarged understanding of systems.

Pay Attention to What Is Important, Not Just What Is Quantifiable

Pretending that something doesn’t exist if it’s hard to quantify leads to faulty models. You’ve already seen the system trap that comes from setting goals around what is easily measured, rather than around what is important.

Make Feedback Policies for Feedback Systems

You can imagine why a dynamic, self-adjusting feedback system cannot be governed by a static, unbending policy. It’s easier, more effective, and usually much cheaper to design policies that change depending on the state of the system.

Go for the Good of the Whole

Remember that hierarchies exist to serve the bottom layers, not the top. Don’t maximize parts of systems or subsystems while ignoring the whole. Don’t, as Kenneth Boulding once said, go to great trouble to optimize something that never should be done at all. Aim to enhance total systems properties, such as growth, stability, diversity, resilience, and sustainability—whether they are easily measured or not.

Listen to the Wisdom of the System

Locate Responsibility in the System

That’s a guideline both for analysis and design. In analysis, it means looking for the ways the system creates its own behavior.

And sometimes blaming or trying to control the outside influence blinds one to the easier task of increasing responsibility within the system.

“Intrinsic responsibility” means that the system is designed to send feedback about the consequences of decision making directly and quickly and compellingly to the decision makers. Because the pilot of a plane rides in the front of the plane, that pilot is intrinsically responsible. He or she will experience directly the consequences of his or her decisions.

These few examples are enough to get you thinking about how little our current culture has come to look for responsibility within the system that generates an action, and how poorly we design systems to experience the consequences of their actions.

Stay Humble—Stay a Learner

The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment—or, as Buckminster Fuller put it, by trial and error, error, error. In a world of complex systems, it is not appropriate to charge forward with rigid, undeviating directives. “Stay the course” is only a good idea if you’re sure you’re on course.

Celebrate Complexity

There’s something within the human mind that is attracted to straight lines and not curves, to whole numbers and not fractions, to uniformity and not diversity, and to certainties and not mystery.

Expand Time Horizons

One of the worst ideas humanity ever had was the interest rate, which led to the further ideas of payback periods and discount rates, all of which provide a rational, quantitative excuse for ignoring the long term.

The longer the operant time horizon, the better the chances for survival.

In a strict systems sense, there is no long-term, short-term distinction. Phenomena at different time-scales are nested within each other.

Defy the Disciplines

Seeing systems whole requires more than being “interdisciplinary,” if that word means, as it usually does, putting together people from different disciplines and letting them talk past each other. Interdisciplinary communication works only if there is a real problem to be solved, and if the representatives from the various disciplines are more committed to solving the problem than to being academically correct. They will have to go into learning mode. They will have to admit ignorance and be willing to be taught, by each other and by the system.

Expand the Boundary of Caring

It means expanding the horizons of caring. There are moral reasons for doing that, of course. And if moral arguments are not sufficient, then systems thinking provides the practical reasons to back up the moral ones.

Don’t Erode the Goal of Goodness

Don’t weigh the bad news more heavily than the good. And keep standards absolute.