In systems theory, the whole is seldom greater than the sum of the parts. The global economy, like any open system, can deteriorate rapidly when driven by uncontrolled feedback loops. In theory, central banks work like a home thermostat: when the system runs too hot, start the A/C (raise rates, cut the money supply); when it cools too quickly, turn on the furnace (lower rates, inject funds). Such negative feedback (pushing against the trend) keeps a closed system operating within a comfortable range.
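The thermostat analogy can be sketched in a few lines. This is a minimal illustration, not a model of actual monetary policy; the setpoint, gain, and drift values are all made up for the example.

```python
# A minimal sketch of negative feedback: a thermostat-style controller
# applies a correction that opposes deviation from a setpoint.

def regulate(temperature, setpoint=70.0, gain=0.5):
    """Return a corrective action proportional to, and opposing, the error."""
    error = temperature - setpoint
    return -gain * error  # negative feedback: push against the trend

# Simulate a system under constant upward pressure; the controller
# holds it near (not exactly at) the setpoint.
temperature = 80.0
for _ in range(20):
    temperature += 1.0                  # external heating pressure
    temperature += regulate(temperature)

print(round(temperature, 1))
```

Because the correction always opposes the trend, the system settles into a narrow band just above the setpoint instead of running away.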
Businesses strive to push inefficiency out of their internal systems, using elaborate computer modeling to remove slack. This creates tightly coupled systems, like airline schedules, which are prone to catastrophic failure when hit by external perturbations larger than the remaining slack. O'Hare closes due to a storm, and within hours planes and crews stuck in the wrong locations cause delays at other airports, producing a chain reaction that takes two days to unsnarl.
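The role of slack can be made concrete with a toy schedule chain. This is an illustrative sketch, not an airline model; the delay and buffer numbers are invented for the example.

```python
# Sketch: a delay propagates down a tightly scheduled chain of legs.
# Each leg has some slack (buffer time) that absorbs part of the delay;
# whatever exceeds the slack carries over to the next leg.

def propagate(initial_delay, slack_per_leg, legs=5):
    """Return the residual delay (minutes) at each leg of the chain."""
    delays = []
    delay = initial_delay
    for _ in range(legs):
        delay = max(0.0, delay - slack_per_leg)  # slack absorbs some delay
        delays.append(delay)
    return delays

print(propagate(120, slack_per_leg=10))  # tight schedule: delay persists
print(propagate(120, slack_per_leg=40))  # generous slack: delay dies out
```

With only 10 minutes of slack per leg, a two-hour disruption is still 70 minutes deep five legs later; with 40 minutes of slack it is absorbed by the third leg. Efficiency and resilience trade off directly.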
Fracture-critical design
Thomas Fisher, dean of the College of Design at the University of Minnesota, began to think about the causes of catastrophic failure after the collapse of the I-35W bridge in Minneapolis. He identifies the issue as what engineers call "fracture-critical" design, in which lack of redundancy, interconnectedness, efficiency drivers, and sensitivity to stress produce systems with no resilience. He cites the work of ecologists Lance Gunderson and C. S. Holling, whose theory of panarchy "explains that human and natural systems move in continuous adaptive cycles, and that exponential growth in connectedness and efficiency actually makes systems less and less resilient, inevitably leading to collapse and then return to a state of greater resilience, with fewer connections and less efficiency."
Fisher reminds us that much of our current economic and political infrastructure has resulted from fracture-critical business principles. We are indeed in a period of exponential growth in connectedness, which has already produced worldwide catastrophic failures. Finance, housing, peak oil, and pollution are all fracture-critical systems under exponentially increasing stress. The question is no longer whether there will be failures, only when, where, and how big. Like engineers, societies can learn from their mistakes. Good engineers no longer design the cheapest bridge to build; they look at total cost of ownership, which includes continuing maintenance, ongoing improvements, and even a factor for insurance against the risk of catastrophic failure. We need only read the handwriting on the wall and act now.
Cascading failures, such as regional power blackouts, occur when a problem in one part of a massive system (the electric power grid) induces similar problems in other components. Exacerbating the dangers of such positive feedback is the lemming-like nature of business trends (the greater-fool principle): there are always buyers as prices reach record highs (bubbles, irrational exuberance), followed by panicked over-selling on a big drop.
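The blackout dynamic can be sketched as load redistribution. This is a cartoon of a grid, assuming for simplicity that a tripped component's load is spread evenly over the survivors; real grids redistribute by network topology.

```python
# Sketch of a cascading failure: when one component trips, its load is
# redistributed to the survivors, which may push them past capacity too.

def cascade(loads, capacity):
    """Trip overloaded components, redistributing load until stable."""
    loads = dict(loads)
    while True:
        tripped = [k for k, v in loads.items() if v > capacity]
        if not tripped:
            return loads                  # stable: nothing over capacity
        shed = sum(loads.pop(k) for k in tripped)
        if not loads:
            return {}                     # total blackout
        share = shed / len(loads)
        for k in loads:
            loads[k] += share             # positive feedback: survivors carry more

grid = {"A": 90, "B": 80, "C": 110, "D": 70}  # capacity 100 each
print(cascade(grid, capacity=100))
```

One node running 10% over capacity is enough to take down a grid whose other members each had headroom, because every failure increases the stress on what remains: the defining signature of positive feedback.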
Braking runaway systems
Ultimately, real-world business systems are open and chaotic. Anticipate meltdowns when designing systems, and build in restraints.
- Decoupling – put the slack (inefficiency, redundancy) back in the system; design in backup
- Early warnings – model for bigger perturbations; monitor trends and sharp changes in trends (look for stress)
- Negative feedback controls – predefine fixes for risky trends; curb exposure early (remove risky interconnectedness)
- Counter-cyclical opportunism – identify what is working, do more of that; think long-term
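The early-warning and negative-feedback-control restraints above can be combined in a simple monitor. This is a sketch under invented assumptions: the series, the 10% rate threshold, and the `watch` helper are all illustrative, not a real trading circuit breaker.

```python
# Sketch of an early-warning monitor: watch the rate of change of a
# series and flag any step whose relative change exceeds a threshold,
# so a predefined brake (curb exposure, halt activity) can kick in early.

def watch(series, max_rate=0.1):
    """Yield ('ok' | 'warn', value) for each step after the first."""
    prev = series[0]
    for value in series[1:]:
        rate = (value - prev) / prev   # relative change, step to step
        yield ("warn" if abs(rate) > max_rate else "ok", value)
        prev = value

prices = [100, 104, 108, 125, 90]
print([status for status, _ in watch(prices)])
```

The monitor flags the sharp run-up, not just the crash after it: the point of early warning is to trigger the predefined fix while the risky trend is building, rather than after the failure.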