The Stability

Three phases

Hyman Minsky divided institutional borrowing into three phases. In hedge financing, cash flows from operations cover both principal and interest. In speculative financing, cash flows cover the interest but not the principal — the borrower must refinance at maturity. In Ponzi financing, cash flows cover neither; the borrower depends entirely on rising asset values to remain solvent. The names describe not different institutions but different phases of the same institution under changing conditions.
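
The taxonomy reduces to two comparisons. A minimal sketch in Python, with illustrative field names and figures rather than Minsky's own notation:

```python
from dataclasses import dataclass

@dataclass
class Borrower:
    cash_flow: float      # annual cash flow from operations
    interest_due: float   # annual interest on outstanding debt
    principal_due: float  # principal amortization due this year

def minsky_phase(b: Borrower) -> str:
    """Classify a borrower by which obligations operating cash flow covers."""
    if b.cash_flow >= b.interest_due + b.principal_due:
        return "hedge"        # covers interest and principal
    if b.cash_flow >= b.interest_due:
        return "speculative"  # covers interest only; principal must be rolled over
    return "ponzi"            # covers neither; solvency depends on asset prices

# The same borrower migrates across phases as conditions change:
firm = Borrower(cash_flow=100.0, interest_due=60.0, principal_due=30.0)
print(minsky_phase(firm))    # hedge
firm.interest_due = 90.0     # rates rise, or leverage grows
print(minsky_phase(firm))    # speculative
firm.cash_flow = 50.0        # revenues fall
print(minsky_phase(firm))    # ponzi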

The mechanism is his central thesis: prolonged stability encourages risk-taking. When the economy performs well for an extended period, lenders relax their standards. Borrowers take on more leverage. The system migrates — hedge to speculative to Ponzi — not through recklessness but through rational response to observed calm. Each actor sees years of evidence that the current level of risk is tolerable. Each actor is right, until the aggregate is wrong.

"Over a protracted period of good times," Minsky wrote in 1992, "capitalist economies tend to move from a financial structure dominated by hedge finance units to a structure in which there is large weight to units engaged in speculative and Ponzi finance." He condensed the entire argument to four words: stability is destabilizing.

The Great Moderation tested the thesis at scale. From 1984 to 2007, the standard deviation of quarterly US GDP growth fell by roughly sixty percent. Inflation volatility dropped by a comparable amount. Twenty-three years of unusual calm. During that calm, subprime mortgage originations grew from roughly $35 billion in the mid-1990s to $600 billion in 2006, more than tripling their share of total originations. Global CDO issuance rose from $69 billion in 2000 to over $500 billion in 2006. The five major investment banks reached asset-to-equity ratios between 26-to-1 and 35-to-1. Paul McCulley of PIMCO had coined a name for the point where such a structure gives way: the Minsky moment. It arrived in August 2007.
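
What those leverage ratios imply is arithmetic rather than judgment. A back-of-the-envelope sketch, not a model of any actual balance sheet:

```python
def loss_that_wipes_equity(assets_to_equity: float) -> float:
    """With assets/equity = L, equity is 1/L of assets, so an asset-value
    decline of 1/L exhausts it."""
    return 1.0 / assets_to_equity

for leverage in (26, 35):
    print(f"{leverage}-to-1: a {loss_that_wipes_equity(leverage):.1%} "
          "decline in asset values erases all equity")
# 26-to-1: a 3.8% decline ...
# 35-to-1: a 2.9% decline ...
```

At 35-to-1, the margin between calm and insolvency is under three percent.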

The stability was not false. GDP growth was genuinely steady. Inflation was genuinely low. The lending standards that loosened during those twenty-three years loosened in response to genuine evidence of safety. The evidence was correct. The conclusion — that the evidence would continue — required a stability that the conclusion itself was eroding.

The invisible disease

The same structure appears in medicine, with a longer incubation period.

Before the measles vaccine was licensed in 1963, three to four million Americans were infected annually. Four hundred to five hundred died each year. Nearly every child contracted the disease by age fifteen. By 2000, the United States declared measles eliminated — no continuous domestic transmission for twelve months.

The elimination was real. The vaccine worked. And the elimination made the disease invisible. Parents who had never seen measles weighed the perceived risks of vaccination against a disease they had no experience of. Vaccination rates declined. In 2019, the United States recorded 1,282 measles cases — the highest count since 1992, with 89 percent of patients unvaccinated or of unknown vaccination status.

The structural paradox is precise: the intervention succeeds, the success erases the evidence of necessity, and the erasure of evidence undermines support for the intervention. The vaccine is a victim of its own efficacy. Globally, measles deaths fell from 2.6 million per year in 1980 to approximately 140,000 by 2018. Each prevented death is a death no one sees. Each death no one sees is an argument no one can make.
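
The feedback can be made concrete with a toy loop. Every parameter below is invented for illustration; this is not an epidemiological model:

```python
coverage, cases = 0.95, 0.0    # start: high coverage, eliminated disease
for year in range(1, 31):
    perceived_risk = min(1.0, cases / 100.0)     # only visible disease registers
    coverage += 0.10 * (perceived_risk - 0.5)    # coverage drifts while risk looks low
    coverage = max(0.50, min(0.99, coverage))
    cases = max(0.0, 500.0 * (0.92 - coverage))  # outbreaks once a coverage gap opens
    if year in (1, 5, 10, 30):
        print(f"year {year:2d}: coverage {coverage:.2f}, cases {cases:.0f}")
```

The loop settles not at elimination but at the level of visible disease required to sustain belief in the vaccine.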

Smallpox is the counterexample that proves the structure. Edward Jenner created the first vaccine in 1796. The WHO launched its intensified eradication program in 1967. The last naturally acquired case was diagnosed in Ali Maow Maalin, a hospital cook in Merca, Somalia, on October 26, 1977. Eradication was certified in 1980. Smallpox was eliminated before the vaccine's own success could generate the hesitancy that would reverse it. The cycle was completed before it could turn.

The protected floodplain

In 1945, Gilbert White — later called the father of floodplain management — submitted a doctoral dissertation at the University of Chicago with a sentence that has outlived its author: "Floods are acts of God, but flood losses are largely acts of man."

His argument was structural. Levees and dams, by protecting floodplains, make floodplains safe for development. Development concentrates population and infrastructure in exactly the areas that will be catastrophically affected when the protection fails. The levee creates safety. The safety attracts what the levee will eventually destroy.

Sixty years later, on August 29, 2005, Hurricane Katrina made landfall as a Category 3 storm. The levees protecting New Orleans had been designed for a surge of approximately fourteen feet. The actual surge reached twenty-four feet — ten feet above design height. Over fifty levee segments failed. At the 17th Street Canal, steel sheet pilings extended only ten feet below sea level, seven feet shallower than the Army Corps of Engineers had specified. Eighty percent of the city flooded; 1,392 people died. Over 204,000 homes were severely damaged or destroyed. Eight hundred thousand citizens were displaced — the largest American displacement since the Dust Bowl. Damage reached $125 billion.

The levees had not failed to protect. They had protected successfully for decades, and the success had filled the floodplain with everything the flood would claim. The Netherlands — sixty percent of which would flood regularly without its defenses — learned this after 1,835 people died in the 1953 North Sea flood. The Delta Works that followed were supplemented, beginning in 2006, by the Room for the River Programme: lowering floodplains, relocating levees, creating water buffers. The Dutch response to the levee effect was not higher levees but managed retreat. The protection was redesigned to include an acknowledgment that protection attracts what it protects against.

The stored fuel

On August 20, 1910, winds reaching seventy miles per hour swept across western Montana and northern Idaho, uniting scattered fires into a single conflagration that burned three million acres in two days. Eighty-seven people died, seventy-eight of them firefighters. The Big Blowup, as it came to be called, settled a policy debate within the five-year-old US Forest Service. One faction had argued that fire was ecologically necessary. The other argued for total suppression. The scale of the disaster ended the argument.

In 1935, the Forest Service formalized the doctrine: all fires should be suppressed by 10 a.m. the morning after first reported. The policy succeeded. Fire frequency dropped. Forests grew denser. In the absence of periodic low-intensity burns, undergrowth accumulated. Live fuel loading increased. California forests that historically supported perhaps fifty trees per acre now average 165 to 170 — more than three times as many. In parts of Idaho, densities rose from a historical range of 15 to 150 trees per acre to 250 to 900.

The stored fuel waited. In 1988, Yellowstone burned 793,880 acres — thirty-six percent of the park — with over 150,000 acres consumed on a single August day. In 2020, California's fire season burned 4.2 million acres, the largest in modern state history. The August Complex fire exceeded one million acres, earning the description "gigafire." Six months earlier, Australia's Black Summer had burned 24 million hectares — fifty-nine million acres — killing or displacing an estimated three billion terrestrial vertebrates.

A 2024 study in Nature Communications confirmed the mechanism: fire suppression makes wildfires more severe and accentuates the impacts of both climate change and fuel accumulation. By removing low-intensity fires, suppression ensures that the fires which do occur burn under more extreme conditions with more available fuel. The stability — fewer fires — stored the combustion that the stability itself had prevented.
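
A toy simulation makes the trade visible. All rates below are invented; the point is the structure, not the numbers:

```python
import random

def simulate(years: int, suppress: bool, seed: int = 7) -> list[float]:
    """Toy fuel-accumulation model (all figures invented). Fuel accrues one
    unit per year; an ignited fire consumes the whole backlog; suppression
    stops every fire except in rare extreme-weather years."""
    rng = random.Random(seed)
    fuel, burns = 0.0, []
    for _ in range(years):
        fuel += 1.0
        ignition = rng.random() < 0.30    # something starts a fire
        extreme = rng.random() < 0.05     # weather overwhelms the crews
        if ignition and (not suppress or extreme):
            burns.append(fuel)            # severity scales with stored fuel
            fuel = 0.0
    return burns

for suppress in (False, True):
    burns = simulate(300, suppress)
    policy = "suppression" if suppress else "let-burn"
    print(f"{policy:11}: {len(burns):3d} fires, largest consumed "
          f"{max(burns, default=0):.0f} years of stored fuel")
```

Both policies face the same ignitions; suppression changes only when the fuel burns, and therefore how much of it burns at once.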

The cure that teaches

Alexander Fleming, in his Nobel lecture on December 11, 1945, described a sequence that had not yet fully occurred. "The time may come when penicillin can be bought by anyone in the shops," he said. "Then there is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant."

The first penicillin-resistant strain of Staphylococcus aureus had already been identified — in 1942, within two years of the drug's first therapeutic use, while fewer than a hundred patients worldwide had been treated with it. By late 1946, 12.5 percent of staph strains at Hammersmith Hospital in London were resistant. By early 1947, that proportion had tripled. The bacterium learned faster than the pharmacists could teach.

Methicillin was introduced in 1959 specifically to treat penicillin-resistant staph. By 1961, Patricia Jevons reported methicillin-resistant Staphylococcus aureus — MRSA — in the British Medical Journal. Two years from introduction to resistance. By the 1980s, MRSA was endemic in hospitals. By 1997, half of all hospital staph infections were MRSA.

The mechanism is not side effect or misuse. It is the core function. Each successful antibiotic treatment kills susceptible bacteria and enriches for any resistant strains that survive. The selection occurs not only in the targeted pathogen but in every commensal bacterium the drug reaches. The clinical success — patient cured, infection cleared — is simultaneously a selection event that teaches the surviving population. Fleming's warning was structural: the cure itself educates the disease.
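
A minimal sketch of that selection event, with illustrative kill rates and population sizes rather than clinical figures:

```python
def treat(susceptible: float, resistant: float, kill: float = 0.999) -> tuple[float, float]:
    """One antibiotic course as a selection event: it kills nearly all
    susceptible cells (illustrative rate) and none of the resistant ones."""
    return susceptible * (1.0 - kill), resistant

def regrow(susceptible: float, resistant: float, capacity: float = 1e9) -> tuple[float, float]:
    """Survivors repopulate to carrying capacity in their surviving proportions."""
    total = susceptible + resistant
    return susceptible / total * capacity, resistant / total * capacity

s, r = 1e9 - 1.0, 1.0    # one resistant mutant among a billion cells
for course in range(1, 5):
    s, r = regrow(*treat(s, r))
    print(f"after course {course}: resistant fraction = {r / (s + r):.4%}")
# rises from roughly 0.0001% to over 99% in four courses
```

Each course is a success by the clinical measure and an enrichment step by the evolutionary one; the two are the same event.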

On reflection

Five systems, one architecture. Minsky's lenders relax in calm markets. Vaccine-protected populations forget the disease. Levees attract what they shield against. Suppressed forests store the fire. Antibiotics select for resistance. In each case, the intervention works. In each case, the success erases the evidence that justified the intervention. And in each case, the erasure creates the conditions for a failure larger than the one the intervention was designed to prevent.

The pattern is not irony. It is mechanism. The stability signal — low volatility, disease absence, flood protection, fire suppression, clinical cure — is consumed by the system as permission to relax the behaviors that produced it. The consuming is rational. The relaxation is rational. The aggregate is catastrophic.

Minsky saw the structure most clearly because finance is the fastest of these systems: twenty-three years from calm to crisis. Antibiotics are slower: decades of accumulating resistance. Fire suppression slower still: a century of stored fuel. Vaccination may be the slowest: the hesitancy cycle takes generations to complete its arc, if it completes at all. Smallpox demonstrates that the cycle can be interrupted — but only by achieving the outcome before the signal of safety can erode the commitment that produced it.

The common feature is temporal. The stability is real. The instability is latent. The time between them is the incubation period during which the evidence of safety and the accumulation of danger coexist, invisible to each other, in the same system. Minsky's three words carry the weight: stability is destabilizing. Not because stability is false but because it is consumed.
