The Seashell Press
Deadly Decisions

Chapter One

False Knowledge

It was nearly midnight on April 14, 1912, when the White Star liner Titanic, racing along on its maiden voyage across the North Atlantic, struck an iceberg many times larger than the ship itself. We now know from extraordinary new photographs that the seam along the starboard hull was ripped open by the impact. The ocean poured in, breaching the double hull, flooding the mail room, the squash court, and the third-class berths. It spilled over the “watertight” bulkheads to snuff out the giant steam boilers and pull the ship down in one of the greatest disasters of modern times. In the wake of this tragedy, the British government ordered that ice patrols commence along the major shipping lanes to guard against another disaster. They should have called for truth patrols instead.

There was nothing surprising about icebergs in the North Atlantic. All day the Titanic had received detailed information about the ice ahead, delivered directly to Captain Edward Smith and to other officers on the bridge. Two hours before the incident, the Mesaba sent the Titanic a warning of icebergs at 42° 25'N 50° 14'W, almost precisely where the accident later occurred. At least one report was given to J. Bruce Ismay, chairman of the White Star Line. Eager to set a new record for the transatlantic crossing, Ismay calmly stuffed the note in his pocket. In spite of frequent and specific warnings, he made the decision to let the ship race ahead at full speed.

Ismay believed three things to be true: first, he knew from his own experience with other ships that the lookout would give him actual sightings of any iceberg in time to steer around it. Second, his team of engineers assured him that even if the Titanic struck an iceberg submerged or otherwise difficult to see, the ship would not sink. And third, if there should be an accident of any kind, there was a tight and mutually supportive community of ships nearby that would come to his aid. A truth patrol would have noted that he was wrong on all counts.

The night was not dark and stormy, but clear, cloudless, and spangled with stars; conditions for timely warning seemed excellent. But the lookout actually assigned to watch for ice couldn't see that well. Frederick Fleet had not been given an eye exam in five years and, in spite of his frequent requests, had not been provided binoculars. More important, the unprecedented size and speed of the Titanic were so great that, unlike other ships in Ismay’s experience, this one could not be stopped or significantly turned in less than a mile with engines full astern. Even a lookout with excellent vision could not have spotted the iceberg at that distance.

Nor was it unsinkable. The Titanic was built to remain afloat even with its first four watertight compartments flooded, but the new design had not been tested. Ismay even overruled the worried engineers who built the ship, cutting the number of lifeboats from thirty-two to sixteen, the minimum required by the British Board of Trade. He later observed that the reason the Titanic carried any lifeboats at all was so they could rescue passengers and crew from other ships.

And finally, the Titanic was for all practical purposes alone. Ten miles away, the Leyland liner Californian had slowed down in the dangerous ice field when Third Officer Charles Victor Groves saw the lights of a large passenger liner racing up from the east. As he watched, the liner seemed to stop as if in trouble. He went to the radio room, found that the operator had gone to bed, and tried to operate the radio himself but was unable to make contact.

For the next two hours, Groves and his fellow officers studied the strange behavior of the Titanic. She seemed to float awkwardly on the sea, firing white rockets, normally considered a signal of distress. When they reported this to Stanley Lord, captain of the Californian, he told them to try to make contact by Morse lamp, but this proved unsuccessful. Finally, when it seemed that the lights of the nearby ship were beginning to disappear, the officers went again to Captain Lord, who was lying down in his cabin. “Were they all white rockets?” he asked. And hearing that they were, he went back to sleep.

At the conclusion of the British inquiry, Lord Mersey, chairman of the inquiry committee, wrote: “When she first saw the rockets, the Californian could have pushed through the ice to the open water without any serious risk and so have come to the assistance of the Titanic. Had she done so, she might have saved many if not all of the lives that were lost.”

Why didn’t Lord take the distress signals seriously? He testified later that he believed the Titanic was “unsinkable,” that the distress signals seemed ambiguous, and that he was in a dangerous ice field himself. His officers concluded from the same evidence that they were witnessing a disaster of unprecedented proportions, but they stood at the rail and kept their silence.

It is part of the Titanic legend that David Sarnoff, the twenty-one-year-old employee of the Marconi Wireless Telegraph Company, stayed at his primitive radio for seventy-two hours, receiving and passing on the names of those lost at sea, the first major demonstration of the revolutionary new wireless telegraphy. It seems ironic that we should find there, at the birth of the technology that epitomizes our age, the specter of an ancient and enduring problem. The Titanic sank because of multiple failures to manage information correctly: failure by Captain Smith to heed warnings, failure by Bruce Ismay to credit evidence that was contrary to his ambitions, and failure by Captain Lord of the Californian to act in the face of doubt. The Titanic sank because of false knowledge.

The errors that surrounded the sinking of the Titanic are not unusual. A number of recent disasters on a comparable scale indicate that the problem may be growing worse. Early warnings are often brushed aside by men and women in the thrall of their own dreams. Critical information is frequently delayed or lost in organizations blundering through a crisis. And at the last moment, when the situation could be saved, communication systems fail and dissenters often fall silent.

Every new age begins with a drum roll of novel disasters and ends in a fog of nostalgia. One of the unexpected problems of the Industrial Age, for example, was finding low-cost labor to come in from the farms and cottages to tend the mills, a task that precipitated bloody riots and decades of dissent. Now, with the onset of the Information Age, our information systems seem to offer new power over a complex world, but then those systems fail us. The warning is late, the message is confusing, the signal never gets through. Failures are attributed to some cause over which we can comfortably claim no control: equipment malfunction, system complexity, inadequate training, bad weather. Even our new disasters are understood in terms of old conditions.

It is customary in business management literature to say that, in such cases, the decision system failed. The one responsible for the final evaluation of alternatives was distracted, deluded, indecisive, unable to handle all the data, or emotionally ill-equipped for the stress. And when none of these conditions can be confirmed with certainty, those who rake through the ruins of an accident say he made the best decision he could, given the information available to him at the time. No one is to blame. The solution is to build more decision support systems, get more data, make better presentations, order extra computer checks.

But the reality is otherwise. Some of the information, though wrong, looks right. It fits neatly with the rest of the data. It comes from a trusted source. It has been reviewed and approved by several hierarchies of analysts. The danger is that we live in a world where the evidence and analysis always make sense, but where, through personal and social processes we scarcely understand, the information gathered by the organization has come to include false knowledge. Everyone is to blame.

Our ability to determine the accuracy of information is increasingly inadequate. Smart men and women in the best organizations, surrounded by data, make the wrong decisions. We race ahead like Ismay on the bridge, charmed by the possibilities but betrayed by the facts. We are vaguely aware of the need for speedy and relevant information but insensitive to the limits of cognition and almost entirely ignorant about how information is distorted as it travels through an organization. We cling to the idea that a reasonable man in a position of authority should make these complex decisions alone, and we ignore the fact that the whole community—the engineers, the lookout, the signalman, the radio operator, the officers on the deck of the Californian—has already narrowed his choices and set the future in motion. We are trying to manage communities, businesses, and nations on the basis of garbled reports and unreliable message systems without having achieved the ability to test for truth—not as some philosophical matter, but as the basis for action in a complex world.

Even direct observation is often misleading. Beyond a certain speed and level of complexity, the world we observe is a false one. The Titanic was effectively traveling blind: by the time Fleet spotted the iceberg, it was already too late to turn the ship. The collision already lay downstream in time, and nothing in his power could undo it. Until that moment, a man’s ability to observe nature had been approximately equal to his power to react. He could run from a volcano, dodge a screaming artillery shell, and steer a ship through a storm. But new technology has changed the rules. Some armies in history have marched faster than their supply trains and starved to death on the eve of victory. Fighter pilots joke that beyond the speed of sound there is no point looking out the window: all you see is the past. At Mach 2 another fighter coming directly at you appears as a speck a mile away, but within one second—faster than many pilots can react—the planes have passed each other or collided.
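
A rough check of that closing-speed claim, using round numbers of my own rather than figures from the text (a speed of sound of about 300 meters per second at altitude, and a one-mile gap of about 1,600 meters):

\[
v_{\text{closing}} \approx 2 \times (2 \times 300\ \text{m/s}) = 1200\ \text{m/s},
\qquad
t \approx \frac{1600\ \text{m}}{1200\ \text{m/s}} \approx 1.3\ \text{s},
\]

on the order of a single second, which is roughly the time a pilot needs just to perceive the speck and begin to react.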

A hundred years ago, the truth was easier to establish. One could visit the mills, go to the market, see the laboratory, and reach a conclusion based on personal experience. Information was a record of market transactions, a notebook of observations from the mill, or a report of distant events. It could be confirmed by personal observation. But now, as organizations expand to undertake broader and more specialized tasks, financial trading systems have become the marketplace, software development is the mill floor, reports are the events. And the decision center of an organization is often too far from the relevant reality to check for accuracy. The real world we need to observe is deep in the reactor core, in the night sky on the other side of the world, or in the purchasing behavior of millions who will buy the product. We discover too late that our observations are not timely enough, or complete enough, or accurate enough to be relied upon. We design products for consumers we have never met. We prescribe medicine we have not tested ourselves for patients we scarcely know to treat illnesses diagnosed by others. We aim our missiles at nations whose language we cannot read or speak and we are betting our future health and productivity on genetic devices and nanotechnology we can no longer see or feel.

MEDICINE, MONEY, AND WAR: UNTHINKABLE

How shall we test the quality of complex information? How was Ismay to decide the truth of the message “There are icebergs ahead”? In the middle of a modern management situation, it is rarely possible to consult some long-standing authority. Events change quickly, and authority has a habit of hiding in generalities. Nor can we always measure the truth of a message by comparing it rationally to other information available. Too often the other information comes from the same source. The liar’s art, after all, is to weave from ambiguous and dissonant data a “reasonable tale.” It was certainly contrary to Ismay’s style—and to the style of his times—to gather his officers together and evaluate the problem. And if he had, what good would it have done? The officers might have been more cautious, but they were not better informed.

Consider three fields where information is the primary reality—medicine, finance, and international relations.

In the field of medicine, the inability to see errors in the patient’s diagnosis or treatment has led to a startling increase in patient deaths. In 2000, the Journal of the American Medical Association published the results of a study showing 225,000 deaths a year as a result of medical error: 12,000 from unnecessary surgery, 7,000 from medication errors in the hospital, 20,000 from other hospital errors, 80,000 from infections occurring while the patient was hospitalized, and 106,000 from “non-error” negative effects of drugs.5 The total number of deaths caused by medical error in the United States is roughly equal to the casualties of two jumbo jets crashing every day.
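
The study’s figures are internally consistent, and the jumbo-jet comparison follows from simple arithmetic, assuming (my round number, not the study’s) about 300 people aboard a fully loaded jumbo jet:

\[
12{,}000 + 7{,}000 + 20{,}000 + 80{,}000 + 106{,}000 = 225{,}000\ \text{deaths per year},
\]
\[
\frac{225{,}000\ \text{deaths per year}}{365\ \text{days}} \approx 616\ \text{deaths per day} \approx 2 \times 300\ \text{people}.
\]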

It is possible that doctors are trained to be more attentive to the problems of false knowledge than engineers, lawyers, financial analysts, and intelligence professionals, as we shall see. In modern medicine, truth is their life’s work. But the mistakes they make are more visible, and the results more personal. A study published in 2008 by the New England Healthcare Institute concluded that 8.8 percent of hospital patients in a sample of eight Massachusetts hospitals suffered from “preventable adverse drug reactions,” ranging from a change in respiratory rate to a fever or a seizure to anaphylactic shock. “Conservative estimates show that nationwide, adverse drug events result in more than 770,000 hospital injuries and deaths each year.”

And that’s in the hospital, where a team of professionals works within the discipline of protocol, where outcomes are measured, and where insurance companies have a special interest in preventing mistakes. This doesn’t take into account the errors committed by individual physicians treating patients in their own offices. Dr. Vincent DeVita, director of the National Cancer Institute at the US National Institutes of Health, said that of the 462,000 cancer deaths in the United States in a single year, at least 20 percent occurred because the doctor didn’t have—or wouldn’t use—information that was immediately available to him or her. In each case, the doctor reached a conclusion based on experience and prescribed the indicated therapy. In each case, the patient died. DeVita went further: the number of deaths “could be cut by as much as 50 percent” if doctors would (or could) take the time to look beyond their desktop for prevention and treatment knowledge already available.7 Although the information can be found in journals, textbooks, online databases, and university research departments, many doctors cannot or will not use it. Instead, they rely on their own memory and judgment, heavily influenced by the desire to heal, and reluctant to use diagnostic tools that might define the disease in terms they don’t recognize.

According to a study at the Harvard School of Public Health, 20,000 heart attack deaths a year could have been prevented over the last decade if doctors had accepted research findings when they were first published.8 Simple discoveries such as the value of aspirin in preventing second heart attacks were ignored for a decade because they were published in a statistics journal doctors don’t read. It isn’t that they couldn’t get access to the journals; access is easy. The doctors couldn’t come to grips with information that seemed foreign and contradictory, information that challenged their competence or caused confusion, information of unknown and indeterminable quality.

Herbert Simon, the Nobel economist, writing about how decisions are made, suggested that there are limits to reason. Individuals tend to identify a new situation as representative of a class of conditions they are familiar with. And as they decide how to respond, they remember successful or vivid responses in the past. People do what has worked before. But when the information is incomplete, people have to imagine what might be missing. When the experience is insufficient to explain what is going on, people have to search for other less familiar responses and make judgments about the efficacy of each one. Usually, Simon says, they choose the first combination of situation and remedy that seems to fit the facts. Not the best one, just the one that is, in Simon’s language, most “available.”9 In other words, they guess.
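
Read as a procedure, the passage describes a search of memory that stops at the first case that seems to fit, rather than weighing every alternative. The short Python sketch below is only an illustration of that contrast between “first fit” and “best fit”; the remembered cases, the overlap score, and the “good enough” threshold are invented for the example and are not drawn from Simon’s work.

# A minimal sketch of the first-fit-versus-best-fit contrast described above.
# The remembered cases, the overlap score, and the threshold are hypothetical.

def fit(situation, facts):
    """Overlap between a remembered situation and the observed facts (Jaccard)."""
    return len(situation & facts) / len(situation | facts)

def first_fit(memory, facts, good_enough=0.5):
    """Scan cases in the order they come to mind; stop at the first that seems to fit."""
    for situation, remedy in memory:
        if fit(situation, facts) >= good_enough:
            return remedy          # good enough: stop searching
    return "guess"                 # nothing familiar fits

def best_fit(memory, facts):
    """What an exhaustive comparison of every remembered case would choose."""
    return max(memory, key=lambda case: fit(case[0], facts))[1]

memory = [  # ordered by how easily each case comes to mind
    ({"ice reported", "clear night"}, "trust the lookout, hold speed"),
    ({"ice reported", "fast ship", "untested hull"}, "slow down, post extra watch"),
]
facts = {"ice reported", "clear night", "fast ship", "untested hull"}

print(first_fit(memory, facts))   # -> trust the lookout, hold speed
print(best_fit(memory, facts))    # -> slow down, post extra watch

Both procedures consult the same memory and the same facts; the first simply stops as soon as something familiar seems to explain the situation, which is Simon’s point.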

From 1996 through the summer of 2001, the executives of Enron engaged in fraud and asset manipulation to raise the price of the company’s stock from $20 to $90 per share. By the second quarter of 2001, the company claimed that sales in the trailing twelve-month period had jumped from $52 billion to $171 billion. During this astonishing rise, financial analysts, accountants, and the media cheered them on, believing that the company’s executive talent and the so-called new economy were the reasons for such success and the herald of happy days to come. According to “independent” analysts at many investment banks, Enron was a brilliant example of the executive’s art that cast its glow on other financial schemes being offered by the sales side of the same firms. If you missed out on Enron, my colleague has something just as good. Even in the last month of trading, fifteen of the eighteen investment analysts covering the company rated it a “buy.” Senior investment executives with years of experience couldn’t see the crash coming:

I have been asked how I viewed the August 14, 2001 resignation of Enron’s then-CEO Jeffrey Skilling. At the time, we viewed Kenneth Lay’s return as CEO to be a positive development given his promise of openness and a commitment that Enron would shed non-core assets and focus upon its core business lines. Upon learning of Skilling’s departure, I and my colleagues immediately arranged for a meeting with Lay, which was held in Alliance Capital’s Minneapolis offices on August 21, 2001. At that meeting, we had a very detailed discussion about Enron’s business, and our questions appeared to have been answered in a complete and satisfactory manner. (Alfred Harrison, vice chairman, Alliance Capital, in testimony to the US Senate Committee on Commerce, Science and Transportation)10

But months before the meeting with Harrison, Lay, and colleagues, Bethany McLean, a thirty-one-year-old English major from Hibbing, Minnesota, wrote an article in Fortune calling the whole scheme into doubt. She said she couldn’t understand how the business worked. Suddenly others began to notice the same financial reporting tricks, and within nine months the company had filed for bankruptcy, then the largest in United States history. Shareholders lost $60 billion, much of it from the personal retirement funds of Enron employees.

On February 5, 2003, Secretary of State Colin Powell stood before the United Nations Security Council and presented the case for the invasion of Iraq. Based on months of research and analysis by the CIA, the State Department, and the Department of Defense, Secretary Powell used satellite photographs, audiotapes, official documents, and captured material to show that Iraq had already developed biological weapons of mass destruction and was in the process of building nuclear weapons. He made three key points: first, the United States had seized precision-ground aluminum tubes on their way to Iraq, tubes whose only purpose could be as part of a centrifuge for the enrichment of weapons-grade uranium. Second, he said that the United States had actual documents showing Iraq’s effort to buy yellowcake uranium from Niger—the only ingredient needed to complete its nuclear bombs. And third, Iraq had built mobile systems for producing nerve gas, anthrax, and other forms of biological and chemical warfare. “Our conservative estimate is that Iraq today has a stockpile of between 100 and 500 tons of chemical-weapons agent.”

“My colleagues,” Secretary Powell said, “every statement I make today is backed up by sources, solid sources. These are not assertions. What we are giving you are facts and conclusions based on solid intelligence.”

The heartfelt and articulate arguments from a man as honored and honorable as Powell seemed difficult to doubt, but they were known to be false in every case. The “suspected chemical weapons site” had been visited by the UN inspection team and was determined at the time to be an old ammunition storage area often frequented by Iraqi trucks. The mobile production labs, shown in artist drawings, were just what the Iraqis said they were: facilities for the production of hydrogen gas to fill weather balloons. The “decontamination vehicles” shown in the satellite photographs were really just fire trucks. The VX nerve gas Powell cited had already been destroyed, and the destruction had been confirmed by the UN. The existence of five hundred tons of chemical and biological warfare material had already been denied by a United States Defense Intelligence Agency report. The audiotape seeming to show that Osama bin Laden was in partnership with Saddam Hussein, when correctly translated, indicated that bin Laden was appealing to the Iraqi people and actually wanted Saddam Hussein dead.

The documents that showed that Iraq was buying yellowcake uranium from Niger had been declared forgeries by Secretary Powell’s own State Department weeks before; an investigation by the Italian parliament later concluded that they had been forged in Washington by Michael Ledeen, Dewey Clarridge, Francis Brooke, and Defense Department consultant Ahmed Chalabi. The aluminum tubes had been a controversial issue for months, and as early as January, the Washington Post had reported that UN weapons inspectors were “confident” that the tubes were not part of any nuclear weapons program.

Years later, Powell said that he was merely reporting the truth as he and everyone around him understood it:

Do you feel responsible for giving the UN flawed intelligence?

   I didn’t know it was flawed. Everybody was using it. The CIA was saying the same thing for two years. I gave perhaps the most accurate presentation of the intelligence as we knew it—without any of the “Mushroom clouds are going to show up tomorrow morning” and all the rest of that stuff. But the fact of the matter is that a good part of it was wrong, and I am sorry that it was wrong.

What defense do we have against a complex description of reality that includes false information? What hope is there when the financial press is almost entirely, spectacularly wrong about the most dramatic business failure in history? Whom can we trust when hospitals—not just individual physicians—get the diagnosis and treatment so wrong that medical error killed four times as many people as motor vehicle accidents in 2000, and fifteen times as many as died of AIDS that year? How do we survive in a complex world when the entire $40-billion-a-year engine of American espionage, diplomacy, and defense intelligence cannot tell whether another country is or is not preparing weapons against us?

It is not enough to say that people lied. We must also admit that we believed. The doctors certainly believed that they were doing the right thing. The investment managers and accountants put their own money down on Enron. Secretary Powell believed he was reporting the facts. But we live in a world of virtual truth where the information is so complex, so internally consistent, and presented in such a persuasive way that the false knowledge cannot be detected. The traditional tests for truth are all positive: the description is defended by men and women of authority and experience, the proffered evidence seems unimpeachable, and broad consensus supports the conclusions. But we pay too little attention to how easily an individual can distort the report. Wishful thinking colors the facts, new knowledge is twisted into old shapes, and self-interest beckons us toward a false conclusion through cognitive processes we have only recently begun to identify. And when the information is processed by a group, these errors are compounded by social forces we know too little about. Each individual in the chain adds his or her own misconceptions. Uncertainty is removed, and additional information is added to make a more “reasonable” story. Members of a group try hard to agree with each other even when they don’t, and those who disagree are punished or fall silent. New information is subtly altered to be more consistent with the prevailing view of the world and slowly, inexorably, the description takes a turn for the logical, the complete, the flattering, and the persuasive. Somebody makes the slides. Somebody else sets up the chairs. Truth is lost and virtual truth has taken its place: a full, articulated, and flawed description of the real world, leading us into disaster.

Years ago, on a rainy evening, a World Airways cargo plane on its way from California to the Philippines was forced to approach the Cold Bay, Alaska, airport on instruments alone. Captain John Weininger consulted his charts and set the plane on the correct course, heading cautiously toward the runway. Thirty-five miles out, he asked the flight engineer for a position, but none could be determined electronically. “What kind of terrain are we flying over?” the engineer asked. “Mountains everywhere,” said the captain. The crew decided to climb a little to four thousand feet, and the altimeter needle suddenly started to jump around. The first officer called out, “Hey, John. We’re off course . . . four hundred feet from something.” Six seconds later, according to the aircraft accident report, “the aircraft struck rising terrain.” The three crew members and three passengers were killed.

According to the accident report, the fault lay with the captain, who deviated from approved approach procedures and descended into an area of “unreliable navigation signals and obstructing terrain.” Pilot error. But a lawsuit filed by the family of one of the crew members showed that the navigation chart Captain Weininger used was wrong. The altitude data gathered and distributed by the federal government had left out a mountain, and Jeppesen, the publisher that made the maps, hadn’t checked the new version against the old.19 The entire community of pilots, aircraft controllers, and government officials accepted these maps as “accurate,” but in fact there was no way to confirm the truth of the information on which they had been taught to risk their lives.

We are all in Captain Weininger’s chair. We take the medicine, we buy the stock, we cast our votes, and we send our sons and daughters into war. We place our faith in the truth of the information that others have provided without an adequate understanding of how our modern systems can distort and belie reality. We have wandered into a world of virtual truth where much is simulated, where information and ideas pretend to be each other, where logic and consistency are man-made. The markings on the compass fade; the needle starts to jump.