People tend to remember Y2K as a joke, and not a good one. Way back in the last century, computer scientists and I.T. guys began warning that a strange computer bug lay dormant in just about every computer in the world. When the date turned over from 1999 to 2000, computers would go haywire, they said, leading to all manner of annoyances, if not global catastrophe.
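The flaw itself was mundane: to save scarce memory, decades of software stored years as two digits, so "99" meant 1999 and the rollover to "00" read as 1900. A minimal sketch of how that kind of arithmetic breaks (the function and scenario here are illustrative, not drawn from any real system):

```python
def age_two_digit(birth_yy: int, current_yy: int) -> int:
    """Compute an age from two-digit years, as much legacy code did."""
    return current_yy - birth_yy

# Someone born in 1970, checked in 1999: the shortcut works fine.
print(age_two_digit(70, 99))  # 29

# The same check on Jan. 1, 2000, when the year reads as "00":
# the age comes out negative, and downstream logic goes haywire.
print(age_two_digit(70, 0))   # -70
```

Multiply that pattern across billing, payroll, scheduling and interest calculations, and the scale of the remediation effort starts to make sense.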
At first, no one believed them. As I discovered when I investigated Y2K for its 10th anniversary, the technicians who discussed the problem in the early 1990s were often mocked for their alarmism. The year 2000 was a long time away, and people shrugged.
But then, in the mid-1990s, a sense of urgency took hold. The tech industry was booming and the World Wide Web was becoming the white-hot center of American innovation. So it began to make sense that a computer bug could take down the world.
Mostly, though, what happened was that the narrative changed. Instead of couching the problem in the anodyne language of software, proponents of action began to describe in concrete and frightening terms how the bug could alter modern life. They painted the worst-case picture. And the worst case started to sound pretty darned bad.
A letter that Senator Daniel Patrick Moynihan of New York sent to President Bill Clinton in 1996 illustrates this tack. Pointing to a government study that “substantiates the worst fears of the doomsayers,” he warned that the bug could cripple the Internal Revenue Service and the Social Security Administration, prompting economic chaos. After outlining a series of recommendations — involving enormous organizational and financial costs — Mr. Moynihan ended with a stark warning: “The computer has been a blessing; if we don’t act quickly, however, it could become the curse of the age.”
Prompted by news media coverage of potential devastation, governments and businesses across the globe got in gear. The United States spent $100 billion to address the bug, according to a 2000 report by a Senate committee that studied the effort. (All but $8.5 billion was spent by companies, not the government.) Across the globe, about $580 billion went to fixing Y2K.
The effort was monumental. In the two years before the turn of the century, most of the United States’ large companies and government agencies — many of which had been running on software that was decades old — worked overtime to examine their code and rid it of the bug.
The alarm proved useful. When companies looked at their code, many found they were more vulnerable to Y2K than they’d previously thought, the Senate report found. Many also came up with ways to mitigate disaster in case their fixes didn’t work: Local governments rebuilt and tested emergency management systems, which later proved crucial for New York after the Sept. 11 terrorist attacks.
The fight against Y2K was also close to unprecedented. Throughout our history, Americans have been good at getting things done after the stakes have become clear. We moved mountains after the Great Depression and Pearl Harbor. But Y2K is one of the precious few examples where we mobilized to fight something looming on the horizon — the same kind of mobilization we now need for climate change.
One popular misconception about Y2K is that it was a wasted effort. After all, when the clocks turned over on Jan. 1, 2000, there were scattered problems, but the world didn’t end. And there is some evidence that money was misspent.
But several of the government and outside analysts who have studied the response — including the Senate task force — concluded that on the whole, the effort was justified, given what we knew about the bug beforehand, and especially considering the United States’ particular vulnerability to tech problems.
The best analysis of the effort I’ve read came from two Australian researchers, John Phillimore and Aidan Davison, who argued in a 2002 paper that fighting Y2K was an example of the “precautionary principle,” an idea well-known in the environmental movement. It essentially boils down to this: It’s better to be safe than sorry, especially if the sorry end of the spectrum involves the end of the world as we know it.
And the way to get people to understand that, Mr. Phillimore and Mr. Davison wrote, is to spell out the worst case. “Y2K shows that the way problems are portrayed is crucial to how solutions are approached,” the researchers wrote. “Small, discrete problems are easier to understand than ‘slow-burn’, incremental ones. Providing people with specific examples of things that might go wrong is more effective than general warnings.”
They added: “This might be particularly pertinent to debates on global warming.” Indeed.