Chapter Six, Part VII

When he opened the Guardian Bank and Trust Company in the Cayman Islands in 1986, John Mathewson had no experience, not many clients, and only a cursory knowledge of how banks really worked. But, in his own peculiar way, he was a visionary. What Mathewson understood was that there were many American citizens with lots of money that they did not want the Internal Revenue Service to know anything about, and that these Americans would pay hefty sums if Mathewson could keep their money safe from the prying eyes of the IRS.

So Mathewson obliged them. He showed his clients how to set up shell corporations. He never reported any of the deposits he received to the IRS. And he gave his clients debit cards that allowed them to access their Guardian accounts from anywhere in the United States. Mathewson charged hefty fees for his services—$8,000 to set up an account, $100 for each transaction—but no one seemed to mind. At its peak, Guardian had $150 million in deposits and two thousand clients.

In 1995, Mathewson left the Caymans after a dispute with a government official, and moved to San Antonio to enjoy his retirement. It didn’t last long. Within a few months, he was arrested for money laundering. Mathewson was an old man. He did not want to go to prison. And he had something valuable to trade for his freedom: the encrypted records of all the depositors who had put money into Guardian Trust. So he cut a deal. He pled guilty (and was sentenced to five years’ probation and five hundred hours of community service). And he told the government everything he knew about tax cheats.

The most interesting information Mathewson had to offer was that offshore banks were no longer catering only to drug dealers and money launderers. Instead, these banks served many Americans who had earned their money honestly but simply didn’t want to pay taxes on it. As Mathewson told a Senate panel in 2000, “Most of [Guardian’s] clients were legitimate business people and professionals.” A typical Mathewson client was someone like Mark Vicini, a New Jersey entrepreneur who ran a computer company called Micro Rental and Sales. Vicini was, by all accounts, a respected member of his community. He put his relatives through college. He gave generously to charities. And, between 1991 and 1994, Vicini sent $9 million to the Caymans, $6 million of which he never mentioned to the IRS. This saved him $2.1 million in unpaid taxes. (It also eventually earned him a five-month stint in federal prison, where he was sent after pleading guilty to tax evasion.)

Mathewson’s clients were not alone, either. In fact, the nineties saw a boom in tax evasion. By the end of the decade, two million Americans had credit cards from offshore banks. Fifteen years earlier, almost none did. Promoters, who often used the Internet to push their scams, advertised “layered trusts,” “offshore asset protection trusts,” and “constitutional pure trusts.” A small but obstinate (and obtuse) group of tax evaders advised people that they didn’t have to pay their taxes because the income tax had never actually been passed by Congress. And old standbys—keeping two sets of books, incorporating yourself as a charity or a church and then writing off all your expenses as charitable contributions—stayed alive. All these schemes did have an important downside: they were illegal. But rough estimates suggested that they were costing the United States as much as $200 billion a year by the end of the decade.

The vast majority of Americans never experimented with any of these schemes. They continued to pay their taxes honestly, and they continued to tell pollsters that cheating on your taxes was wrong. But there’s little doubt that the proliferation of these schemes—and the perception that many of them were successful—made average Americans more skeptical of the fairness of the tax system. Adding to those doubts was the ever-increasing complexity of the tax system, which made it more difficult to know what your fair share of taxes really was, and the 1990s boom in corporate tax shelters, which was responsible for what the Treasury Department called, in 1999, “an unacceptable and growing level of tax avoidance.” The title of a 2001 Forbes article on the tax system captured what more than a few Americans were wondering about themselves: ARE YOU A CHUMP?

Why did this matter? Because tax paying is a classic example of a cooperation problem. Everyone reaps benefits from the services that taxes fund. You get a military that protects you, schools that educate not only your children but the children of others (whom you need to become productive citizens so that they will grow up to support you in your old age), free roads, police and fire protection, and fundamental research in science and technology. You also get a lot of other stuff you perhaps don’t want, but for most people the benefits must outweigh the costs, or else taxes would be lower than they are. The problem is that you can reap the benefits of all these things whether or not you actually pay taxes. Most of the goods that the government provides are what economists call nonexcludable goods—meaning, as the name suggests, that it’s not possible to allow some people to enjoy the goods while excluding others. If a national missile defense system is ever built, it will protect your house whether or not you’ve ever paid taxes. Once I-95 was built, anyone could travel on it. So even if you think government spending is a good thing, from a purely self-interested perspective you have an incentive to avoid chipping in your fair share. Since you get the goods whether or not you personally pay for them, it’s rational for you to free ride. But if most people free ride, then the public goods disappear. It’s Mancur Olson’s theory all over again.

We may not normally think of taxpaying as a matter of cooperation, but at its core that’s what it comes down to. Taxpaying is obviously different from, say, being a member of an interest group in one important sense: not paying your taxes is against the law. But the truth is that if you cheat on your taxes, the chances that you’ll get caught have historically been pretty slim. In 2001, for instance, the IRS audited only 0.5 percent of all returns. In purely economic terms, it may actually be rational to cheat. So a healthy tax system requires something more than law. Ultimately, a healthy tax system requires people to pay their taxes voluntarily (if grudgingly). Paying taxes is individually costly but collectively beneficial. But the collective benefits only materialize if everyone takes part.

Why do people take part? In other words, why, in countries like the United States where the rate of tax compliance is relatively high, do people pay taxes? The answer has something to do with the same principle that we saw at work in the story of Richard Grasso: reciprocity. Most people will participate as long as they believe that everyone else is participating, too. When it comes to taxes, Americans are what political scientist Margaret Levi calls “contingent consenters.” They’re willing to pay their fair share of taxes, but only as long as they think that others are doing so, too, and only as long as they believe that people who don’t pay their taxes have a good chance of being caught and punished. “When people start to feel that the policeman is asleep, and when they think others are breaking the law and getting away with it, they start to feel like they’re being taken advantage of,” says Michael Graetz, a law professor at Yale. People want to do the right thing, but no one wants to be a sucker.

Consider the results of public-goods experiments that the economists Ernst Fehr and Simon Gächter have run. The experiments work like this. There are four people in a group. Each has twenty tokens, and the game will last four rounds. On each round, a player can either contribute tokens to the public pot, or keep them for himself. If a player invests a token, it costs him money. He invests one token, and he personally earns only 0.4 tokens. But every other member in the group gets 0.4 tokens, too. So the group as a whole gets 1.6 tokens for every one that’s invested. The point is this: if everyone keeps their money and invests nothing, they each walk away with twenty tokens. If everyone invests all their money, they each walk away with thirty-two tokens. The catch, of course, is that the smartest strategy ordinarily will be to invest nothing yourself and simply free ride off everyone else’s contributions. But if everyone does that, there will be no contributions.
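To make the arithmetic of the game concrete, here is a minimal sketch in Python (my own illustration, not anything from Fehr and Gächter's materials; the function name and the example contribution profiles are assumptions) of the payoff rule just described: each player keeps whatever she doesn't contribute and earns 0.4 tokens for every token anyone puts in the pot.

```python
# Illustrative sketch of the public-goods payoff rule described above.
# Four players, twenty tokens each; every invested token pays 0.4 tokens
# back to every player, so the group earns 1.6 tokens per token invested.

ENDOWMENT = 20
RETURN_PER_TOKEN = 0.4

def payoffs(contributions):
    """Return each player's earnings for one round, given what each of the
    four players contributed to the public pot."""
    pot_return = RETURN_PER_TOKEN * sum(contributions)
    return [ENDOWMENT - c + pot_return for c in contributions]

print(payoffs([0, 0, 0, 0]))      # everyone keeps everything: 20 tokens each
print(payoffs([20, 20, 20, 20]))  # everyone invests everything: 32 tokens each
print(payoffs([0, 20, 20, 20]))   # one free rider: 44 for him, 24 for the others
```

The third line is the whole dilemma in miniature: the free rider walks away with more than anyone else, which is exactly why investing nothing looks like the smart strategy.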

As with the ultimatum game, the public-goods games are played in a similar fashion throughout the developed world. Most people do not act selfishly at first; instead, most contribute about half their tokens to the public pot. But as each round passes, and people see that others are free riding, the rate of contribution drops. By the end, 70 to 80 percent of the players are free riding, and the group as a whole is much poorer than it would otherwise be.

Fehr and Gächter suggest that people in general fall into one of three categories. Twenty-five percent or so are selfish—which is to say they are rational, in the economic sense—and always free ride. (That’s close to the same percentage of people who make lowball offers in the ultimatum game.) A small minority are altruists, who contribute heavily to the public pot from the get-go and continue to do so even as others free ride. The biggest group, though, are the conditional consenters. They start out contributing at least some of their wealth, but watching others free ride makes them far less likely to keep putting money in. By the end of most public-goods games, almost all the conditional consenters are no longer cooperating.

The key to the system, then, is making sure the conditional consenters keep cooperating, and the way to do that is to make sure they don’t feel like suckers. Fehr and Gächter tweaked the public-goods game to demonstrate this: at the end of every round, they revealed what each person had or had not contributed to the public pot, which made the free riders visible to everyone else. Then they offered people the opportunity to punish the free riders. For the price of a third of a token, you could take one token away from the free rider. Two things happened as a result. First, people spent money to punish the evildoers—even though, again, it was economically irrational for them to do so. Second, the free riders shaped up and started contributing their fair share. In fact, even during the last rounds of these games, when there was no reason to keep contributing (since no punishment could be inflicted), people continued to chip in.
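The punishment stage can be sketched the same way. Assuming, as the text says, that paying a third of a token strips one token from a free rider, this illustrative continuation (again my own sketch, not the experimenters' code; the specific sanction amounts are invented for the example) shows why punishing is itself economically irrational, even though it works.

```python
# Illustrative sketch of the punishment stage: a punisher pays one third
# of a token for every token removed from a target's earnings.

PUNISH_COST = 1 / 3

def apply_punishment(earnings, sanctions):
    """Apply (punisher, target, tokens_removed) sanctions to one round's
    earnings. Punishing lowers the punisher's own payoff, which is why a
    narrowly 'rational' player would never bother."""
    earnings = list(earnings)
    for punisher, target, tokens in sanctions:
        earnings[punisher] -= PUNISH_COST * tokens
        earnings[target] -= tokens
    return earnings

# The one-free-rider round from the earlier sketch: [44.0, 24.0, 24.0, 24.0].
round_earnings = [44.0, 24.0, 24.0, 24.0]
# Each contributor spends one token's worth of punishment on the free rider.
print(apply_punishment(round_earnings, [(1, 0, 3), (2, 0, 3), (3, 0, 3)]))
# -> [35.0, 23.0, 23.0, 23.0]: the punishers are each a token poorer,
#    but free riding no longer pays.
```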

When it comes to solving the collective problem of how to get people to pay their taxes, then, there are three things that matter. The first is that people have to trust, to some extent, their neighbors, and to believe that they will generally do the right thing and live up to any reasonable obligations. The political science professor John T. Scholz has found that people who are more trusting are more likely to pay their taxes and more likely to say that it’s wrong to cheat on them. Coupled with this, but different from it, is trust in the government, which is to say trust that the government will spend your tax dollars wisely and in the national interest. Not surprisingly, Scholz has found that people who trust the government are happier (or at least less unhappy) about paying taxes.

The third kind of trust is the trust that the state will find and punish the guilty, and avoid punishing the innocent. Law alone cannot induce cooperation, but it can make cooperation more likely to succeed. If people think that free riders—people not paying taxes but still enjoying all the benefits of living in the United States—will be caught, they’ll be happier (or at least less unhappy) about paying taxes. And they’ll also, not coincidentally, be less likely to cheat. So the public image of the IRS can have a profound impact on the way conditional consenters behave. Mark Matthews, head of the agency’s Criminal Investigation Division, was keenly aware that the success of criminal investigations was measured not just by the number of criminals caught but also by the public impact of its work. “There is a group of people that could be tempted by these scams, a group that could let aggressive tax planning become too aggressive. We need to convince them before that happens that it doesn’t make sense,” Matthews said. “A huge part of the agency’s mission is making sure that people believe the system works.”

Getting people to pay taxes is a collective problem. We know what the goal is: everyone should pay their fair share (this says nothing, of course, about what a fair share is). The question, then, is how? The U.S. model—which is, by global standards, successful, since despite Americans’ vehement anti-tax rhetoric they actually evade taxes far less than Europeans do—suggests that while law and regulation have a key role to play in encouraging taxpaying, they work only when there is an underlying willingness to contribute to the public good. Widespread taxpaying amounts to a verdict that the system, in at least a vague sense, works. That kind of verdict can only be reached over time, as people—who perhaps first started paying taxes out of fear of prosecution—recognize the mutual benefits of taxpaying and institute it as a norm.

Another way of putting this is to say that successful taxpaying breeds successful taxpaying. And that positive-feedback loop is at work, I’d argue, in most successful cooperative endeavors. The mystery of cooperation, after all, is that Olson was right: it is rational to free ride. And yet cooperation, on both a small and a large scale, permeates any healthy society. It’s not simply the obvious examples, like contributing to charities or voting or marching on picket lines, all of which are forms of collective action that people participate in. It’s also the subtler examples, like those workers who, by all rights, could shirk their responsibilities without being punished (because the costs of monitoring them are too high) and yet do not, or those customers who leave tips for waitresses in restaurants in distant cities. We can anatomize these acts and explain what gives rise to them. But there is something irreducible at their heart, and it marks the difference between society on the one hand and just a bunch of people living together on the other.

Chapter Six, Part VI

In five thousand American homes, there are television sets that are rather different from your standard Sony. These sets have been wired by Nielsen Media Research with electronic monitoring devices called “people meters.” The people meters are designed to track, in real time, two things: what TV shows are being watched and, just as important, who is watching them. Every person in a “people-meter family” is given a unique code, which they’re supposed to use to log in each time they sit down to watch television. That way, Nielsen—which downloads the data from the people meters every night—is able to know that Mom and Dad like CSI, while their college-age daughter prefers Alias.

Nielsen, of course, wants that information because advertisers crave demographic data. Pepsi may be interested to hear that 22 million people watched a particular episode of Friends. But what it really cares about is how many people aged eighteen to twenty-four watched the episode. The people meter is the only technology that can tell Pepsi what it wants to know. So, when the major TV networks sell national advertising, it’s the people-meter data that they rely on. Five thousand families determine what ads Americans see and, indirectly, what programs they watch.

There is, of course, something inherently troubling about this. Can five thousand really speak for 120 million? But Nielsen works hard to ensure that its families are a reasonable match, in demographic terms, for the country as a whole. And while the people meters are hardly flawless—over time, people become less religious about logging in—they have one great advantage over most ways of gathering information: they track what people actually did watch, not what they remember watching or say they watched. All in all, Nielsen’s numbers are probably more accurate than your average public-opinion poll.

The trouble with people meters is that there are only five thousand of them, and they are scattered across the country. So while Nielsen’s daily ratings provide a relatively accurate picture of what the country as a whole is watching, they can’t tell you anything about what people in any particular city are watching. That matters because not all the ads you see on prime-time television are national ads. In fact, a sizable percentage of them are local. And local advertisers like demographic information as much as national advertisers do. If you own a health club in Fort Wayne, Indiana, you’d like to know what Tuesday prime-time show eighteen- to thirty-four-year-olds in Fort Wayne watch. But the people meters can’t tell you.

The major networks have tried to solve this problem with what’s known as “sweeps.” Four times a year—in February, May, July, and November—Nielsen sends out 2.5 million paper diaries to randomly selected people in almost every TV market in the country and asks them to record, for a week, what programs they watch. Nielsen also collects information on all the people who fill out diaries, so that at the end of each sweeps month it’s able to produce demographic portraits of the country’s TV markets. The networks’ local stations—the affiliates—and local advertisers then use the information from those diaries to negotiate ad rates for the months ahead.

What’s curious about this system is that it’s lasted so long—sweeps have been around since the early days of television—even though its flaws are so obvious and so profound. To begin with, there’s no guarantee sweeps ratings are accurate. The lower the response rate to a random survey, the greater the chance of error, and the sweeps system has a remarkably low response rate—only 30 percent or so of the diaries that Nielsen distributes are filled out. That helps create what’s called “cooperator bias,” which means that the people who cooperate with the survey may not watch the same programs as people who don’t. (In fact, they almost certainly don’t.) And the low-tech nature of the diaries creates problems, too. People don’t fill out the diaries as they’re actually watching TV. Like most of us, they procrastinate and fill out the diaries at the end of the week. So what people record will be what they remember watching, which may not match what they did watch. People are more likely to remember high-profile shows, so the diary system inflates network ratings while deflating the ratings of smaller cable networks. The diaries are also no good at chronicling the restless viewing habits of channel surfers.

Even if the diaries were accurate, though, they wouldn’t be able to tell advertisers or the networks what people are really watching most of the time. That’s because network programming during sweeps months has almost nothing in common with network programming during the other eight months of the year. Because sweeps matter so much to local stations, the networks are forced into what’s called “stunt” programming. They pack sweeps months with onetime specials, expensive movies, and high-profile guest appearances. February 2003, for instance, became the month of Michael Jackson on network television, with ABC, NBC, and Fox all spending millions of dollars on shows about the bizarre pop singer. And that same month saw the long-awaited (at least by a few) climaxes to the unreality-TV sagas The Bachelorette and Joe Millionaire. The networks also have to air only new episodes of their best shows. During sweeps months, no reruns are allowed.

Stunt programming is bad for almost everyone: the advertisers, the networks, and the viewers. Advertisers, after all, are paying prices based on ratings that reflect stunt programming. Allen Banks, executive media director at Saatchi and Saatchi, North America, has called sweeps “a sham, a subterfuge.” “The picture they give you is anything but typical of what’s going on the rest of the year,” he has said. Some advertisers do try to account for the impact of sweeps when buying ad time, but since in most local markets sweeps represent the only hard data they have, the numbers still end up being disproportionately important.

For the networks, meanwhile, sweeps months mean that much of their best—in the loose sense of the word—programming will be wasted in head-to-head competition. During a sweeps month, in any given hour there may be two or three shows worth watching (if you really like television). But viewers can only watch one of those shows. Had the networks been able to air those shows at different times instead of against each other, the total number of people who watched them would have been much higher. By pitting their best shows against each other, the networks actually shrink their total viewership. In the same vein, sweeps are bad for TV viewers because they guarantee a paucity of new and interesting programming in non-sweeps months. If you’re a connoisseur of lurid spectacle, your cup runneth over in November. But in January, you will be drowning in a sea of reruns.

Sweeps, then, are not very good at measuring who’s watching what; they force advertisers to pay for unreliable and unrepresentative data; and they limit the number of viewers the networks can reach over the course of a year. Everyone in television knows this, and believes that the industry would be much better off with a different way of measuring local viewership. But even though there is a better alternative available—namely Nielsen’s people meters—everyone in television continues to participate in the sweeps system and play by its rules. This raises an obvious question: Why would so many people acquiesce in such a dumb system?

The immediate answer is that it’s too expensive to change. People meters are costly to install and even more costly to keep running, since they’re always on. Wiring every local market with people meters would cost . . . well, it’s not exactly clear since Nielsen refuses to release any data on how expensive the people meters are. But at the very least, if you wanted to wire thousands of homes in each of the country’s 210 TV markets, you’d likely be talking at least nine figures. That’s a lot more than the paper diaries—which people fill out for free—cost, even with the postage included.

Still, even $1 billion isn’t that much money in the context of the TV and advertising industries as a whole. Every year something like $25 billion in ad money is spent on the basis of sweeps data, which means that much of that $25 billion is almost certainly being misspent. The networks, meanwhile, spend hundreds of millions of dollars every year during sweeps that could certainly be better spent elsewhere, while they also pay a price for the suicidal competition that sweeps creates. So it seems likely that investing in people-meter technology—or something like it—would be the collectively intelligent thing to do, and would leave the networks and the advertisers much better off.

The problem is that even though most of the players in the TV business would be better off if they got rid of sweeps, no single player would be better off enough to justify spending the money on an alternative. Local advertisers in Sioux Falls, for instance, would obviously like it if they knew that the ratings of the CBS affiliate in Sioux Falls were really accurate. But local advertisers in Sioux Falls don’t spend enough money to make it worth their while to invest in people meters for the town. And ABC might prefer not to have to stunt program, but it doesn’t get much direct economic benefit from a more accurate local-rating system.

One obvious answer would be for everyone to pitch in and fix the system. But that strategy collides with the stinging critique of the possibility of cooperation that the economist Mancur Olson offered in his 1965 book, The Logic of Collective Action. Olson focused his work around the dilemma that interest groups, like the American Medical Association, faced in trying to get individual members to participate. Since all doctors benefited from the AMA’s lobbying efforts, but no one doctor’s effort made much of a difference in the success or failure of those efforts, Olson thought that no doctors would voluntarily participate. The only answer, he argued, was for the groups to offer members other benefits—like health insurance or, in the case of the AMA, its medical journal—that gave them an incentive to join. Even then, Olson suggested that it would be difficult at best to get people to do things like write a letter to Congress or attend a rally. For the individual, it would always make more sense to let someone else do the work. Similarly, if the group of networks and stations and advertisers were to act, everyone in the business—including those who did nothing—would reap the benefits. So everyone has an incentive to sit on their hands, wait for someone else to do something, and free ride. Since everyone wants to be a free rider, nothing gets done.

As we’ve seen, it’s not clear that Olson’s critique is as universally applicable as it was once thought to be. Groups do cooperate. People do contribute to the common good. But the fact that people will contribute to the common good doesn’t mean that businesses necessarily will. The kind of enlightened self-interest that can lead people to cooperate requires an ability to think about the long term. Corporations are, perhaps because investors encourage them to be, myopic. And in any case, the way the TV industry is organized makes the networks and advertisers more susceptible to the collective-action trap than they otherwise would be.

The way Nielsen ratings are paid for exacerbates the problem. Since sweeps data is valuable to both the affiliates and the advertisers, you might imagine that the cost would be split between them. In fact, though, the affiliates pay 90 percent of the cost of collecting and analyzing the sweeps diaries, and since the one who pays is the one who has the power, the affiliates dictate what happens to sweeps. As it turns out, they’re the only players in television who like sweeps. The diary system, after all, favors recognizable names and networks, which means it inflates the affiliates’ ratings at the expense of smaller stations. The affiliates don’t pay any of the hundreds of millions of dollars the networks spend on sweeps programming. They just reap the benefits. As for the negative effect that sweeps has on viewership in the other eight months of the year, the affiliates don’t really care about those months, since their ratings aren’t being tracked then. It’s only a little bit of an overstatement, in fact, to say that the only shows the affiliates care about are those that air in February, May, July, and November. Far from wanting to use people meters, the affiliates are actively hostile to them. In fact, when Nielsen introduced people meters into Boston in 2002, not a single affiliate signed up for the service. The stations decided that no ratings would be better than the people-meter numbers.

As much as the persistence of sweeps testifies to the problem of collective action, it also demonstrates the perils of allowing a single self-interested faction to dictate a group’s decision. If the networks and advertisers had historically helped pay for a reliable local-ratings system, they might actually have had some leverage when it came to revamping it. Instead, they’re effectively dancing to the affiliates’ tune.

All in all, it’s a grim picture, even if you leave out Joe Millionaire and Michael Jackson’s face. It is a picture that’s going to change—as cable becomes more important, the paper-diary system looks more and more like a relic, and in 2003 Nielsen announced that it would go ahead and roll out people meters in the country’s top-ten television markets. But what remains striking is that a multibillion-dollar industry has been stuck for a long time with a backward, inaccurate technology because the major players could not figure out how to cooperate. If successful solutions to cooperation problems are often, as in the case of the uprising against Richard Grasso, the result of individually irrational acts producing collectively rational results, the failure to solve cooperation problems is often the result of the opposite phenomenon. On their own, all the key players in the TV industry have been smart. But together, they’ve been dumb.

Chapter Six, Part V

The social benefits of trust and cooperation are, at this point, relatively unquestioned. But they do create a problem: the more people trust, the easier they are for others to exploit. And if trust is the most valuable social product of market interactions, corruption is its most damaging. Over the centuries, market societies have developed mechanisms and institutions that are supposed to limit corruption, including auditors, rating agencies, third-party analysts, and, as we’ve seen, even Wall Street banks. And they have relied, as well, on the idea that companies and individuals will act honestly—if not generously—because doing so is the best way to ensure long-term financial success. In addition, in the twentieth century a relatively elaborate regulatory apparatus emerged that was supposed to protect consumers and investors. These systems work well most of the time. But sometimes they don’t, and when they don’t, things come apart, as they did in the late 1990s.

The stock-market bubble of the late nineties created a perfect breeding ground for corruption. In the first place, it wiped away, almost literally, the shadow of the future for many corporate executives. CEOs who knew that their companies’ future cash flow could never justify their outrageously inflated stock prices also knew that the future was therefore going to be less lucrative than the present. Capitalism is healthiest when people believe that the long-term benefits of fair dealing outweigh the short-term benefits of sharp dealing. In the case of the executives at companies like Enron and Tyco, though, the short-term gains from self-interested and corrupt behavior were so immense—because they had so many stock options, and because their boards of directors paid them no attention—that any long-term considerations paled by comparison. In the case of Dennis Kozlowski, the CEO of Tyco, for instance, it’s hard to see how he could have made $600 million honestly if he had stayed CEO of Tyco. But dishonestly, it was remarkably easy. Investors should have understood that the rules of the game had changed, and that the incentives for CEOs to keep their promises, or to worry about the long-term health of their businesses, had effectively disappeared. But they didn’t, and because they were so intoxicated with their bull-market gains, they also stopped doing the due diligence that even trusting investors are supposed to do.

At the same time, the mechanisms and institutions that were supposed to limit corruption ended up facilitating corruption rather than stopping it. The business of Wall Street and the accounting industry is supposed to be to distinguish between the trustworthy and the trustworthless, just as the Underwriters Laboratory distinguishes between safe and dangerous electrical equipment. If Goldman Sachs underwrites a stock offering for a company, it’s saying that the company has real value, as is Merrill Lynch when one of its analysts issues a buy recommendation. If the New York Stock Exchange lists a company, it’s attesting to the fact that the firm is not a fly-by-night operation. And when Ernst and Young signs off on an audit, it’s telling us that we can trust that company’s numbers.

We are willing to believe Ernst and Young when it says this because its entire business seems to depend on its credibility. If the Underwriters Laboratory started affixing its UL mark to lamps that electrocuted people, pretty soon it wouldn’t have a business. In the same way, if Ernst and Young tells us to trust a company that turns out to be cooking the books, people should stop working with Ernst and Young. As Alan Greenspan has said of accountants, “The market value of their companies rest[s] on the integrity of their operations.” So accountants don’t have to be saints to be useful. Their self-interest alone will compel them to do a good job of separating the white hats from the black. But this theory only works if the firms that don’t do a good job are actually punished for their failure. And in the late nineties, they weren’t. The Nasdaq listed laughable companies. White-shoe firms such as Goldman Sachs underwrote them. The accountants wielded their rubber stamps. (Between 1997 and 2000, seven hundred companies were forced to restate their earnings. In 1981, just three companies did.) But none of these institutions paid a price in the marketplace for such derelictions of duty. They got more business, not less. In the late nineties, Arthur Andersen was the auditor of record in accounting disasters like Waste Management and Sunbeam. Yet investors chose not to look skeptically at companies, such as WorldCom and Enron, that continued to use Andersen. In effect, investors stopped watching the watchmen, and so the watchmen stopped watching, too. In a world in which not all capitalists are Quakers, trust but verify remains a useful byword.

Chapter Six, Part IV

In eighteenth- and early nineteenth-century Britain, a sizable chunk of the nation’s economy was run by members of the religious sect known as the Quakers. Quakers owned more than half of the country’s ironworks. They were key players in banking (both Barclays and Lloyds were Quaker institutions). They dominated consumer businesses such as chocolate and biscuits. And they were instrumental in facilitating the transatlantic trade between Britain and America.

Initially, Quaker success was built around the benefits Quakers got from trading with each other. Because they dissented from the English state religion, members of the sect were barred from the professions, and as a result they gravitated toward business. When Quakers went looking for credit or for trade, they found it easy to partner with fellow believers. Their common faith facilitated trust, allowing a Quaker tradesman in London to ship goods across the ocean and be certain that he would be paid when they arrived in Philadelphia.

Quaker prosperity did not go unnoticed in the outside world. Quakers were well-known already for their personal emphasis on absolute honesty, and as businessmen they were famously rigorous and careful in their record keeping. They also introduced innovations like fixed prices, which emphasized transparency over sharp dealing. Soon, people outside the sect began to seek Quakers as trading partners, suppliers, and sellers. And as Quaker prosperity grew, people drew a connection between that prosperity and the sect’s reputation for reliability and trustworthiness. Honesty, it started to seem, paid.

In the wake of the orgy of corruption in which American businesses indulged during the stock-market bubble of the late 1990s, the idea that trustworthiness and good business might go together sounds woefully naïve. Certainly one interpretation of these scandals is that they were not aberrations but the inevitable by-product of a system that plays to people’s worst impulses: greed, cynicism, and selfishness. This argument sounds plausible, if only because capitalist rhetoric so often stresses the virtue of greed and the glories of what “Chainsaw” Al Dunlap, the legendarily ruthless, job-cutting CEO, liked to call “mean business.” But this popular image of capitalism bears only slight resemblance to its reality. Over centuries, in fact, the evolution of capitalism has been in the direction of more trust and transparency, and less self-regarding behavior. Not coincidentally, this evolution has brought with it greater productivity and economic growth.

That evolution did not take place because capitalists are naturally good people. Instead it took place because the benefits of trust—that is, of being trusting and of being trustworthy—are potentially immense, and because a successful market system teaches people to recognize those benefits. At this point, it’s been well demonstrated that flourishing economies require a healthy level of trust in the reliability and fairness of everyday transactions. If you assumed every potential deal was a rip-off or that the products you were buying were probably going to be lemons, then very little business would get done. More important, the costs of the transactions that did take place would be exorbitant, since you’d have to do enormous work to investigate each deal and you’d have to rely on the threat of legal action to enforce every contract. For an economy to prosper, what’s needed is not a Pollyannaish faith in the good intentions of others—caveat emptor remains an important truth—but a basic confidence in the promises and commitments that people make about their products and services. As the economist Thomas Schelling has put it: “One has only to consider the enormous frustration of conducting foreign aid in an underdeveloped country, or getting a business established there, to realize what an extraordinary economic asset is a population of honest conscientious people.”

Establishing that confidence has been a central part of the history of capitalism. In the medieval period, people trusted those within their particular ethnic or provincial group. Historian Avner Greif has shown how the Moroccan traders known as the Maghribi built a trading system across the Mediterranean in the eleventh century by creating a system of collective sanctions to punish those who violated their commercial codes. Trade between groups, meanwhile, depended on rules that applied to the group as a whole. If one Genoese trader ripped off someone in France, all Genoese traders paid the price. This may not have been exactly fair, but it had the virtue of creating conditions under which interstate trading could flourish, since it compelled trading communities to enforce internal discipline to encourage fair dealing. On the flip side of this, merchant guilds—most notably the German Hanseatic League—protected their members against unfair treatment from city-states by imposing collective trade embargoes against cities that seized merchant property.

As the Quaker example suggests, intragroup trust remained important for centuries. For that matter, it remains important today—look at the success of ethnic Chinese businessmen in countries across Southeast Asia. But in England, at least, contract law evolved to emphasize individual responsibility for agreements and, more important, the idea of that responsibility began to take hold among businessmen more generally. As one observer said in 1717, “To support and maintain a man’s private credit, ‘tis absolutely necessary that the world have a fixed opinion of the honesty and integrity, as well as ability of a person.” And Daniel Defoe, around the same time, wrote, “An honest tradesman is a jewel indeed, and is valued wherever he is found.”

Still, Defoe’s very emphasis on how valuable people found an honest businessman is probably evidence that there weren’t many honest businessmen. And the Quakers, after all, became known for their reliability precisely because it seemed exceptional. It’s certainly true that the benefits of honesty and the relationship between trust and healthy commerce were recognized. Adam Smith, in The Wealth of Nations, wrote, “when the greater part of people are merchants they always bring probity and punctuality into fashion,” while Montesquieu wrote of the way commerce “polishes and softens” men. But it wasn’t until the nineteenth century—not coincidentally, the moment when capitalism as we know it flowered—that trust became, in a sense, institutionalized. As the historian Richard Tilly has shown in his study of business practices in Germany and Britain, it was during the 1800s that businessmen started to see that honesty might actually be profitable. In America, as John Mueller shows in his wonderful book Capitalism, Democracy, and Ralph’s Pretty Good Grocery, P. T. Barnum—whom we all know as the victimizer of suckers—in fact pioneered modern ideas of customer service, while around the same time John Wanamaker was making fixed retail prices a new standard. And the end of the nineteenth century saw the creation of independent institutions like the Underwriters Laboratory and the Better Business Bureau, all of which were intended to foster a general climate of trust in everyday transactions. On Wall Street, meanwhile, J. P. Morgan built a lucrative business on the idea of trust. In the late nineteenth century, investors (particularly foreign investors) who had been burned by shady or shaky railroad investments were leery of putting more money into America. The presence of a Morgan man on the board of directors of a company came to be considered a guarantee that a firm was reliable and solid.

At the heart of this shift was a greater emphasis on the accumulation of capital over the long run as opposed to merely short-term profit, an emphasis that has arguably been a defining characteristic of modern capitalism. As Tilly writes, businessmen started to see “individual transactions as links in a larger chain of profitable business ventures,” instead of just “one-time opportunities to be exploited to the utmost.” If your prosperity in the long run depended on return business, on word-of-mouth recommendations, and on ongoing relationships with suppliers and partners, fair dealing became more valuable. The lubrication of commerce that trust provides became more than desirable. It became necessary.

What was most important about this new concept of trust was that it was, in some sense, impersonal. Previously, trust had been the product primarily of a personal or in-group relationship— I trust this guy because I know him or because he and I belong to the same sect or clan—rather than a more general assumption upon which you could do business. Modern capitalism made the idea of trusting people with whom you had “no prior personal ties” seem reasonable, if only by demonstrating that strangers would not, as a matter of course, betray you. This helped trust become woven into the basic fabric of everyday business. Buying and selling no longer required a personal connection. It could be driven instead by the benefits of mutual exchange.

The impersonality of capitalism is usually seen as one of its unfortunate, if inescapable, costs. In place of relationships founded on blood or affection, capitalism creates relationships founded solely on what Marx called the “money nexus.” But, in this case, impersonality was a virtue. One of the fundamental problems with trust is that it usually flourishes only where there are what sociologists call “thick relationships”—relationships of family or clan or neighborhood. But these kinds of relationships are impossible to maintain with many people at once and they are incompatible with the kind of scope and variety of contacts that a healthy modern economy (or a healthy modern society) needs to thrive. In fact, thick relationships can often be inimical to economic growth, since they foster homogeneity and discourage open market exchange in favor of personalized trading. Breaking with the tradition of defining trust in familial or ethnic terms was therefore essential. As the economist Stephen Knack writes, “The type of trust that should be unambiguously beneficial to a nation’s economic performance is trust between strangers, or more precisely between two randomly selected residents of a country. Particularly in large and mobile societies where personal knowledge and reputation effects are limited, a sizeable proportion of potentially mutually beneficial transactions will involve parties with no prior personal ties.”

As with much else, though, this relationship between capitalism and trust is usually invisible, simply because it’s become part of the background of everyday life. I can walk into a store anywhere in America to buy a CD player and be relatively certain that whatever product I buy—a product that, in all likelihood, will have been made in a country nine thousand miles away—will probably work pretty well. And this is true even though I may never walk into that store again. At this point, we take both the reliability of the store and my trust in that reliability for granted. But in fact they’re remarkable achievements.

This sense of trust could not exist without the institutional and legal framework that underpins every modern capitalist economy. Consumers rarely sue businesses for fraud, but businesses know that the possibility exists. And if contracts between businesses were irrelevant, it would be hard to understand why corporate lawyers are so well paid. But the measure of success of laws and contracts is how rarely they are invoked. And, as Stephen Knack and Philip Keefer write, “Individuals in higher-trust societies spend less to protect themselves from being exploited in economic transactions. Written contracts are less likely to be needed, and they do not have to specify every possible contingency.” Or, as Axelrod quotes a purchasing agent for a Midwestern business as saying, “If something comes up you get the other man on the telephone and deal with the problem. You don’t read legalistic contract clauses at each other if you ever want to do business again.”

Trust begins there, as it does in Axelrod’s model, because of the shadow of the future. All you really trust is that the other person will recognize his self-interest. But over time, that reliance on his own attention to his self-interest becomes something more. It becomes a general sense of reliability, a willingness to cooperate (even in competition) because cooperation is the best way to get things done. What Samuel Bowles and Herbert Gintis call prosociality becomes stronger because prosociality works.

Now, I realize how improbable this sounds. Markets, we know, foster selfishness and greed, not trust and fairness. But even if you find the history unconvincing, there is this to consider: in the late 1990s, under the supervision of Bowles, twelve field researchers—including eleven anthropologists and one economist—went into fifteen “small-scale” societies (essentially small tribes that were, to varying degrees, self-contained) and got people to play the kinds of games in which experimental economics specializes. The societies included three that depended on foraging for survival, six that used slash-and-burn techniques, four nomadic herding groups, and two small agricultural societies. The three games the people were asked to play were the three standards of behavioral economics: the ultimatum game (which you just read about), the public-goods game (in which if everyone contributes, everyone goes away significantly better off, while if only a few people contribute, then the others can free ride off their effort), and the dictator game, which is similar to the ultimatum game except that the responder can’t say no to the proposer’s offer. The idea behind all these games is that they can be played in a purely rational manner, in which case the player protects himself against loss but forgoes the possibility of mutual gain. Or they can be played in a prosocial manner, which is what most people do.

In any case, what the researchers found was that in every single society there was a significant deviation from the purely rational strategy. But the deviations were not all in the same direction, so there were significant differences between the cultures. What was remarkable about the study, though, was this: the higher the degree to which a culture was integrated with the market, the greater the level of prosociality. People from more market-oriented societies made higher offers in the dictator game and the ultimatum game, cooperated in the public-goods game, and exhibited strong reciprocity when they had the chance. The market may not teach people to trust, but it certainly makes it easier for people to do so.

Chapter Six, Part III

The mystery that the idea of prosocial behavior may help resolve is the mystery of why we cooperate at all. Societies and organizations work only if people cooperate. It’s impossible for a society to rely on law alone to make sure citizens act honestly and responsibly. And it’s impossible for any organization to rely on contracts alone to make sure that its managers and workers live up to their obligations. So cooperation typically makes everyone better off. But for each individual, it’s rarely rational to cooperate. It always makes more sense to look after your own interests first and then live off everyone else’s work if they are silly enough to cooperate. So why don’t most of us do just that?

The canonical explanation of why people cooperate was offered by political scientist Robert Axelrod, who argued in the 1980s that cooperation is the result of repeated interactions with the same people. As Axelrod put it in his classic The Evolution of Cooperation, “The foundation of cooperation is not really trust, but the durability of the relationship . . . Whether the players trust each other or not is less important in the long run than whether the conditions are ripe for them to build a stable pattern of cooperation with each other.” People who repeatedly deal with each other over time recognize the benefits of cooperation, and they do not try to take advantage of each other, because they know if they do, the other person will be able to punish them. The key to cooperation is what Axelrod called “the shadow of the future.” The promise of our continued interaction keeps us in line. Successful cooperation, Axelrod argued, required that people start off by being nice—that is, by being willing to cooperate—but that they had to be willing to punish noncooperative behavior as soon as it appeared. The best approach was to be “nice, forgiving, and retaliatory.”

Those rules seem completely sensible, and are probably a good description of the way most people in a well-functioning society deal with those they know. But there’s something unsatisfying, as Axelrod himself now seems to recognize, about the idea that cooperation is simply the product of repeated interactions with the same people. After all, we often act in a prosocial fashion even when there is no obvious payoff for ourselves. Look at the ultimatum game again. It is a one-shot game. You don’t play it with the same person more than once. The responders who turned down lowball offers were therefore not doing so in order to teach the proposer to treat them better. And yet they still punished those whom they thought were acting unfairly, which suggests that the “shadow of the future” alone cannot explain why we cooperate.

The interesting thing, ultimately, isn’t that we cooperate with those we know and do business with regularly. The interesting thing is that we cooperate with strangers. We donate to charities. We buy things off eBay sight unseen. People sign on to Kazaa and upload songs for others to download, even though they reap no benefit from sharing those songs and doing so means letting strangers have access to their computers’ hard drives. These are all, in the strict sense, irrational things to do. But they make all of us (well, aside from the record companies) better off. It may be, in the end, that a good society is defined more by how people treat strangers than by how they treat those they know.

Consider tipping. It’s understandable that people tip at restaurants that they frequent regularly: tipping well may get them better service or a better table, or it may just make their interactions with the waiters more pleasant. But, for the most part, people tip even at restaurants that they know they’ll never return to, and at restaurants in cities thousands of miles away from their homes. In part, this is because people don’t want to run the risk of being publicly reprimanded for not tipping or undertipping. But mostly, it’s because we accept that tipping is what you are supposed to do when you go to a restaurant, because tips are the only way that waiters and waitresses can make a living. And we accept this even though it means that we end up voluntarily giving money to strangers whom we may never see again. The logic of this whole arrangement is debatable (as Mr. Pink asked in Reservoir Dogs, why do we tip people who do certain jobs and not even think of tipping people who do other jobs?). But given that logic, tipping, and especially tipping strangers, is a resolutely prosocial behavior, and one that the shadow of the future alone cannot explain.

Why are we willing to cooperate with those we barely know? I like Robert Wright’s answer, which is that over time, we have learned that trade and exchange are games in which everyone can end up gaining, rather than zero-sum games in which there’s always a winner and a loser. But the “we” here is, of course, ill defined, since different cultures have dramatically different ideas about trust and cooperation and the kindness of strangers. In the next section, I want to argue that one of the things that accounts for those differences is something that is rarely associated with trust or cooperation: capitalism.

Chapter Six, Part II

In September 2003, Richard Grasso, who was then the head of the New York Stock Exchange, became the first CEO in American history to get fired for making too much money. Grasso had run the NYSE since 1995, and by most accounts he had done a good job. He was aggressively self-promoting, but he did not appear to be incompetent or corrupt. But when the news broke that the NYSE was planning to give Grasso a lump-sum payment of $139.5 million—made up of retirement benefits, deferred pay, and bonuses—the public uproar was loud and immediate, and in the weeks that followed, the calls for Grasso’s removal grew deafening. When the NYSE’s board of directors (the very people, of course, who had agreed to pay him the $139.5 million in the first place) asked Grasso to step down, it was because the public’s outrage had made it impossible to keep him around.

Why was the public so outraged? After all, they did not have to foot the bill for Grasso’s millions. The NYSE was spending its own money. And complaining about Grasso’s windfall didn’t make anyone else any better off. He had already been paid, and the NYSE wasn’t going to take the money it had promised him and give it to charity or invest it more wisely. From an economist’s point of view, in fact, the public reaction seemed deeply irrational. Economists have traditionally assumed, reasonably, that human beings are basically self-interested. This means a couple of (perhaps obvious) things. First, faced with different choices (of products, services, or simply courses of action), a person will choose the one that benefits her personally. Second, her choices will not depend on what anyone else does. But with the possible exception of business columnists, no one who expressed outrage over how much Dick Grasso made reaped any concrete benefits from their actions, making it irrational to invest time and energy complaining about him. And yet that’s exactly what people did. So the question again is: Why?

The explanation for people’s behavior might have something to do with an experiment called the “ultimatum game,” which is perhaps the most well-known experiment in behavioral economics. The rules of the game are simple. The experimenter pairs two people with each other. (They can communicate with each other, but otherwise they’re anonymous to each other.) They’re given $10 to divide between them, according to this rule: One person (the proposer) decides, on his own, what the split should be (fifty-fifty, seventy-thirty, or whatever). He then makes a take-it-or-leave-it offer to the other person (the responder). The responder can either accept the offer, in which case both players pocket their respective shares of the cash, or reject it, in which case both players walk away empty-handed.

If both players are rational, the proposer will keep $9 for himself and offer the responder $1, and the responder will take it. After all, whatever the offer, the responder should accept it, since if he accepts he gets some money and if he rejects, he gets none. A rational proposer will realize this and therefore make a lowball offer.
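As a quick illustration of the "rational" prediction just described (a sketch of my own, in the same illustrative Python as the earlier examples; the function names and the $1 increment are assumptions, not part of the experiment's design), the logic looks like this:

```python
# Illustrative sketch of the textbook 'rational' analysis of the ultimatum
# game: a self-interested responder prefers any positive amount to nothing,
# so a self-interested proposer offers the minimum and keeps the rest.

TOTAL = 10.0

def responder_payoff(offer, accepts):
    """The responder pockets the offer if she accepts, nothing if she rejects."""
    return offer if accepts else 0.0

def rational_split(smallest_offer=1.0):
    """Keep as much as possible while still offering something a purely
    payoff-maximizing responder would accept."""
    assert responder_payoff(smallest_offer, True) > responder_payoff(smallest_offer, False)
    return TOTAL - smallest_offer, smallest_offer

print(rational_split())  # (9.0, 1.0): the $9/$1 split that real responders routinely reject
```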

In practice, though, this rarely happens. Instead, lowball offers—anything below $2—are routinely rejected. Think for a moment about what this means. People would rather have nothing than let their “partners” walk away with too much of the loot. They will give up free money to punish what they perceive as greedy or selfish behavior. And the interesting thing is that the proposers anticipate this—presumably because they know they would act the same way if they were in the responder’s shoes. As a result, the proposers don’t make many low offers in the first place. The most common offer in the ultimatum game, in fact, is $5.
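
For readers who like the logic spelled out, here is a minimal sketch in Python of the two responder strategies just described. The function names and the $2 rejection threshold are illustrative assumptions of mine, not parameters taken from the actual experiments.

```python
# A minimal sketch of the ultimatum game. A purely "rational" responder
# accepts any positive offer; a fairness-minded responder rejects offers
# below some threshold, even at a cost to herself. The $2 threshold and
# the function names are illustrative assumptions, not study parameters.

POT = 10  # dollars to divide


def rational_responder(offer):
    # Something is better than nothing, so accept any positive offer.
    return offer > 0


def fairness_minded_responder(offer, threshold=2):
    # Reject "insulting" offers, even though rejecting means getting nothing.
    return offer >= threshold


def play(offer, responder):
    """Return (proposer_payoff, responder_payoff) for a given offer."""
    if responder(offer):
        return POT - offer, offer
    return 0, 0  # both walk away empty-handed


if __name__ == "__main__":
    # Against a rational responder, a $1 lowball offer goes through...
    print(play(1, rational_responder))         # (9, 1)
    # ...but against a fairness-minded responder it costs the proposer everything,
    print(play(1, fairness_minded_responder))  # (0, 0)
    # which is why offers in practice cluster around an even split.
    print(play(5, fairness_minded_responder))  # (5, 5)
```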

Now, this is a long way from the “rational man” picture of human behavior. The players in the ultimatum game are not choosing what’s materially best for them, and their choices clearly depend on what the other person does. People play the ultimatum game this way all across the developed world: cross-national studies of players in Japan, Russia, the United States, and France all document the same phenomenon. And increasing the size of the stakes doesn’t seem to matter much either. Obviously, if the proposer were given the chance to divide $1 million, the responder wouldn’t turn down $100,000 just to prove a point. But the game has been played in countries, like Indonesia, where the possible payoff was equal to three days’ work, and responders still rejected lowball offers.

It isn’t just humans who act this way, either. In a study that was fortuitously released the day Richard Grasso stepped down, primatologists Sarah F. Brosnan and Frans B. M. de Waal showed that female capuchin monkeys are also offended by unfair treatment. The capuchins had been trained to give Brosnan a granite pebble in exchange for food. The pay, as it were, was a slice of cucumber. The monkeys worked in pairs, and when they were both rewarded with cucumbers, they exchanged rock for food 95 percent of the time. This idyllic market economy was disrupted, though, when the scientists changed the rules, giving one capuchin a delicious grape as a reward while still giving the other a cucumber slice. Confronted with this injustice, the put-upon capuchins often refused to eat their cucumbers, and 40 percent of the time stopped trading entirely. Things only got worse when one monkey was given a grape in exchange for doing nothing at all. In that case, the other monkey often tossed away her pebble, and trades took place only 20 percent of the time. In other words, the capuchins were willing to give up cheap food—after all, a cucumber slice for a pebble seems like a good deal—simply to express their displeasure at their comrades’ unearned riches. Presumably if they’d been given the chance to stop their comrades from enjoying those riches—as the players in the ultimatum game were—the capuchins would have gladly taken it.

Capuchins and humans alike, then, seem to care whether rewards are, in some sense, “fair.” That may seem like an obvious thing to worry about, but it’s not. If the monkey thought a rock for a cucumber slice was a reasonable trade and was happy to make it before she saw her comrade get a grape, she should be happy to make the trade afterward, too. After all, her job hasn’t gotten any harder, nor is the cucumber any less tasty. (Or if it is, that’s because she’s obsessed with what her neighbor’s getting.) So her feelings about the deal should stay the same. Similarly, the responders in the ultimatum game are being offered money for what amounts to a few minutes of “work,” which mostly consists of answering “yes” or “no.” Turning down free money is not something that, in most circumstances, makes sense. But people are willing to do it in order to make sure that the distribution of resources is fair.

Does this mean people think that, in an ideal world, everyone would have the same amount of money? No. It means people think that, in an ideal world, everyone would end up with the amount of money they deserved. In the original version of the ultimatum game, only luck determines who gets to be the proposer and who gets to be the responder. So the split, people feel, should be fairly equal. But people’s behavior in the game changes quite dramatically when the rules are changed. In the most interesting version of the ultimatum game, for instance, instead of assigning the proposer role randomly, the researchers made it seem as if the proposers had earned their positions by doing better on a test. In those experiments, proposers offered significantly less money, yet not a single offer was rejected. People apparently thought that a proposer who merited his position deserved to keep more of the wealth.

Put simply, people (and capuchins) want there to be a reasonable relationship between accomplishment and reward. That’s what was missing in Grasso’s case. He was getting too much for having done too little. Grasso seems to have been good at his job. But he was not irreplaceable: no one thought the NYSE would fall apart once he was gone. More to the point, the job was not a $140 million job. (What job is?) In terms of complexity and sophistication, it bore no resemblance to, say, running Merrill Lynch or Goldman Sachs. Yet Grasso was being paid as much as many Wall Street CEOs, who are themselves heftily overcompensated.

The impulse toward fairness that drove Grasso from office is a cross-cultural reality, but culture does have a major effect on what counts as fair. American CEOs, for instance, make significantly more money than European or Japanese CEOs, and salary packages that would send the Germans to the barricades barely merit a moment’s notice in the United States. More generally, high incomes by themselves don’t seem to bother Americans much—even though America has the most unequal distribution of income in the developed world, polls consistently show that Americans care much less about inequality than Europeans do. In fact, a 2001 study by economists Alberto Alesina, Rafael di Tella, and Robert MacCulloch found that in America the people whom inequality bothers most are the rich. One reason for this is that Americans are far more likely to believe that wealth is the result of initiative and skill, while Europeans are far more likely to attribute it to luck. Americans still think, perhaps inaccurately, of the United States as a relatively mobile society, in which it’s possible for a working-class kid to become rich. The irony is that Grasso himself was a working-class kid who made good. But even for Americans, apparently, there is a limit to how good you can make it.

There’s no doubt the indignation at Grasso’s retirement package was, in an economic sense, irrational. But like the behavior of the ultimatum-game responders, the indignation was an example of what economists Samuel Bowles and Herbert Gintis call “strong reciprocity,” which is the willingness to punish bad behavior (and reward good behavior) even when you get no personal material benefits from doing so. And, irrational or not, strong reciprocity is, as Bowles and Gintis term it, a “prosocial behavior” because it pushes people to transcend a narrow definition of self-interest and do things, intentionally or not, that end up serving the common good. Strong reciprocators are not altruists. They are not rejecting lowball offers, or hounding Dick Grasso, because they love humanity. They’re rejecting lowball offers because the offers violate their individual sense of what a just exchange would be. But the effect is the same as if they loved humanity: the group benefits. Strong reciprocity works. Offers in the ultimatum game are usually quite equitable, which is what they should be, given the way the resources are initially set up. And whenever the NYSE thinks about hiring a CEO, it will presumably be more rigorous in figuring out how much he’s actually worth. Individually irrational acts, in other words, can produce a collectively rational outcome.

Chapter Six, Part I

In the summer of 2002, a great crime was perpetrated against the entire nation of Italy. Or so at least tens of millions of Italian soccer fans insisted after the country’s national team was knocked out of the World Cup by upstart South Korea. The heavily favored Italians had scored an early goal against the Koreans and had clung to their 1–0 lead for most of the game, before yielding a late equalizer and then an overtime goal that sent them packing. The Italian performance had been mediocre at best. But the team was victimized by a couple of very bad officiating decisions, including one that disallowed a goal. Had those decisions gone the other way, it’s likely Italy would have won.

The Italian fans, of course, blamed the referee, an Ecuadorean named Byron Moreno, for the defeat. Strikingly, though, they did not blame Moreno for being incompetent (which he was). Instead, they blamed him for being criminal. In the fans’ minds, their team had been the victim of something more sinister than just bad officiating. The Italians had fallen prey to a global conspiracy—perhaps orchestrated by FIFA, soccer’s governing body—designed to keep them from their just deserts. Moreno had been the point man for the conspiracy. And he had carried out his orders perfectly.

The Milan daily Corriere della Sera, for instance, protested against a system in which “referees … are used as hitmen.” La Gazzetta dello Sport editorialized, “Italy counts for nothing in those places where they decide the results and put together million-dollar deals.” A government minister declared, “It seemed as if they just sat around a table and decided to throw us out.” And Francesco Totti, one of the stars of the Italian team, captured the conspiratorial mood best when he said, “This was a desired elimination. By who? I don’t know—there are things greater than me but the feeling is that they wanted us out.” In the weeks that followed the game, no proof of an anti-Italian cabal or of Moreno’s supposed chicanery surfaced (despite the best efforts of the Italian papers). But the fans remained unwavering in their conviction that dark forces had united to destroy Italy’s ambitions.

To an outside observer, the accusations of corruption seemed crazy. Honest referees make bad decisions all the time. What reason was there to believe that Moreno was any different? But to anyone familiar with Italian soccer the accusations were completely predictable. That’s because in Italian soccer, corruption is assumed to be the natural state of affairs. Every year, the Italian soccer season is marred by weekly charges of criminality and skulduggery. Teams routinely claim that individual refs have been bought off, and request that particular referees not be assigned to their games. Refereeing is front-page news. Every Monday night, a TV show called Biscardi’s Trial devotes two and a half hours to dissecting officiating mistakes and lambasting the officials for favoritism.

The effect of all this on actual Italian soccer games is not good. Although the players are among the very best in the world, the games are often halting, foul-ridden affairs repeatedly delayed by playacting, whining players more interested in working the refs than anything else. Defeat is never accepted as the outcome of a fair contest. And even victory is marred by the thought that perhaps backroom machinations were responsible for it.

So what does Italian soccer have to do with collective decision making and problem solving? Well, although the teams in a soccer game are trying to defeat each other, and therefore have competing interests, the teams also have a common interest: namely, making sure that the games are entertaining and compelling for the fans. The more interesting the games are, the more likely it is that people will come, the greater ticket sales and TV ratings will be, and the higher team profits and player salaries will be. When two soccer teams play each other, then, they’re not just competing. They’re also, at least in theory, working together—along with the officials—to produce an entertaining game. And this is precisely what the Italian teams are unable to do. Because neither side can be sure that its efforts will be fairly rewarded, the players devote an inordinate amount of time to protecting their own interests. Energy, time, and attention that would be better spent improving the quality of play instead go into excoriating, monitoring, and trying to manipulate the referees. And the manipulation feeds on itself. Even if most players would rather be honest, they realize that honesty would only invite exploitation. As Gennaro Gattuso, a midfielder for European champions AC Milan, said in October of 2003, “The system prevents you from telling the truth and being yourself.” Hardly anyone likes the system the way it is, but no one can change it.

What Italian soccer is failing to do, then, is come up with a good solution to what I’ll call here a cooperation problem. Cooperation problems often look something like coordination problems, because in both cases a good solution requires people to take what everyone else is doing into account. But if the mechanism is right, coordination problems can be solved even if each individual is single-mindedly pursuing his self-interest—in fact, in the case of price, that’s what coordination seems to require. To solve cooperation problems—which include things like keeping the sidewalk free of snow, paying taxes, and curbing pollution—the members of a group or a society need to do more. They need to adopt a broader definition of self-interest than the myopic one that maximizing profits in the short term demands. And they need to be able to trust those around them, because in the absence of trust the pursuit of myopic self-interest is the only strategy that makes sense. How does this happen? And does it make a difference when it does?
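
One way to see why myopic self-interest is the only sensible strategy in the absence of trust is a stripped-down payoff table in the spirit of the prisoner’s dilemma. The sketch below is mine, with invented payoff numbers chosen only to illustrate the structure of a cooperation problem, not figures drawn from the text.

```python
# A stripped-down cooperation problem (prisoner's-dilemma style).
# The payoffs are invented for illustration: defecting pays more than
# cooperating no matter what the other player does, yet mutual
# cooperation beats mutual defection -- which is why trust matters.

PAYOFFS = {
    # (my_move, their_move): my_payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}


def best_reply(their_move):
    """Myopic self-interest: pick whichever move pays me more."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])


if __name__ == "__main__":
    # Without trust, defection is the best reply to either move...
    print(best_reply("cooperate"))  # defect
    print(best_reply("defect"))     # defect
    # ...so both players end up with 1 apiece instead of 3 apiece.
```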