Queuing as a Coordination Mechanism

In some countries queuing has become a social norm. In other countries it is not so established. Here are some images of queues in China on so-called "queuing" days, days when queuing is enforced by the government. You can read about why these "queuing days" are necessary here, how they were introduced here, here and here, and the perils of getting into a taxi here.

Queues in China

A comment posted with the photo stated:

Queuing is never in China's vocabulary and cutting a queue is perceived as normal. You could encounter this act in almost anywhere such as restaurants, banks, toilets or ATMs. Once I experienced this in a super-mart. A customer shouted at the counter girl that since he bought only one small item he should be served first and it would be ridiculous for him to go to the end of the queue for paying. Surprisingly, the counter girl gave in.

The picture here depicted a scene where the Chinese are forced to queue up for purchasing the train tickets before their Chinese New Year. Noticed that they are so worried that people may cut into their queue they have to hug or arm-lock one another.

Here's a normal day getting onto public transport in China.

  Queueing for Transport Queueing for Transport2

Of course, the Chinese reaction to queuing, or lack thereof, can be viewed in a completely rational manner. Time spent in a queue is time wasted; a society likely squanders enormous resources on people standing in line, time that could be spent more productively. Is there a more efficient way of organising queues as a coordination and allocation mechanism?

Steven Landsburg, the armchair economist, has a theory on this: a foolproof method to shorten queues.

You spend too much time waiting in lines. "Too much" isn't some vague value judgment—it's a precise economic calculation. A good place in line is a valuable commodity, but it's not ordinarily traded in the marketplace. And this "missing market" inevitably produces inefficient outcomes.

Under the current rules, line formation suffers from economic inefficiencies because we enter lines without regard to the interests of later arrivals who queue behind us. How to make line formation more efficient? Change the rules so that new arrivals go to the front of the line instead of the back. Then the addition of a new person in line would impose no costs at all on those who come later. With that simple reform, lines would be a lot shorter. People who got pushed back beyond a certain point would give up and go home. (Well, actually they'd leave the line and try to re-enter as newcomers, but let's suppose for the moment that we can effectively prohibit that behavior.) On average, we'd spend less time waiting, and we'd be happier.
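Landsburg's proposal can be illustrated with a toy simulation. This sketch is my own, not his model: `simulate`, `give_up_position`, and all parameter values are invented for the example. It compares the time-average length of an overloaded first-come-first-served line with a "newcomers to the front" line in which anyone pushed past a cutoff gives up and goes home:

```python
import random

def simulate(discipline, arrival_rate=1.2, service_rate=1.0,
             horizon=10_000, give_up_position=5, seed=0):
    """Toy single-server queue. Under "lifo", each newcomer joins the
    FRONT of the line, and whoever gets pushed past give_up_position
    abandons the queue, as in Landsburg's thought experiment. Under
    "fifo", arrivals join the back and nobody leaves. The arrival rate
    exceeds the service rate, so the FIFO line grows without bound."""
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    queue_len, area = 0, 0.0          # area accumulates the time-average length
    while t < horizon:
        t_next = min(next_arrival, next_departure, horizon)
        area += queue_len * (t_next - t)
        t = t_next
        if t == horizon:
            break
        if t == next_arrival:
            queue_len += 1
            if discipline == "lifo" and queue_len > give_up_position:
                queue_len -= 1        # the person pushed past the cutoff goes home
            next_arrival = t + rng.expovariate(arrival_rate)
            if next_departure == float("inf"):
                next_departure = t + rng.expovariate(service_rate)
        else:
            queue_len -= 1
            next_departure = (t + rng.expovariate(service_rate)
                              if queue_len > 0 else float("inf"))
    return area / horizon             # time-average number of people in line
```

Under these (invented) numbers the front-entry line stays capped near the give-up point while the conventional line balloons, which is the mechanism Landsburg describes: pushing newcomers to the front makes people at the back quit, so on average everyone waits less.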

Follow the link above to see how he proposes that this can work.  You can read more about this idea here and why queuing is bad for business here.

Chapter Five, Part VII

In January 1956, the economist Vernon L. Smith decided to use his classroom as a laboratory to answer that exact question. Today this would hardly be surprising. Economists routinely use classroom experiments to test out economic hypotheses and to try to understand how human behavior affects the way markets work. But fifty years ago, the idea was a radical one. Economics was a matter of proving mathematical theorems or of analyzing real-world markets. The assumption was that lab tests could tell you nothing interesting about the real world. In fact, in all the economic literature, there were hardly any accounts of classroom experiments. The most famous had been written by Harvard professor Edward Chamberlin, who every year set up a simulated market that allowed his students to trade among themselves. One of those students, as it happened, was Vernon Smith.

The experiment Smith set up was, by modern standards, uncomplicated. He took a group of twenty-two students, and made half of them buyers and half of them sellers. Then he gave each seller a card that indicated the lowest price at which she’d be willing to sell, and gave each buyer a card that indicated the highest price at which she’d be willing to buy. In other words, if you were a seller and you got a card that said $25, you’d be willing to accept any offer of $25 or more. You’d look for a higher price, since the difference would be your profit. But if you had to, you’d be willing to sell for $25. The reverse was true for buyers. A buyer with a card that said $20 would try to pay as little as possible, but if necessary she’d be willing to shell out the double sawbuck. With that information, Smith was able to construct the class’s supply-and-demand curves (or “schedules”) and to figure out therefore at what price they would meet.
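Smith's bookkeeping can be reproduced in a few lines. The sketch below is my own illustration (the function and the sample numbers are invented, not Smith's data): it sorts the induced costs and values into supply and demand schedules, counts the trades that create gains, and reads off the maximum total surplus plus a midpoint of the range of market-clearing prices:

```python
def market_outcome(seller_costs, buyer_values):
    """Match the cheapest sellers with the keenest buyers and report
    (number of trades, total gain from trade, a market-clearing price)."""
    costs = sorted(seller_costs)                  # supply schedule, cheapest first
    values = sorted(buyer_values, reverse=True)   # demand schedule, keenest first
    trades, surplus = 0, 0
    for c, v in zip(costs, values):
        if v < c:            # the curves have crossed: no more gainful trades
            break
        trades, surplus = trades + 1, surplus + (v - c)
    if trades == 0:
        return 0, 0, None
    # any price between the last matched cost and value clears the market
    return trades, surplus, (costs[trades - 1] + values[trades - 1]) / 2

# Five sellers and five buyers with induced values, in the spirit of Smith's design
print(market_outcome([15, 20, 25, 30, 35], [40, 35, 30, 25, 20]))
# → (3, 45, 27.5)
```

This is the benchmark Smith computed in advance: the trading itself then reveals whether real students, each seeing only their own card, find their way to that price and that surplus.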

Once all the students had their cards and the rules had been explained, Smith let them start trading among themselves. The market Smith set up was what’s called a double auction, which is much like a typical stock market. Buyers and sellers called out bids and asks publicly, and anyone who wanted to accept a bid or ask would shout out his response. The successful trades were recorded on a blackboard at the front of the room. If you were a buyer whose card said $35, you might start bidding by shouting out “Six dollars!” If no one accepted the bid, then you’d presumably raise it until you were able to find someone to accept your price.

Smith was doing this experiment for a simple reason. Economic theory predicts that if you let buyers and sellers trade with each other, the bids and asks will quickly converge on a single price, which is the price where supply and demand meet, or what economists call the “market-clearing price.” What Smith wanted to find out was whether economic theory fit reality.

It did. The offers in the experimental market quickly converged on one price. They did so even though none of the students wanted this result (buyers wanted prices to be lower, sellers wanted prices to be higher), and even though the students didn’t know anything except the prices on their cards. Smith also found that the student market maximized the group’s total gain from trading. In other words, the students couldn’t have done any better had someone with perfect knowledge told them what to do.

In one sense these results could be thought of as unsurprising. In fact, when Smith submitted a paper based on his experiment to the Journal of Political Economy, an ardently pro-market
academic journal which was run by economists at the University of Chicago, the paper was rejected at first, because from the editors’ perspective all Smith had done was prove that the sun rose in the east. (The journal eventually did publish the paper, even though four referee judgments on it had come back negative.) After all, ever since Adam Smith economists had been arguing that markets did an excellent job of allocating resources. And in the 1950s, the economists Kenneth J. Arrow and Gerard Debreu had proved that, under certain conditions, the workings of the free market actually led to an optimal allocation of resources. So why were Smith’s experiments so important?

They were important because they demonstrated that markets could work well even when real people were trading in them. Arrow and Debreu’s proof of the efficiency of markets—which is called the general equilibrium theorem—was beautiful in its perfection. It depicted an economy in which every part fit together and in which there was no possibility of error. The problem with the proof was that no real market could fulfil its conditions. In the Arrow-Debreu world, every buyer and seller has complete information, meaning that every one of them knows what all the other buyers and sellers are willing to pay or to sell for, and they know that everyone else knows that they know. All the buyers and sellers are perfectly rational, meaning that they have a clear sense of how to maximize their own self-interest. And every buyer and seller has access to a complete set of contracts that cover every conceivable state of the world, which means that they can insure themselves against any eventuality.

But no market is like this. Human beings don’t have complete information. They have private, limited information. It may be valuable information and it may be accurate (or it may be useless and false), but it is always partial. Human beings aren’t perfectly rational either. They may want, for the most part, to maximize their self-interest, but they aren’t always sure how to do that, and they’re often willing to settle for less-than-perfect outcomes. And contracts are woefully incomplete. So while Arrow-Debreu was an invaluable tool—in part because it provided a way of measuring what an ideal outcome would look like—as a demonstration of the wisdom of markets, it didn’t prove that real-world markets could be efficient.

Smith’s experiment showed that they could, that even imperfect markets populated by imperfect people could still produce near-ideal results. The people in Smith’s experiments weren’t always exactly sure of what was going on. Many of them saw the experience of trading as chaotic and confusing. And they described their own decisions not as the result of a careful search for just the right choice but rather as the best decisions they could come up with at the time. Yet while relying only on their private information, they found their way to the right outcome.

In the four decades since Smith published the results of that first experiment, they have been replicated hundreds, if not thousands, of times, in ever more complex variations. But the essential conclusion of those early tests—that, under the right conditions, imperfect humans can produce near-perfect results—has not been challenged.

Does this mean that markets always lead to the ideal outcome? No. First of all, even though Smith’s students were far from ideal decision makers, the classroom was free of the imperfections that characterize most markets in the real world (and which, of course, make business a lot more interesting than it is in economics textbooks). Second, Smith’s experiments show that there’s a real difference between the way people behave in consumer markets (like, say, the market for televisions) and the way people behave in asset markets (like, say, the market for stocks). When they’re buying and selling “televisions,” the students arrive at the right solution very quickly. When they’re buying and selling “stocks,” the results are much more volatile and erratic. Third, Smith’s experiments—like the Arrow-Debreu equations—can’t tell us anything about whether or not markets produce socially, as opposed to economically, optimal outcomes. If wealth is unevenly distributed before people start to trade in a market, it’s not going to be any more evenly distributed afterward. A well-functioning market will make everyone better off than they were when trading began—but better off compared to what they were, not compared to anyone else. On the other hand, better off is better off.

Regardless, what’s really important about the work of Smith and his peers is that it demonstrates that people who can be, as he calls them, “naïve, unsophisticated agents,” can coordinate themselves to achieve complex, mutually beneficial ends even if they’re not really sure, at the start, what those ends are or what it will take to accomplish them. As individuals, they don’t know where they’re going. But as part of a market, they’re suddenly able to get there, and fast.

Chapter Five, Part VI

A giant flock of starlings moves purposefully through the African sky keeping its shape and speed while sweeping smoothly around a tree. From above, a bird of prey dives into the flock. As the starlings scatter, the flock seems to explode around the predator, but it quickly reassembles itself. As the frustrated predator dives again and again, the flock breaks up, re-forms, breaks up, re-forms, its motion creating an indecipherable but beautiful pattern. In the process, the hawk becomes disoriented, since no individual starling ever stays in the same place, even though the flock as a whole is never divided for long.

From the outside, the flock’s movements appear to be the result of the workings of one mind, guiding the flock to protect itself. At the very least, the starlings appear to be acting in concert with each other, pursuing an agreed-upon strategy that gives each of them a better chance to survive. But neither of these is true. Each starling is acting on its own, following four rules: 1) stay as close to the middle as possible; 2) stay two to three body lengths away from your neighbor; 3) do not bump into any other starling; and 4) if a hawk dives at you, get out of the way. No starling knows what the other birds are going to do. No starling can command another bird to do anything. The rules alone allow the flock to keep moving in the right direction, to resist predators and to regroup when divided.
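The four rules translate almost directly into code. The sketch below is a minimal Reynolds-style ("boids") update of my own devising, not a model of real starlings; `step`, `spread`, `sep`, and `speed` are invented names and values, and rule 4 (dodging the hawk) is omitted for brevity:

```python
import math
import random

def step(birds, sep=3.0, speed=1.0):
    """One update: each bird steers toward the flock's centre (rule 1)
    while pushing away from any neighbour closer than sep body
    lengths (rules 2 and 3). Movement is capped at a fixed speed."""
    cx = sum(x for x, _ in birds) / len(birds)
    cy = sum(y for _, y in birds) / len(birds)
    new_flock = []
    for x, y in birds:
        dx, dy = cx - x, cy - y                    # rule 1: toward the middle
        for ox, oy in birds:
            d = math.hypot(x - ox, y - oy)
            if 0 < d < sep:                        # rules 2-3: keep your distance
                dx += (x - ox) / d * sep
                dy += (y - oy) / d * sep
        n = math.hypot(dx, dy)
        if n > 0:                                  # move at a capped speed
            x, y = x + dx / n * speed, y + dy / n * speed
        new_flock.append((x, y))
    return new_flock

def spread(birds):
    """Average distance of each bird from the flock's centre."""
    cx = sum(x for x, _ in birds) / len(birds)
    cy = sum(y for _, y in birds) / len(birds)
    return sum(math.hypot(x - cx, y - cy) for x, y in birds) / len(birds)

# Scatter twenty birds at random, as if a hawk has just dived through
rng = random.Random(1)
flock = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(20)]
before = spread(flock)
for _ in range(200):
    flock = step(flock)
after = spread(flock)    # the scattered birds pull back into a compact flock
```

No bird in this sketch knows the plan and none commands another; the regrouping after the scatter falls out of the local rules alone, which is the point of the example.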

It’s safe to say that anyone who’s interested in group behavior is enamored of flocking birds. Of all the hundreds of books published in the past decade on how groups self-organize without direction from above, few have omitted a discussion of bird flocks (or schools of fish). The reason is obvious: a flock is a wonderful example of a social organization that accomplishes its goals and solves problems in a bottom-up fashion, without leaders and without having to follow complex algorithms or complicated rules. Watching a flock move through the air, you get a sense of what the economist Friedrich Hayek liked to term “spontaneous order.” It’s a biologically programmed spontaneity—starlings don’t decide to follow these rules, they just do. But it is spontaneity for all that. No plans are made. The flock just moves.

You can see something similar—albeit much less beautiful—the next time you go to your local supermarket looking for a carton of orange juice. When you get there, the juice will be waiting, though you didn’t tell the grocer you would be coming. And there will probably be, over the next few days, as much orange juice in the freezer as the store’s customers want, even though none of them told the grocer they were coming, either. The juice you buy will have been packaged days earlier, after it was made from oranges that were picked weeks earlier, by people who don’t even know you exist. The players in that chain—shopper, grocer, wholesaler, packager, grower—may not be acting on the basis of formal rules, like the starlings, but they are using local knowledge, like the starlings, and they are making decisions not on the basis of what’s good for everyone but rather on the basis of what’s good for themselves. And yet, without anyone leading them or directing them, people—most of them not especially rational or farsighted—are able to coordinate their economic activities.

Or so we hope. At its core, after all, what is the free market? It’s a mechanism designed to solve a coordination problem, arguably the most important coordination problem: getting resources to the right places at the right cost. If the market is working well, products and services go from the people who can produce them most cheaply to the people who want them most fervently. What’s mysterious is that this is supposed to happen without any one person seeing the whole picture of what the market is doing, and without anyone knowing in advance what a good answer will look like. (Even the presence of big corporations in the market doesn’t change the fact that everyone in a market has only a partial picture of what’s going on.) So can this work? Can people with only partial knowledge and limited calculating abilities actually get resources to the right place at the right price, just by buying and selling?

Chapter Five, Part V

Convention may play an important role in everyday social life. But in theory it should be irrelevant to economic life and to the way companies do business. Corporations, after all, are supposed to be maximizing their profits. That means their business practices and their strategic choices should be rationally determined, not shaped by history or by unwritten cultural rules. And yet the odd thing is that convention has a profound effect on economic life and on the way companies do business. Convention helps explain why companies rarely cut wages during a recession (it violates workers’ expectations and hurts morale), preferring instead to lay people off. It explains why the vast majority of sharecropping contracts split the proceeds from the farm fifty-fifty, even though it would be logical to tailor the split to the quality of the farm and the soil. Convention has, as we’ve already seen, a profound effect on strategy and on player evaluation in professional sports. And it helps explain why every major car company releases its new models for the year in September, even though there would presumably be less competition if each company released its cars in different months.

Convention is especially powerful, in fact, in the one part of the economy where you might expect it to have little sway: pricing. Prices are, after all, the main vehicle by which information gets transmitted from buyers to sellers and vice versa, so you’d think companies would want prices to be as rational and as responsive to consumer demand as possible. More practically, getting the price right (at least for companies that aren’t in purely competitive markets) is obviously key to maximizing profits. But while some companies—like American Airlines, which it’s been said changes prices 500,000 times a day, and Wal-Mart, which has made steady price-cutting into a religion—have made intelligent pricing key to their businesses, many companies are positively cavalier about prices, setting them via guesswork or by following simple rules of thumb. In a fascinating study of the pricing history of thirty-five major American industries between 1958 and 1992, for instance, the economist Robert Hall found that there was essentially no connection between increases in demand and increases in price, which suggests that companies decided on the price they were going to charge and charged that price regardless of what happened. Clothing retailers, for instance, generally apply a simple mark-up rule: charge 50 percent more than the wholesale price (and then discount like mad if the items don’t sell). And until recently, the record industry blithely insisted that consumers were actually indifferent to prices, insisting that it sold as many CDs while charging $17 per disk as it would if it charged $12 or $13 a disk.

One of the more perplexing examples of the triumph of convention over rationality is the movie theater, where it costs you as much to see a total dog that’s limping its way through its last week of release as it does to see a hugely popular film on opening night. Most of us can’t remember when it was done differently, so the practice seems only natural. But from an economic perspective, it makes little sense. In any given week, some movies will be playing to packed houses, while others will be playing to vacant theaters. Typically, when demand is high and supply is low, companies should raise prices, and when demand is low and supply is high, they should lower prices. But movie theaters just keep charging the same price for all of their products, no matter how popular or unpopular.

Now, there’s a good reason for theaters not to charge more for popular movies. Theaters actually make most of their money on concessions, so they want as many people as possible coming through the door. The extra couple of dollars they’d make by charging $12.50 instead of $10 for the opening weekend of Spider-Man 2 is probably not worth the risk of forgoing a sellout, especially since in the first few weeks of a movie’s run the theaters get to keep only 25 percent or so of the box-office revenue. (The movie studios claim the rest.) But the same can’t be said for charging less for movies that are less popular. After all, if theaters make most of their money on concessions, and their real imperative is to get people into the theater, then there’s no logic to charging someone $10 to see Cuba Gooding Jr. in Snow Dogs in its fifth week of release. Just as retail stores mark down inventory to move it, theaters could mark down movies to lure more customers.

So why don’t they? Theaters offer a host of excuses. First, they insist (as the music industry once did) that moviegoers don’t care about price, so that slashing prices on less-popular films won’t bring in any more business. This is something you hear about cultural products in general but that is, on its face, untrue. It’s an especially strange argument to make about the movies, when we know that millions of Americans who won’t shell out $8 to see a not-so-great flick in the theater will happily spend $3 or $4 to watch the same movie on their twenty-seven-inch TV. In 2002, Americans spent $1 billion more on video rentals than on movies in the theaters. That year, the most popular video rental in the country was Don’t Say a Word, a Michael Douglas thriller that earned a mediocre $55 million at the box office. Clearly, there were lots of people who thought Don’t Say a Word wasn’t worth $9 but was worth $4, which suggests that there is a lot of cash being spent at Blockbuster that theater owners could be claiming instead.

Theater owners also worry that marking down movies would confuse customers and alienate the movie studios, which don’t want their products priced as if they’re second-rate. Since theaters have to cut separate deals every time they want to show a movie, keeping the studios happy is important. But whether a studio is willing to admit that its movie is second-rate has no impact on its second-rateness. And if annoying a few studio execs is the price of innovation, one would think theater chains would be willing to pay it. After all, fashion designers are presumably annoyed when they see their suits and dresses marked down 50 percent during a Saks Fifth Avenue sale. But Saks still does it, as do Nordstrom and Barneys, and the designers still do business with them.

In the end, though, economic arguments may not be enough to get the theaters to abandon the one-price-fits-all model—a model that the theaters themselves discard when it comes to the difference between showing a movie during the day and seeing one at night (matinees are cheaper than evening shows), but that they cling to when it comes to the difference between Finding Nemo and Gigli (for which they charge the same price). The theaters’ unwillingness to change is less a well-considered approach to profit maximization than a testament to the power of custom and convention. Prices are uniform today because that’s how they were done back in the days when Hollywood made two different kinds of movies: top-of-the-line features and B movies. Those films played in different kinds of theaters at different times, and where people lived and when they saw a movie affected how much they paid. But tickets to all A-list movies cost the same (with the occasional exception, actually, of a big event film, like My Fair Lady, which played in theaters with reserved seating and cost more). Today, there are no B movies. Every film a studio puts out is considered top-of-the-line, so they’re all priced the same. It is true that this ensures customers remain unconfused. But as the economists Liran Einav and Barak Orbach have written, it also means that movie theaters “deny the law of supply and demand.” They’ve uncoordinated themselves with moviegoers.

Chapter Five, Part IV

Culture also enables coordination in a different way, by establishing norms and conventions that regulate behavior. Some of these norms are explicit and bear the force of law. We drive on the right-hand side of the road because it’s easier to have a rule that everyone follows rather than to have to play the guessing game with oncoming drivers. Bumping into a fellow pedestrian at the crosswalk is annoying, but smashing into an oncoming Mercedes-Benz is quite another thing. Most norms are longstanding, but it also seems possible to create new forms of behavior quickly, particularly if doing so solves a problem. The journalist Jonathan Rauch, for instance, relates this story about an experience Schelling had while teaching at Harvard: “Years ago, when he taught in a second-floor classroom at Harvard, he noticed that both of the building’s two narrow stairwells—one at the front of the building, the other at the rear—were jammed during breaks with students laboriously jostling past one another in both directions. As an experiment, one day he asked his 10:00 AM class to begin taking the front stairway up and the back one down. ‘It took about three days,’ Schelling told me, ‘before the nine o’clock class learned you should always come up the front stairs and the eleven o’clock class always came down the back stairs’—without, so far as Schelling knew, any explicit instruction from the ten o’clock class. ‘I think they just forced the accommodation by changing the traffic pattern,’ Schelling said.” Here again, someone could have ordered the students to change their behavior, but a slight tweak allowed them to reach the good solution on their own, without forcing anyone to do anything.

Conventions obviously maintain order and stability. Just as important, though, they reduce the amount of cognitive work you have to put in to get through the day. Conventions allow us to deal with certain situations without thinking much about them, and when it comes to coordination problems in particular, they allow groups of disparate, unconnected people to organize themselves with relative ease and an absence of conflict.

Consider a practice that’s so basic that we don’t even think of it as a convention: first-come, first-served seating in public places. Whether on the subway or a bus or in a movie theater, we assume that the appropriate way to distribute seats is according to when people arrive. A seat belongs, in some sense, to the person occupying it. (In fact, in some places—like movie theaters—as long as a person has established his or her ownership of a seat, he or she can leave it, at least for a little while, and be relatively sure no one will take it.)

This is not necessarily the best way to distribute seats. It takes no account, for instance, of how much a person wants to sit down. It doesn’t ensure that people who would like to sit together will be able to. And it makes no allowances—in its hard-and-fast form—for mitigating factors like age or illness. (In practice, of course, people do make allowances for these factors, but only in some places. People will give up a seat on the subway to an elderly person, but they’re unlikely to do the same with a choice seat in a movie theater, or with a nice spot on the beach.) We could, in theory, take all these different preferences into account. But the amount of work it would require to figure out any ideal seating arrangement would far outweigh whatever benefit we would derive from a smarter allocation of seats. And, in any case, flawed as the first-come, first-served rule may be, it has a couple of advantages. To begin with, it’s easy. When you get on a subway, you don’t have to think strategically or worry about what anyone else is thinking. If there’s an open seat and you want to sit down, you take it. Otherwise you stand. Coordination happens almost without anyone thinking about it. And the convention allows people to concentrate on other, presumably more important things. The rule doesn’t need coercion to work, either. And since people get on and off the train randomly, everyone has as good a chance of finding a seat as anyone else.

Still, if sitting down really matters to you, there’s no law preventing you from trying to circumvent the convention by, for instance, asking someone to give up his seat. So in the 1980s, the social psychologist Stanley Milgram decided to find out what would happen if you did just that. Milgram suggested to a class of graduate students that they ride the subway and simply ask people, in a courteous but direct manner, if they could have their seats. The students laughed the suggestion away, saying things like, “A person could get killed that way.” But one student agreed to be the guinea pig. Remarkably, he found that half of the people he asked gave up their seats, even though he provided no reason for his request.

This was so surprising that a whole team of students fanned out on the subway, and Milgram himself joined in. They all reported similar results: about half the time, just asking convinced people to give up their seat. But they also discovered something else: the hard part of the process wasn’t convincing the people, it was mustering the courage to ask them in the first place. The graduate students said that when they were standing in front of a subject, “they felt anxious, tense, and embarrassed.” Much of the time, they couldn’t even bring themselves to ask the question and they just moved on. Milgram himself described the whole experience as “wrenching.” The norm of first-come, first-served was so ingrained that violating it required real labor.

The point of Milgram’s experiment, in a sense, was that the most successful norms are not just externally established and maintained. The most successful norms are internalized. A person who has a seat on the subway doesn’t have to defend it or assert her right to the seat because, for the people standing, it would be more arduous to contest that right.

Even if internalization is crucial to the smooth workings of conventions, it’s also the case that external sanctions are often needed. Sometimes, as in the case of traffic rules, those sanctions are legal. But usually the sanctions are more informal, as Milgram discovered when he studied what happened when people tried to cut into a long waiting line. Once again, Milgram sent his intrepid graduate students out into the world, this time with instructions to jump lines at offtrack betting parlors and ticket counters. About half the time the students were able to cut the line without any problems. But in contrast to the subway—where, when people refused to give up their seat, they generally just said no or simply refused to answer—when people did try to stop the line cutting, their reaction was more vehement. Ten percent of the time they took some kind of physical action, sometimes going so far as to shove the intruder out of the way (though usually they just tapped or pulled on the intruder’s shoulder). About 25 percent of the time they verbally protested and refused to let the jumper in. And 15 percent of the time the intruder just got dirty looks and hostile stares.

Interestingly, the responsibility for dealing with the intruder fell clearly on the shoulders of the person in front of whom the intruder had stepped. Everyone in line behind the intruder suffered when he cut the line, and people who were two or three places behind him would sometimes speak up, but in general the person who was expected to act was the one who was closest to the newcomer. (Closest, but behind: people in front of the intruder rarely said anything.) Again, this was not a formal rule, but it made a kind of intuitive sense. Not only did the person immediately behind the intruder suffer most from the intrusion, but it was also easiest for him to make a fuss without disrupting the line as a whole.

That fear of disruption, it turns out, has a lot to do with why it’s easier to cut a line, even in New York, than you might expect. Milgram, for one, argued that the biggest impediment to acting against line jumpers was the fear of losing one’s place in line. The line is, like the first-come, first-served rule, a simple but effective mechanism for coordinating people, but its success depends upon everyone’s willingness to respect the line’s order. Paradoxically, this sometimes means letting people jump in front rather than risk wrecking the whole queue. That’s why Milgram saw an ability to tolerate line jumpers as a sign of the resilience of a queue, rather than of its weakness.

A queue is, in fact, a good way of coordinating the behavior of individuals who have gathered in a single location in search of goods or a service. The best queues assemble everyone who’s waiting into a single line, with the person at the head of the line being served first. The phalanx, which you often see in supermarkets, with each checkout counter having its own line, is by contrast a recipe for frustration. Not only do the other lines always seem shorter than the one you’re in—which there’s a good chance they are, since the fact that you’re in this line, and not that one, makes it likely that this one is longer—but studies of the way people perceive traffic speed suggest that you’re likely to do a bad job of estimating how fast your line is moving relative to everyone else’s. The phalanx also makes people feel responsible for the speed with which they check out, since it’s possible that if they’d picked a different line, they would have done better. As with strategizing about the subway seat, this is too much work relative to the payoff. The single-file queue does have the one disadvantage of being visually more intimidating than the phalanx (since everyone’s packed into a single line), but on average everyone will be served faster in a single queue. If there’s an intelligent way to wait in line, that’s it. (One change to the convention that would make sense would be to allow people to sell their places in line, since that would let the placeholders trade their time for money—a good trade for them—and people with busy jobs trade money for time—also a good trade. But this would violate the egalitarian ethos that governs the queue.)
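The claim that a single serpentine line beats the per-counter phalanx is easy to check with a small simulation. The sketch below is a simplified model (the parameters are illustrative, and it assumes Poisson arrivals, exponential service times, and that phalanx customers pick a counter blindly and never switch):

```python
import random

def simulate(n_customers=20000, n_servers=3, arrival_rate=2.7,
             service_rate=1.0, single_queue=True, seed=0):
    """Average wait under one shared line vs. one line per counter."""
    rng = random.Random(seed)
    # Poisson arrivals: exponential gaps between successive customers.
    t, arrivals = 0.0, []
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    free = [0.0] * n_servers  # time at which each server next becomes idle
    total_wait = 0.0
    for arrive in arrivals:
        if single_queue:
            # Head of the shared line takes whichever server frees up first.
            k = min(range(n_servers), key=lambda i: free[i])
        else:
            # Phalanx: pick a counter at random and never switch.
            k = rng.randrange(n_servers)
        start = max(arrive, free[k])
        total_wait += start - arrive
        free[k] = start + rng.expovariate(service_rate)
    return total_wait / n_customers

print(f"single line: {simulate(single_queue=True):.2f}")
print(f"separate lines: {simulate(single_queue=False):.2f}")
```

With these rates the counters are about 90 percent busy, and the shared line’s average wait comes out several times shorter than the phalanx’s; the gap narrows as load falls. Letting customers join the shortest line, or jockey between lines, would shrink the difference, which is why the blind-choice assumption should be kept in mind.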

At the beginning of this chapter, I suggested that in liberal societies authority had only limited reach over the way citizens dealt with each other. In authority’s stead, certain conventions—voluntarily enforced, as Milgram showed, by ordinary people—play an essential role in helping large groups of people to coordinate their behavior with each other without coercion, and without requiring too much thought or labor. It would seem strange to deny that there is a wisdom in that accomplishment, too.

Chapter Five, Part III

In 1958, the social scientist Thomas C. Schelling ran an experiment with a group of law students from New Haven, Connecticut. He asked the students to imagine this scenario: You have to meet someone in New York City. You don’t know where you’re supposed to meet, and there’s no way to talk to the other person ahead of time. Where would you go?

This seems like an impossible question to answer well. New York is a very big city, with lots of places to meet. And yet a majority of the students chose the very same meeting place: the information booth at Grand Central Station. Then Schelling complicated the problem a bit. You know the date you’re supposed to meet the other person, he said. But you don’t know what time you’re supposed to meet. When will you show up at the information booth? Here the results were even more striking. Just about all the students said they would show up at the stroke of noon. In other words, if you dropped two law students at either end of the biggest city in the world and told them to find each other, there was a very good chance that they’d end up having lunch together.

Schelling replicated this outcome in a series of experiments in which an individual’s success depended on how well he coordinated his response with those of others. For instance, Schelling paired people up and asked them to name either “heads” or “tails,” with the goal being to match what their partners said. Thirty-six of forty-two people named “heads.” He set up a box of sixteen squares, and asked people to check one box (you got paid if everyone in the group checked the same box). Sixty percent checked the top left box. Even when the choices were seemingly infinite, people did a pretty good job of coordinating themselves. For instance, when asked the question: “Name a positive number,” 40 percent of the students chose “one.”
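The heads/tails result can be turned into a quick calculation. If a fraction p of people name “heads,” two independently choosing strangers match with probability p² + (1 − p)². A sketch, using Schelling’s reported 36 of 42:

```python
def match_probability(p):
    """Chance that two independent players name the same side,
    when each says 'heads' with probability p."""
    return p * p + (1 - p) * (1 - p)

p = 36 / 42                   # Schelling's students choosing "heads"
focal = match_probability(p)  # ~0.755
coin = match_probability(0.5)  # 0.5: no focal point, pure chance
print(f"with focal point: {focal:.3f}, without: {coin:.3f}")
```

Convergence on the focal point lifts the match rate from 50 percent to about 75 percent. (Treating the pairs as independent choosers is an idealization of the experiment, but it shows why even partial convergence pays off.)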

How were the students able to do this? Schelling suggested that in many situations, there were salient landmarks or “focal points” upon which people’s expectations would converge. (Today these are known as “Schelling points.”) Schelling points are important for a couple of reasons. First, they show that people can find their way to collectively beneficial results not only without centralized direction but also without even talking to each other. As Schelling wrote, “People can often concert their intentions and expectations with others if each knows that the other is trying to do the same.” This is a good thing because conversation isn’t always possible, and with large groups of people in particular it can be difficult or inefficient. (Howard Rheingold’s book Smart Mobs, though, makes a convincing case that new mobile technologies—from cell phones to mobile computing—make it much easier for large collections of people to communicate with each other and so coordinate their activities.) Second, the existence of Schelling points suggests that people’s experiences of the world are often surprisingly similar, which makes successful coordination easier. After all, it would not be possible for two people to meet at Grand Central Station unless Grand Central represented roughly the same thing to both of them. The same is obviously true of the choice between “heads” and “tails.” The reality Schelling’s students shared was, of course, cultural. If you put pairs of people from Manchuria down in the middle of New York City and told them to meet each other, it’s unlikely any of them would manage to meet. But the fact that the shared reality is cultural makes it no less real.

Chapter Five, Part II

Consider, to begin with, this problem. There’s a local bar that you like. Actually, it’s a bar that lots of people like. The problem with the bar is that when it’s crowded, no one has a good time. You’re planning on going to the bar Friday night. But you don’t want to go if it’s going to be too crowded. What do you do?

To answer the question, you need to assume, if only for the sake of argument, that everyone feels the way you do. In other words, the bar is fun when it’s not crowded, but miserable when it is. As a result, if everyone thinks the bar will be crowded on Friday night, then few people will go. The bar, therefore, will be empty, and anyone who goes will have a good time. On the other hand, if everyone thinks the bar won’t be crowded, everyone will go. Then the bar will be packed, and no one will have a good time. (This problem was captured perfectly, of course, by Yogi Berra, when he said of Toots Shor’s nightclub: “No one goes there anymore. It’s too crowded.”) The trick, of course, is striking the right balance, so that every week enough—but not too many—people go.

There is, of course, an easy solution to this problem: just invent an all-powerful central planner—a kind of uber-doorman—who tells people when they can go to the bar. Every week the central planner would issue his dictate, banning some, allowing others in, thereby ensuring that the bar was full but never crowded. Although this solution makes sense in theory, it would be intolerable in practice. Even if central planning of this sort were possible, it would represent too great an interference with freedom of choice. We want people to be able to go to a bar if they want, even if it means that they’ll have a bad time. Any solution worth talking about has to respect people’s right to choose their own course of action, which means that it has to emerge out of the collective mix of all the potential bargoers’ individual choices.

In the early 1990s, the economist Brian Arthur tried to figure out whether there really was a satisfying solution to this problem. He called the problem the “El Farol problem,” after a local bar in Santa Fe that sometimes got too crowded on nights when it featured Irish music. Arthur set up the problem this way: If El Farol is less than 60 percent full on any night, everyone there will have fun. If it’s more than 60 percent full, no one will have fun. Therefore, people will go only if they think the bar will be less than 60 percent full; otherwise, they stay home.

How does each person decide what to do on any given Friday? Arthur’s suggestion was that since there was no obvious answer, no solution you could deduce mathematically, different people would rely on different strategies. Some would just assume that the same number of people would show up at El Farol this Friday as showed up last Friday. Some would look at how many people showed up the last time they’d actually been in the bar. (Arthur assumed that even if you didn’t go yourself, you could find out how many people had been in the bar.) Some would use an average of the last few weeks. And some would assume that this week’s attendance would be the opposite of last week’s (if it was empty last week, it’ll be full this week).

What Arthur did next was run a series of computer experiments designed to simulate attendance at El Farol over the period of one hundred weeks. (Essentially, he created a group of computer agents, equipped them with the different strategies, and let them go to work.) Because the agents followed different strategies, Arthur found, the number who ended up at the bar fluctuated sharply from week to week. The fluctuations weren’t regular, but were random, so that there was no obvious pattern. Sometimes the bar was more than 60 percent full three or four weeks in a row, while other times it was less than 60 percent full four out of five weeks. As a result, there was no one strategy that a person could follow and be sure of making the right decision. Instead, strategies worked for a while and then had to be tossed away.
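Arthur’s setup is easy to reproduce in miniature. The sketch below is a simplified version of his experiment, not his exact model: the strategy pool, population size, and made-up opening weeks are my own assumptions, and each agent is fixed to a single forecasting rule, going only if it predicts attendance under the threshold.

```python
import random

def el_farol(n_agents=100, threshold=60, weeks=100, seed=1):
    rng = random.Random(seed)
    # Invented attendance figures to seed the first predictions.
    history = [rng.randrange(n_agents + 1) for _ in range(4)]
    # A small pool of forecasting rules, in the spirit of Arthur's predictors.
    strategies = [
        lambda h: h[-1],            # same as last week
        lambda h: sum(h[-4:]) / 4,  # average of the last four weeks
        lambda h: n_agents - h[-1],  # mirror of last week
        lambda h: h[-2],            # same as two weeks ago
    ]
    agents = [rng.choice(strategies) for _ in range(n_agents)]
    attendance = []
    for _ in range(weeks):
        going = sum(1 for predict in agents if predict(history) < threshold)
        attendance.append(going)
        history.append(going)
    return attendance

weeks = el_farol()
print("mean attendance:", sum(weeks) / len(weeks))
```

Even in this toy version no single rule stays right for long: each rule’s success depends on how many others are using it, so attendance tends to bounce irregularly around the threshold rather than settle, which is the qualitative behavior Arthur reported.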

The fluctuations in attendance meant that on some Friday nights El Farol was too crowded for anyone to have fun, while on other Fridays people stayed home who, had they gone to the bar, would have had a good time. What was remarkable about the experiment, though, was this: during those one hundred weeks, the bar was—on average—exactly 60 percent full, which is precisely what the group as a whole wanted it to be. (When the bar is 60 percent full, the maximum number of people possible are having a good time, and no one is having a bad time.) In other words, even in a case where people’s individual strategies depend on each other’s behavior, the group’s collective judgment can be good.

A few years after Arthur first formulated the El Farol problem, engineers Ann M. Bell and William A. Sethares took a different approach to solving it. Arthur had assumed that the would-be bargoers would adopt diverse strategies in trying to anticipate the crowd’s behavior. Bell and Sethares’s bargoers, though, all followed the same strategy: if their recent experiences at the bar had been good, they went. If their recent experiences had been bad, they didn’t.

Bell and Sethares’s bargoers were therefore much less sophisticated than Arthur’s. They didn’t worry much about what the other bargoers might be thinking, and they did not know—as Arthur’s bargoers did—how many people were at El Farol on the nights when they didn’t show up. All they really knew was whether they’d recently enjoyed themselves at El Farol or not. If they’d had a good time, they wanted to go back. If they’d had a bad time, they didn’t. You might say, in fact, that they weren’t worrying about coordinating their behavior with the other bargoers at all. They were just relying on their feelings about El Farol.

Unsophisticated or not, this group of bargoers produced a different solution to the problem than Arthur’s bargoers did. After a certain amount of time had passed—giving each bargoer the experience he needed to decide whether to go back to El Farol—the group’s weekly attendance settled in at just below 60 percent of the bar’s capacity, just a little bit worse than that ideal central planner would have done. In looking only to their own experience, and not worrying about what everyone else was going to do, the bargoers came up with a collectively intelligent answer, which suggests that even when it comes to coordination problems, independent thinking may be valuable.
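A minimal sketch of this second approach (my own simplification, not Bell and Sethares’s actual learning rule): each agent keeps a private score that is nudged up after an enjoyable night and down after a crowded one, and stay-at-homes drift slowly back toward giving the bar another try.

```python
import random

def experience_only(n_agents=100, capacity=60, weeks=300, seed=2):
    """Each agent consults only its own past nights out, never the crowd."""
    rng = random.Random(seed)
    score = [rng.uniform(-1, 1) for _ in range(n_agents)]  # inclination to go
    attendance = []
    for _ in range(weeks):
        went = {i for i in range(n_agents) if score[i] > 0}
        good_night = len(went) <= capacity
        attendance.append(len(went))
        for i in range(n_agents):
            if i in went:
                # Reinforce a fun night, sour on a crowded one.
                score[i] += 0.1 if good_night else -0.1
            else:
                score[i] += 0.02  # slow itch to try the bar again
    return attendance

att = experience_only()
print("mean attendance, final 150 weeks:", sum(att[150:]) / 150)
```

One thing worth inspecting in a run like this is whether the scores split the population into entrenched regulars and habitual stayers, the “Cheers” effect that emerged in the original experiment; the exact numbers here depend on the update sizes, which are assumptions.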

There was, though, a catch to the experiment. The reason the group’s weekly attendance was so stable was that the group quickly divided itself into people who were regulars at El Farol and people who went only rarely. In other words, El Farol started to look a lot like Cheers. Now, this wasn’t a bad solution. In fact, from a utilitarian perspective (assuming everyone derived equal pleasure from going to the bar on any given night), it was a perfectly good one. More than half the people got to go to El Farol nearly every week, and they had a good time while they were there (since the bar was only rarely crowded). And yet it’d be hard to say that it was an ideal solution, since a sizable chunk of the group rarely went to the bar and usually had a bad time when they did.

The truth is that it’s not really obvious (at least not to me) which solution—Arthur’s or Sethares and Bell’s—is better, though both of them seem surprisingly good. This is the nature of coordination problems: they are very hard to solve, and coming up with any good answer is a triumph. When what people want to do depends on what everyone else wants to do, every decision affects every other decision, and there is no outside reference point that can stop the self-reflexive spiral. When Francis Galton’s fairgoers made their guesses about the ox’s weight, they were trying to evaluate a reality that existed outside the group. When Arthur’s computer agents made their guesses about El Farol, though, they were trying to evaluate a reality that their own decisions would help construct. Given those circumstances, getting even the average attendance right seems miraculous.

Chapter Five, Part I

No one has ever paid more attention to the streets and sidewalks of New York City than William H. Whyte. In 1969, Whyte—the author of the sociological classic The Organization Man—got a grant to run what came to be known as the Street Life Project, and spent much of the next sixteen years simply watching what New Yorkers did as they moved through the city. Using time-lapse cameras and notebooks, Whyte and his group of young research assistants compiled a remarkable archive of material that helped explain how people used parks, how they walked on busy sidewalks, and how they handled heavy traffic. Whyte’s work, which was eventually published in his book City, was full of fascinating ideas about architecture, urban design, and the importance to a city of keeping street life vibrant. It was also a paean to the urban pedestrian. “The pedestrian is a social being,” Whyte wrote. “He is also a transportation unit, and a marvelously complex and efficient one.” Pedestrians, Whyte showed, were able, even on crowded sidewalks, to move surprisingly fast without colliding with their neighbors; in fact, they were often at their best when the crowds were at their biggest. “The good pedestrian,” Whyte wrote, “usually walks slightly to one side, so that he is looking over the shoulder of the person ahead. In this position he has the maximum choice and the person ahead is in a sense running interference for him.”

New Yorkers mastered arts like “the simple pass,” which involved slowing ever so slightly in order to avoid a collision with an oncoming pedestrian. They platooned at crosswalks as a protection against traffic. In general, Whyte wrote, “They walk fast and they walk adroitly. They give and they take, at once aggressive and accommodating. With the subtlest of motions they signal their intentions to one another.” The result was that “At eye level, the scene comes alive with movement and color—people walking quickly, walking slowly, skipping up steps, weaving in and out in crossing patterns, accelerating and retarding to match the moves of others. There is a beauty that is beguiling to watch.”

What Whyte saw—and made us see—was the beauty of a well-coordinated crowd, in which lots of small, subtle adjustments in pace and stride and direction add up to a relatively smooth and efficient flow. Pedestrians are constantly anticipating each other’s behavior. No one tells them where or when or how to walk. Instead, they all decide for themselves what they’ll do based on their best guess of what everyone else will do. And somehow it usually works out well. There is a kind of collective genius at work here.

It is, though, a different kind of genius from the one represented by the NFL point spread or Google. The problem that a crowd of pedestrians is “solving” is fundamentally different from a problem like “Who will win the Giants—Rams game, and by how much?” The pedestrian problem is an example of what are usually called coordination problems. Coordination problems are ubiquitous in everyday life. What time should you leave for work? Where do we want to eat tonight? How do we meet our friends? How do we allocate seats on the subway? These are all coordination problems. So, too, are many of the fundamental questions that any economic system has to answer: Who will work where? How much should my factory produce? How can we make sure that people get the goods and services they want? What these problems have in common is that to solve them, a person has to think not only about what he believes the right answer is but also about what other people think the right answer is. And that’s because what each person does affects and depends on what everyone else will do, and vice versa.

One obvious way of coordinating people’s actions is via authority or coercion. An army goose-stepping in a parade is, after all, very well-coordinated. So, too, are the movements of workers on an old-fashioned assembly line. But in a liberal society, authority (which includes laws or formal rules) has only limited reach over the dealings of private citizens, and that seems to be how most Americans like it. As a result many coordination problems require bottom-up, not top-down, solutions. And at the heart of all of them is the same question: How can people voluntarily—that is, without anyone telling them what to do—make their actions fit together in an efficient and orderly way?

It’s a question without an easy answer, though this does not mean that no answer exists. What is true is that coordination problems are less amenable to clear, definitive solutions than are many of the problems we’ve already considered. Answers, when they can be found, are often good rather than optimal. And those answers also often involve institutions, norms, and history, factors that both shape a crowd’s behavior and are also shaped by it. When it comes to coordination problems, independent decision making (that is, decision making which doesn’t take the opinions of others into account) is pointless—since what I’m willing to do depends on what I think you’re going to do, and vice versa. As a result, there’s no guarantee that groups will come up with smart solutions. What’s striking, though, is just how often they do.

The Man on the Spot

Nobel Laureate Friedrich von Hayek was a strong advocate of the importance of tacit knowledge. This is best explained by this short extract from his 1945 article The Use of Knowledge in Society:
This is, perhaps, also the point where I should briefly mention the fact that the sort of knowledge with which I have been concerned is knowledge of the kind which by its nature cannot enter into statistics and therefore cannot be conveyed to any central authority in statistical form. The statistics which such a central authority would have to use would have to be arrived at precisely by abstracting from minor differences between the things, by lumping together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision. It follows from this that central planning based on statistical information by its nature cannot take direct account of these circumstances of time and place and that the central planner will have to find some way or other in which the decisions depending on them can be left to the "man on the spot."

Times have changed since Hayek wrote this, and most of the centrally planned economies of which he speaks have failed. However, in a sense we have swapped one kind of planned coordination for another. In the middle of the 20th century whole countries were organised using central planning. Now we have corporations acting as coordination mechanisms, with most of their activity controlled and planned by central management. Of course, central management is likely to fall foul of the same knowledge problem that central planners faced.
Central planning has failed as a way to organise economies, but a central structure has not stopped corporations from getting larger and larger. If Wal-mart were a country it would be in the top 30 economies in the world ranked by GDP, ahead of countries such as Austria, Argentina and Indonesia, and it would rank as China's 8th largest trading partner. Central planning didn't work out too well for countries but doesn't seem to be doing too badly for companies. In 2006 Wal-mart reported profits of $12 billion on sales of $350 billion.

How has Wal-mart been so successful while avoiding the problem of statistics that "lump together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision"? A recent post by journalist Charles Platt on the boingboing.net blog provides a great insight.

Platt took a minimum wage job at Wal-mart to see if the commonly held beliefs about the company were true. He found that they were largely untrue and that the reputation was unwarranted. That is not our focus. His piece gives us this gem on tacit knowledge and "the man on the spot".
My standard equipment included a handheld bar-code scanner which revealed the in-store stock and nearest warehouse stock of every item on the shelves, and its profit margin. At the branch where I worked, all the lowest-level employees were allowed this information and were encouraged to make individual decisions about inventory. One of the secrets to Wal-Mart’s success is that it delegates many judgment calls to the sales-floor level, where employees know first-hand what sells, what doesn’t, and (most important) what customers are asking for.
That sums it up perfectly.