The average can be misleading
There’s a frequent misconception that the best way to aggregate judgments from a crowd is to take the average. The average can be fine for some sorts of judgments, primarily when people’s judgments are all on the same scale, but when people’s judgments are on vastly different scales, the average can be misleading.
Suppose that I asked three people the question: how many miles of road are there in the United States? The first answers 500,000, the second answers 5 million, and the third answers 50 million. The average of those three guesses is more than 18 million, and that average is most influenced by the very large guess. But the median, or middle guess, is only 5 million, which is much closer to the true answer of about 4 million miles of road.
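The arithmetic can be checked in a few lines (a minimal sketch using the three guesses from the example):

```python
import statistics

# The three road-mileage guesses from the example, in miles.
guesses = [500_000, 5_000_000, 50_000_000]

mean = sum(guesses) / len(guesses)
median = statistics.median(guesses)  # the middle of the three guesses

print(f"mean:   {mean:,.0f}")    # 18,500,000 — dominated by the largest guess
print(f"median: {median:,.0f}")  # 5,000,000 — close to the true ~4,000,000
```

The mean is pulled far upward by the one guess on a much larger scale, while the median stays near the bulk of the answers.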
For many real-world judgments, people might not even guess the right order of magnitude. They can be off by factors of 10 or more, so the median, or middle guess, usually provides a better estimate of the true value being judged. Another misconception is that it’s good to have a crowd of experts. If you look back at Galton’s ox-judging contest, those fairgoers were indeed pretty expert: about 60% of their guesses came within 50 pounds of the correct weight.
Factors influencing the wisdom of crowds
I suspect that if you took 21st-century Americans, most of them wouldn’t do anywhere near as well. But expertise isn’t as important as other factors:
1. The first is pretty simple: the size of the crowd matters, up to a point. As the group size increases from just a few people to a few dozen people to hundreds of people, the accuracy of the crowd’s judgment usually increases. It’s often better to have a large crowd of non-experts than only a few experts, but once the crowd gets sufficiently large there are diminishing returns, just as we saw in our discussion of sample size in opinion polls in a previous article.
2. But there’s a second factor, and it’s the primary reason for the wisdom of crowds: diversity. Diversity has become a loaded term, one that carries social and political baggage. I’ll use diversity in this article to mean something very simple: the degree to which different people approach a decision in different ways. Diversity isn’t absolute; it is defined with regard to something.
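The crowd-size effect, and its diminishing returns, can be illustrated with a toy simulation (the true value, noise distribution, and crowd sizes here are assumptions chosen for illustration, not data from the article):

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 1000.0  # hypothetical quantity the crowd is estimating

def median_error(crowd_size, trials=2000):
    """Average absolute error of the crowd's median guess."""
    total = 0.0
    for _ in range(trials):
        # Each guesser is noisy but centered on the truth in log space,
        # so individual guesses can be off by large multiplicative factors.
        guesses = [TRUE_VALUE * random.lognormvariate(0, 0.5)
                   for _ in range(crowd_size)]
        total += abs(statistics.median(guesses) - TRUE_VALUE)
    return total / trials

for n in (3, 10, 100, 1000):
    print(f"crowd of {n:>4}: average error {median_error(n):.1f}")
```

Going from 3 to 10 guessers shrinks the median’s error substantially; going from 100 to 1,000 helps much less, mirroring the diminishing returns of larger samples.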
Aim for intellectual diversity
For many sorts of judgments and decisions we care about intellectual diversity, where people bring different information and different approaches to the same problem. Let me explain this in two ways: first with a simple example and then with something more complex. Suppose that you’re an economist trying to predict how much the US economy will grow over the next six months.
In its semiannual survey of economic forecasts, the Wall Street Journal collects and aggregates such predictions from about 50 leading economists and tracks those predictions over time to see who does best. Suppose that the Wall Street Journal surveyed only economists who studied consumer confidence. The predictions of those economists would tend to be correlated; that is, they might all prioritize the same information in making their predictions. They might all overestimate economic growth when consumer confidence is high and underestimate growth when confidence is low.
Without diversity of thought, people’s judgments tend to be correlated and the value of the crowd is lost. The aggregate prediction is much more likely to be accurate if the individual predictions are uncorrelated, that is, if the economists use different information and different models when making their predictions. For any given prediction, some economists might be too high and some might be too low, but the middle prediction would be more likely to be accurate.
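This effect can be sketched with a toy simulation (the growth rate, error distributions, and the simple correlation model are assumptions for illustration, not figures from the survey):

```python
import random
import statistics

random.seed(0)
TRUE_GROWTH = 2.0  # hypothetical "true" growth rate, in percent

def crowd_error(correlation, forecasters=50, trials=2000):
    """Average error of the median forecast.

    correlation=1.0: every forecaster shares the same error (no diversity).
    correlation=0.0: each forecaster errs independently (full diversity).
    """
    total = 0.0
    for _ in range(trials):
        shared = random.gauss(0, 1)  # error common to the whole crowd
        forecasts = []
        for _ in range(forecasters):
            own = random.gauss(0, 1)  # this forecaster's individual error
            err = correlation * shared + (1 - correlation) * own
            forecasts.append(TRUE_GROWTH + err)
        total += abs(statistics.median(forecasts) - TRUE_GROWTH)
    return total / trials

print(f"diverse crowd:    average error {crowd_error(0.0):.2f}")
print(f"correlated crowd: average error {crowd_error(1.0):.2f}")
```

When errors are independent, the highs and lows cancel and the median lands near the truth; when everyone shares the same error, averaging fifty forecasts is no better than asking one person.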
Being right might often be a sign of poor judgment
Of course this example is purposefully simplified; it would be very unlikely for all of the economists in the Wall Street Journal survey to use the same information. But I picked this example for a particular reason: the future of the economy is difficult to predict, and it’s not the case that the best prediction is made by the best predictor. A wonderful feature of the Wall Street Journal survey is that all the predictions are shown publicly, so anyone can track who was accurate one year and whether they remained accurate the next year.
One year a forecaster correctly predicted that interest rates would be unusually high when the rest of the crowd thought that they would be low. He was the most accurate predictor that year. Was he using different information than the rest? Yes. According to the Wall Street Journal, he had visited a jeans factory and seen the demand for $250 jeans. He based his inflation forecast on that visit, reasoning that consumers must have discretionary money if they were buying such expensive jeans.
It’s probably not a good idea to base one’s predictions about the entire US economy on such an idiosyncratic observation, but this example might not be that atypical. It turns out that when an individual was correct and the crowd wasn’t, that individual’s predictions tended to be much worse than average on other occasions. That is, being right when the crowd is wrong might often be a sign of poor judgment, not good judgment.