Can you predict the future? Do you have what it takes to be
a “superforecaster”? While my answer to the first question is a clear No, I’m
intrigued by the second question. This is the premise of a new book by Philip
Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction. In the mid-1980s, Tetlock devised a
methodology and started a long experiment to get a first approximation of how
accurate people are at forecasting. He asked for specific predictions (not too
easy, but not too difficult) on global political issues. The results were
published in 2005 in a treatise titled Expert
Political Judgment: How Good Is It? How Can We Know? (abbreviated EPJ).
While Tetlock tried to corral as many “experts” as possible to
participate, the most famous pundits declined. This is perhaps not surprising –
very high-profile experts did not really want their expertise tested. The
results are now well-known and oft-quoted (out of context and misinterpreted in
many cases): “The average expert was roughly as accurate as a dart-throwing
chimpanzee.” (This got a lot of press at the time.) What is often left out is
that there were “two statistically distinguishable groups of experts.” One
group did not do better than random guessing (actually slightly worse). The
other did, but not by a large margin. The authors write: “Why did one group do
better than the other? It wasn’t whether they had PhDs or access to classified
information. Nor was it what they thought
– whether they were liberals or conservatives, optimists or pessimists. It was how they thought.”
The authors define these two groups following the
philosopher Isaiah Berlin (and an ancient Greek poet) as foxes and hedgehogs:
“The fox knows many things but the hedgehog knows one big thing.” The foxes are
“eclectic” experts, while the hedgehogs are “Big Idea” experts. In the EPJ
results, foxes beat hedgehogs. More importantly, they did it in the two key
areas (calibration and resolution) that the authors define in their text (with
useful graphs). The problem is that the hedgehog “knows one big thing… and uses [it] over and over when trying to
figure out what happens next.” They liken it to the green-tinted glasses visitors had to wear in the Emerald City of Oz. Sometimes the glasses can accentuate a feature that might otherwise be missed, but more often than not, they distort reality.
Even worse, acquiring more information doesn’t help and even “increases
confidence… but not accuracy.” The EPJ results were very telling. Hedgehog
experts were actually less accurate in their area of expertise.
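As an aside, the two scoring dimensions Tetlock uses, calibration and resolution, can be made concrete through the standard Murphy decomposition of the Brier score. The sketch below is my own illustration in Python, not code or data from the book; the function name and example numbers are mine.

```python
# Murphy decomposition of the Brier score:
#   BS = reliability - resolution + uncertainty
# reliability (calibration): do events called "80% likely" happen ~80% of the time?
# resolution: how far do a forecaster's stated probabilities move away from the base rate?
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """forecasts: stated probabilities in [0, 1]; outcomes: 0 or 1.
    Forecasts are binned by their exact stated probability."""
    n = len(forecasts)
    bins = defaultdict(list)  # stated probability -> list of observed outcomes
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)
    base_rate = sum(outcomes) / n
    # Calibration penalty: gap between stated probability and observed frequency.
    reliability = sum(
        len(v) * (f - sum(v) / len(v)) ** 2 for f, v in bins.items()
    ) / n
    # Resolution reward: how far each bin's observed frequency sits from the base rate.
    resolution = sum(
        len(v) * (sum(v) / len(v) - base_rate) ** 2 for f, v in bins.items()
    ) / n
    # Uncertainty: irreducible variance of the events themselves.
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty
```

For ten hypothetical forecasts (five at 0.8 that come true 4 times in 5, five at 0.2 that come true once), reliability is 0 (perfect calibration), resolution is 0.09, and the overall Brier score is 0.16. Lower reliability and higher resolution are both better, which matches the book's point: a fox can be well calibrated *and* still commit to probabilities away from 50/50 when the evidence warrants it.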
Perhaps this is why high-profile experts did not want to
participate in EPJ. In fact, the results “revealed an inverse correlation
between fame and accuracy: the more famous an expert was, the less accurate he
was. That’s not because editors, producers, and the public go looking for bad
forecasters. They go looking for hedgehogs, who just happen to be bad
forecasters. Animated by a Big Idea, hedgehogs tell tight, simple, clear
stories that grab and hold audiences. As anyone who has done media training
knows, the first rule is ‘keep it simple,
stupid.’ Better still, hedgehogs are confident...” This reminds me of an
excellent presentation I attended by science communicator Randy Olson (I mentioned his book briefly in a post some time back). It can be quite challenging to communicate nuanced
positions, but there are key things you can do in the narrative to hold the
attention of the audience or at least keep them interested.
This also connects closely with my most recent post on
Ambiguity. Tetlock and Gardner continue: “With their one-perspective analysis,
hedgehogs can pile up reasons why they are right without considering other
perspectives and the pesky doubts they may raise… For many audiences, that’s
satisfying. People tend to find uncertainty disturbing… The simplicity of the
hedgehog impairs foresight, but it calms nerves – which is good for the careers
of hedgehogs.”
On the other hand, “Foxes don’t fare so well in the media.
They’re less confident… and are likelier to settle on shades of ‘maybe’. And their stories are complex,
full of ‘howevers’ and ‘on the other hands’… This aggregation
of many perspectives is bad TV. But it’s good forecasting. Indeed, it’s
essential.” This is scary. Especially if you consider what you see in today’s
political circus. I’ve noticed that many of my students have a narrow view of science classes as being about finding the “correct” answer. They get
distressed if they aren’t making progress towards that goal. “Is this right?”
is the most common question I get in office hours, after I help point a student
in a direction that may help them “solve” the problem. I’m hoping that one
takeaway from my class is that scientific inquiry is particularly interesting
and helpful in studying complex problems. You need to aggregate data from a
variety of sources to make headway, and even then there are so many unknowns.
I’d never really thought carefully about whether I’m a fox or a
hedgehog. I’ve only reflected on this for about a day, but I can see that in
some areas, I’m definitely fox-like, but in others I’m a hedgehog. In fact my
thinking can be quite compartmentalized in different situations depending on
what I’m dealing with. Sometimes hedgehog behaviors are helpful, but the data
do show that forecasting is not one of them. Hopefully I can help my students
unleash their inner fox!
P.S. For some reason this makes me think of Zootopia, which I watched last weekend. It is excellent and highly recommended.