The last two weeks of the year never disappoint when it comes to predictions and forecasts. Everywhere we look, it seems someone has a prediction or wants one. The areas of interest span from equity prices and bond yields to mergers and acquisitions activity, from the inches of snow we’ll get to the outcome of next year’s mid-terms. Personally, I find the flood of forecasts, including my own, fun and entertaining, and I’ll enjoy looking back one year from now to see how mine panned out.
Like most, I’m no clairvoyant. I’m right on some forecasts and wrong on others. But I do stand in good company with my predictions – my accuracy is on par with that of a dart-throwing chimp.
Comparing my forecasting proficiency to that of our simian cousin may sound like an odd expression of confidence. One would think I could leverage my experience and expertise in financial technology, software, and investment banking to make accurate forecasts about future trends in those fields.
But it’s true…at least that’s what the data shows.
The analogy of the dart-throwing chimp to ‘chance’ isn’t my own. It’s attributable to Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, who used it to describe his seminal, longitudinal study on the accuracy of expert forecasting in Superforecasting: The Art and Science of Prediction (co-written with Dan Gardner, 2015). Tetlock’s study of the accuracy of human forecasting – the Expert Political Judgment (EPJ) study – was conducted over a twenty-year period from 1984 to 2004, during which he gathered 284 professional experts – people paid to be pundits, commentators, and/or academics – to make predictions about, among other things, wars, election results, the economy, and equity prices. The results showed that the experts broke down (roughly) into two groups: one group’s predictive accuracy was worse than if they had made their predictions by flipping a coin (chance), and the other group’s accuracy was only “slightly better.” The analogy to the dart-throwing chimp was a little color that Tetlock added after the fact, in part because it attracted attention to the study, and in part because he thought it was a “funny”, yet accurate, way of describing just how bad we all really are at making predictions.
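For readers who like to see the arithmetic, the ‘chance’ baseline the experts were measured against can be sketched in a few lines of Python. This is a toy illustration, not Tetlock’s methodology, and the function name is mine: it simulates a forecaster who guesses yes/no at random on binary questions whose outcomes are themselves 50/50.

```python
import random


def coin_flip_accuracy(n_questions: int, seed: int = 42) -> float:
    """Fraction of correct calls made by guessing at random on
    binary questions with random 50/50 outcomes."""
    rng = random.Random(seed)
    correct = sum(
        # The guess and the outcome are independent coin flips;
        # they agree half the time on average.
        rng.choice([True, False]) == rng.choice([True, False])
        for _ in range(n_questions)
    )
    return correct / n_questions


if __name__ == "__main__":
    # With enough questions, random guessing converges toward 50% accuracy:
    # the dart-throwing-chimp baseline any expert has to beat.
    print(coin_flip_accuracy(100_000))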
Tetlock went on to conduct a second study with his wife, Barbara Mellers, also a professor at the University of Pennsylvania, to gain greater insight into why experts are so inaccurate with their forecasts, and into whether any people can predict future events and outcomes with a high degree of accuracy and consistency. The study, which operates under the name The Good Judgment Project, has sampled over 10,000 people to date. The findings show that there are people who can consistently make accurate forecasts (Tetlock calls them Superforecasters), but their ability to do so has less to do with their expertise and much more to do with their behavior. In Tetlock’s words, predictive accuracy has little correlation with “what” people know (or think they know), but a strong correlation with “how” they think.
Another byproduct of Tetlock’s studies, whether intended or not, was bringing to the fore an illusion that many of us share regarding the credibility of experts making forecasts. We have a tendency to attribute far too much credibility to them – those whose works we read, listen to on TV and podcasts, or learn about in school. Tetlock’s research shows us that it’s a mistake to weigh their predictive abilities as heavily as we do. In what I thought was the most interesting finding of the study, Tetlock’s data showed an inverse correlation between an expert’s fame and their accuracy – the more famous and high-profile a participant was – TV commentator, world-renowned academic – the more likely their predictions were to be less accurate than if they had been made playing ‘heads-or-tails’.
Investment bank take…
As we finish out 2021 and roll into the new year, we’ve all been bombarded by forecasts. And every one of them seems to have been made by some kind of expert who presents himself or herself as an authority in the field, and therefore as someone whose forecast we ought to give greater weight.
But we shouldn’t give their predictions too much weight if we’re basing decision-making on them…at least not without greater scrutiny.
Tetlock’s findings are far too broad and deep to unpack in 750 words, but he was able to tease out a profile of the people who tend to be the most accurate predictors – his Superforecasters. Interestingly, it wasn’t until Tetlock expanded his sample to include ‘regular’, non-professional participants that he was able to do this. He broke his profile down into three categories, again focusing on “how” people think. Here are the headlines:
Philosophically – cautious and humble
Thinking Style – reflective, open-minded, and knowledgeable
Methodology – analytical, probabilistic, and always updating their knowledge
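The “probabilistic” habit in that last category is worth a quick aside. The Good Judgment Project scored its forecasters with the Brier score – the mean squared gap between the probabilities someone assigned and what actually happened. A minimal sketch (my own toy example, with made-up numbers, not data from the study):

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probability forecasts (0..1) and
    realized binary outcomes (0 or 1). Lower is better; someone who
    always hedges at 50% scores exactly 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)


# Four hypothetical yes/no questions; 1 = it happened, 0 = it didn't.
outcomes = [1, 0, 1, 1]

hedger = brier_score([0.5, 0.5, 0.5, 0.5], outcomes)  # = 0.25
sharp = brier_score([0.9, 0.2, 0.8, 0.7], outcomes)   # lower: rewarded for
                                                      # confident, correct calls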
Even a cursory reading of the above creates an impression of good forecasters that clashes with what we experience in our day-to-day – especially the hubris and arrogance of the professional class of TV know-it-alls.
So, as we begin 2022 with predictions in hand, perhaps there’s wisdom in scrutinizing the ones we use to develop business strategies and make investment decisions. The data says that, in general, they’re not that accurate.
And for those of us who are making predictions: if your accuracy is slightly better than a dart-throwing chimp’s, take heart in knowing that you’re in good company – in fact, you’re closer to the best forecasters than you are to the worst.
Happy new year!