Can computer algorithms make better probabilistic forecasts than humans?
Given several months, a human can review a thousand inputs relevant to a financial-market decision. In a fast-moving market, however, it is nearly impossible for that same human to review those thousand inputs and reach a rapid, rational, and unemotional decision based on probabilities. Computer models can help humans make better decisions, especially during periods of high market volatility, uncertainty, and stress.
The excerpts below are from "Minds and machines: The art of forecasting in the age of artificial intelligence," Deloitte Review, issue 19:
Forty years of behavioral science research into the psychology of probabilistic reasoning have revealed the surprising extent to which people routinely base judgments and forecasts on systematically biased mental heuristics rather than careful assessments of evidence. These findings have fundamental implications for decision making, ranging from the quotidian (scouting baseball players and underwriting insurance contracts) to the strategic (estimating the time, expense, and likely success of a project or business initiative) to the existential (estimating security and terrorism risks).
The bottom line: Unaided judgment is an unreliable guide to action.
A body of research dating back to the 1950s has established that even simple predictive models outperform human experts at prediction and forecasting. This implies that judiciously constructed predictive models can augment human intelligence by helping people avoid common cognitive traps.
Algorithms can augment human judgment but not replace it altogether.
More than 200 studies have compared expert and algorithmic prediction, with statistical algorithms nearly always outperforming unaided human judgment.
Even after the model has been built and deployed, human judgment is typically required to assess the applicability of a model’s prediction in any particular case. After all, models are not omniscient—they can do no more than combine the pieces of information presented to them.
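The claim that models "combine the pieces of information presented to them" can be made concrete with a minimal sketch. The simplest predictive models of this kind are equal-weight linear rules: standardize each input cue and sum them. The cue names and figures below are hypothetical illustrations, not an actual forecasting system.

```python
# A minimal sketch of a "simple predictive model": an equal-weight
# linear rule that combines standardized cues into one score per case.
# All cue names and numbers are hypothetical illustrations.

def standardize(values):
    """Rescale a list of numbers to mean 0 and standard deviation 1."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def equal_weight_score(cue_columns):
    """Score each case by summing its standardized cue values."""
    standardized = [standardize(col) for col in cue_columns]
    return [sum(case) for case in zip(*standardized)]

# Three hypothetical cues observed for five candidate investments.
momentum  = [0.2, 0.8, 0.5, 0.1, 0.9]
valuation = [0.7, 0.3, 0.6, 0.2, 0.8]
liquidity = [0.5, 0.9, 0.4, 0.3, 0.7]

scores = equal_weight_score([momentum, valuation, liquidity])
best = max(range(len(scores)), key=lambda i: scores[i])
print(best)  # index of the highest-scoring candidate
```

The point of the sketch is the one the excerpt makes: the model does nothing but mechanically combine the cues it is given, which is exactly why a human must still judge whether those cues, and the model's output, apply to the case at hand.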
Human-computer collaboration is therefore a major avenue for improving our abilities to make forecasts and judgments under uncertainty.
Beyond the tendency to ground forecasts in reference-class base rates built from hard data, Tetlock identifies several psychological traits that superforecasters share:
They are less likely than most to believe in fate or destiny, and more likely to see events as probabilistic and subject to chance.
They are open-minded and willing to change their views in light of new evidence; they do not hold on to dogmatic or idealistic beliefs.
They possess above-average (but not necessarily extremely high) general intelligence and fluid intelligence.
They are humble about their forecasts and willing to revise them in light of new evidence.
While not necessarily highly mathematical, they are comfortable with numbers and the idea of assigning probability estimates to uncertain scenarios.
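The habits described above, starting from a reference-class base rate and revising it as evidence arrives, correspond to a Bayesian update. The sketch below shows the mechanics with a single piece of evidence; all of the probabilities are hypothetical illustrations.

```python
# A minimal sketch of base-rate anchoring plus Bayesian updating.
# All numbers below are hypothetical illustrations.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of an event after observing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Anchor on a reference-class base rate: say 10% of comparable
# projects finish on time (hypothetical figure).
base_rate = 0.10

# New evidence: the team passed an early milestone. Suppose on-time
# projects pass it 80% of the time and late ones 30% (hypothetical).
posterior = bayes_update(base_rate, 0.80, 0.30)
print(round(posterior, 3))  # → 0.229
```

Note that the forecast moves substantially (from 10% to about 23%) but does not jump to certainty: the base rate still anchors the estimate, which is the discipline the excerpt attributes to superforecasters.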