
9 Reasons Prediction Markets Are More Accurate Than Experts

Individual experts are wrong a lot. Markets with real money on the line are wrong less often. Nine structural reasons explain the gap — with data from elections, Fed decisions, Oscar picks, and more.

By Daniel Chen, Senior Financial Analyst

| Event/Domain | Expert/Pundit Accuracy | Poll Accuracy | Market Accuracy | Winner |
|---|---|---|---|---|
| 2020 US Presidential Election | 72% of pundits called Biden | 7.2pt avg poll error in swing states | PredictIt: Biden 63 cents (correct) | Markets |
| 2024 US Presidential Election | Split pundit consensus | 3-4pt avg poll error | Polymarket: Trump 60 cents night before (correct) | Markets |
| COVID Vaccine Timeline | Fauci: 12-18 months (Jan 2020) | N/A | Metaculus median: 14 months (within 2 weeks of EUA) | Markets |
| Fed Rate Decisions (2015-2024) | 71% economist survey accuracy | N/A | CME FedWatch: 89% accuracy | Markets |
| Supreme Court Rulings (2018-2024) | 68% legal scholar accuracy | N/A | FantasySCOTUS/Kalshi: 79% accuracy | Markets |
| Oscar Best Picture (2010-2024) | 58% critic consensus picks | N/A | Hollywood Stock Exchange: 73% accuracy | Markets |
| NFL Point Spreads (season avg) | 52% ESPN analyst picks ATS | N/A | 52-53% market-implied (even) | Tie |
| GDP Growth Direction (quarterly) | 63% economist surveys | N/A | 75% Iowa Electronic Markets | Markets |

The Scoreboard Nobody Wants to Show You

Philip Tetlock tracked 28,000 predictions from 284 credentialed experts over two decades. Political scientists, economists, intelligence analysts, national security advisors. People with PhDs and cable news segments. His finding was devastating: the average expert performed about as well as a "dart-throwing chimpanzee." Not one of them dominated across domains. Many did worse than simple statistical models that any undergraduate could build.

Meanwhile, prediction markets — platforms where real money changes hands on future outcomes — have beaten expert consensus in domain after domain. Elections, economic data, geopolitical events, entertainment awards, even the timing of scientific breakthroughs.

This is not because individual traders are geniuses. Most aren't. The edge comes from structural mechanics: how markets process information versus how a single person sitting in a TV studio processes it. Here are nine reasons that gap exists, backed by data from every domain where the two have gone head-to-head.

Accuracy Rates: Markets vs. Experts by Domain

[Figure: bar chart comparing prediction market accuracy versus expert accuracy across five domains. Markets lead in every category except NFL point spreads, where both hover near 52%.]

1. Skin in the Game Forces Honesty

When a cable news analyst predicts a recession, nothing happens to them if they're wrong. They keep their segment. They still get invited to Davos. The professional cost of a bad forecast is approximately zero.

Prediction market traders pay cash for their opinions. A contract priced at 60 cents means the crowd thinks there's a 60% chance. If you disagree, you're backing that disagreement with your own dollars. Wrong? You lose real money. Right? You profit. That feedback loop changes behavior.
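To see the arithmetic behind that feedback loop, here's a minimal Python sketch of the expected value of a trade. The prices and probabilities are invented for illustration, and real platforms also take fees that shave a few points off the edge.

```python
def expected_profit(price, my_prob, contracts=100):
    """Expected profit from buying YES contracts that pay $1 each if the event happens.

    price   -- market price per contract in dollars (0.60 = crowd-implied 60% chance)
    my_prob -- your own probability estimate for the event
    """
    cost = price * contracts
    expected_payout = my_prob * 1.00 * contracts
    return expected_payout - cost

# The crowd prices the event at 60 cents; you believe the true chance is 75%.
print(expected_profit(price=0.60, my_prob=0.75))  # +15.0: positive edge, buy
# You believe the true chance is only 50%.
print(expected_profit(price=0.60, my_prob=0.50))  # -10.0: negative edge, pass or sell
```

If your probability is right, repeating the first trade compounds into profit; if it's wrong, the losses arrive just as mechanically.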

Nassim Taleb made this argument for years: forecasts without consequences produce noise. The 2016 presidential election is the textbook case. Most political commentators said Trump had no realistic path. PredictIt, where traders had money on the line, priced Trump at 20-30% — not likely, but meaningfully possible. The market was wrong too. But it was less wrong in a way that mattered: it correctly flagged real uncertainty that the pundit class dismissed as fantasy.

The 2024 election repeated the pattern. Most legacy media outlets hedged their calls. Polymarket had Trump at 60 cents the night before the election. The market didn't just predict the winner — it priced the margin of confidence more accurately than any panel of talking heads.

Skin in the game works as a filter. People who bluff, exaggerate, or forecast based on wishful thinking lose money. Eventually, they stop trading. What remains is a self-selected pool of participants who've earned the right to keep playing by being less wrong than everyone else.

2. Markets Aggregate Dispersed Information

No single person knows everything relevant about any complex event. An economist tracks macro data but misses the ground-level signals from small business owners. A political journalist has campaign sources but doesn't model turnout demographics. A data scientist builds models but ignores qualitative intelligence.

Markets solve this through price discovery. Friedrich Hayek described the mechanism in 1945: prices synthesize the private knowledge of thousands of participants into a single number. Each trader contributes a different fragment of the puzzle, and the market price reflects all those fragments simultaneously.

Hewlett-Packard ran internal prediction markets in the 1990s to forecast printer sales. The markets beat official HP planning department forecasts 75% of the time. The sales reps knew things headquarters didn't — which dealers were excited, which products drew complaints, which regions were softening. No single person had the full picture. The market aggregated it automatically.

Google, Ford, and Intel ran similar experiments. The results were consistent. Crowds with diverse information and a mechanism to combine it outperform the "smartest person in the room" model almost every time.

This is why prediction markets shine on events with many information sources: elections (pollsters, canvassers, donors, local journalists), economic data (businesses, consumers, government agencies), even weather events (meteorologists, insurance actuaries, local observers). The more dispersed the relevant knowledge, the bigger the market's advantage.

3. Real-Time Updating (Experts Can't Keep Up)

An expert publishes an op-ed on Monday. By Wednesday, new data drops that changes the picture entirely. But the op-ed is already out there. The expert might update their view in a week. Or a month. Or never.

Markets update in seconds. When the June 2024 jobs report printed hotter than expected, CME FedWatch probabilities shifted within three minutes. No waiting for an economist to draft a revised memo. No scheduling a panel discussion for next Thursday. The price moved because traders acted immediately on new information.

During COVID, the speed gap was stark. Epidemiologists published papers with two-week lag times. By the time the paper landed, the data it analyzed was already stale. Prediction markets on Metaculus and early Polymarket updated daily — sometimes hourly — as new case data, vaccine trial leaks, and policy announcements hit. Were the markets perfect? No. But they tracked reality faster than any expert or institution could.

The real-time nature also means markets self-correct. A bad forecast gets arbitraged away: someone with better information trades against it and pushes the price toward accuracy. Expert opinions have no correction mechanism. A wrong op-ed just sits there, accumulating page views, until the author decides to change their mind. If they ever do.

Speed compounds over multiple updates. A market that reprices after each new data point is recalibrating dozens of times before an expert publishes once. Over a multi-month event arc — a presidential campaign, a Fed cycle, a pandemic — that compounding advantage is enormous.
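One way to picture that compounding is a single probability getting a small Bayesian nudge every time new evidence lands, which is roughly what a repricing market does trade by trade. The sketch below is illustrative only; the events and likelihood ratios are made up, not taken from any real market.

```python
def bayes_update(prior, likelihood_ratio):
    """Update a probability given one new piece of evidence.

    likelihood_ratio = P(evidence | event happens) / P(evidence | it doesn't).
    Ratios above 1 push the probability up; ratios below 1 push it down.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A market opens at 50% and reprices as each (hypothetical) data point arrives.
p = 0.50
for evidence, lr in [("hot jobs report", 1.8),
                     ("dovish Fed speech", 0.6),
                     ("sticky CPI print", 2.0)]:
    p = bayes_update(p, lr)
    print(f"{evidence:>18}: {p:.0%}")
```

An expert who publishes once a month gets one of these updates. A liquid market gets all of them, in order, within minutes of each release.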

4. Anonymity Strips Out Reputational Distortion

Imagine you're a political analyst at a major outlet, and you honestly think a fringe candidate has a 25% shot. Do you say that publicly? Probably not. If you're right, people say you got lucky. If you're wrong, you're the person who took a longshot seriously. The reputational math pushes you toward the safe consensus view.

Prediction market traders are anonymous or pseudonymous. Nobody cares what your name is on Kalshi. You buy the contract at what you think is the right price, and your identity is irrelevant. Anonymity strips away the social pressure that distorts expert forecasts.

Tetlock found that "intellectual humility" — the willingness to assign meaningful probabilities to unfashionable outcomes — was one of the strongest predictors of forecasting accuracy. Markets enforce this structurally. If consensus says 5% and you think 20%, you buy cheaply and profit when you're right. No think piece required. No career risk. Just a position and a price.

This matters most in politically charged domains. Experts have careers, affiliations, audiences with expectations. A Republican economist has career incentives to be bearish under a Democratic president, and vice versa. A trader with money at stake can't afford those luxuries. Wrong is wrong, regardless of which team you root for.

5. The Wisdom of Crowds, With Teeth

James Surowiecki's "The Wisdom of Crowds" popularized what statisticians had known for decades: under the right conditions, the average guess of a large group beats almost any individual. The conditions are diversity of opinion, independence of judgment, decentralization, and a mechanism for aggregation.

Prediction markets satisfy all four by design. Traders come from different backgrounds (diversity). They trade individually, not by committee (independence). No central authority sets prices (decentralization). The market mechanism itself combines their views into one number (aggregation).

The classic example: Francis Galton's ox-weighing contest at a 1906 country fair. 787 people guessed an ox's weight. The median guess: 1,207 pounds. Actual weight: 1,198. No individual came that close. The crowd nailed it within 0.75%.
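The effect is easy to reproduce with simulated guesses. In the sketch below the 10% noise level is an assumption for illustration, not Galton's actual data, but the pattern is the point: the crowd's median error comes out far smaller than a typical individual's.

```python
import random
import statistics

random.seed(42)
true_weight = 1198  # pounds, the ox's actual weight

# 787 guessers, each individually off by a random error of roughly +/-10%
guesses = [true_weight * random.gauss(1.0, 0.10) for _ in range(787)]

crowd_median = statistics.median(guesses)
typical_individual_error = statistics.median(abs(g - true_weight) for g in guesses)

print(f"crowd median guess:       {crowd_median:.0f} lb")
print(f"crowd median error:       {abs(crowd_median - true_weight):.0f} lb")
print(f"typical individual error: {typical_individual_error:.0f} lb")
# The crowd's error is typically a small fraction of a typical individual's.
```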

Markets add a layer that Surowiecki's framework doesn't require but that sharpens accuracy: money. Financial stakes filter out casual guesses and reward precision. When the Iowa Electronic Markets ran parallel to pollsters from 1988 through 2004, they outperformed the polls in 74% of election forecasts — and polls are already a form of crowd aggregation. Markets are crowds with financial teeth.

This isn't unlimited, though. The wisdom of crowds breaks down when participants aren't independent (herding), aren't diverse (echo chambers), or when the market is too thin (fewer than ~100 active traders). In those conditions, the "crowd" is really just a small group of similar people reinforcing each other's biases.

6. Natural Selection Weeds Out Bad Forecasters

When a TV producer books a guest to discuss the economy, the selection criteria are: credentials, camera presence, and availability on Thursday. Forecasting track record is not on the list.

Markets run opposite selection pressure. The traders who accumulate the most capital are the ones who've been most accurate. Their positions carry more weight in the market price. The loudest voice doesn't dominate. The best-funded voice does — and funding comes from being right.

A retired surgeon who's never traded before can still bring genuine domain expertise to a healthcare policy contract. But if their trades lose money consistently, their influence on the price shrinks as their capital depletes. Meanwhile, a 23-year-old political science student who's been profitable for five straight election cycles gets to deploy more capital on the sixth.

This is meritocracy enforced by profit and loss, not by credentials committees or peer review panels. Over hundreds of trades, the market's price increasingly reflects the views of people who've demonstrated accuracy — and gives less weight to people who haven't. No expert review process achieves anything close to this level of real-time performance filtering.

7. Built-In Accountability Through P&L

Ask a pundit about their track record and you'll get highlights. "I called the housing crisis." "I was early on inflation." Nobody keeps score in any serious way, so nobody can be meaningfully evaluated.

In prediction markets, your P&L statement is your scorecard. Every trade is recorded. Every win and loss is quantified. You cannot spin a portfolio that's down 40% as "I was directionally right but the timing was off."

Tetlock's Good Judgment Project demonstrated the power of scorekeeping. When forecasters were graded using Brier scores — a mathematical measure of calibration accuracy — the top performers ("superforecasters") separated from the pack. But only because they were being scored at all. Before scoring was introduced, there was no way to tell who was good. Everyone sounded equally confident on TV.

Markets impose Brier-score-level accountability automatically. You don't need a twenty-year research project to identify who forecasts well. The money does it. Traders who are well-calibrated — whose 70% predictions come true roughly 70% of the time — grow their bankrolls. Everyone else subsidizes them, one bad trade at a time.
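For readers who want the formula: a Brier score is just the mean squared error between stated probabilities and binary outcomes, where 0 is perfect and 0.25 is what permanent coin-flip guessing earns on 50/50 events. A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.

    forecasts -- probabilities assigned to "the event happens" (0.0 to 1.0)
    outcomes  -- 1 if the event happened, 0 if it did not
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters scored on the same five events.
outcomes        = [1, 0, 1, 1, 0]
well_calibrated = [0.8, 0.2, 0.7, 0.9, 0.3]
overconfident   = [0.99, 0.01, 0.99, 0.6, 0.9]

print(brier_score(well_calibrated, outcomes))  # 0.054
print(brier_score(overconfident, outcomes))    # ~0.194
```

A market does the same bookkeeping in dollars: the trader with the lower effective Brier score ends up holding the other one's money.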

8. Resistance to Narrative Bias

Humans love stories. Experts are especially susceptible because constructing narratives is literally their job. "The economy is weakening because of X, which leads to Y, which means Z." The narrative sounds logical. The world frequently refuses to cooperate with neat storylines.

Markets are harder to derail with narrative because prices are set by action, not argument. You can tell a compelling story about why inflation will surge. The market's response: put money on it. If inflation contracts are at 30 cents and you think 70% is right, back it up. But the market price won't budge just because your story is persuasive.

The 2022 "red wave" midterm prediction is a good case study. Political commentators, armed with the "party-in-power always loses" historical pattern and confident narratives about inflation-driven voter anger, predicted massive Republican gains. Prediction markets were more cautious. PredictIt priced the Senate much closer to a toss-up. The markets were right: Republicans won a slim House majority and failed to take the Senate. The narrative was plausible. It was also wrong.

Traders aren't immune to narrative. They're human. But the market structure punishes narrative-driven trading when the story doesn't pan out. Over time, price-driven thinking crowds out story-driven thinking. Stories are compelling. P&L is final.

9. Wrong Traders Go Broke (Calibration Pressure)

An expert who's badly miscalibrated — who says "I'm 90% sure" about things that happen 50% of the time — can keep publishing for decades. Cable news doesn't fire analysts for bad predictions. Newspapers don't cancel columns based on Brier scores. There is no accountability mechanism.

Markets have one. It's called going broke. If you consistently buy contracts at 90 cents that resolve at a 50% rate, you bleed money on every marginal trade. Do it long enough and you're out of capital. The market has fired you — permanently.
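The arithmetic of that bleed is simple to sketch. The numbers below (a $1,000 bankroll, 10% risked per trade) are illustrative assumptions, and the loop tracks the expected bankroll rather than any single lucky or unlucky run:

```python
# A miscalibrated trader keeps buying, at 90 cents, contracts that actually
# resolve YES only 50% of the time.
price, true_prob = 0.90, 0.50
ev_per_contract = true_prob * 1.00 - price   # -$0.40 expected loss per contract

bankroll, trades = 1000.0, 0
while bankroll > 10:
    contracts = (bankroll * 0.10) / price    # how many contracts 10% of the bankroll buys
    bankroll += contracts * ev_per_contract  # on average, lose 40 cents per contract
    trades += 1

print(f"Expected bankroll drops below $10 after {trades} trades.")
```

Variance lets some miscalibrated traders last longer than the expectation suggests, but the drift only points one way.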

This evolutionary pressure makes prediction markets better over time. The 2008 Intrade markets were decent. The 2020 PredictIt markets were sharper. The 2024 Polymarket and Kalshi markets were sharper still. Each generation builds on a survivor pool of traders who've been through calibration pressure and come out more accurate.

Tetlock calls this "the wisdom of the select crowd." It's not just that crowds are wise. It's that crowds with a built-in mechanism for pruning poorly-calibrated participants become progressively wiser. Prediction markets are that mechanism operating at scale, with real money, across thousands of simultaneous questions.

When Experts Still Win

This isn't a blanket indictment of expertise. In three specific situations, individual experts consistently outperform markets.

Novel situations with no historical analog. When COVID-19 first appeared, epidemiologists with domain training had a real edge over market traders pricing it like a bad flu season. Markets are backward-looking by nature — they weight what's worked before. True black swans break that pattern.

Deeply specialized technical domains. A structural engineer evaluating bridge safety will outperform a prediction market of generalists. The information asymmetry is too large for the crowd to close. Markets work best when knowledge is dispersed; they fail when it's concentrated in a handful of specialists.

Thin markets with few participants. A prediction market with 30 traders and $5,000 in volume isn't a market. It's a group chat with a price attached. The accuracy advantage scales with participation. On an obscure question with negligible trading activity, find an actual expert instead.

The honest conclusion: markets are a better default for widely-followed questions where information is dispersed and participation is high. Use market prices as your baseline. Reach for individual experts in the narrow conditions where they add value above what the crowd already provides.


Frequently Asked Questions

Are prediction markets always more accurate than experts?
No. Markets have a clear edge on well-traded events with dispersed information — elections, economic data, policy outcomes. In novel situations without historical precedent, deeply technical fields, and thin markets with very few traders, individual experts typically outperform. Markets are a better default, not a universal replacement for domain expertise.

What is the wisdom of crowds, and how do prediction markets use it?
The wisdom of crowds is the statistical finding that the average estimate from a large, diverse, independent group tends to be more accurate than any individual's estimate. Prediction markets add a financial layer: participants put money behind their estimates, which filters out uninformed guesses and rewards calibrated thinking. The result is a crowd estimate that's both broad and financially sharpened.

Why are expert forecasts wrong so often?
Three structural reasons. First, no accountability — wrong predictions carry almost no professional cost, so there's no pressure to improve. Second, narrative bias — experts build stories that sound logical but don't match reality. Third, reputational incentives push them toward safe consensus views instead of honest probability estimates. Markets fix all three problems through financial stakes, price-based signaling, and anonymity.

When should you trust an expert over a market?
Three scenarios: (1) the event is truly novel with no historical pattern for the market to price off, (2) the relevant knowledge is concentrated in a tiny group of specialists that the market's general traders can't match, and (3) the market is thinly traded with fewer than ~100 active participants. In those cases, a credentialed expert with specific domain knowledge will usually give you a better probability estimate.

Can prediction markets be manipulated?
Research from Hanson and Oprea (2009) found that manipulation attempts are usually short-lived. Other traders correct distortions quickly to profit from the mispricing. In liquid markets with deep order books, manipulation is expensive and self-defeating. In thin markets with low volume, manipulation is cheaper and the prices are less trustworthy. Volume and open interest are your quality indicators.
Tags: prediction markets, forecasting, experts, wisdom of crowds, superforecasting, Tetlock, accuracy