The Difference Between Asking and Betting
Polls ask what people think. Markets ask them to put money on it. That distinction sounds trivial, but it produces systematically different results.
A poll says: "Who do you want to win?" A market says: "Who do you think will win, and how much will you stake on that belief?" The first question captures preference. The second captures conviction weighted by confidence.
Over the past 20 years, prediction markets and polls have diverged on dozens of major events. Sometimes the markets were right and the polls were wrong. Sometimes — less often, but it happens — the polls nailed it and the markets whiffed.
Here are 12 case studies. Most show the markets getting it right; a couple are famous market failures, included because intellectual honesty matters more than a clean narrative. We'll also cover the scenarios where polls systematically beat markets. For each example: what the polls said, what the markets said, what actually happened, and who had the better read.
1. 2024 US Presidential Election (Trump vs. Harris)
Date: November 5, 2024
Polls said: Toss-up. FiveThirtyEight polling average: Harris +1.3 nationally. RealClearPolitics average: Harris +0.2. Swing state polls showed margins within 1-2 points in both directions across Arizona, Georgia, Michigan, Nevada, Pennsylvania, Wisconsin.
Markets said: Trump. Polymarket had Trump at 62% on election morning. Kalshi had Trump at 58%. PredictIt had Trump at 55%.
What happened: Trump won 312 electoral votes, carrying all seven swing states. The national popular vote margin was approximately 1.5 points — within the range of polling error, but every swing state broke in the same direction, which polls as a group failed to predict.
Why markets were right: Prediction market traders appeared to incorporate information that polling aggregates missed: early vote return data showing lower-than-expected Democratic turnout in key counties, the persistent polling undercount of Trump voters seen in 2016 and 2020, and a large-scale shift among Latino and young male voters that traditional polling models weren't capturing. The "whale" trader on Polymarket who wagered $30 million on Trump was reportedly using his own private polling data, which showed a larger Trump lead than public polls.
2. 2016 US Presidential Election (Trump vs. Clinton)
Date: November 8, 2016
Polls said: Clinton. National polling average: Clinton +3.2. FiveThirtyEight model: Clinton 71%. HuffPost model: Clinton 98%.
Markets said: Clinton, but less confidently. PredictIt had Clinton at 76% on election day. Betfair had Clinton at 82%. IEM had Clinton at roughly 70%.
What happened: Trump won 306 electoral votes. Polls underestimated Trump support by roughly 2-7 points across the key Rust Belt states (Pennsylvania, Michigan, Wisconsin), with Wisconsin the largest miss.
Why this matters: Markets were also wrong, but less wrong. At 70-76% Clinton probability, markets were implying a roughly 1-in-4 chance of a Trump win. Polls-based models at 95%+ were implying 1-in-20. The actual outcome was surprising but not "1-in-20" surprising. Markets correctly priced the uncertainty that polling models dismissed. This is the core argument for markets: even when they're on the wrong side, their confidence level is usually better calibrated.
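To make "better calibrated" concrete, it helps to score the forecasts. Here's a minimal sketch using the 2016 probabilities quoted above: the Brier score — the squared error between a stated probability and the 0/1 outcome — is a standard forecast-accuracy measure, and it punishes confident misses far harder than hedged ones.

```python
# Brier score for a single binary forecast: (probability - outcome)^2.
# Lower is better. Probabilities are the 2016 figures quoted above.

def brier(p: float, outcome: int) -> float:
    """p: probability assigned to the event; outcome: 1 if it occurred, else 0."""
    return (p - outcome) ** 2

# Event: "Clinton wins." Outcome = 0, since Trump won.
forecasts = {
    "PredictIt (76%)": 0.76,
    "Betfair (82%)": 0.82,
    "FiveThirtyEight (71%)": 0.71,
    "HuffPost (98%)": 0.98,
}

for name, p in forecasts.items():
    print(f"{name:22s} Brier = {brier(p, 0):.3f}")

# HuffPost (98%)  -> 0.960: near-maximal penalty for a confident miss.
# PredictIt (76%) -> 0.578: wrong side, but hedged enough to pay far less.
```

One event can't prove calibration either way; the argument is that across many such events, market prices accumulate lower Brier scores than the most confident poll-based models.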
3. 2020 US Presidential Election (Biden vs. Trump)
Date: November 3, 2020
Polls said: Biden in a landslide. National average: Biden +8.4 (actual margin: +4.5). Key state polls overstated Biden by 3-5 points.
Markets said: Biden wins, but closer than polls suggest. Polymarket had Biden at 63%. PredictIt had Biden at 60%. Both implied a competitive race, not a blowout.
What happened: Biden won 306-232, but the margin was much closer than polls predicted. Arizona was decided by 0.3%, Georgia by 0.2%, Wisconsin by 0.6%.
Why markets were right: Markets in 2020 had already been burned by 2016's polling miss and adjusted accordingly. Traders applied a "polling error correction" that data modelers were slower to adopt. The market price essentially said: "Yes, Biden will probably win, but this is not an 8-point race." They were correct on both counts.
4. Brexit Referendum — Wait, This One's Complicated
Date: June 23, 2016
Polls said: Too close to call. Final polling average: Remain 48%, Leave 46%, Undecided 6%. Most forecasters put the race at roughly 50/50 based on polls alone.
Markets said: Remain, strongly. Betfair had Remain at 85-90% in the final days. PredictIt had Remain at 78%.
What happened: Leave won 51.9% to 48.1%.
Why this matters for markets: This is the most famous prediction market failure of the past decade, and it's more nuanced than it looks. The polls actually had the race closer to correct — near 50/50. The markets overshot in the Remain direction, likely because the types of people who trade prediction markets (urban, educated, internationally connected) were themselves more likely to favor Remain and projected their preferences onto their probability estimates.
The lesson: markets can suffer from demographic bias in the trader pool. When the traders aren't representative of the relevant population, their collective wisdom can systematically miss. This is the strongest argument against treating market prices as ground truth.
5. 2022 US Midterm Elections (Red Wave That Wasn't)
Date: November 8, 2022
Polls said: Republican wave. Generic ballot average: R+2.5. Forecasters predicted Republicans would gain 20-30 House seats and flip the Senate.
Markets said: Republican gains, but more modest. PredictIt's Senate control contract had Republicans at 68%. But individual Senate race markets told a different story — markets had Democrats favored in Pennsylvania (Fetterman), Arizona (Kelly), Nevada (Cortez Masto), and Georgia (Warnock runoff). Adding those up implied Democrats would likely hold the Senate, even though the "Senate control" headline contract said otherwise.
What happened: Republicans gained only 9 House seats, ending up with a 222-213 majority — among the narrowest in decades. Democrats held the Senate 51-49 after the Georgia runoff.
Why markets had the edge: Individual race markets were more accurate than the headline "who controls the Senate?" contract because they incorporated local information — candidate quality, early voting patterns, abortion ballot measures driving turnout. The lesson: disaggregated market prices (individual races) are often more informative than aggregated ones (overall control), because the aggregation step introduces modeling assumptions that can be wrong.
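If you want to replicate the "adding those up" step yourself, here's a minimal sketch of a bottom-up control estimate built from individual race probabilities. The probabilities and the win condition are hypothetical placeholders (not the actual 2022 prices), and the independence assumption is a deliberate simplification — real races share correlated polling errors.

```python
# Bottom-up Senate-control estimate from individual race markets.
# Race probabilities and the win condition are hypothetical, for illustration.
import random

race_p = {"PA": 0.65, "AZ": 0.75, "NV": 0.55, "GA": 0.60}  # P(Dem win), assumed
NEEDED = 3  # suppose Democrats hold the chamber by winning 3 of these 4

def p_hold(trials: int = 200_000) -> float:
    holds = 0
    for _ in range(trials):
        wins = sum(random.random() < p for p in race_p.values())
        holds += wins >= NEEDED
    return holds / trials

print(f"Bottom-up P(Dem hold): {p_hold():.2f}")  # ~0.54 with these inputs

# If the headline "party control" contract trades far from this number,
# one of the two prices embeds a bad assumption -- in 2022 it was the
# headline contract. Treating the races as independent is itself a
# modeling choice; correlated polling errors would widen the tails.
```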
6. Federal Reserve Pivots: March 2023 Emergency Decision
Date: March 22, 2023
Polls said: N/A (no meaningful public polling on Fed decisions). Economist surveys pre-SVB collapse: 80% expected a 50bp hike.
Markets said: Before the SVB collapse on March 10, CME FedWatch showed a 25bp hike at 30% probability and a 50bp hike at 70%. Within 48 hours of SVB's failure, the 50bp hike probability crashed to 0%, a 25bp hike was at 60%, and a full pause was at 40%. Kalshi contracts tracked these moves in near-real-time. (A simplified sketch of how these futures-implied probabilities are computed follows this example.)
What happened: The Fed hiked 25bp — exactly what the post-SVB market price predicted. The pre-SVB economist consensus of 50bp was instantly obsolete.
Why markets were right: Markets updated within hours of the SVB news. Economist surveys, which take days to compile, couldn't reflect the changed reality. This is the speed advantage of markets: continuous, real-time information processing versus periodic snapshots.
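About those FedWatch numbers: they aren't survey results but probabilities backed out of 30-day fed funds futures prices. Here's a deliberately simplified version of that calculation — the real CME methodology day-weights the meeting month, and the quotes below are illustrative, not actual March 2023 prices.

```python
# Simplified futures-implied probability of a rate decision.
# A fed funds future is quoted as 100 minus the expected average
# effective rate for the month; with only two plausible outcomes,
# the expected rate pins down the probability of each.

def implied_hike_prob(futures_price: float,
                      rate_no_hike: float,
                      rate_hike: float) -> float:
    """All rates in percent. Returns P(hike) under a two-outcome model."""
    implied_rate = 100.0 - futures_price
    return (implied_rate - rate_no_hike) / (rate_hike - rate_no_hike)

# Illustrative: effective rate 4.58% on a pause, 4.83% after a 25bp hike,
# futures trading at 95.27 -> implied average rate of 4.73%.
p = implied_hike_prob(95.27, rate_no_hike=4.58, rate_hike=4.83)
print(f"P(25bp hike) = {p:.0%}")  # 60%, matching the post-SVB figure above
```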
7. COVID-19 Vaccine EUA Timeline (2020)
Date: March-December 2020
Polls/Expert surveys said: 12-18 months at minimum, likely far longer. In March 2020, the WHO estimated 12-18 months. Many epidemiologists publicly stated that no vaccine had ever been developed in under 4 years and that expecting one in 12 months was unrealistic.
Markets said: Metaculus (a reputation-based forecasting platform rather than a money market) had a community median in April 2020 predicting an EUA by December 2020. Polymarket (which was very new at the time) had a December 2020 EUA contract trading at $0.45 in mid-2020, rising to $0.72 by October.
What happened: Pfizer-BioNTech received EUA on December 11, 2020. Nine months from pandemic declaration to approved vaccine — the fastest in history.
Why markets were right: Forecasters on Metaculus decomposed the question: (1) Will mRNA technology work? (2) Will Operation Warp Speed's parallel trial design compress timelines? (3) Will the FDA fast-track review? Each sub-question was judged strongly likely — and that matters, because independent probabilities multiply: three answers at 75% compound to only about 42%, while three at 85% compound to roughly 61%. Only if each link in the chain is strongly likely does the December timeline become more likely than not, which is what the forecasters concluded even though the timeline seemed aggressive when stated as a single prediction.
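The sensitivity of that conjunction to its component probabilities is worth seeing directly — a quick sketch, with illustrative levels rather than Metaculus's actual numbers:

```python
# Joint probability of independent sub-questions multiplies, so the
# overall forecast is very sensitive to each component's level.
import math

sub_questions = ["mRNA works", "parallel trials compress", "FDA fast-tracks"]

for p_each in (0.75, 0.85, 0.90):
    joint = math.prod(p_each for _ in sub_questions)
    print(f"each at {p_each:.0%} -> joint {joint:.0%}")

# each at 75% -> joint 42%   (below even odds)
# each at 85% -> joint 61%   (more likely than not)
# each at 90% -> joint 73%
```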
8. 2023 House Speaker Vote (McCarthy)
Date: January 3-7, 2023
Polls/Pundits said: McCarthy would be elected Speaker on the first or second ballot. Most Congressional reporters predicted a dramatic but quick resolution.
Markets said: PredictIt's "Will McCarthy be elected Speaker by January 31?" contract was trading at $0.72 in late December 2022 — implying 72% confidence. But "Will McCarthy be elected on the first ballot?" was at $0.35. Markets were telling you: McCarthy probably wins eventually, but it's going to be ugly.
What happened: It took 15 ballots over 4 days — the most since 1859. McCarthy won on January 7, 2023. Markets correctly priced the messiness that pundits underestimated.
Why this matters: Markets can express probabilities on conditional outcomes (first ballot vs. eventual winner) in ways that pundit commentary can't. "McCarthy will probably be Speaker but probably not on the first ballot" is a nuanced take that's easy to express with two contract prices but hard to communicate in a cable news segment.
9. 2024 Biden Withdrawal
Date: July 21, 2024
Polls said: N/A. Polls don't ask "Will the incumbent withdraw?" But media consensus through early July was that Biden would stay in the race despite the June 27 debate performance.
Markets said: Polymarket's "Will Biden be the Democratic nominee?" contract dropped below 50% in late June 2024, within days of the debate. By mid-July, it was trading at 25%. The market was pricing in a withdrawal three weeks before it happened.
What happened: Biden withdrew from the race on July 21, 2024, and endorsed Kamala Harris.
Why markets were right: Markets aggregated private signals — staffers talking to reporters off the record, Democratic donors signaling displeasure, Congressional members privately urging withdrawal — that hadn't yet become "news." The market price moved before any major outlet reported that Biden was likely to withdraw, because the traders were synthesizing information from dozens of small signals that individually weren't reportable but collectively pointed in one direction.
10. 2015 UK General Election (Conservative Majority)
Date: May 7, 2015
Polls said: Hung parliament. Every major poll had Conservatives and Labour within 1-2 points, with a hung parliament as the near-certain outcome. Not a single published poll in the final two weeks predicted a Conservative majority.
Markets said: Betfair had a Conservative majority at 28% — a minority probability, but much higher than the ~5% implied by polling models. The market was saying: "A hung parliament is most likely, but don't dismiss a Tory majority."
What happened: Conservatives won a clear majority with 331 seats. The polls were off by 4-6 points in key marginals — popularly blamed on the "shy Tory" effect (Conservative voters telling pollsters they were undecided), though the official polling inquiry later attributed most of the miss to unrepresentative samples.
Why markets had the edge: At 28%, the market wasn't predicting a Conservative majority, but it was correctly pricing the uncertainty that polls were ignoring. The polls said ~5% chance. The markets said ~28% chance. The markets were closer to calibrated.
11. Supreme Court ACA Decision (NFIB v. Sebelius, 2012) — Another Complicated One
Date: June 28, 2012
Expert surveys said: ACA would be struck down. After oral arguments in March 2012, legal analysts overwhelmingly predicted the individual mandate would be ruled unconstitutional. A Bloomberg survey of constitutional law professors found 58% expected the mandate to fall.
Markets said: Intrade (a now-defunct prediction market) leaned toward the mandate being upheld before oral arguments. But after the March arguments — where the government's advocate was widely judged to have stumbled — the market flipped, pricing the mandate being struck down at roughly 70-75% in the final days. Rather than correcting the expert consensus, the market absorbed it.
What happened: Chief Justice Roberts joined the four liberal justices in upholding the mandate (recharacterized as a tax), 5-4. The expert consensus and the market were both wrong.
Why this matters: Supreme Court prediction is an area where experts are systematically overconfident in their ability to read oral argument tea leaves — and here the market amplified that overconfidence instead of hedging against it. A market is only as good as the information its traders bring; when everyone trades on the same misleading signal, the price inherits the experts' error. The base rate — most challenged federal laws are upheld — was the input both groups underweighted.
12. 2017 French Presidential Election (Macron vs. Le Pen)
Date: May 7, 2017
Polls said: Macron would win the runoff handily, roughly 64-36. But in the wake of Brexit and Trump, there was widespread anxiety that polls were systematically wrong and Le Pen might pull an upset.
Markets said: Macron, emphatically. Betfair had Macron at 93%+ in the final week. PredictIt had Macron at 90%. Markets were telling the world to calm down — this was not a close race, regardless of the Brexit/Trump PTSD.
What happened: Macron won 66.1% to 33.9%, almost exactly in line with polling. Both polls and markets were right. But the market provided something polls couldn't: a clear probability estimate that cut through the media panic about polling failure.
Why this matters: Sometimes the value of prediction markets isn't that they disagree with polls — it's that they provide a clean probability number that puts noisy, conflicting polls into proper context. When 50 pundits are screaming about uncertainty, a 93% market price is a powerful signal: "This isn't actually close."
When Polls Beat Markets
Markets don't always win. Here are the scenarios where polls tend to have the edge:
High-turnout, well-polled elections. In races where pollsters have extensive demographic data and high response rates, polls can be extremely accurate. The 2008 and 2012 US presidential elections saw polling averages that were nearly dead-on, while prediction markets — especially thin ones like Intrade at the time (PredictIt didn't launch until late 2014) — added noise rather than signal.
When the trader pool is demographically skewed. Prediction market traders are disproportionately male, college-educated, high-income, and tech-savvy. On questions where the relevant information is held by a different demographic (e.g., community-level turnout patterns in majority-minority districts), polls that sample the actual population can capture information that markets miss.
Late-breaking events. Polls conducted after a major event (debate, scandal, natural disaster) can capture the immediate public reaction. Markets may overreact or underreact to breaking news, especially when liquidity is thin during off-hours.
The bottom line: The best forecasts combine both. Use market prices as your baseline probability, then adjust based on polling data — especially in well-polled races with large sample sizes. Treating either source as infallible is the real mistake.
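What "market baseline, adjusted by polls" can look like in practice: one common recipe is to average the two estimates in log-odds space, weighted by how much you trust each source. A minimal sketch — the weights and inputs are illustrative, not a recommendation:

```python
# Blend a market price and a poll-based estimate in log-odds space.
# Log-odds pooling avoids the distortions of averaging raw probabilities
# near 0 or 1. Weights and inputs are illustrative.
import math

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def inv_logit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def blend(p_market: float, p_polls: float, w_market: float = 0.6) -> float:
    """Weighted log-odds pool; w_market is the weight on the market price."""
    pooled = w_market * logit(p_market) + (1.0 - w_market) * logit(p_polls)
    return inv_logit(pooled)

# Market at 62%, poll model at 51%, leaning on the market as baseline:
print(f"Blended forecast: {blend(0.62, 0.51):.0%}")  # ~58%
```

Pooling in log-odds rather than raw probability keeps the blend sensible at the extremes, where a naive average would over-dilute a confident estimate.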
Why the Difference Exists
Four structural differences explain why markets and polls diverge:
1. Incentive alignment. Poll respondents have zero cost for being wrong. There's no penalty for telling a pollster you'll vote when you won't, or that you support a candidate you're lukewarm about. Market traders pay for being wrong with real money (or reputation, in the case of Metaculus). The incentive to be accurate rather than expressive is structurally different.
2. Information breadth. Polls capture one type of information: stated preferences of a sampled population at a single point in time. Markets aggregate information from diverse sources: polling data, early voting numbers, fundraising reports, on-the-ground canvassing reports, economic indicators, historical base rates. A single market price can reflect information from dozens of distinct sources that no single poll captures.
3. Continuous vs. periodic. Polls are snapshots. Markets are live feeds. A new piece of information — a gaffe, an endorsement, an economic report — is priced into markets within minutes. It takes 2-5 days for that same information to show up in polling averages.
4. Preference vs. prediction. When a poll asks "Who will you vote for?", it measures preference. When a market prices a contract at $0.60, it measures the crowd's best guess about what will actually happen. These are different questions, and the market question is often more useful for forecasting.