The Latest Pennsylvania Polls Show Biden Ahead. Can You Believe Them?
The majority of pollsters show Trump consistently trailing in the Keystone State. We ask one why he's so confident.
On November 1, 2016, a week before that year’s presidential election, Franklin & Marshall College released its final poll of Pennsylvania. A poll is but a snapshot in time, but at that moment, anyway, the analysts at the Lancaster college found that the race for Pennsylvania wasn’t especially close: Hillary Clinton led Donald Trump by 11 points. Even accounting for the poll’s margin of error of 4.4 percentage points, the lead seemed safe.
A few days later, it became blindingly obvious that F&M was wrong. Dead wrong. Some of this wasn’t F&M’s fault. The poll was conducted right before James Comey publicly reopened the investigation into Clinton’s emails, a factor that in all likelihood swung the race decisively in Trump’s favor. Still, in the post-election reckoning of the polls that got things wrong in 2016, F&M’s stood out as a symbol — along with the nightmarish New York Times election forecast needle — of the misfires that characterized the entire 2016 cycle.
Fast-forward four years. On Thursday, F&M released its final poll of 2020. This time, Biden is leading Trump 50 percent to 44 percent. The poll also asked voters to state the most important issue in the election. Twenty-seven percent named the pandemic, followed by 23 percent who named the economy. That breakdown is likely good news for Biden, too, since only 33 percent of voters approve of how President Trump has handled COVID-19, per the survey. And just 42 percent of voters approve of Trump’s performance in general. (By comparison, 52 percent of voters view Biden favorably.)
The F&M findings track with the results from other pollsters, almost all of which are showing between a five-point and a seven-point advantage for Biden. Nationally, the website FiveThirtyEight currently gives Biden an 89 percent chance of winning the Electoral College.
But we could forgive you for being a bit more skeptical this year. So we caught up with Berwood Yost, the head methodologist of the Franklin & Marshall poll, to talk about what happened in 2016, what’s changed in polling this year, and whether the poll-refreshing addicts among us can trust the numbers. Here’s our conversation, edited for length and clarity.
You’re of the mind that the polls weren’t so bad in 2016, right?
Yeah. The perception that the polls failed in 2016 has maybe encouraged people to think about them more carefully, which I think is always a good thing. When we look at the polls objectively and look at their performance over time, they’ve actually gotten better, more accurate. And I think pollsters are constantly focused on reviewing how they do their work and adjusting. Because if one thing’s for sure, it’s that the environment in which we work as pollsters is constantly changing.
In talking about how pollsters recalibrate, what was the biggest mistake made in 2016?
Well, certainly, education was an issue that many pollsters have since addressed, because it became such a strong predictor of Trump support. So we had to adjust for that. In fact, in 2016, we were already weighting our Pennsylvania polls for education. We just didn’t adjust enough; we still overestimated the share of college-educated voters. So we’re paying really close attention to that.
How did you weight it in 2016 vs. what it looked like in reality?
I think we overestimated the college-educated turnout, and it should have been a little less in our polls. But the other thing that’s really important to understand is that the polls can only capture the moment in which they’re conducted. And so, for instance, in our case, we basically finished our interviews the day that Jim Comey announced he was reopening the investigation into Hillary Clinton’s emails. That was an important moment, particularly in some of the swing states.
That particular poll you’re referring to was the one released a week before Election Day. It found that Clinton was up 11 points on Trump. Pollsters will typically say — and this is true — that the poll captures a moment in time, and it’s not predictive. But I think sometimes that can also be used to shirk some responsibility if a number is a little bit off. So I’m curious: With that particular poll, do you think it was accurate in the moment in time and then the Comey letter happened? Or was it maybe a little off?
In that poll, we estimated Hillary Clinton’s level of support, I think, at about 48 percent, which is what she ended up with. But we had underestimated the support for President Trump, and so we still had a relatively decent number of undecided and other voters. So I think at that time, if you look at the gap between the candidates, you’d say, “Yeah, that’s a little off,” and we certainly underestimated President Trump’s numbers. But given where we had Hillary Clinton’s point estimate — below 50 percent — and given the number of undecided voters we were showing, I think that’s certainly reflective of what you might have expected at that time.
There was a particularly interesting group of people. About 17 percent of the electorate of Pennsylvania didn’t like either candidate. And those folks, a large majority of them, voted for President Trump. So those are things that were evident in the polling data that hinted that something could happen. I think that’s one of the big lessons to learn. We shouldn’t be looking at just the gap between two candidates, which is subject to a high amount of variability. We should be looking at all those other contextual indicators as well: favorabilities, feelings about the economy, feelings about the direction of the state and the country. There’s a bunch of indicators that we can also look at that are probably a bit more stable and give us a sense of the context of the election.
One of the other hypotheses of 2016 was this idea of “Trump-shyness syndrome,” where people said they were undecided on a phone call with a pollster, maybe because of some social stigma, but actually were going to vote for Trump. Since 2016, there’s been a lot of anti-media sentiment. Have you picked up on people not wanting to engage with you this time around, not because they’re shy, but because you’re the “fake news”?
I think you get some of that, but you also get people who want to participate to tell you how they’re thinking. While it’s possible that people are avoiding us, I think that may be true of supporters of both parties. We do get Trump voters in our samples. We have tried to pay attention to this notion of a shy Trump voter.
Do you believe in that?
I’m willing to believe in that. Unfortunately, for those who express that perspective, it’s hard to find any data that supports it. We let people complete the survey online or over the phone. We expected that if there were shy Trump voters, they’d be more likely to express a preference for Trump online than to a live interviewer. We didn’t see any of those kinds of patterns.
Obviously it’s extremely complicated to do accurate polling in the best of times, but this election is maybe even more unpredictable than usual, given the rise in mail-in ballots. And then there are all sorts of questions about ballots being invalidated: naked ballots, that sort of thing. Does that make this election harder to accurately poll? And how are you thinking of those added variables?
In some ways, I think the early voting helps, because people have made a choice, and they’re not going to change their minds. So when I look at the numbers, I see that almost 1.7 million Pennsylvanians have already cast their ballots. That’s more than a quarter of the ballots cast in 2016.
One of the points of uncertainty this year is that Republicans did a terrific job registering voters, and they cut significantly into the Democratic registration advantage in the state, which is down to maybe a half-million registrations between the two parties. That suggests a lot of enthusiasm for Republicans. On the other side, though, you see these early ballots being cast, and it’s like 70 percent of them have been from registered Democrats. So I think it just encourages us all to remember that there are plenty of other things we should look at besides the horse race, and we need to take those things into account. And most of us aren’t really trying to predict the election. Other people may take our data and use it as input for their models. I think that’s a big part of the story of 2016 — that many of the folks who were handicapping the race probably were a bit too certain about the outcome in their projections.
There was an interesting FiveThirtyEight article that looked at historical polling errors in different types of races: Senate, president, House. It ended up concluding that, on average, across different kinds of races over the years, the polling margin of error is about six points.
How useful is a polling industrial complex that on average has a six-point margin of error?
It’s a good question, and everybody has to ask themselves why they’re looking at the polls. The situation is actually worse than you described, because if the average margin of error is six percent, the margin of error between two estimates — that is, subtracting one candidate from another — is twice that. So what that should tell all of us is that polls are not precision instruments. They are approximations to knowledge. I think it just reinforces my case that we need to look at other things that tend to be more stable, less variable, and that can tell us something about where voters may go. I mean, in 2016, the fact that almost one in five voters didn’t like either candidate suggested that anything really could have happened at the end of the day.
Is there some part of you, just from the perspective of a polling institutionalist, that feels the stakes are higher than normal in this election? Could these results be a referendum, at least in the public’s mind, on polling writ large?
Well, look: Do we stop paying attention to the weather forecast when forecasters get it wrong? No. Because we want to have at least some idea of what might happen in the coming days. So polling is a tool, like any other, that has its own set of limitations. If the polls get it wrong, we’ll figure out why, and we’ll keep moving on. There are issues with polling that we’re all dealing with. We all feel a lot of pressure.
But I am worried. We’re all worried. As a journalist, do you like when they talk about the fake news and the fake media? Of course you don’t. Put it this way: It was pretty clear that even the Trump campaign didn’t expect to win that election, from the reporting I’ve read. So it was a surprise, and it was a surprise because there was a sizable movement of people at the end, some of whom stayed home and some of whom changed their minds. And you know, there’s not much polling can do about that.