3,000+ voting intention surveys
Like many others around the world, I have been watching with interest the democratic process in the United States of America. One of the most influential and watched websites is FiveThirtyEight.com, headed by Nate Silver, author of The Signal and the Noise, an excellent popularisation of the craft of statistical time series forecasting. I have no intention of adding substantive commentary to the crowded space of analysis of US voting behaviour (some of which, and not just from FiveThirtyEight, is extremely high quality BTW), but I wanted at least to poke around the data. In particular I’m interested in who is doing all these polls, and what FiveThirtyEight does to the numbers before putting them into their forecasting models.
In pale grey text at the bottom of the election prediction page, FiveThirtyEight have a link to “Download CSV of polls”.
There are more than 9,000 rows (at the time of writing) in this spreadsheet, but a quick look shows that there are only 3,067 unique values of the `poll_id` field. Each poll is listed three times in this dataset: once each for “polls-only”, “polls-plus” and “now-cast”. Here’s R code for downloading the data and a first quick look at it:
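(The URL below is the one behind that “Download CSV of polls” link at the time of writing, and the column names are as they appear in that file – both may change.)

```r
library(dplyr)
library(readr)

# download the poll-level data linked from FiveThirtyEight's forecast page
polls <- read_csv("http://projects.fivethirtyeight.com/general-model/president_general_polls_2016.csv")

nrow(polls)                    # 9,000+ rows
length(unique(polls$poll_id))  # but only 3,067 unique polls

# each poll appears once for each of the three forecast flavours
count(polls, type)             # "now-cast", "polls-only", "polls-plus"
```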
Poll weight?
The polling data has a `poll_wt` field. At first I thought this would estimate the value of each poll based on its provider’s quality grade (they are all rated from D to A+) and sample size, but it turns out to be more complicated than that, and to be based largely on how long before the election the data was collected. The values for polls-only and polls-plus (both of which forecast the election on its actual date) are identical; the now-cast weights are slightly higher. Sample size and pollster grade also contribute to the determination of `poll_wt`. Many of the older surveys have a `poll_wt` of pretty much zero, meaning they are not being used at all in estimating current predictions. All this can be seen in the graphic below.
That graphic can be recreated with code along these lines.
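A sketch of the approach – the `mdy()` date parsing and the exact aesthetic mappings are assumptions, with column names as in the downloaded CSV:

```r
library(ggplot2)
library(lubridate)

polls %>%
  mutate(enddate = mdy(enddate)) %>%   # fieldwork end date, month/day/year
  ggplot(aes(x = enddate, y = poll_wt, colour = grade, size = samplesize)) +
  geom_point(alpha = 0.3) +
  facet_wrap(~type) +                  # polls-only / polls-plus / now-cast
  labs(x = "End of polling fieldwork",
       y = "Weight in FiveThirtyEight forecasting models",
       colour = "Pollster grade", size = "Sample size")
```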
For the rest of this post I’ll be focusing on the “polls-only” data.
Raw versus adjusted?
Each poll has two sets of percentages for each of four presidential candidates (Clinton, Trump, McMullin and Johnson – but McMullin is often missing). The two sets are labelled “raw” and “adj”. My understanding of the FiveThirtyEight approach is that the “adj” value reflects calibration for historical statistical bias of the individual polls (ie if they consistently have overestimated party X in the past, their reported proportion for party X is adjusted downwards). This is perfectly sound and standard, and it’s good that both the adjusted and unadjusted data are made available. Neither the raw nor adjusted series add up to 100, presumably because of undecided or otherwise non-informative respondents.
Here is a comparison of the raw and adjusted intended vote for Clinton:
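A sketch of how to draw it, filtering down to the polls-only rows (the raw and adjusted figures sit in the `rawpoll_clinton` and `adjpoll_clinton` columns):

```r
polls %>%
  filter(type == "polls-only") %>%
  ggplot(aes(x = rawpoll_clinton, y = adjpoll_clinton)) +
  geom_abline(intercept = 0, slope = 1, colour = "grey50") +  # parity line
  geom_point(alpha = 0.2) +
  labs(x = "Raw intended vote for Clinton (%)",
       y = "FiveThirtyEight-adjusted intended vote for Clinton (%)")
```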
The diagonal line shows parity – points on the line represent surveys where the adjusted value is exactly the same as raw. We see that much of the adjustment for Clinton is upwards. Before any conclusions are drawn from that, let’s check in on the same graphic for Trump:
We see that Trump’s vote also gets a share of upwards adjustment. Before leaving this, let’s look at the distribution of adjustment factors for them both:
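Here I take the adjustment factor to be the adjusted figure divided by the raw one, so a value of one means no change (a sketch):

```r
library(tidyr)

polls %>%
  filter(type == "polls-only") %>%
  transmute(Clinton = adjpoll_clinton / rawpoll_clinton,
            Trump   = adjpoll_trump   / rawpoll_trump) %>%
  gather(candidate, adjustment_factor) %>%
  ggplot(aes(x = adjustment_factor, colour = candidate)) +
  geom_density() +
  geom_vline(xintercept = 1, linetype = 2) +   # no-adjustment reference
  labs(x = "Adjustment factor (adjusted / raw)")
```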
For both candidates, the most common adjustment is actually very slightly downwards (ie just below 1). Clinton’s estimated intended votes do get adjusted up a little more than do Trump’s. With adjustments based on historical relationships between polls and results, this is the sort of thing to be uncertain about in this far-from-usual election year; but FiveThirtyEight only have history to go on. Certainly in the methodological pieces on their website like this one, they pay all due respect to this aspect of uncertainty.
Trends over time
Note that in the graphics above I’m ignoring the regional element of the data – some of the surveys target the whole USA, while others cover specific locations, ranging from 14 polls for the District of Columbia to 109 for the key state of Florida. This is the main reason for the large variance in estimated intended vote for Clinton when looking at all the polls together.
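Those counts come from a simple tally (a sketch, assuming national polls are coded as “U.S.” in the `state` column):

```r
# number of polls-only rows by target population
polls %>%
  filter(type == "polls-only") %>%
  count(state, sort = TRUE)
```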
More interesting is seeing trends in voting intention over time. In the media, these are usually shown as lovely clean time series charts with a single, simple line for each candidate. The reality of the data is much messier of course. Here’s the estimated intended vote for both Clinton and Trump, from just the surveys aiming to estimate voting behaviour of the whole U.S.A., illustrating the raw versus adjusted data:
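A sketch of that graphic, again assuming national polls are coded as “U.S.”:

```r
library(tidyr)
library(lubridate)

polls %>%
  filter(type == "polls-only", state == "U.S.") %>%
  mutate(enddate = mdy(enddate)) %>%
  select(enddate, rawpoll_clinton, adjpoll_clinton,
         rawpoll_trump, adjpoll_trump) %>%
  gather(variable, intended_vote, -enddate) %>%
  separate(variable, c("series", "candidate")) %>%  # eg "rawpoll" + "clinton"
  ggplot(aes(x = enddate, y = intended_vote)) +
  geom_point(alpha = 0.2) +
  geom_smooth(se = FALSE) +                         # pooled smoothed trend
  facet_grid(candidate ~ series) +
  labs(x = "End of fieldwork", y = "Intended vote (%)")
```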
I like this graphic because it illustrates several things:
- the big range of individual survey results. All these surveys are trying to measure the same thing – intended vote in the same election, from the same population. Nothing better illustrates the folly of paying attention only to individual polls than looking at this spread. Pooling them – which is implicitly done by the smoothed blue line – is the way to go.
- one result of the adjustment process is to reduce the variance in survey results. For both Clinton and Trump, the adjusted poll results are tighter than the raw data.
- the basic story – Clinton is doing better than Trump – isn’t changed by the adjustment process.
- a cluster of recent large-sample, “B”-grade surveys have particularly low raw values for both Clinton and Trump, and both are adjusted upwards. Checking the data, these turn out to be “Google Consumer Surveys”, which are apparently judged by FiveThirtyEight to materially understate the probable vote for major party candidates and overstate the chances of Johnson (Johnson’s percentages in these surveys are typically adjusted down by about three percentage points by FiveThirtyEight).
It’s useful to think of each pollster’s data as a group of longitudinal results. This suggests a variant of our last graphic, the standard longitudinal spaghetti plot:
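A sketch of that spaghetti plot, one line per pollster, limited as before to the national polls-only rows:

```r
polls %>%
  filter(type == "polls-only", state == "U.S.") %>%
  mutate(enddate = mdy(enddate)) %>%
  ggplot(aes(x = enddate, y = adjpoll_clinton, group = pollster)) +
  geom_line(alpha = 0.3) +    # join each pollster's successive results
  geom_point(alpha = 0.3) +
  labs(x = "End of fieldwork",
       y = "Adjusted intended vote for Clinton (%)")
```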
Who are all these pollsters?
Finally, I wanted a graphic that showed the relative influence of the various pollsters on FiveThirtyEight’s polling forecasts. This takes me back to the `poll_wt` variable.
Unfortunately, visualising all 180 pollsters is a non-trivial problem. After trialling various ways of showing pollster grade, sample size, and total weight simultaneously, I fell back on a straightforward wordcloud:
It could definitely be improved on with some imagination.
Survey Monkey polls are the most influential in the data set despite their relatively low grade (C-) because of the sheer number of them – at least three surveys in every state so far plus 30 national surveys. Ipsos, Google and Survey Monkey between them have surveyed 1.75 million people so far in this campaign.
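Totals like those can be checked with a quick summary (a sketch; `samplesize` is the CSV’s respondents-per-poll column):

```r
polls %>%
  filter(type == "polls-only") %>%
  group_by(pollster) %>%
  summarise(polls        = n(),
            total_sample = sum(samplesize, na.rm = TRUE)) %>%
  arrange(desc(total_sample))
```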
Final code snippet – creating the wordcloud:
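A sketch using the wordcloud package, sizing each pollster by its total `poll_wt` across the polls-only rows:

```r
library(wordcloud)

pollster_weights <- polls %>%
  filter(type == "polls-only") %>%
  group_by(pollster) %>%
  summarise(total_weight = sum(poll_wt))

set.seed(123)  # the layout is random; fix the seed for reproducibility
with(pollster_weights,
     wordcloud(words = pollster, freq = total_weight,
               min.freq = 0, max.words = 100,
               scale = c(4, 0.5), random.order = FALSE))
```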