Ask the Experts – Polling Before the Presidential Election

The Survey Practice editors prepared questions for three experts – Frank Newport of the Gallup Organization, Kathy Frankovic of CBS News, and Doug Schwartz of the Quinnipiac Poll. We asked them to relate their answers to polling in the month prior to the presidential election. We appreciate their taking the time to respond during their busiest period. We invited them to respond as they felt appropriate, to treat the questions as topics rather than direct questions, and to choose which questions to answer.

SP: Measuring the race by “likely voters” rather than registered voters

Newport – Gallup has periodically analyzed the responses of likely voters throughout this year. We report likely voters in order to give those interested in the election an indication of the difference likely voters could make, in theory, while recognizing that differential turnout among groups is not finally determinable until near the end of the campaign.

Schwartz – We began measuring “likely voters” rather than registered voters once we knew for sure who the Republican and Democratic nominees would be. Once Obama clinched the Democratic nomination in June we switched from registered voters to “likely voters”.

Frankovic – We have collected our likely voter questions since the conventions. Our current practice is to report the likely voter percentages, but to continue to describe the entire registered voter group when it comes to feelings about the candidates and positions on issues. At some point – but much closer to the election – this will very likely change.

SP: Allocating undecided voters

Newport – Gallup historically has allocated undecided voters only with the final poll taken before the election. Gallup has used several methods for allocating undecided voters: proportionate (that is, allocated in line with how decided voters split among the candidates), statistical modeling (using the characteristics of those who do make a candidate choice as independent variables, thus creating a statistical model which – when the characteristics of those who don’t make a choice are inputted into the model – allows an estimate to be calculated of the candidate choice of those undecided voters), and a method based on analysis of the historical record of elections involving incumbents and challengers.

Schwartz – We don’t allocate undecided voters. We simply report the percentage that is undecided in our final election poll.

Frankovic – Historically, we have not allocated undecideds in our final poll.
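The proportionate method Newport describes – splitting the undecided share among candidates in line with how decided voters divide – can be sketched as follows. This is an illustrative sketch only; the function name and data layout are assumptions, not Gallup's actual procedure.

```python
def allocate_proportionately(results):
    """Allocate the undecided share among candidates in proportion
    to how decided voters split (the 'proportionate' method)."""
    undecided = results.get("undecided", 0.0)
    decided = {c: p for c, p in results.items() if c != "undecided"}
    total_decided = sum(decided.values())
    if total_decided == 0:
        return decided  # nothing to allocate against
    # each candidate receives a share of the undecided pool equal to
    # their share of the decided vote
    return {c: p + undecided * p / total_decided for c, p in decided.items()}

# e.g. a 48%-44% race with 8% undecided allocates to roughly 52.2%-47.8%
final = allocate_proportionately({"A": 48.0, "B": 44.0, "undecided": 8.0})
```

The statistical-modeling approach Newport also mentions would instead predict each undecided respondent's choice from their demographic and attitudinal characteristics, which requires respondent-level data rather than topline percentages.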

SP: Race-of-interviewer or gender-of-interviewer effects

Frankovic – I worry about how we talk about possible race-of-interviewer effects. In a way, this is something we really don’t want interviewers to know about, or at least we don’t want them to suspect they may have an impact. I’m not sure what that would do to their ability to do their jobs. That said, we do pay attention to this – and have since we worked on the 1979 Philadelphia mayor’s race, when a black independent candidate ran. This spring we aggregated several polls and looked at the interaction between the race of the interviewer and the race of the respondent, and how that might affect vote choice. We did find some race-of-interviewer effect among white respondents, but it was relatively small and appeared limited. White Republicans, the oldest and youngest white voters, and whites in the South and the West showed no race-of-interviewer effect. There was some effect among white voters who were Democratic identifiers, those between the ages of 44 and 64, and those who lived in the Northeast or Midwest. However, more recent polls haven’t found a significant race-of-interviewer effect. But clearly we need to look at this on a regular basis.

SP: Measuring the effect of the debates

Newport – Gallup has historically conducted one-night reaction polls after debates, using subsamples of overall random samples based on self-reported intention to watch the debate. We have not yet, as of this writing, made final plans for this year’s debates. Given Gallup’s program of interviewing 1,000 respondents each night this year, we will definitely be able to track ballot change, self-reported impact of the debate, and the perceived “winner” of the debate using interviewing conducted the day after each debate.

Schwartz – We will poll before and after the debates but not on a schedule strictly tied to a debate schedule. We will not do any one-night reaction polls.

Frankovic – Since 2000, we have worked with Knowledge Networks during presidential debates. In the last two elections, we have looked at the impact of the debates on “uncommitted” voters – registered voters who were either undecided or who said their minds could still change. KN draws a random sample from its RDD-selected web-based panel, interviews them before the debate, and then re-interviews them immediately after the debate ends. We work with KN to ensure that their sample reflects what we have learned in our polls about this group of voters.

SP: Conducting tracking polls toward the end of the campaign

Newport – Tracking polls use essentially the same methodology as normal polls. Gallup has been conducting a nightly tracking program involving 1000 interviews a night since January 2nd of this year. The obvious advantage of continual interviewing is the ability to track changes in attitudes on a constant basis and thus do a much better job of estimating the impact of real world events (conventions, debates, high profile news events), and to have in place a mechanism for monitoring unexpected occurrences on the campaign trail. Plus, tracking allows us to aggregate very large sample sizes by combining nights of interviewing (e.g. 7000 interviews in one week’s time period) and thus to be able to provide continuing, stable, and statistically valid estimates of the attitudes of small subgroups of the population.
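The sample-size advantage Newport cites can be made concrete with the standard margin-of-error formula for a proportion. The sketch below assumes a simple random sample; a real tracking poll's weighting and design effects would widen these intervals somewhat.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# one night of 1,000 interviews vs. a week's aggregate of 7,000,
# for a candidate polling near 50%
one_night = margin_of_error(0.5, 1000)  # about +/- 3.1 points
one_week = margin_of_error(0.5, 7000)   # about +/- 1.2 points
```

Aggregating a week of interviewing cuts the margin of error by a factor of roughly sqrt(7), which is what makes stable estimates for small subgroups feasible.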

SP: Measuring “swing voters”

Schwartz – Not as such, but we will analyze the “usual suspects” and how they’ve changed over the course of the year.

Frankovic – We define “uncommitted voters” as registered voters who are either undecided or who say their minds could still change. So far this season we have tracked some of them to see how major events have changed their minds, and have twice conducted a complete re-interview of uncommitted voters to examine individual change and the reasons for it.

SP: Cell-phone-only voters and polling accuracy

Newport – There have been a number of tests of the effect of including cell phones on estimates in pre-election trial-heat polls. Most such tests appear to conclude that the effect is small. In a close election, of course, even a small difference could be important. This year, as in previous elections, Democratic operatives claim to be registering and bringing to the polls increased numbers of those who have characteristics associated with higher cell-phone penetration, including young people and minorities. It is unknown at this point whether or not this will increase the importance of including cell-phone interviews. Gallup has been including cell-phone-only households in its national poll samples since January 2, 2008.

Schwartz – The impact is there, but it is not yet large enough to make a significant difference this time around.

Frankovic – We include a sample of cell phones in our polls. I think this should be done, even though weighting may be an issue, and some of the people reached on cell phones these days may be reached in other ways.

I expect that this WILL be the last election where pollsters may survive not incorporating cell phones, where weighting by demographic characteristics will take care of many differences. But we don’t know that now.

SP: What else do you think is important for polling in the last month before the presidential election?

Newport – Two factors affect the final outcome of an election: voter shifts in candidate support, and differential turnout. The last month of pre-election polling is therefore an effort to monitor both of these factors. Voter shifts are the easier of the two to monitor, particularly to the degree it is possible to use large sample sizes. Estimating turnout, particularly if there is a very strong push by the campaigns to register new voters and get them to the polls, is the more difficult challenge. There has been much discussion this year about the particular characteristics of the candidates (in particular races) and their impact on pre-election polling. The direct impact of race on voters’ choice of a candidate, no matter how large, should have no effect on the accuracy of pre-election polling as long as the influence of race is manifested both in what the voter tells the pollster and what the voter does in the booth on Election Day. The hypothesis that voters express a different choice to a survey interviewer than the choice they actually make in the voting booth appears difficult to support with systematic data.

Schwartz – To conduct large enough samples to be able to accurately analyze subgroups.

Frankovic – Knowing enough to know when you don’t know something, and not being afraid to admit it.

The editors thank the experts for their responses. If you would like to continue the discussion, please comment below.
