How effective are political polls? The Thursday before the Nov. 5 election, WRTV-Channel 6 broadcast an interview with TeleResearch pollster Jeff Lewis. Lewis had recently completed a poll of likely 7th District voters and found that Congresswoman Julia Carson's challenger Brose McVey, once behind, had now pulled ahead by 3 percentage points. "This race is turning," a solemn Lewis pronounced.
Lewis was not alone in his opinion. Libertarian candidate Andrew Horning publicly stated he was certain McVey was going to win. Indianapolis Star political writer Mary Beth Schneider wrote that McVey's message was resonating with voters and "this could be the year" that Republicans finally defeated Carson. Political columnist Brian Howey approvingly cited his partner Lewis' polling data and rated the contest a toss-up. The consensus of WFYI's Indiana Week in Review pundit panel was that it would be a very close election.

It wasn't. Carson won with 53 percent of the vote to McVey's 44 percent, a margin of over 13,000 ballots. On an election night when several countywide races were determined by little more than a thousand votes and control of the Indiana House of Representatives was decided by a 37-vote margin, the hyped Carson-McVey race turned out to be an easy win for the incumbent.

Why were so many pundits so wrong? The biggest reason is that they relied on polling data that was not worth the newsprint it was reported on. Besides Lewis' poll showing McVey's lead, The Indianapolis Star published a Page 1 story on Nov. 1, touting a poll the newspaper commissioned from Market Shares Corporation of Mt. Prospect, Ill. The Star poll showed Carson with a shrinking 1-point margin, which The Star suggested was likely to be wiped out by most of the 10 percent of undecided voters turning to McVey before Election Day.

Only one poll's numbers came close to the actual results. A survey conducted in late October by the Indiana University Public Opinion Laboratory for WISH TV-8 showed Carson with a 9 percent lead over McVey, the same margin by which she prevailed in the election. Although his results outshone those of his competitors, IUPUI political science professor and Public Opinion Lab director Brian Vargus cautions that even his numbers are often misunderstood by the public and misused by the media.
At best, he says, survey data on something as volatile as candidate choice can provide a snapshot of public opinion on the days the poll was conducted. By the time a poll is released, it is already out of date. "These are not predictions, even though everybody wants to turn them into that," Vargus says.

Vargus and others also say it is increasingly difficult to gather reliable polling data. Pollsters currently lack the ability to call cell phones to reach respondents, and Caller ID technology allows potential subjects to screen out survey calls. The resulting lower response rates were blamed for several mis-called elections throughout the nation last week.

CRAP polling

Despite the difficulties pollsters face, there are accepted best practices for the profession, set by the American Association for Public Opinion Research (AAPOR). In adherence to those guidelines, Vargus' operation extensively pre-tests its questions and trains college and graduate students to conduct interviews.

In contrast, Howey/TeleResearch poll results come from automated calls asking respondents to press phone buttons to indicate their voting intentions. The technique's low response rates and inability to screen respondents - it could be a preschooler pressing the buttons - have led to widespread criticism. Survey research expert and University of Michigan professor Michael Traugott has tagged the method as CRAP - Computerized Response Automated Polling. To many in the public opinion research field, the acronym is apt. National studies of the year 2000 election showed CRAP poll numbers varied widely from the election results. "There is no sound theoretical basis for the way in which these surveys are conducted," Traugott says.

Self-imposed mystery surrounding The Star poll makes it difficult to determine why its numbers were so different from the Carson-McVey result.
The Star does not follow the AAPOR mandate to make publicly available the details of its survey methodology, including the sample and the actual question phrasing. (Neither The Star nor Market Shares Corporation returned calls seeking comment for this article.)

One factor that likely affected the poll's response rate is Gannett Corporation's hiring of an Illinois company to do the work. Vargus says having Indiana University affiliated with his researchers - and indicated on Caller ID - almost doubles the response rate compared to studies he has conducted without the IU imprimatur.

Of course, the local pundits who predicted a close congressional election qualified their reliance on the flawed polls with the caveat that "Julia always gets out her vote." That analysis is not much more sophisticated than CRAP polling. For one, the characterization smacks of racism, carrying with it the image of Carson herding unwitting African-Americans to the polls. (In her first campaign for Congress, a local TV reporter asked Carson at a candidate forum if she handed out whiskey bottles to people who voted for her.)

More to the point, the analysis is a too-easy cover story for repeated polling flaws. Carson does have a peerless Election Day apparatus, but Republicans work hard to mobilize their core voters, too. And the African-American community struggles with low voter turnout, just as the white community does. Carson has won election four times now in a congressional district where over 70 percent of the voters are white. Her success is less an Election Day miracle than an indication that a comfortable majority of local voters - white and black - want her to represent them in Washington.

Pundits take note: The real polls - the ones conducted on the first Tuesday in November - keep proving that point.