What database did you actually use to find them? More traditional means of recruiting people include random-digit telephone dialing and polling from a list of registered voters. However, the polling industry has dramatically shifted away from telephone methods and toward digital ones. Online polls pull from a wide array of sources: people may be solicited through pop-up ads, social media, or an emailed survey invitation drawn from a corporate membership list. Or was it done by more ad hoc means?
Kennedy admits this statistical assessment can be tough to make, but she says you want to ask: Did the pollster weight the data to be representative of race, age, education, gender, geography, and other demographics?
The Iowa caucuses, less than three weeks away. Six days ago, a different poll had Senator Sanders up by three points in Iowa. And at the start of the year, another poll had the candidates tying in the state. Add to that the lurking memory of the 2016 election, when the major storyline was how political polling got it wrong.
No wonder many of our listeners expressed some polling fatigue and uneasiness when sharing their own experiences of being polled. I believe that your vote should be secret, and you should never, ever tell anyone how you vote. I used to talk to them, and I used to give out some information.
But I no longer do that. And to be honest with you, I thought it might be a spam email. But despite how people may feel about political polling, the numbers suggest that polls are still working. Things get a bit trickier as polling has moved online. Earlier this week, we spoke with Doug Rivers, chief scientist with the online pollster YouGov, who says online polling still faces some major hurdles.
And will the public continue to trust the numbers? Here with me to talk about what polling looks like in 2020 and beyond is Courtney Kennedy, director of survey research at Pew Research Center, one of the authors of that recent polling report. Welcome to Science Friday. Let me let our listeners weigh in on polling. You can tweet us @scifri. Give me the ABCs of a typical poll. Take me from the beginning to the end. Ideally, you want to start a poll with what we call a frame.
You need some complete list, ideally, that has every single American on it. And so traditionally with polling, as you mentioned, we used to do phone numbers. And for decades, it worked really well to use those lists to draw a nationally random sample and contact folks. But as you alluded to, in recent years, a lot of the polling industry has moved online. And as Doug Rivers said, there is no analog online, right? You go out, you do your interviews, try to ask the right questions that are neutral and unbiased.
And then, one piece of polling that is getting a lot more attention these days is the back end, which is the stuff that we call weighting, where the pollster needs to take their data set of interviews and statistically adjust it to make it as representative as possible of the US population.
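To make the weighting idea concrete, here is a minimal sketch in Python of post-stratification on a single variable. The population shares and the tiny sample are invented for illustration; real pollsters weight across many variables at once (often by raking), so this shows only the principle, not any firm's actual procedure.

```python
# Minimal sketch of post-stratification weighting on one variable (age group).
# All numbers below are invented for illustration only.

# Hypothetical population shares, e.g. taken from census figures.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# A tiny made-up set of interviews: each respondent has an age group and an answer.
respondents = [
    {"age": "18-34", "supports": True},
    {"age": "35-64", "supports": False},
    {"age": "35-64", "supports": True},
    {"age": "65+", "supports": False},
    {"age": "65+", "supports": False},
    {"age": "65+", "supports": True},
]

# How the interviews actually broke down by age group.
counts = {}
for r in respondents:
    counts[r["age"]] = counts.get(r["age"], 0) + 1
sample_share = {group: n / len(respondents) for group, n in counts.items()}

# Each respondent's weight is population share divided by sample share, so
# over-represented groups count for less and under-represented groups for more.
weight = {group: population_share[group] / sample_share[group] for group in counts}

total = sum(weight[r["age"]] for r in respondents)
support = sum(weight[r["age"]] for r in respondents if r["supports"])

print(f"Unweighted support: {sum(r['supports'] for r in respondents) / len(respondents):.0%}")
print(f"Weighted support:   {support / total:.0%}")
```

The key point is that respondents from groups the poll over-sampled count for less than one person each, while respondents from under-sampled groups count for more.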
And that stems from the fact that if you do a census, if you interview everybody in the whole country, you have no sampling error, because you talked to everybody. But if you do a subsample, which we all do, you only interview 1,000 or 2,000 people.
By virtue of interviewing a subset of people and not everybody, you automatically have sampling error. Your one estimate could be roughly three points too high, or it could be three points too low. A lot of people look back on 2016, and they remember just feeling misled, right? But what actually happened was a bit more nuanced than that. Yes, there were problems with a lot of the state polls, especially in the upper Midwest states that flipped from being consistently Democratic to voting for Trump.
What people miss, what they forget, is that national polling in 2016 was quite good. National pollsters actually had a pretty good year. But to your question: what was off in those state polls?
I worked on a committee. We did a comprehensive report looking at this. And we found evidence mostly for two things. You have to remember, in 2016, Hillary Clinton and Donald Trump were two historically unpopular candidates. Typically, undecided voters like that wash out about evenly between the two major party candidates, but in 2016 the late deciders broke disproportionately toward Trump in those key states. The other factor was that many state polls did not weight by education, which left voters without college degrees, who favored Trump, underrepresented. Robo-polls may also have lower response rates, because there is no live person to persuade the respondent to answer.
There is also no way to prevent children from answering the survey. Lastly, the Telephone Consumer Protection Act made automated calls to cell phones without the recipient's prior consent illegal, which leaves a large population of potential respondents inaccessible to robo-polls.
The latest challenges in telephone polling come from the shift in phone usage. A growing number of citizens, especially younger citizens, use only cell phones, and their phone numbers are no longer tied to geographic areas. The Millennial generation, currently aged 21 to 37, is also more likely to text than to answer an unknown call, so it is harder to interview this demographic group.
Polling companies now must reach out to potential respondents using email and social media to ensure they have a representative group of respondents. Yet the technology required to move to the Internet and handheld devices presents further problems. Web surveys must be designed to run on a wide variety of browsers and handheld devices. Online polls cannot detect whether a person with multiple email accounts or social media profiles answers the same poll multiple times, nor can they tell when a respondent misrepresents demographics in the poll or on a social media profile used in a poll.
These factors also make it more difficult to calculate response rates or achieve a representative sample. Yet many companies are working through these difficulties, because it is necessary to reach younger demographics in order to provide accurate data. For a number of reasons, polls may not produce accurate results. Two important factors a polling company faces are timing and human nature. Unless a company conducts an exit poll, with interviewers standing at polling places on Election Day to ask voters how they voted, there is always the possibility that the poll results will be wrong.
The simplest reason is that if there is time between the poll and Election Day, a citizen might change his or her mind, lie, or choose not to vote at all. Timing is very important during elections, because surprise events can shift enough opinions to change an election result.
Of course, there are many other reasons why polls, even those not time-bound by elections or events, may be inaccurate. Created in 2003 to survey the American public on all topics, Rasmussen Reports is a relatively new entry in the polling business. Rasmussen also conducts exit polls for each national election. Polls begin with a list of carefully written questions. The questions need to be free of framing, meaning they should not be worded to lead respondents to a particular answer. For example, two questions about presidential approval can yield very different results if one simply asks whether respondents approve of the president's job performance while the other first reminds them of a recent controversy.
Similarly, the way we refer to an issue or concept can affect the way listeners perceive it. Many polling companies try to avoid leading questions, which lead respondents to select a predetermined answer, because they want to know what people really think. Some polls, however, have a different goal. Their questions are written to guarantee a specific outcome, perhaps to help a candidate get press coverage or gain momentum.
These are called push polls. Sometimes lack of knowledge affects the results of a poll. A poll to discover whether citizens support changes to the Affordable Care Act or Medicaid might first ask who these programs serve and how they are funded. Respondents who cannot answer correctly may be excluded from the poll, or their answers may be separated from the others. People may also feel social pressure to answer questions in accordance with the norms of their area or peers.
In 1982, Los Angeles mayor Tom Bradley, an African American, led in the polls in the California governor's race yet lost on Election Day. This result was nicknamed the Bradley effect, on the theory that voters who answered the poll were afraid to admit they would not vote for a black man because it would appear politically incorrect and racist.
In the 2016 presidential election, the level of support for Republican nominee Donald Trump may have been artificially low in the polls because some respondents did not want to admit they were voting for Trump.
Social pressure can also come from the interviewer: African Americans, for example, may give different responses to interviewers who are white than to interviewers who are black. In 2010, Proposition 19, which would have legalized and taxed marijuana in California, met with a new version of the Bradley effect. Nate Silver, a political blogger, noticed that polls on the marijuana proposition were inconsistent, sometimes showing the proposition would pass and other times showing it would fail. Silver compared the polls and the way they were administered, because some polling companies used a live interviewer and some used robo-calling. He then proposed that voters speaking with a live interviewer gave the socially acceptable answer that they would vote against Proposition 19, while voters interviewed by a computer felt free to be honest. The measure was defeated on Election Day. One of the newer byproducts of polling is the creation of push polls, which consist of political campaign information presented as polls. A respondent is called and asked a series of questions about his or her position or candidate selections.
In 2014, a fracking ban was placed on the ballot in a town in Texas. Fracking, which involves injecting pressurized water into drilled wells, helps energy companies collect additional gas from the earth.
It is controversial, with opponents arguing that it causes water pollution, noise pollution, and earthquakes. During the campaign, a number of local voters received a call that ostensibly polled them on how they planned to vote on the proposed fracking ban but then presented information intended to push them toward a particular answer. These techniques are not limited to issue votes; candidates have used them to attack their opponents. The purpose of a poll is to identify how a population feels about an issue or candidate.
Many polling companies and news outlets use statisticians and social scientists to design accurate and scientific polls and to reduce errors.
A scientific poll will try to create a representative and random sample to ensure the responses are similar to what the actual population of an area believes. Scientific polls also have lower margins of error, which means they better predict what the overall public or population thinks.
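As a rough sketch of where those margins of error come from (assuming a simple random sample, a 95 percent confidence level, and the worst-case split of p = 0.5), the standard approximation is:

```latex
\text{margin of error} \approx 1.96\sqrt{\frac{p(1-p)}{n}}
\qquad\Rightarrow\qquad
1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.031 \;\;\text{(about 3 points)}
```

That is why a sample of roughly 1,000 respondents is routinely described as accurate to within about plus or minus three points, and why the margin shrinks only slowly: quadrupling the sample size merely halves it. Online panels are not simple random samples, so any "margin of error" they report rests on additional modeling assumptions.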
Most polls are administered through phones, online, or via social media. Even in scientific polls, issues like timing, social pressure, lack of knowledge, and human nature can create results that do not match true public opinion.
The Ins and Outs of Polls
Ever wonder what happens behind the polls?
If the group selected is truly random, a small sample of 1,000 adults can reflect the attitudes of millions. Even if a poll is truly random, another poll conducted with the same method but different respondents can get a different result.
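A toy simulation makes that point visible. The "true" level of support below is invented; the code simply draws several independent samples of 1,000 simulated respondents from the same population to show how much the estimates wander on their own.

```python
import random

# Invented "true" level of support in the population, for illustration only.
TRUE_SUPPORT = 0.52
SAMPLE_SIZE = 1000

def run_poll() -> float:
    """Draw one simple random sample and return the observed share of supporters."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

# Three polls, identical method, different respondents.
for i in range(3):
    print(f"Poll {i + 1}: {run_poll():.1%} support")
# Typical output scatters within a few points of 52%, even though nothing
# about the population or the method changed between the polls.
```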
According to Sheldon R. Gawiser, Ph.D., every poll has a margin of error, which reflects the possible range of responses in a randomly selected group. The margin of error decreases as the number of people who respond to a poll increases. The way in which a question is worded can influence a response, though Mark Blumenthal notes that pollsters may disagree on what constitutes neutral questioning.
The order in which questions are asked can also influence responses, as can the tone of the questioner. A national poll may be skewed if its respondents hail disproportionately from one region of the country.
In earlier decades, pollsters had an easy time phoning households, Blumenthal notes, because 93 percent of homes had land-based telephones.