Posted: 10/17/2008 1:24:05 PM EDT
[#44]
Quoted:
Quoted:
Even though you're a fucking troll, this isn't that hard.
When they do polls, they "adjust" the results to make sure their sample matches what they believe voter turnout will be.
One of the tools they use to make these adjustments is called "voter identification"
One of the sources of the adjustment for "voter identification" comes from information about the NUMBER of registered voters in an area, including the DELTA in such registrations over a period of time.
Ironically, the ACORN people's piles of fraudulent registrations result in the poll people over-adjusting (in Dem favor) the voter identifications hashed against poll percentages.
This makes the polls all show Nobama way ahead of where he likely really is. The polls think there are more total Democrats going to vote than there actually are.
This is designed to give the polls an inflated lead, to dissuade the other side from bothering.
Plus the ACORN guys are now trying to actually cast VOTES using the fraudulent cards.
Ergo: Bad shit.
Now quit fucking trolling. |
I know it's hard for you to believe, but this was an honest question and I appreciate you taking the time to respond.
I took a class on polling methodology while working on my political science degree, and while I don't remember much, I do remember that this is in fact not how a lot of polling works.
The use of political party in polling questions is a subject of considerable contention in the polling community. Many feel that it skews results, so it is certainly not used in all polls.
_________________________________________________________________________________
So I just did a bit of reading on the subject and I found a really interesting article that covers this exact discussion. It says that academic researchers tend not to use party affiliation, but that many campaign pollsters do. This would explain why it was not used in my class. It's an interesting read and I thank you for bringing up this point, even though the anger wasn't really necessary.
Pre-election polling presents particular challenges. As Election Day approaches, these polls are most relevant and accurate if conducted among voters. Yet actual voters are an unknown population — one that exists only on (or, with absentees, shortly before) Election Day. Pre-election polls make their best estimate of this population. Our practice at ABC News is to develop a range of "likely voter" models, employing elements such as self-reported voter registration, intention to vote, attention to the race, past voting, age, respondents' knowledge of their polling places and political party identification. We evaluate the level of voter turnout produced by these models and diagnose differences across models when they occur.
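As an aside, a cutoff-style likely voter model built from elements like these can be sketched in a few lines of Python. This is my own illustration; the elements mirror the article's list, but the point values and the cutoff approach are assumptions, not ABC's actual model:

```python
# My own sketch of a cutoff-style likely voter model. The elements mirror
# the ones the article lists; the point values and cutoff are assumptions.
from dataclasses import dataclass

@dataclass
class Respondent:
    registered: bool           # self-reported voter registration
    intends_to_vote: int       # 0-3: definitely not .. definitely voting
    attention_to_race: int     # 0-3: none .. very close attention
    voted_last_time: bool      # past voting behavior
    knows_polling_place: bool  # knowledge of where to vote

def turnout_score(r: Respondent) -> int:
    """Simple additive score: more turnout indicators, higher score."""
    return (2 * r.registered
            + r.intends_to_vote
            + r.attention_to_race
            + 2 * r.voted_last_time
            + r.knows_polling_place)

def likely_voters(sample: list[Respondent], expected_turnout: float) -> list[Respondent]:
    """Keep the top-scoring share of the sample matching expected turnout."""
    ranked = sorted(sample, key=turnout_score, reverse=True)
    return ranked[: int(len(ranked) * expected_turnout)]
```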
The use of political party identification in likely voter models is a subject of debate among opinion researchers. It's used commonly by campaign pollsters, less so among academic researchers. After an extensive evaluation of the issue, ABC News began employing party ID as a factor in our likely voter modeling for our tracking poll in 2000, and we continue the practice in our 2004 tracking poll. (A tracking poll is a series of consecutive, one-night, stand-alone polls reported in a rolling multi-night average. Ours is conducted among 600 general population respondents per night, using a nightly mix of fresh and redialed random telephone numbers.)
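The rolling multi-night average itself is simple arithmetic; a minimal sketch, assuming a three-night window and made-up nightly numbers:

```python
# Minimal sketch of the rolling multi-night average a tracking poll reports.
# Assumed three-night window; the nightly numbers below are made up.

def rolling_average(nightly_results: list[float], window: int = 3) -> list[float]:
    """Average each night with the previous (window - 1) nights."""
    out = []
    for i in range(window - 1, len(nightly_results)):
        out.append(sum(nightly_results[i - window + 1 : i + 1]) / window)
    return out

nightly_dem_share = [49.0, 51.0, 50.0, 52.0, 48.0]  # hypothetical one-night results
print(rolling_average(nightly_dem_share))  # [50.0, 51.0, 50.0]
```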
We made a detailed presentation on our 2000 tracking poll, including an examination of the effects of party ID as a factor in modeling, at the 2001 annual meeting of the American Association for Public Opinion Research (AAPOR). It showed that party ID factoring in 2000 had essentially no effect on our estimate of vote preferences — no more than a single point on any given day.
Proponents of using party ID in likely voter modeling point out that party ID has been remarkably stable in exit polls conducted in presidential elections since 1984 — Democrats accounting for either 38 percent or 39 percent of voters, Republicans 35 percent and independents 26 percent or 27 percent. (That stability is impressive given the differing vote margins in these elections — Rep +18, Rep +8, Dem +6, Dem +9, tie.) Opponents of the practice note that party ID can and does change, and that polls measuring the dynamics of the race — rather than simply attempting to predict its outcome — need to measure and report this change, not suppress it.
Our practice is informed by the fact that, in all our polling, we see night-to-night variability in party ID that appears to represent trendless sampling variability rather than actual changes in partisan self-identification. It also appears to us that some likely voter models (although not ones that we use) may accentuate this short-term variability in party ID. This affects portrayals of the race, given the very high correlation between party ID and vote preference. Rather than reporting actual changes in opinion, these surveys instead may be reporting who's moving into and out of their likely voter models. That's meaningful if it represents true movement of potential voters into and out of the pool of presumed actual voters, but not if it only represents an artifact of the likely voter model itself. Claims that this movement is meaningful seem to be contradicted by its trendless variability and by the remarkable consistency in party ID in actual turnout in the last five presidential elections.
We do not use party ID as a factor in our pre-election polls before tracking begins. These polls, done well in advance of Election Day, are not predictive, and do not seek to model actual turnout. The shifts in allegiance they record often appear as consistent, multi-night, event-based changes, rather than trendless, night-to-night variability. We noted and reported, for example, shifts in party ID around the 2004 conventions — more Democratic self-identification after the Democratic National Convention, more Republican self-identification after that party's convention.
Tracking polls, done in the final weeks of the campaign, are seen as more predictive. They need to sharpen their best estimate of actual likely voters, and not let the accuracy of their portrayal of the race fall victim to sampling variability or model-induced fluctuations.
Keeping in mind that actual change can occur, but also that random movement can distort, our solution is to compute an average of party ID as measured in our nightly tracking poll, and party ID as measured in recent presidential elections. This averaging approach allows us to pick up real movement in party ID while constraining random variability. It reflects our conclusion that, on one hand, the stability in party ID in the last five elections is persuasive, but not necessarily fully predictive; and, on the other, that some variability in party ID in tracking polls may be real, but that it also can reflect sampling or modeling variability, rather than true movement in voter attitudes.
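A sketch of what that averaging could look like in practice. The 50/50 blend, the target figures, and the simple per-party reweighting are my assumptions for illustration, not ABC's published procedure:

```python
# Blend tonight's measured party ID with the long-run exit poll figures,
# then derive per-party weights toward the blend. The 50/50 blend and the
# weighting scheme are assumptions for illustration only.

HISTORICAL = {"Dem": 0.38, "Rep": 0.35, "Ind": 0.27}  # exit polls since 1984

def blended_party_id(tonight: dict[str, float], alpha: float = 0.5) -> dict[str, float]:
    """Average the nightly party ID estimate with the historical baseline."""
    return {p: alpha * tonight[p] + (1 - alpha) * HISTORICAL[p] for p in HISTORICAL}

def party_weights(tonight: dict[str, float]) -> dict[str, float]:
    """Per-party weights that move the sample toward the blended target."""
    target = blended_party_id(tonight)
    return {p: target[p] / tonight[p] for p in target}

tonight = {"Dem": 0.42, "Rep": 0.33, "Ind": 0.25}  # a noisy one-night reading
print(party_weights(tonight))
# Dem weight = (0.5*0.42 + 0.5*0.38) / 0.42 = 0.95, damping the one-night spike
```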
Some critics of using party ID from exit polls in likely voter modeling point out that it's the equivalent of weighting a poll to a poll, which increases sampling error. It still, however, may improve the estimate. Exit polls are based on much larger samples than tracking polls — at least 13,000 voters in each election since 1992 — with correspondingly low margins of sampling error, less than one percentage point. Exit polls also are based on samples of actual voters, rather than likely voter estimates. And they're post-stratified to actual vote, which is highly correlated with party ID. All these increase the reliability of exit poll estimates. Opponents of using party ID in modeling also note that it introduces judgment into the process. However, judgment is required across all components of likely voter modeling — what elements to include, how to compute them, what turnout to anticipate.
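The sampling-error figures are easy to check with the standard margin-of-error formula for a simple random sample (worst case p = 0.5, 95 percent confidence):

```python
# Margin of error at 95% confidence for a simple random sample,
# using the worst case p = 0.5.
import math

def margin_of_error(n: int, p: float = 0.5) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(13_000):.3%}")  # ~0.86%, under one point, as the article says
print(f"{margin_of_error(600):.3%}")     # ~4.0% for a single night of tracking
```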
While our modeling is intended to produce the best possible estimate, we reject the myth of pinpoint accuracy in pre-election polls. A good final poll, rigorously conducted and with accurate modeling, should come within a few points of each candidate's actual support. Any more indicates a problem, but any closer is the luck of the draw. Winning the horse-race lottery is not sufficient grounds for a substantive evaluation of the quality of any pre-election poll.
Indeed, while good polling produces the best available estimate of the candidates' standing at any point in time, that is not the sole or even the main reason ABC News engages in pre-election polling. We conduct these surveys as part of our effort to cover the election fully and well, by independently measuring the concerns and interests of likely voters and voter groups, and reporting how these inform their decisions in the deliberative process under way.
Response Rates
Response rates are a complex issue. Rates are computed for "contact," that is, the number of households reached out of total telephone numbers dialed (excluding an estimate of nonworking and business numbers); and "cooperation," the number of individuals who complete interviews out of total households reached. The two together produce the "response rate." There is no single, agreed-upon means of calculating response rates (including, for example, how to estimate nonworking and business numbers).
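The arithmetic, with hypothetical numbers:

```python
# Contact rate times cooperation rate gives the response rate.
# All figures here are hypothetical.

dialed = 10_000
nonworking_or_business = 3_000   # an estimate; how to estimate this is itself disputed
households_reached = 4_200
completed_interviews = 2_100

contact_rate = households_reached / (dialed - nonworking_or_business)  # 0.60
cooperation_rate = completed_interviews / households_reached           # 0.50
response_rate = contact_rate * cooperation_rate                        # 0.30

print(f"contact {contact_rate:.0%}, cooperation {cooperation_rate:.0%}, "
      f"response {response_rate:.0%}")
```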
Even given a probability sample, it cannot be assumed that a higher response rate ensures greater data integrity. Research over many years, including a variety of studies reported at the annual meeting of AAPOR in May 2003, has found no significant biases as a result of response rate differences. As far back as 1981, in "Questions & Answers in Attitude Surveys," Professor Howard Schuman of the University of Michigan, describing two samples with different response rates but similar results, reported, "Apparently the answers and associations we investigate are largely unrelated to factors affecting these response rate differences" (p. 332).
In spring 2003 ABC News and the Washington Post produced detailed sample dispositions for five randomly selected ABC/Post surveys at the request of Professor Jon Krosnick, then of Ohio State University, and his associates for their use in a study of response rate differences. The cooperation rate calculations produced by Krosnick's team for these five surveys ranged from 43 to 62 percent, averaging 52 percent. The response rate calculations produced by Krosnick's team ranged from 25 to 32 percent based on what the AAPOR describes as a "very conservative" estimate of the number of business and nonworking numbers in the sample; it would be 31 to 42 percent based on a less conservative estimate reported in the June 2000 issue of Public Opinion Quarterly. The difference underscores one of the many factors that make the issue so complex, and response rate comparisons so tenuous. |
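A toy calculation shows why the estimate of nonworking and business numbers moves the response rate so much (the figures are hypothetical, not Krosnick's):

```python
# The response rate depends on how many dialed numbers you write off as
# nonworking or business. Hypothetical figures, chosen only to mirror the
# "very conservative" vs "less conservative" spread described above.

completes = 1_000
dialed = 10_000

for label, nonworking_estimate in [("very conservative", 1_500),
                                   ("less conservative", 3_500)]:
    eligible = dialed - nonworking_estimate
    print(f"{label}: {completes / eligible:.0%} response rate")
# very conservative: 12% response rate
# less conservative: 15% response rate
```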
What this shows is that most pollsters weight to the stable Democrat-versus-Republican split that has held since 1984, not to a fresh registration count that ACORN could influence. It really takes the wind out of the sails of your argument, I think.
|
Please, go back to DU. Tell them you tried, but it didn't take.