Research 2000

Another R2K Smoking Gun: Bargain Basement Pricing

In my previous post on Kos and Research 2000, I noted how weird it was that Kos could afford to commission dozens of campaign polls given that, by his own admission, he runs a low seven-figure operation, and the polls are likely far from his main traffic driver -- though I'm sure they don't hurt eyeballs-wise.

I put this question to a pollster, who said R2K's claimed methodology and the likely cost of doing such polls raised immediate red flags. The pollster pointed me to this massive, 2,000-person survey limited to Republicans, from back in January, commenting thusly:

Take, for instance, their large January 2010 survey aimed at proving Republicans were all kooks. They did a sample of about 2000 Republicans - a totally absurd sample size; most pollsters wouldn't in good conscience have a client pursue a survey of that size unless they had microtargeting aims and really needed a lot of subsample detail. Unless you really, really want a big sample for the smaller cells (say, you want 100 interviews from female Hispanic Republicans age 18-34), there's no reason to do a survey of that size. 1000 interviews will do for a national. Sometimes we go up to 1250 with our bigger clients who really need that level of detail on a few key subsamples. The difference in margin of error between 1000 interviews (+/- 3.1%) and 2000 (about +/- 2%) is not a huge deal - not worth spending 2x as much on a poll.

Now, even weirder, it is just of Republicans, AND it wasn't done from a listed sample. When you want to do a survey of, say, primary voters in a statewide race, a listed/voter-file sample is a totally acceptable practice because the alternative is unbelievably costly. Think about it - they didn't just call 2000 people; they randomly dialed, and turned away anyone who didn't identify as a Republican. This will crush your incidence rate (meaning the share of folks who pick up the phone who are eligible to take the survey) and send costs through the roof. We're talking at least tripling the costs.

A survey of the length of that January 2010 survey, about 25 short-ish questions plus a handful of demographics, is probably about a 10-minute questionnaire (I'm just eyeballing it, assuming an introductory statement, and guessing on the # of demos asked; I could be off by a few minutes). Fielding a 10-minute questionnaire to 1000 registered voters is going to run you in the $25-30K range. Fielding a 10-minute questionnaire to 2000 voters? Probably $45-55K. But with the crazy drop in incidence caused by the Republican screener? That survey could not have been done for less than six figures. Period.
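To put the pollster's sample-size point in concrete terms, here is a minimal sketch of the margin-of-error math, assuming simple random sampling, a 95% confidence level, and the worst-case 50/50 split (textbook assumptions on my part, not anything R2K disclosed):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 1250, 2000):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")

# n = 1000: +/- 3.1%
# n = 1250: +/- 2.8%
# n = 2000: +/- 2.2%
```

Doubling the sample from 1000 to 2000 buys you less than a point of precision while roughly doubling the interviewing bill, which is exactly the pollster's point about diminishing returns.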

Remember now that Research 2000 never claimed to be a robo-polling outfit. They claimed they did live interviews. And most polls are of likely voters, not registered voters. More screening means more cost. Going by R2K's claimed methodology, we're pretty much talking the Cadillac of polling in terms of what these surveys should have cost.

So, the question is: did Kos really pay high five figures, or low six figures, for a single poll to drive eyeballs to one or two blog posts proving Republicans are nuts? Huh?

I'm guessing no. I'm guessing R2K sold it to him for far less, say $10,000? And anyone with a rudimentary understanding of polling would have known you can't do a poll like this for that amount of money. So the question now is what this says about what Kos should have known. Is he so rich he can drop $100K on a single poll to drive a single day's news cycle -- something not even the major networks would do? Is he simply gullible? Or was he negligent in not checking what I can only guess were R2K's absurdly low price quotes against those of other live-operator pollsters?

It wasn't just (relatively) deep-pocketed new-media outlets like Daily Kos spending money on Research 2000 polls. R2K did polling for state-level liberal blogs like Blue Mass Group in the run-up to the Massachusetts special election. On January 14, R2K produced a poll showing Coakley with an 8-point lead (while other polls were showing Brown pulling ahead), and in touting the "good" news, Blue Mass Group proudly noted that "Research 2000 does live interviews, unlike robo-pollsters Rasmussen and PPP." My polling source had this response:

A simple ballot test and a handful of demographics wouldn't be very long. But even if that were only a 4-minute survey, you're still talking at least 6-8 grand for the raw interviewing costs, without any additional markup.
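For what it's worth, that "at least 6-8 grand" figure squares with a simple back-of-the-envelope calculation. The sample size and per-minute rate below are my own illustrative guesses, not numbers from R2K or Blue Mass Group:

```python
# All figures are illustrative guesses, not R2K's or Blue Mass Group's numbers.
completes = 500           # assumed sample size for a statewide ballot test
interview_minutes = 4     # per the pollster's 4-minute-survey scenario
cost_per_minute = 3.50    # assumed loaded cost per live-interviewer minute

raw_interviewing_cost = completes * interview_minutes * cost_per_minute
print(f"raw interviewing cost: ${raw_interviewing_cost:,.0f}")  # ~$7,000
```

And that ignores wasted dials, screen-outs, questionnaire design, and markup, so the real bill would only go up.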

Did a Massachusetts progressive blog pay more than $6,000 for a top-of-the-line survey when maybe a half dozen other pollsters were polling the race by that point? Really? This raises the question of what Blue Mass Group really paid. And what did Kos really pay? And if the numbers were within what seem like their modest budgets (by mainstream media standards), that in itself should have raised red flags if they did any shopping around with other pollsters.

The Kos-R2K Affair

Daily Kos has sued Research 2000, its former pollster, for fraud. On the surface, the allegations look a lot like the case Nate Silver made against Strategic Vision: in essence, when you're making up the numbers, odd biases and consistencies tend to creep in. You tend to favor certain numbers over others. The crosstabs, even on ridiculously small sub-samples, look too "clean." The report detailing the allegations is here.

The one R2K poll on a race I was working that now makes perfect "sense" in light of this new information is the California Senate poll taken two and a half weeks before the primary, which showed Tom Campbell building a 15-point lead in the GOP primary while polling on adjacent field dates showed Carly Fiorina building a 20-point lead. I recall thinking that if there had genuinely been 35 points of movement in 48 hours (absent some major cataclysmic event, which there hadn't been), that would be virtually unprecedented in the history of polling. If one were to make up a poll lead, a 15-point Campbell lead made sense looking at past movement in the polls, but not in terms of what was actually happening on the ground at that point. It all makes a lot more sense now.

A lot of folks are trying to point to the root causes of this seeming debacle (including slamming robo-polls, which I think is off-base given the accuracy of outfits like SurveyUSA and PPP), but it will be interesting to see what the coming lawsuit(s) reveal about the relationship between Daily Kos and R2K. R2K was around prior to Daily Kos, and my vague recollection is that there was nothing out of line about its polling prior to its Kos contract. I could be wrong, but their polls seemed to play it down the middle. When an R2K poll came out in a previous election year, I didn't automatically assume a Democratic skew like I would with a CBS/New York Times poll or a Newsweek poll. Yet the moment they signed up with Kos, all their results seemed to skew towards Obama and Congressional Democrats, starting with their 2008 Presidential tracking poll. Their 2010 polling was, if anything, worse, skewing several points toward Democratic Senate candidates, though their numbers in primaries seemed right, at least until the end, when they disintegrated upon close contact with actual results.

At some point when I raised this previously, it was mentioned that R2K was simply assuming a turnout model closer to Obama 2008. If so, who would be pushing them to do that? R2K? Or Kos?

Did Markos tell R2K to produce fraudulent polls showing Democrats up? Clearly not. Could R2K have simply been too eager to please their client, producing skewed results and making stuff up to boot? That seems more likely. Either way, R2K's newfound pro-Democratic lean had the effect of skewing the polling averages in a way that even Strategic Vision (which performed "better" on 538's Pollster Ratings) didn't.

Another question to me is the volume of polling that R2K produced for Kos. I have a hard time thinking of a pollster who was as prolific for an individual client as R2K was for Kos. SurveyUSA has been equally if not more prolific in past cycles -- though not so much this one -- but its clients are scattered all over the country, usually local TV affiliates. Likewise, pollsters like PPP (D), another highly regarded automated polling operation, will release polling as a promotional vehicle for themselves -- and will potentially pay for it by picking up political clients, or by selling questions on a survey otherwise intended for public release.

The bottom line is that polling, even automated polling, is expensive, especially at the volumes R2K was producing for Kos. It's hard for me to believe that Kos's polling bills wouldn't have run into the deep six figures, which seems like an awfully big chunk of his $1 million (give or take) in revenue. I'm not the expert here, but it seems to me that far more deep-pocketed media organizations haven't commissioned nearly this much polling (national networks release what, like once a month?). Perhaps the unit cost was getting to be too low, and R2K's margins were getting squeezed by their arrangement with Kos, so they simply made it up. Either way, the damage to the credibility of the polling industry, and to the conventional wisdom the polling shaped, was a fait accompli.
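As a crude sketch of the cumulative bill -- with every figure a guess of mine rather than anything from Kos or R2K -- even a steep discount off the live-interview rates quoted above lands deep in six figures:

```python
# Every figure here is a rough guess for illustration only.
weekly_national_polls = 50     # assumed roughly weekly national releases in a year
state_and_race_polls = 75      # assumed state/Senate/House polls over the cycle
discounted_unit_cost = 5_000   # assumed heavily discounted price per poll

total_bill = (weekly_national_polls + state_and_race_polls) * discounted_unit_cost
print(f"rough annual polling bill: ${total_bill:,.0f}")  # $625,000
```

Even at a per-poll price no live-interview shop could profitably match, that would eat well over half of a roughly $1 million operation's revenue.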

UPDATE: Research 2000 claimed they did live interviews, and were not robo-polling. Live interviews are naturally more expensive. Which means they must have pitched Kos on a ridiculously low cost per poll. Was this not in itself a red flag?

CT 5: GOP candidate Justin Bernier used Kos's fraudulent pollster

By now the world has heard that the principal behind the Daily Kos website is suing his old pollster for selling him doctored surveys.

Now, one is tempted to grab the popcorn and watch the top liberal blogger eat crow as he takes his pollster to court, but a Republican also hired Research 2000 to get polls promoting his agenda. And just as Kos determined his pollster was manufacturing polls to tell him what he wanted to hear, I suspect the same game was afoot here in CT 5.

In January, candidate Justin Bernier released a poll to the press with great fanfare. This poll purported to show that, surprisingly, Bernier, a first-time candidate, was more electable in the 5th District than an incumbent state senator, Sam Caligiuri.

According to results released by Maryland-based polling company Research 2000 on Jan. 10, voters in the state's 5th Congressional District favor Bernier over rival Sam Caligiuri by a margin of 36 percent to 15 percent.

Somehow, an elected official from the district's largest city would lose two to one to a first-time candidate who wasn't a statewide celebrity. Go figure. This poll simply didn't pass the smell test.

If the intent of the Research 2000 poll was to cause Republican activists to flock to Bernier over Caligiuri, it was an epic fail. Caligiuri defeated Bernier by better than 2 to 1 at the convention, leaving Bernier to soldier on to an August 10 primary. Either 200 of the 300 delegates didn't believe the poll, or the poll was accurate and the delegates were wildly unrepresentative of the communities that sent them to the convention. My nickel then was on the bad-poll theory. I've been proven right.

Perhaps Bernier's decision to primary Caligiuri was based on believing this poll. If so, he ought to reconsider what he expects to achieve by continuing a losing contest. At the very least, he ought to do what Kos did and admit he was scammed, and that he unintentionally tried to sell the press and the public a bill of goods.

The only beneficiary of this is going to be the liberal Democratic incumbent, Chris Murphy. Which also suggests that using Kos's pollster may have given the Left an opportunity to toss some disinformation into this race. When weird things happen, I ask who they help. The Research 2000 poll was intended to hurt the strongest GOP candidate in CT 5. Occam's razor, anyone?
