Two of the podcasts I listen to every week are “FiveThirtyEight Politics” and “The Pollsters”. They diverge from my usual true crime and comedy genres, but they are dream podcasts for a news-junkie market researcher.
Beyond analyzing polls about politics and other topics, both shows assess and highlight potential issues with research design, including how a survey was conducted, the methodologies used, and even the placement and wording of questions. As someone who writes questionnaires and wants to ensure I’m getting unbiased and complete answers, I’ve recently found these topics on survey design and execution particularly interesting.
On an episode of “FiveThirtyEight Politics”, the host analyzed Emerson College’s Democratic presidential primary polling and its impact on candidate Pete Buttigieg. As reported, Buttigieg received zero percent support in the January 2019 poll but tracked upward by late March, when he drew 11 percent support in Iowa and a third-place ranking. The podcast considered how the poll, conducted both by phone and online, used different methodologies, and how that may have affected the results. According to “FiveThirtyEight Politics”, over half of the respondents took the poll by phone, where, unlike in the online version, the list of candidates was not randomized. Buttigieg was always the first option read over the phone, which could have led more respondents to pick his name and contributed to his jump in popularity, although he did appear to have similar support among online respondents.
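The randomization issue described above is easy to see in a short sketch. This is a hypothetical illustration, not the Emerson College instrument; the candidate names and the `present_options` function are invented for the example.

```python
import random

# Hypothetical example: why randomizing response order matters.
# These placeholder names stand in for any fixed candidate list.
CANDIDATES = ["Candidate A", "Candidate B", "Candidate C", "Candidate D"]

def present_options(randomize: bool) -> list:
    """Return the order in which options are read to one respondent."""
    options = CANDIDATES.copy()
    if randomize:
        random.shuffle(options)  # each respondent hears a different order
    return options

# Without randomization, the same name is always read first, so any
# primacy effect (picking the first option heard) accrues to that
# one candidate across every phone interview.
assert present_options(randomize=False)[0] == "Candidate A"
```

Randomizing per respondent spreads any first-position advantage evenly across all candidates rather than concentrating it on whoever happens to be listed first.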
Regardless of this example, it’s interesting to ponder whether a poll, given its general respectability and press coverage, can start a snowball rolling downhill, changing perceptions and future polling and outcomes. Once Buttigieg placed third in the Iowa poll, he received additional media attention. The details of how Emerson College conducted this survey, including the order of response options, a smaller sample size, and a larger margin of error, are all possible factors in raising voters’ awareness and fueling Buttigieg’s newfound popularity. Or he could have found other coverage naturally, leading to the same results.
In “The Pollsters”, a theoretical discussion of questionnaire design captured my attention. The subject was Pew Research’s new probe into accuracy versus efficiency in the “check all that apply” question format. According to Pew Research’s findings, data is more likely to be accurate with forced-choice answers, yet respondents can tire during long surveys and begin to click through or drop out if there are too many questions. Knowing this creates a conflict in survey design: Is it better to include the “check all that apply” instruction despite the accuracy trade-off, or to ask fewer but pointed questions, limiting the number of topics broached?
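The trade-off Pew examined can be made concrete with a toy example. The topics and response structures below are invented for illustration; this is a sketch of the two formats in general, not Pew’s actual questionnaire.

```python
# Hypothetical illustration of the two question formats discussed above.

TOPICS = ["news", "sports", "weather"]

# "Check all that apply": one question; an unchecked item is ambiguous,
# since a skipped box could mean "no" or simply "didn't consider it".
check_all_response = ["news", "sports"]

# Forced choice: one explicit yes/no per topic, so every item gets an
# answer, at the cost of more questions (and respondent fatigue).
forced_choice_response = {"news": True, "sports": True, "weather": False}

# Recovering per-topic answers from the check-all format requires
# assuming that "unchecked" means "no":
inferred = {topic: topic in check_all_response for topic in TOPICS}
```

The forced-choice structure records a deliberate answer for every topic, which is one way to think about why it tends to yield more accurate data, while the check-all format trades that certainty for a shorter survey.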
Ultimately, these topics are all a matter of details and decisions, but the crux of the matter is being aware of how those choices may affect results. Although how a survey is conducted is not typically front-page news, it can be fascinating to deconstruct surveys and consider how seemingly minor design choices might have a bigger impact than anticipated.