An Experiment in Survey Skew and Bias | Romain - 24/03/2020
How should I phrase certain questions? What mistakes should I be avoiding? These are just two of the many questions you might ask yourself before writing a survey, in order to make sure you recruit the right respondents and avoid skewing the results.
To help answer your most pressing survey-writing questions, we conducted an experiment: we gave our panellists two surveys. The first survey was written in a way that would get skewed results, and the second in a more neutral way. Each survey was completed by 1,600 people in France, aged between 18 and 65 and representative of the population.
The results might surprise you! Let’s take a look...
AVOID: Complex vocabulary
Yes, it might seem obvious, but our experiment provided concrete evidence that you won’t get the same results if you ask about, for example, the benefits of ‘thiamine’ (scientific name) compared to asking about the benefits of ‘vitamin B1’ (commonly-used name).
We drew two conclusions from our test:
- Participants who answered the survey that used scientific names were more likely to say that ‘thiamine’ has no health benefits. The word might not be familiar to them and, to be honest, sounds a bit scary!
- Participants were twice as likely to dodge the question (answer ‘I don't know’) when asked about the benefits of ‘thiamine’.
Seeing as we like to be thorough, we tested several combinations - and came to the same conclusions each time:
Thiamine vs. Vitamin B1
Riboflavin vs. Vitamin B2
Niacin vs. Vitamin B3
Pantothenic acid vs. Vitamin B5
Pyridoxine vs. Vitamin B6
And to think... we revealed to 1,600 panellists that they actually eat all of these substances in their cereal for breakfast!
AVOID: Double negatives
‘On the whole, French people are interested in politics’ vs. ‘On the whole, it would be incorrect to say that French people are not interested in politics’.
Our first conclusion: it’s hard enough just to write a sentence using a double negative - so you can imagine the effect it has on results.
51% of participants agreed with the first statement, compared to 64% who agreed with the second - even though both statements mean exactly the same thing.
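That 13-point gap is far too big to be sampling noise. As a rough check - assuming each statement was put to the full sample of 1,600 panellists (the article doesn’t state the exact split) - a standard two-proportion z-test makes the point:

```python
from math import sqrt

def two_prop_z(p1, p2, n1, n2):
    """Two-proportion z-test: is the gap between two observed
    agreement rates larger than sampling noise would explain?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 51% vs 64% agreement, assuming ~1,600 respondents per version
z = two_prop_z(0.51, 0.64, 1600, 1600)
print(round(z, 1))  # any |z| above ~1.96 is significant at the 5% level
```

With samples this size, the z-statistic lands far beyond the usual 1.96 cut-off: the double negative genuinely shifted answers.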
AVOID: Including the answer in the question
This is particularly important when writing your screener. If the participant starts to understand what you’re looking for, they’re going to say whatever they can to make it past your screening process. That’s why it’s important to make sure you’re not giving the game away in your screener questions.
For example, if you want to survey participants who have hot chocolate for breakfast, instead of asking them a ‘Yes/No’ question, provide them with a list of drinks to choose from.
We found that by hiding the required answer among several other options, 28% of participants were recruited as hot chocolate drinkers. When we asked a ‘Yes/No’ question, however, we recruited 43%!
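The two screener designs can be sketched as data. This is a hypothetical illustration (the question wordings and drink list are ours, not the experiment’s), showing how the target answer hides in a shuffled list rather than being the whole question:

```python
import random

# Leading version - the target IS the question, so it's easy to game:
leading = {
    "question": "Do you drink hot chocolate for breakfast?",
    "options": ["Yes", "No"],
}

# Neutral version - the target hides in a shuffled list of drinks:
drinks = ["Coffee", "Tea", "Orange juice", "Hot chocolate", "Water", "Milk"]
random.shuffle(drinks)  # random order, so position doesn't give it away
neutral = {
    "question": "Which of these drinks do you usually have for breakfast?",
    "options": drinks + ["Other", "None of these"],
}

def qualifies(selected):
    """Recruit only respondents who picked the target drink."""
    return "Hot chocolate" in selected

print(qualifies(["Coffee", "Hot chocolate"]))  # True
print(qualifies(["Tea"]))                      # False
```

The respondent answering the neutral version has no idea which drink you’re screening for, so there’s nothing to game.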
Want to delve deeper into the subject of trick questions? Read our article here: article link
AVOID: Changing the order of satisfaction scales
We experimented with satisfaction scales by asking how our participants felt about a well-known coffee brand. For half the participants, the scale was presented from positive to negative (“Excellent brand” to “Terrible brand”). The other half were shown the scale from negative to positive (“Terrible brand” to “Excellent brand”).
In this case, there were no discrepancies and the results were the same.
However, within a survey, it’s best to keep all of your satisfaction scales in the same order. Reversing a scale mid-survey can be used to catch participants who aren’t taking the survey seriously, but you do risk losing genuine, highly-engaged participants in the process.
AVOID: Option lists that are too short
Imagine you’re asking participants to describe their opinion about full English breakfasts. If your response options are mostly made up of positive adjectives, then responses will naturally be more positive than if you provide an equal number of positive and negative adjectives.
Moreover, panellists want to be able to express their opinion fully and will happily resort to selecting ‘Other’ if they believe the options provided don’t reflect how they feel. This means you could end up having to code a high volume of open-ended responses.
So, when writing closed-ended questions, make sure you provide participants with a balanced list of adjectives that represents the full range of views.
AVOID: Having text-only surveys
Nowadays, it’s very easy to add images or videos to online surveys - that’s why you should be making the most of it!
Sometimes, they help illustrate a question, which allows for more accurate results.
If you’re asking a brand awareness question, for example, it helps to use logos. Some people are more visual than others.
For a brand like Nike, people don’t necessarily need to see its logo to recognise the brand. We found that brand recognition for Nike is the same with or without use of its logo. However, when it comes to ‘lesser-known’ brands like Intel, a logo can be useful. Brand recognition for Intel was at 83% with the brand’s logo, and 75% without the logo.
AVOID: Influential imagery
Imagery can sway opinion just as much as wording - and that’s exactly what we tested when we asked our panellists how they felt about French actor Gérard Depardieu. Half the panellists were shown a neutral photo of Depardieu, and the other half were shown a photo of Depardieu holding his Russian passport. The participants presented with the neutral photo felt significantly more positive about Depardieu (62% vs. 50%).
AVOID: Neglecting the random ordering option
Random response ordering is one of the oldest techniques in the survey book.
In some cases, responses do need to follow a logical or alphabetical order to help the participant (for example, if you ask participants which country they live in, it’s easier for them to find theirs in an alphabetical list).
However, when you’re asking participants to describe their perfect shoe using a list of 19 options, you’ll need to programme the options to appear in a random order each time. Otherwise, you risk the options at the top of the list being selected more often than those at the bottom.
When lists are long, participants usually can’t remember every option, and so some options will be selected less often and results can be skewed.
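Randomising per participant is a one-liner in most survey tools. As a minimal sketch (the 19 attribute names are placeholders, not the options from our experiment), each participant gets a fresh ordering of the same master list:

```python
import random

# 19 hypothetical shoe attributes (placeholder names)
options = [f"Attribute {i}" for i in range(1, 20)]

def options_for_participant(master, seed=None):
    """Return a fresh random ordering for each participant, so no
    option benefits from always sitting at the top of the list."""
    rng = random.Random(seed)
    shuffled = master[:]   # copy - never mutate the master list
    rng.shuffle(shuffled)
    return shuffled

# Two participants see the same 19 options in different orders
a = options_for_participant(options, seed=1)
b = options_for_participant(options, seed=2)
print(a[:3])
print(b[:3])
```

Every participant still sees every option; only the positions change, so position bias averages out across the sample.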
AVOID: Poorly-constructed price questions
How do you write questions about price? This area deserves a whole article to itself! Do you use an open numerical question? Do you use a single-response question with price brackets? There are several ways to go about it.
For this experiment, we tested the use of price brackets by asking our panellists how much they would be willing to pay for a pod coffee machine.
Half of the participants were presented with 6 brackets where the first option was “Less than €40”. The other half were presented with 7 brackets where the first option was “Less than €30” and the second option was “Between €30 and €39”.
Logic dictates that the percentage of participants choosing “Less than €40” should be the same as the total percentage of participants choosing “Less than €30” and “Between €30 and €39”. Surprisingly, that wasn’t the case.
20% of the first group of participants answered “Less than €40”, whereas 26% of the other group answered “Less than €30” or “Between €30 and €39”.
The conclusion? It’s best to keep your surveys as simple as possible. Some topics are already complex enough without creating technical difficulties for your participants.
Remember that your participants are under no obligation to take part, so if a survey is poorly-written or skewed, it can generate inaccurate data and unhappy panellists.