What makes a bad question?
A badly designed question is one that has one or more of the following characteristics:
- Ambiguous wording
- Two or more questions stuck together
- Wording that makes it too obvious what answer the researchers want people to provide
- Answer options that do not allow people to accurately express what they think, feel or have done.
Examples of each of these faults are shown below.
Ambiguous questions
Q. In the last six months, how often did you eat tomatoes?
☐ Once a week
☐ Twice a week
☐ Three or more times a week
☐ Don’t know
Apart from the near impossibility of accurately recalling how often you ate tomatoes over six months, there is another major problem. What is meant by ‘eat tomatoes’? Eating a salad tomato would probably count, but what about eating a slice of pizza, or a portion of pasta with tomato sauce, or a plate of stew that contains chopped tomatoes? What about a dollop of ketchup?
This is an example of a question where the definitions are difficult or impossible to understand. What counts as ‘eating tomatoes’ to you may not be the same as what it means to me. Undoubtedly the person who wrote the question knew what they meant. But there is a very high probability that those trying to answer it won’t, or worse still, will assume the key term means something different from what was intended.
It is vital that you use language which is clear and unambiguous for those providing the answers. Terms such as ‘interactive’, ‘workshop’, ‘public engagement’ and even ‘learning’ are ones we use in our work but which your audience members probably don’t use, or use in different ways. Whenever possible, pilot your questions with a few members of the public – even friends and family.
Double-barrelled questions
Q. Do you think your / your colleagues’ salary reflects the experience needed or work involved?
☐ Yes
☐ No
☐ Don’t know
This is at least two questions stuck together – possibly four. Think of it like this. If I tick the ‘yes’ box, do I mean:
- Yes - my salary reflects the experience I need
- Yes - my colleagues’ salaries reflect the experience they need
- Yes - both my salary and that of my colleagues reflect the experience needed to do our jobs
It is impossible to know which of these three answers my ‘yes’ reflects. Any data collected would be useless. And ‘experience needed’ may not be the same as ‘the work involved’. If they are different concepts then we actually have four questions stuck together.
Questions that contain terms such as ‘or’, ‘and’ or ‘/’ are probably double-barrelled questions. Break them up into their constituent questions and present them one at a time.
By the way – this is another example of an ambiguous question, because what does the term “colleagues” mean? Is it everyone who works for your employer, or just the people who work in the same department, or in the same office, or at the same pay grade, or something else? The person who wrote the question may have known, but nobody else can be sure.
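If you have a long list of draft questions, you can screen them automatically for these marker terms before piloting. Below is a minimal Python sketch; the marker pattern and the `flag_double_barrelled` helper are illustrative assumptions, and a match is only a prompt to re-read the question, since words like ‘or’ also appear in perfectly good questions.

```python
import re

# Tokens that often signal a double-barrelled question. Illustrative only:
# a match means the question deserves a second read, not that it is broken.
DOUBLE_BARREL_MARKERS = re.compile(r"\b(and|or)\b|[&/]", re.IGNORECASE)

def flag_double_barrelled(questions):
    """Return the questions containing 'and', 'or', '&' or '/'."""
    return [q for q in questions if DOUBLE_BARREL_MARKERS.search(q)]

survey = [
    "Do you think your / your colleagues' salary reflects the experience needed or work involved?",
    "How long have you worked for your current employer?",
]
for question in flag_double_barrelled(survey):
    print("Possible double-barrelled question:", question)
```

Only the first question is flagged; the second contains none of the marker tokens, so it passes straight through.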
Leading questions
Q. Taking all the factors into account, including the commitment to reduce noise, a willingness to cap overall flight movements, a plan to increase jobs and to increase the current operating hours, do you support the airport’s proposals?
☐ Yes
☐ No
☐ Don’t know
It is not difficult to guess what answer the people conducting the research would like you to provide. This is an egregious example of a loaded question where – intentionally or unintentionally – the person writing the question is strongly suggesting which answer to provide. You need to avoid loaded questions because the people you interview will be trying to guess what answer you are hoping to hear, and will be very wary of giving what they think you consider ‘wrong’ answers. For more on how to tackle this problem see How to get thoughtful, honest answers to your questions.
Questions with the wrong answer options
Q. How would you rate your overall flight experience?
☐ Excellent
☐ Very good
☐ Good
☐ Fair
☐ OK
What box would you tick if you had a below-average experience, let alone a terrible one? This is a surprisingly common fault – sometimes unintentional, sometimes done on purpose.
When asking a rating scale question you must provide as many options for critical responses as you offer for complimentary ones. People responding to your questions must have the opportunity to be mildly or very critical. Merely providing options such as ‘OK’ or ‘Poor’ is nowhere near sufficient. A balanced version of the scale above might run: Excellent / Good / Fair / Poor / Terrible.
Another type of badly designed multiple choice question is one that fails to provide options that relate to what people actually do.
For example:
Q. How often do you eat/drink at the Gallery’s café?
☐ Weekly ☐ Monthly ☐ Twice a year ☐ Annually
How would someone answer this question if they had not visited the gallery for three years, or five, or ten, or had never visited before? It is likely most visitors to this art gallery would not be able to answer this question.
A third type of badly designed multiple choice question is one where the answer options do not match the questions being asked. In the example shown below, teachers bringing classes of students to a science centre were asked:
Q. Please rate the following aspects of the workshop using the scale, 5 being “the best”, 0 “the worst”.
Were the pupils engaged?
0 1 2 3 4 5
Do you think the activity worked well?
0 1 2 3 4 5
Was the workshop the right length of time for your pupils?
0 1 2 3 4 5
Please rate this workshop
0 1 2 3 4 5
The options on the rating scale do not make sense when compared to the questions being asked. For example, how could ‘the best’ or ‘the worst’ align with questions such as ‘Were the pupils engaged?’ or ‘Do you think the activity worked well?’ These should have been asked with yes / no / don’t know options, or with rating scale options such as strongly agree / agree / disagree / strongly disagree.
The question about the length of the workshop is particularly poorly designed. It does not provide a way for teachers to distinguish between the workshop being too long or too short. How would you know which of these they meant if they selected options 0–3? This question should have been asked in the form: ‘Was the duration of the workshop: too long; about right; too short?’
Of the four questions, only the last one – ‘please rate this workshop’ – aligns with the rating options. But even this question is deeply ambiguous: ‘the best’ or ‘the worst’ compared to what?
What does a good question look like?
A well designed question is unambiguous, using clear, simple language; allows the full range of experiences, opinions and feelings to be expressed; covers just one topic; and does not suggest which answer to choose.
When you design questions for surveys, interviews, focus groups and self-completion questionnaires, use the following check-list:
- Are the words you use as unambiguous as possible – will the people answering the question interpret the words in the same way you do? Are you inadvertently using jargon that your audience might misinterpret? Have you piloted your questions with people not working on your project?
- Are you inadvertently asking two (or more) questions stuck together? Word-search your questions for ‘and’, ‘&’, ‘or’ and ‘/’.
- Does the wording of your question inadvertently suggest what answer to give?
- Are the rating scales balanced, with equal numbers of positive and negative options? (A quick automated check is sketched after this list.)
- Do the answer options in multiple choice and rating scale questions cover the full range of possible experiences / opinions within your audience? NB this may include adding a neutral option in the middle of a scale and a ‘don’t know’ / ‘doesn’t apply to me’ option at the right-hand end.
- Do the options on the rating scale make sense when compared to the questions being asked?
- Are the answer options on multiple choice questions distinct from one another? Check that they do not overlap, leaving people unsure which to select.
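If you keep your scales in a script or spreadsheet, the balance check in the list above can also be automated. Here is a minimal Python sketch, assuming each scale is an ordered list of labels; the `is_balanced` helper and the positive/negative word sets are illustrative assumptions you would extend to match the wording you actually use.

```python
# Illustrative sentiment sets - extend these to cover the labels you use.
POSITIVE = {"excellent", "very good", "good", "strongly agree", "agree"}
NEGATIVE = {"terrible", "very poor", "poor", "strongly disagree", "disagree"}

def is_balanced(scale):
    """True when a scale offers as many negative labels as positive ones."""
    labels = [label.lower() for label in scale]
    positives = sum(label in POSITIVE for label in labels)
    negatives = sum(label in NEGATIVE for label in labels)
    return positives == negatives

# The flight-experience scale from earlier: three positive options, none negative.
print(is_balanced(["Excellent", "Very good", "Good", "Fair", "OK"]))   # False
# The balanced alternative suggested above.
print(is_balanced(["Excellent", "Good", "Fair", "Poor", "Terrible"]))  # True
```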