Testing questionnaires
Ross Woods. Rev. 2018, '23
All questionnaires need testing with people before you use them widely. You might understand your questions perfectly, but you can't know how others might interpret them. Time spent in testing is well-spent. Imagine the frustration of having to start over from scratch because your questionnaire was found to have a serious flaw late in the fieldwork.
Expect to be surprised: some people will not understand even questions that seem perfectly obvious to you. Then, when you ask why, they give the most logical and obvious interpretations, ones that had never occurred to you. Your questions could:
- use unfamiliar terms
- sound complicated
- sound vague
- sound unfamiliar or foreign (especially if you're using another dialect or language)
- encompass more than one issue
- make people feel embarrassed, frustrated, or stupid.
Other factors
- Questionnaires with closed questions need thorough, extensive testing because respondents must choose between the answers you provide. Consequently, your list of answers must allow all possibilities. Don't even think of using the questionnaire until you know it works very well.
- Questionnaires with open questions are much more flexible, because you tend to use them as a basis for interviews to which you might spontaneously add follow-up questions. It might even be possible to let the questionnaire evolve during fieldwork so that you can follow up on emerging trends.
- Have you given incentives, such as a small payment or a ticket in a draw?
- Have you given disincentives? For example, many respondents would be reluctant to buy a postage stamp and then mail the questionnaire.
- What is a realistic response rate for your proposed kind of delivery? Typical response rates vary greatly, from under 2% to nearly 100%, depending on how you deliver the questionnaire. For example, paper surveys delivered by post as junk mail have a low response rate. However, if everyone present at a meeting fills out the questionnaire and hands it in on the spot, most people do. But if they can take it home and bring it back next week, many never return it.
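The arithmetic behind a response rate is simply completed returns divided by questionnaires distributed. A minimal sketch in Python, using invented figures for the three delivery methods above (the numbers are illustrative assumptions, not data):

```python
def response_rate(returned: int, distributed: int) -> float:
    """Response rate as a percentage of questionnaires distributed."""
    if distributed <= 0:
        raise ValueError("distributed must be positive")
    return 100.0 * returned / distributed

# Invented figures for three delivery methods:
print(response_rate(12, 800))  # mailed as junk mail -> 1.5
print(response_rate(47, 50))   # filled in during a meeting -> 94.0
print(response_rate(18, 50))   # taken home to return later -> 36.0
```

Knowing the likely rate for your delivery method tells you how many questionnaires to distribute to get the number of responses you need.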
Field-testing steps
After proof-reading, test your first draft with colleagues to eliminate the most obvious glitches.
Then do your testing with members of the target population who have not yet seen the materials or been asked your questions in any other form. People who already know the subject matter or have seen previous drafts won't be tripped up as if they were doing the questionnaire from scratch.
Note how much extra help you give people. Even an unwitting use of body language or perceptible attitude may alter people's responses. Don't help people if they must be able to do the questionnaire with no help at all.
If possible, observe them for non-verbal clues, anything that makes them hesitate because they are confused. Obviously, you will also ask for verbal feedback, but the non-verbal feedback can be more useful. If something is unclear, some will stop and re-read it, which is quite observable. In some cases, they will be clearly frustrated. Others will just decide to skip that bit.
The signs of confusion will probably be innocent questions from respondents (not intended to signal confusion, but revealing it) and re-reading things to figure out what the materials say. Its likely sources are:
- confusing language
- unexplained assumptions
- language, literacy or numeracy difficulties
- out-of-sequence explanations
- not enough instructions on what to do
- confusing procedures
- too much or too little information
- incorrect information
- boredom: lack of options
- boredom: activities that are too simple for the target population
- frustration: activities that are too difficult for the target population.
Check for these things:
- Do respondents like to respond in writing? Some demographic groups won't fill in a form but will be very expansive in an interview.
- Is the length acceptable to respondents? Or is it too long? Do people get fed up before they finish?
- Is the language easy to understand? Are questions short and simple?
- Have you asked only one thing in each question?
- Does each question have only one meaning?
- Do any questions seem to be trick questions where you're fishing for another answer?
“How much is a one-dollar candy?”
(The answer is so obvious that it can be frustrating. Some people distrust plain, simple questions and presume you have a hidden agenda.)
- Is it unintrusive?
“What was your income in the last financial year?” (This question is too intrusive for most people.)
- Does it give respondents reason not to believe your promise of confidentiality? (This is especially important if the questionnaire has intrusive questions.)
- Are there reasons why someone would rather lie? For example, they might want to cast themselves in a more positive light.
- Have you covered all options? For example:
“How many noses do you have? Mark one of these options: 2. 3. 4.” (Respondents have only one nose.)
- Do any questions hold unwelcome assumptions that force people into uncomfortable choices?
“Have you stopped beating your wife? Yes. No.”
(Whether you answer yes or no, it still implies that you have been beating your wife.)
“How long have you been an ax murderer? Mark one of these options: a. Less than one year. b. More than one year.”
(This presumes that you are an ax murderer.)
“Would you rather eat a brown slug or a gray slug?”
(I don't want to eat slugs of any color.)
Time how long respondents take. Most should be able to complete a written survey questionnaire in 15-30 minutes; some take less.
Corrections
Then make corrections and test your questionnaire again on a new group of people from the target population who have never seen the questions, and collate their suggestions for improvement. You need different people each time because anyone who has already seen the questionnaire can no longer give a first impression of the meaning of the questions. If necessary, repeat this kind of test more widely with further new groups from the target population.
Decide when to stop. You can keep improving forever, so stop when your questionnaire is good enough, that is, when the incoming suggestions are trivial and you are satisfied with it.
A note on terminology
Terminology is not standardized: some researchers use the terms field test and pilot study as synonyms. Others differentiate them as follows:
Field test: expert review
Pilot study: tested with members of the target population.