You agree that closed and leading questions are bad, don’t you…?

Asking a leading, loaded or closed question is undesirable when it comes to researching how your customers really behave. These kinds of questions (often asked unwittingly) risk giving you the answers that you or your development team want to hear, while masking the real, relevant and reliable insights you can actually work with to improve the design. Leading the witness is a bad idea, and you should do all you can to avoid it.

At Bunnyfoot we frequently use ‘formative’ face-to-face usability testing, where we observe people carrying out realistic tasks whilst they use something (like a website, app or prototype). This is used to detect usability issues within a design, and can also be a way of exploring participants’ needs and behaviours.

Tests are carried out one-to-one, with the moderator and the test participant in the same room. Sometimes, to clarify what has happened or to get more insight into a need or issue, the moderator will ask the participant open questions about their experience. In the middle of a testing session the moderator needs to take care not to introduce leading, loaded or closed questions – it takes experience and practice. Below are some tips to help if you are considering carrying out your own research, or to help you spot whether your research is being carried out to a good standard.

How to spot if you are asking a leading, loaded or closed question…

Leading or loaded questions push someone towards a particular answer. They either include the answer, point the listener in a certain direction, or include some form of carrot or stick to steer them to the ‘right’ answer.

Leading or loaded questions can appear in a variety of different ways:

  • Assumptive questions include an assumption about the outcome, for example ‘How late do you think the item will be delivered?’ This question assumes that the item is not going to be delivered on time.
  • Questions that state a personal point of view and ask the participant whether they agree, for example ‘The newer version is better, isn’t it?’
  • Questions that explicitly ask for agreement, coercing the other person into saying ‘yes’, for example ‘Do you agree that this new site is better than competitor XYZ?’
  • Closed questions are usually much shorter and more focused, and are therefore easier to answer because the choice of responses is limited, usually to ‘yes’ or ‘no’ – these risk masking the true picture by forcing a binary choice.

Open questions require a person to pause, think, and reflect

Answers to open questions can include detailed descriptions of events, personal feelings, opinions, or ideas about a subject. When you use open-ended questions, control of the conversation transfers to the person being asked, which usually makes them feel more relaxed.

Open-ended questions typically begin with words such as ‘What’, ‘Why’ and ‘How’, or phrases such as ‘tell me about…’ or ‘describe…’.
Some examples include: ‘What did you just do?’, ‘Why do you say that?’, ‘What do you think it does?’ and ‘Where are we now?’.


Other useful tips to remain open (neutral) during a research session

  • The Echo or Parrot technique – you repeat back the last phrase or word that the participant said, using a questioning tone. Using the participant’s exact words ensures you do not bias them by making a suggestion or describing anything within the interface. For example, if the participant says “The purple banner is odd…”, you say “The purple banner is odd?”
  • The Boomerang technique – you turn the participant’s question or comment back around to the participant, using a generic, non-threatening question to divert it back to them rather than attempting to half-answer it.
    For example, if the participant asks you a question about the user interface, you can say:
    “That’s a really great question – what do you think?” (positive reinforcement for them to think about the answer themselves) or “What would you do if you were at home and you came across this?” (a reminder that in reality you will not be there to answer questions like this for them).
  • And one we catch ourselves using a lot, the Columbo technique – be smart but act dumb, much as Columbo caught his criminals by enticing them to underestimate his investigative skills. While we are not trying to catch anyone out as Columbo was, we are trying to create tasks and questions that coax participants into acting naturally and telling the truth.
    Just ask part of a question and trail off, rather than asking a full question. For example, if the participant says “Will clicking here add it to my basket?”,
    you say “Hmmm, you were wondering (pause) if (pause)…”
    The participant will fill in the rest.
  • Silence. If your moderator is talking too much during a usability test, it probably indicates that they are inexperienced or that something is wrong with the test planning. Observing real behaviour is far better than asking questions about it, and interrupting real behaviour with constant questions is poor practice and should be avoided. In addition, a well-timed, ‘almost awkward’ silence in any type of one-to-one research stimulates participants into filling in the gaps (and of course avoids leading the witness). So simply shutting one’s mouth can often be the best technique of them all.

Want to learn more?