When surveys are too adaptive
One of the strengths of Web, CATI and CAPI surveys is their adaptability. We can create a questionnaire which tailors itself in obvious and hidden ways to a respondent's answers. While in a sense the respondent is "driving" this process, they're doing so after a researcher carefully lays out the cones, sets up hay bales on dangerous curves, and straps the respondent into their go-cart with a safety harness and helmet.
I recently completed a survey whose designers forgot some of those safety precautions. It was an interesting experience, weaving around the debris from prior respondents while simultaneously dropping cones for my own course.
The survey started normally enough. There was a grid of statements for me to rate, along with a couple of type-in fields for any additional attributes I'd like to rate. Click Next.
My first impression on the next screen was "Cool...there are my additions incorporated in the grid" (yes, I am a survey geek). Then I started reading the grid and realized that not only were my additions there, but so were those of other respondents! And again, at the end of this grid I was given the opportunity to add new elements.
I can understand the rationale a bit: instead of assuming you already know 95% of what your respondents are thinking (or 100%, if you shun write-in fields entirely), you let your customers organically build and refine the key attributes.
One small problem: respondents can write really bad questions.
When survey designers write questions, they scrutinize the phrasing, watching for jargon and ambiguous or loaded words, and checking that only one issue is addressed at a time. The additions by respondents, on the other hand, mixed issues, used hyperbole, and occasionally went off on tangents. There was also an interesting phenomenon where the grid had the "same" issue multiple times, because new respondents added their own flavor when an existing option carried nuances they didn't agree with.
As an experiment in survey techniques, I think this was very interesting and useful. As a collection of real data, I expect the analysts will have some challenges consolidating the results. In addition, one of the reasons surveys are statistically reliable is the consistent experience from respondent to respondent. In this case, the experience of the 200th respondent was radically different from that of the 5th, so how comparable are their answers?
For your own surveys, there are a few alternative techniques you might consider.
Need ideas for what to ask?
If you're unsure what factors people care about, it's time for some qualitative research. Yes, a professionally moderated focus group is expensive, but there's a range of qualitative options, some of which cost little to execute:
- In-person focus groups
- Virtual real-time focus groups
- Time-shifted virtual focus groups using discussion boards
- Phone interviews
- Pilot surveys of ~50 respondents who write in all of their own rating factors
Have too many factors to rate?
Sometimes the problem is the opposite: we have too many ideas about what people care about and don't want to assume we know the "top" issues. In this case, set up the rating grid so that it randomly selects, say, 20 of the factors for each respondent. Not every respondent will make the same set of comparisons, but in aggregate you'll get a clear picture of priorities, assuming you have a solid sample.
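Most survey platforms can randomize a grid for you, but if yours can't, the rotation logic is simple to script. Here's a minimal Python sketch; the factor names, the subset size, and the `factors_for_respondent` helper are all made up for illustration, not taken from any particular tool.

```python
import random

# Illustrative factor list; in practice this would be your full set of
# candidate attributes (often 40+).
ALL_FACTORS = [
    "Price", "Ease of use", "Reliability", "Customer support",
    "Documentation", "Installation", "Performance", "Reporting",
]

FACTORS_PER_RESPONDENT = 5  # e.g. 20 in a real survey with a longer list


def factors_for_respondent(respondent_id: str) -> list[str]:
    """Return a random subset of factors to show one respondent.

    Seeding on the respondent ID keeps the subset stable if the
    respondent reloads the page partway through.
    """
    rng = random.Random(respondent_id)
    k = min(FACTORS_PER_RESPONDENT, len(ALL_FACTORS))
    return rng.sample(ALL_FACTORS, k)


if __name__ == "__main__":
    print(factors_for_respondent("respondent-0042"))
```

Because each factor ends up in front of roughly the same share of respondents, averaging the ratings per factor still gives you a comparable ranking across the full list.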
Really love the organic approach?
Consider quarantining write-in responses instead of immediately incorporating them into the survey. A trained staff member will need to check the queue a few times a day and recode new answers into a consistent, neutral format. (Do keep the original responses for an audit pass.) If you're using e-mail invitations to drive the survey, break the drops into smaller-than-usual chunks, or too many answers will arrive before they can be coded.
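To make the quarantine concrete, here's a minimal sketch of the data flow. Everything in it is hypothetical: the names, the in-memory lists, and the coder interface are stand-ins, not features of any particular survey platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical quarantine queue for respondent write-ins.

@dataclass
class WriteIn:
    respondent_id: str
    original_text: str                 # kept verbatim for the audit pass
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    coded_text: Optional[str] = None   # neutral wording assigned by a coder


pending: list[WriteIn] = []   # held here until a trained coder reviews them
live_grid: list[str] = []     # attributes currently shown to respondents


def submit_write_in(respondent_id: str, text: str) -> None:
    """Record a new write-in; it does not reach the live grid yet."""
    pending.append(WriteIn(respondent_id, text))


def publish_coded(decisions: dict[int, str]) -> None:
    """Apply a coder's neutral rewordings (pending index -> coded text)."""
    for idx, coded in decisions.items():
        pending[idx].coded_text = coded
        if coded not in live_grid:     # avoid near-duplicate attributes
            live_grid.append(coded)
    # Drop coded items from the queue; keep the objects for your audit trail.
    for idx in sorted(decisions, reverse=True):
        pending.pop(idx)


if __name__ == "__main__":
    submit_write_in("r-17", "Their support is AWFUL and slow!!")
    publish_coded({0: "Responsiveness of customer support"})
    print(live_grid)
```

The point of the shape is simply that a raw write-in never touches the live grid; only the coder's neutral rewording does, and the verbatim text survives for the audit pass.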