Planning for benchmarks & trending
I've been helping a client develop an assessment which they will deploy across many companies. It's relatively easy to run a survey today for one particular firm, but when you want to slice the data you'll have two years from now, it gets a bit more interesting.
What do you want to pull from the data?
- Are you going to want to see how an individual's responses have changed over time?
- Will you be looking at ratings on the same dimension from year to year?
- Are you planning to benchmark Company A against their industry or all results?
Brainstorm every way you might want to slice and dice. A year from now, when you look at your accumulated data and decide it would be nice to compare groups, you'd better already have the breakdown factor. An extra field or three today costs little, but retrofitting values is often impossible.
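To make the "extra field or three" concrete, here's a minimal sketch of a response record. All of the field names are hypothetical, but they show the idea: every breakdown you might ever want (respondent, company, industry, year) has to be captured at collection time, because filtering the accumulated data only works on fields that exist.

```python
from dataclasses import dataclass, field

# Hypothetical response record -- field names are illustrative only.
@dataclass
class SurveyResponse:
    respondent_id: str   # stable ID, if you want individual trending
    company_id: str      # lets you benchmark Company A against the rest
    industry: str        # needed for industry benchmarks
    survey_year: int     # needed for year-over-year comparisons
    question_id: str
    rating: int
    extras: dict = field(default_factory=dict)  # cheap today, impossible to backfill

responses = [
    SurveyResponse("r1", "acme", "legal",   2024, "q27", 4),
    SurveyResponse("r2", "acme", "legal",   2025, "q27", 5),
    SurveyResponse("r3", "beta", "finance", 2025, "q27", 3),
]

# A slice you might only think of a year from now -- possible
# only because industry and survey_year were captured up front:
legal_2025 = [r.rating for r in responses
              if r.industry == "legal" and r.survey_year == 2025]
```

The point isn't this particular schema; it's that any field missing from the dataclass above is a slice you can never take.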
Focus on limiting customization
This is an ongoing struggle. We want to make the survey fit each client—getting them the best information is a good thing. However, the more we tailor to one instance, the less we can compare those results to the accumulated data.
We have a minor edit to question 27
Is it a small enough change that the meaning stays the same? If so, should this become the new standard phrasing? And are you documenting the change so you'll know which records were asked which version?
If it does significantly change the meaning (sometimes it only takes one word), it's no longer an edit. Instead, treat it as the removal of the old question 27 and the addition of a whole new question that happens to be on the same topic. Note that you can still roll the new question into aggregate scores for the topic; you just can't treat it as the same question for trending or cross-tabulations.
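One way to enforce that distinction is a small question registry. This is a hypothetical sketch (the IDs "q27"/"q27b" and the topic name are made up): a meaning-changing rewrite retires the old ID and issues a new one under the same topic, so topic-level aggregates still work while trending is correctly blocked.

```python
# Hypothetical question registry -- IDs and topic names are illustrative.
# A minor rewording keeps the same ID; a meaning-changing edit retires
# the old ID and creates a new one under the same topic.
questions = {
    "q27":  {"topic": "communication", "retired_in": 2024},  # old wording
    "q27b": {"topic": "communication", "retired_in": None},  # new wording, new ID
}

def trendable(qid_a: str, qid_b: str) -> bool:
    """Year-over-year trending requires the exact same question ID."""
    return qid_a == qid_b

def same_topic(qid_a: str, qid_b: str) -> bool:
    """Topic-level aggregates may mix old and new wordings."""
    return questions[qid_a]["topic"] == questions[qid_b]["topic"]
```

With this in place, an analyst asking "can I trend q27 against q27b?" gets an automatic no, while topic rollups keep working.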
What about our special program?
Maybe the firm rolled out a major training program or product line this year which they want to ask about. Or they have a niche division which is critical to their operations, but not part of 99% of firms. Adding something unique to one instance of the survey is actually less disruptive than edits. You'll still have all the trending questions for comparisons, and nobody will expect that type of comparison on the special section.
Our firm needs exactly these demographics
If you only ask a handful of firms whether they have a Professional Services Group, you won't be able to develop much of a benchmark between firms for people in similar roles.
One way to tackle it is to develop a master list of every department you can conceive of, from which you pull the subset you need for a particular client. The other approach is to have a handful of broad categories that fit every group under the sun.
This all comes back to the question of how you're using the data. If you're looking at divisional performance from year to year within one organization, departments tailored to that firm will be important. If you're trying to give clients a benchmark against your accumulated results, then more generic categories are actually better.
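The two approaches can even be combined: keep the client-specific departments for within-firm reporting, and roll them up into broad categories for cross-firm benchmarks. A minimal sketch, with entirely hypothetical department and category names:

```python
# Hypothetical roll-up: client-specific departments map to a handful
# of broad categories used only for cross-firm benchmarking.
BROAD_CATEGORIES = {
    "Professional Services Group": "Client Services",
    "eDiscovery":                  "Operations",
    "Records":                     "Operations",
    "Marketing":                   "Business Development",
}

def benchmark_category(department: str) -> str:
    # Niche departments that only one firm has fall into a catch-all
    # bucket instead of breaking the cross-firm comparison.
    return BROAD_CATEGORIES.get(department, "Other")
```

The firm still sees its own department names in its divisional reports; only the benchmark layer uses the generic buckets.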
Allow a shake-down period
You're unlikely to have a perfect crystal ball. While trending is valuable, if you notice a problem in the survey instrument, it's more important to fix it and move on than to set flaws in stone. If you're planning a broad distribution, as my client is doing across many firms over many months, you may want to treat the first few instances as a pilot/beta program.
Be kind to your database administrators and analysts
Remember Y2K? Even minor changes can make major ripples in data sets. Work with your techies to understand what types of changes are easy or hard. And if they start looking spooked, ask what they suggest instead.