I recently came across an article in the latest edition of Quirk’s titled “A Systematic Method for Checking Online Questionnaires,” written by Jerry Arbittier of SurveyHealthCare. Read it here. He discusses the importance of testing your online survey before launch, a topic we have covered before here in the Bunker with this post. The article offers six quality control checks to run before launching your online survey, using the acronym BOWLSR. No, it’s not the boss from the ageless Super Mario Bros. video games.
The healthcare-friendly acronym BOWLSR stands for Blank screen, Order, Words, Logic, Skip, and Range:
1) Blank Screen – the author discusses the necessity of making every question in an online survey a “must answer.” Although this holds true in the vast majority of situations, pick your poison carefully. When an online survey contains many open-ends or long lists, the likelihood of dropouts increases significantly. By requiring an answer to every question, you run the risk of a frustrated respondent dropping out. I am not saying you should never make questions must answer, but you may want to limit the requirement to the questions that are core to your market research, and force feedback only on those.
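Most survey platforms handle required questions natively, but the idea of validating only your core questions can be sketched in a few lines of Python. This is a rough illustration, not any platform’s actual API, and the question IDs are hypothetical:

```python
# Hypothetical IDs of the questions that are core to the research.
CORE_QUESTIONS = {"Q2", "Q7"}

def missing_core_answers(answers):
    """Return the core questions left blank. Non-core questions may be
    skipped, so respondents aren't forced to answer everything."""
    return sorted(q for q in CORE_QUESTIONS if not answers.get(q))

# A respondent who skipped Q7 would be prompted only for that question.
print(missing_core_answers({"Q2": "Yes", "Q5": ""}))
```

The point of the design choice: blanks on peripheral questions are tolerated, so only the answers you truly need are enforced.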
2) Order – this is a good tip. You’ll definitely want to randomize choices when appropriate. Obviously, you wouldn’t want to randomize responses that follow a sequential order, but everything else is fair game. Adding this level of randomness to your online survey will increase reliability and trust in your results by eliminating “first seen” and “last seen” bias in a series.
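Randomization is usually a checkbox in your survey software, but the logic is simple enough to sketch. One wrinkle worth noting: options like “Other” or “None of the above” are conventionally anchored at the bottom rather than shuffled. A minimal Python sketch (the helper name and anchored options are my own, not from the article):

```python
import random

def randomize_choices(choices, anchored=("Other", "None of the above")):
    """Shuffle answer choices, keeping conventionally anchored options
    (e.g. 'Other', 'None of the above') at the bottom of the list."""
    movable = [c for c in choices if c not in anchored]
    fixed = [c for c in choices if c in anchored]
    random.shuffle(movable)
    return movable + fixed

# Each respondent sees the brand names in a different order,
# but "None of the above" always stays last.
print(randomize_choices(["Brand A", "Brand B", "Brand C", "None of the above"]))
```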
3) Words – pretty simple here, be concise and get to the point – another topic we’ve covered here on the Bunker Blog. To take an excerpt from that post:
If you have to choose between sounding smart and sounding stupid – choose stupid.
This goes along with the concise style of writing. Although you may be an expert in a particular field, not all of the participants who take your survey will be. The questions need to address your needs, but should not confuse the survey taker in the process. So instead of using a phrase like “assessment and examination of your purchasing practices,” just write “we’d like your feedback on your purchases.”
4) Logic – the author suggests creating different test cases within your survey to run through and test logic to ensure it is working correctly. Again, a core piece of testing your online survey.
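The idea of running test cases through your skip logic can be made concrete with a toy example. This is a hypothetical sketch in Python, not how any particular survey tool scripts logic, with made-up question IDs and a made-up branching rule:

```python
def next_question(answers):
    """Toy skip logic: if the screener (Q1) is 'No', skip the
    follow-up block and jump to Q5; otherwise continue to Q2."""
    if answers.get("Q1") == "No":
        return "Q5"
    return "Q2"

# Test cases covering each path through the logic, as the author suggests.
test_cases = [
    ({"Q1": "Yes"}, "Q2"),
    ({"Q1": "No"}, "Q5"),
]
for answers, expected in test_cases:
    assert next_question(answers) == expected
```

The same pattern scales up: define one respondent profile per path through the survey and confirm each lands on the questions it should.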
5) Skip – the author suggests reviewing the survey script first (if you haven’t worked directly in the design) to understand the skip patterns before testing the survey as if it were live.
6) Range – here, the author discusses ‘quantity’ entries in a survey (e.g., if a doctor enters that he/she sees 500 patients in a month, the follow-up question should not allow him/her to state that 550 of those are asthma patients). Ranges are one of the most often overlooked pieces of survey design. Make sure you check these.
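The author’s patient-count example boils down to one comparison: a subtotal can never exceed the total reported earlier. A minimal Python sketch of that check (the function name is mine, for illustration only):

```python
def valid_subtotal(total_patients, asthma_patients):
    """Flag a follow-up quantity that exceeds the earlier total, e.g.
    550 asthma patients when only 500 total patients were reported."""
    return 0 <= asthma_patients <= total_patients

print(valid_subtotal(500, 450))  # a plausible answer passes
print(valid_subtotal(500, 550))  # the impossible answer is caught
```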
Overall, I think the article offers six solid tips for testing an online survey – a good piece for Quirk’s. RMS recommends conducting a soft launch of your online survey before fully fielding your study. It helps identify any unforeseen issues and allows you to correct them before it’s too late. It also gives you an opportunity to follow up with the soft-launch respondents to ask about the subject line of the invite, how enjoyable the survey was, and suggestions for improvement to maximize your response rate.