Ever had issues like these – projects riddled with poor response rates; screeners that disqualify nearly all potential respondents; respondents wondering why they are being contacted about something they are supposed to be familiar with? Most likely, anyone who has worked in market research has had some experience with issues like these. In fact, the Bunker could dedicate an entire blog entry to the endless ways to mitigate such issues or prevent them from happening again, but for the sake of this blog post, we’re going to take a different angle and show you that these issues are not only problems, but findings too.

Hidden Market Research Findings

Sometimes the results for market researchers are right in front of us – the complications we encounter can be findings in and of themselves. The idea is certainly not to use these problems as excuses, but to take these issues into consideration when reporting findings to your client in the end, as they can prove to be quite pertinent.

So how can you translate project management issues into relevant findings?

  • Let’s start with poor response rates (everyone’s favorite). If the sample is client provided, less-than-optimal response rates can occasionally be attributed to a lack of engagement with the customer. For example, your organization decides to survey some of its members and the response rate ends up being far lower than expected, forcing you to work for every last complete. The membership base as a whole may not be as involved or engaged as the organization had previously thought. In fact, this survey might be the first time the customers have ever received something from you. In the same manner, poor response rates can be attributed to apathy toward a product or service. Not having an opinion one way or the other may be cause enough not to respond at all. Hence the built-in bias we occasionally find with some surveys, especially any type of self-selection methodology.


  • Secondly, with both qualitative and quantitative research, you should be monitoring the screening process and taking note of disqualified participants. Your dispositions are an often-overlooked key piece of data. One example might be the recruitment of a “targeted” audience. For example, during a focus group recruit, the call center ends up screening out 90 percent of the sample because of one specific screener question. This provides an obvious clue that your recruitment process is greatly limiting your pool – you are trying to reach a hard-to-reach audience. Keeping tabs on disqualifying data can tell you a lot about the population you are screening for participation. Maybe the people you are looking for just aren’t out there.
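If your call center logs a disposition code for each contact attempt, tallying those codes makes the pattern above easy to spot. Here is a minimal sketch – the disposition codes and data are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical disposition log: one code per contact attempt.
# Codes like "dq_screener_q3" are illustrative, not from a real study.
dispositions = [
    "complete", "dq_screener_q3", "dq_screener_q3", "refusal",
    "dq_screener_q3", "dq_quota_full", "dq_screener_q3", "complete",
    "dq_screener_q3", "no_answer",
]

counts = Counter(dispositions)
total = len(dispositions)

# Report each disposition's share of all contact attempts, most common first.
for code, n in counts.most_common():
    print(f"{code}: {n} ({n / total:.0%})")
```

In this toy log, a single screener question (`dq_screener_q3`) accounts for half of all attempts – exactly the kind of signal worth surfacing in a project update rather than burying in fieldwork notes.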


  • Thirdly, another topic of concern might be the demeanor of your respondents – meaning how willing they are to help the telesurveyor, or how amiable they are on the phone. This is sometimes analyzed through the feedback on the survey, but it will become clearer when auditing calls or discussing the project with your telesurveyors or call center supervisor. If respondents are confused by or unaware of a certain concept or company name put forth in the survey, this needs to be reported. It’s even better if you can catch this in the prequalifying stage and make adjustments to the script. Reading between the lines for hidden findings like this gives you a good idea of the awareness and reach a product, service, or company has.


  • Lastly, the RMS team uses mystery shopping for our competitive assessment projects to gather rates, service offerings, etc. from our clients’ competition. A problem we often run into is that the information is difficult to get and can vary from person to person depending on who the mystery shopper speaks with. In one shop, we can talk to a front-line person who quotes us $600 monthly rent for a senior living apartment, and in the next visit, the admissions director will quote us $850 for that same apartment. Improving the quality of competitive data often requires multiple verification shops. Instead of narrowing your scope to the data alone, consider that the difficulty in obtaining it, and the variance from one shop to another, is a very important finding. The competitors are not doing a good job of relaying their information to potential customers – this presents an area of opportunity for our clients.

So these issues can cause a lot of headaches for market researchers, but it’s important to take a step back and see if the issue itself is painting a larger picture. Here at RMS, we have a close relationship with our on-site call center, QualiSight – it is part of our office! This allows our team to stay on top of issues like these as they occur. So next time you have a project update meeting with your client and your online survey is only getting a 2 percent response rate instead of 20 percent, try to look on the bright side – it might be an issue, but the Bunker says it’s also a finding! Now, as the market research analyst, you need to do some digging to find out why.