The hidden challenges of quality market research
Discovering truth through quality market research is not simple, and navigational mistakes can be costly. Unseen complexities lurk beneath the surface, resistant to our impressive arsenal of tools and methodologies.
This post examines these hidden challenges, revealing that the secret to quality research is more than just advanced tools and techniques — it involves a blend of human expertise, collaboration, and careful management. Dive in to discover a balanced approach that factors in these hidden elements for a successful outcome.
Some challenges simply require time & hard work
In his acclaimed 1986 paper “No Silver Bullet,” the late Fred Brooks famously declared “we see no silver bullet”: no single development that, given software engineering’s essential complexity, could by itself yield a tenfold improvement in fewer than 10 years.
Though Brooks is best known for his work at IBM, his observation applies to market research, too. Some challenges simply require time, hard work, collaboration, and expertise, no matter what tool, technique, or methodology you apply.
Interestingly, Brooks (who, by the way, founded the computer science department down the road in Chapel Hill) divides complexity into two types: essential and accidental. The modern research process has evolved to be powered by remarkable tools and communication media. These tools go a long way toward taming accidental complexity but barely move the needle on the essential kind, and most introduce new accidental complexity along the way.
Successful studies include a lot of often-overlooked tasks
I was reminded of Brooks’s ideas while attending a roundtable discussion with a research team a couple of weeks ago. The roundtable focused on sharing lessons from the various tracking studies Bellomy is running for clients. The people in the room were primary researchers and research assistants, and while the tasks they discussed might sound mundane, they are essential to achieving a quality outcome.
Here's a sampling:
- Monitoring inbound emails for unexpected respondent needs and requests, including unusual opt-out requests that automation will not detect.
- Reviewing verbatim responses for suspicious comments, complaints about the survey experience itself, or possible AI-generated responses.
- Checking surveys for signs of speeding/cheating and verifying against norms and standards (see the sketch after this list).
- Reviewing daily email reports for signs of decreased open or click rates.
- Addressing email issues related to ESPs such as Gmail or Yahoo wrongly flagging communications.
- Monitoring quotas for timely completion and, when quotas are not met, communicating with clients and sample vendors to acquire new sample, identify root causes, and so on.
- Running crosstabs to check data quality (also shown in the sketch below).
- Responding to client requests for questions or changes.
- Assembling staff to review changes, verify understanding, assess impact, and work through those challenges.
- Testing survey changes in a staging environment before going live.
- Testing email links on various email accounts and devices to confirm consistency and technical function across mobile and desktop experiences as well as across several ESPs.
- Verifying that automation jobs executed, and reaching out to clients and their IT teams when expected files are not sent or received.
- Updating questionnaires and other materials for monthly/routine changes.
- Recording audit-required project management tasks and hours.
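To make a couple of these checks concrete, here is a minimal sketch in Python with pandas. It is purely illustrative: the file name, column names, and the one-third-of-median speeder threshold are assumptions for the example, not Bellomy’s actual rules.

```python
import pandas as pd

# Illustrative only: file and column names are hypothetical.
df = pd.read_csv("survey_responses.csv")  # one row per completed interview

# Flag "speeders": respondents who finished in well under the median
# completion time (a common rule of thumb; tune against your own norms).
median_seconds = df["duration_seconds"].median()
df["speeder"] = df["duration_seconds"] < 0.33 * median_seconds
print(f"{df['speeder'].mean():.1%} of completes flagged for review")

# A quick crosstab to eyeball data quality: do response patterns vary
# sensibly across groups, or does a cell look suspiciously uniform?
print(pd.crosstab(df["region"], df["satisfaction"], normalize="index"))
```

In practice, thresholds like these are calibrated against industry norms and a study’s own history, which is exactly the kind of judgment the team in the room was describing.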
After meeting with the research staff, I asked the data, programming, and operations staff for their lists of routine tasks. Those lists included many other seemingly mundane yet critical items, ranging from monitoring email reputation scores to updating client data, merging data from third parties, and communicating with client IT teams to address security changes, among many others.
Sometimes “solutions” mean additional work
Ultimately, the well-organized, well-managed collaboration of the research, analytics, data and programming, and operations teams is critical to achieving and maintaining high-quality outcomes. While tools help, and emerging AI may further streamline operations and automate some of these tasks, the need for humans in the mix and the inevitability of client surprises will never go away.
Modern DIY research tools promise easy research and quick-turn results, and in many ways they deliver. However, none has eliminated the essential complexities, and many introduce additional effort that vendors typically fail to mention.
Clients sold on complete DIY solutions often realize too late that while they hold enormous power to achieve their goals, they have also taken on a tremendous amount of additional work that was never explained or planned for. After living through the DIY experience, these customers commonly turn to a full-service partner.
Bellomy’s ‘self-service’ strikes a balance
All is not lost, though. DIY tools have a place and a purpose and have helped move traditional market research from limited niche use cases to a dazzling array of business opportunities and departments. This democratization of research has had positive outcomes, including broader awareness of and attention to customer needs. Yet more is needed to maintain high quality, prevent customers from being oversampled or growing numb to messaging, and ensure researchers can do the hard work of socializing insights to maximize business success.
Bellomy meets these challenges by combining the strengths of DIY with the established expertise of our human staff, modern AI, and automation. We call this approach self-service, not DIY, because while users can do things on their own, doing so often isn’t the best option.
Equipped with Bellomy’s research cloud tools, our clients are empowered to perform tasks independently while also partnered with our human experts, who can tackle tasks when expertise and available resources are critical. This self-service approach effectively blends DIY and full-service to maximize opportunities for success and quality research outcomes.
If you require high-impact, outcome-driven research and need to balance DIY and full-service options, consider a self-service approach like the one Bellomy offers.
Written by Matthew Gullett, Bellomy’s SVP of Insights Technology. A Bellomy employee for more than 20 years, Matt loves thinking and writing about AI and is a driving force behind Bellomy AI Analytics.