Market, Consumer, Customer and UX research explore the same general concept: the human experience.
In these fields of research, we might ask:
- How do people perceive our brand? What are their thoughts and feelings?
- How do people behave following an ad campaign?
- Do people enjoy interacting with our frontline employees? Do they enjoy their time while shopping in our store?
- How do people interact with our new software? Is it difficult or easy to use?
Each of these questions explores, in one way or another, the respondent’s current experience. Savvy decision makers can use this feedback to ultimately improve the end-user experience, whether in person, in-app, or online.
So, how do we capture data that tells us something about the human experience?
Quant vs. Qual
We can think of research methods in two overarching categories: quantitative and qualitative. Quantitative measures provide decision or judgement data that can be easily translated into numbers. For example, “80% of respondents chose logo A over logo B,” or “on average, respondents rated familiarity with the brand a 3.5 on a 5-point scale.”
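For the numerically inclined, here is a minimal sketch of how raw answers roll up into figures like those; the response shape and field names are hypothetical, not tied to any particular survey tool.

```typescript
// Minimal sketch: aggregating hypothetical survey responses into the
// kinds of numbers described above. Field names are illustrative only.
interface Response {
  logoChoice: "A" | "B"; // forced-choice question
  familiarity: number;   // rating on a 5-point scale
}

function summarize(responses: Response[]) {
  const choseA = responses.filter((r) => r.logoChoice === "A").length;
  const pctA = (100 * choseA) / responses.length;
  const avgFamiliarity =
    responses.reduce((sum, r) => sum + r.familiarity, 0) / responses.length;
  return { pctA, avgFamiliarity };
}

// e.g. summarize(data) -> { pctA: 80, avgFamiliarity: 3.5 }
```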
Qualitative data provides us with language-based, descriptive responses. For example, respondents may describe their thoughts and perceptions of a given brand, or may speak directly about their experience using a product.
But qualitative data is expensive and laborious to collect. Focus groups and one-on-one interviews require manpower both to facilitate sessions and to analyze the large qualitative data sets that result. Participants are also usually compensated for their time, often starting at $50 per person. Because of these higher costs and time requirements, qualitative sample sizes tend to be small, which makes the resulting insights harder to generalize to a larger population or real-world setting.
On the flip side, quantitative data can be collected at very large sample sizes, giving a higher degree of confidence when using the data to guide decisions. Quantitative measures are often collected through online surveys that can be deployed at any time, anywhere in the world, and at low cost.
Online surveys still have their own challenges that can prevent the collection of genuine, actionable data, such as fraudulent bot responses and “professional” survey takers. Most of all, traditional survey methods miss out on the rich data that qualitative methods offer. Some researchers add open-ended written questions to surveys in an attempt to collect qualitative data. However, written responses require a higher level of effort from the respondent, especially on smartphones and tablets, which are less suited than laptops or desktops to typing out long-form answers. This high activation energy results in inferior data quality, defeating the purpose of doing qualitative research in the first place.
Ultimately, both quantitative and qualitative measures are important when conducting a well-balanced research study. So how do we bridge the gap between these methods? How do we tackle the issues of online survey research and obtain high quality open-ended qualitative data, while keeping costs low?
The answer: voice-enabled surveys.
What Are Voice-Enabled Surveys?
Voice-enabled surveys allow researchers to collect audio responses (a.k.a. voice memos) or video clips directly within an online survey. Respondents simply click a “Record” button and start talking!
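Under the hood, this kind of in-browser recording is typically built on the standard MediaRecorder API. The sketch below is a rough illustration of that flow, not Phonic’s implementation; the duration-based stop and the upload endpoint mentioned in the comment are assumptions for the example.

```typescript
// Minimal sketch of in-browser audio capture behind a "Record" button,
// using the standard MediaRecorder API.
async function recordAnswer(durationMs: number): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: BlobPart[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.start();

  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop()); // release the microphone
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
    setTimeout(() => recorder.stop(), durationMs); // or stop on a button click
  });
}

// The resulting Blob can then be uploaded along with the rest of the survey
// answers, e.g. via fetch("/api/responses", { method: "POST", body: formData }),
// where the endpoint is purely hypothetical.
```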
Advantages of voice surveys
Audio and video responses naturally require less activation energy than written responses, and they let respondents convey meaning more effectively through vocal cues (e.g., tone, prosody, sarcasm). We already see these advantages in everyday messaging: more and more people are sending voice memos in place of texts (Vogue even called 2021 the “Year of the Voice Note”), simply because it is easier.
Voice-enabled surveys are also more accessible to the general population, tapping into respondents who may be better at expressing their thoughts verbally than in writing.
The lower effort and greater accessibility result in responses that are 3x longer, contain significantly more descriptive language, and are more representative than traditional open-text responses.
Voice survey software: where quantitative meets qualitative
More recently, software has emerged that specializes in these qualitative, asynchronous methods. These tools boast impressive AI-enabled features, such as automatic transcription, multi-modal sentiment analysis, and topic classification, allowing researchers to work through large qualitative data sets more efficiently. But most fail to incorporate a wide range of quantitative question types. Quantitative measures are important for grounding qualitative insights: data sets are far richer when qualitative insights can be connected to measures like first impressions or scaled judgements.
Phonic’s voice and video surveys provide those grounding quantitative measures alongside qualitative question types. Surveys can be built with drop-downs, sliders, and Likert scales (to name a few) while adding audio or video response types throughout, resulting in incredibly flexible and powerful surveys. Relationships between quantitative questions and AI-enabled qualitative analyses (i.e., multi-modal sentiment and topic classification) can be explored directly in the platform. Want to know how many Premium users talked about the product being “too expensive” when asked why they downgraded their subscription? No problem! That count can then be compared directly against Starter-tier respondents who cited the same topic.
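As a rough illustration of that kind of cross-tabulation, here is a minimal sketch in TypeScript. The response shape, tier names, and topic labels are assumptions for the example, not Phonic’s actual data model or API.

```typescript
// Minimal sketch: connecting a quantitative field (subscription tier) to an
// AI-derived topic label on the transcribed voice response.
interface VoiceResponse {
  tier: "Premium" | "Starter"; // from a drop-down question
  transcript: string;          // automatic transcription of the audio answer
  topics: string[];            // e.g. ["pricing", "missing features"]
}

function countByTierForTopic(responses: VoiceResponse[], topic: string) {
  const counts: Record<string, number> = {};
  for (const r of responses) {
    if (r.topics.includes(topic)) {
      counts[r.tier] = (counts[r.tier] ?? 0) + 1;
    }
  }
  return counts; // e.g. { Premium: 14, Starter: 9 } for topic "pricing"
}
```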
Restoring Human Connection in Research
Aside from creating accessible surveys with high-quality insights (and AI-driven analyses), voice surveys bring Market, Consumer, Customer and UX research back to their roots: the human experience. Spoken language is a large part of what makes us human, allowing us to effortlessly express our thoughts, ideas, and emotions. If we want to know how a person thinks and feels about a product, idea, or topic, what better way than to have them respond with their voice?