How to find “Language-question fit” for your surveys
The reason some surveys yield valuable feedback while others spew out worthless garbage often lies in just a few words.
For example, in a recent Roast for a home décor brand, I encountered this question:
Q1: Can you describe the impact your rug has on your living space?
Two things jumped out at me.
One was the word “impact.” It connotes collisions and measurable outcomes, not the harmonious contribution a rug makes to a room.
The other was “living space.” It sounded like copy from a real estate brochure, not the lived-in rooms where shoppers actually place their rugs.
I recommended restructuring the question to prompt respondents to first visualize a rug in their home, and then to describe what they thought the rug “did” for the room it was in.
Here’s the revised question:
Q1: Do you have a rug in your home?
• Yes
• No
Q2: What room is your largest rug in? (Only respondents who select “Yes” to Q1.)
Room: __________
Q3: What would you say your rug does for your living room? (Pipe in the room from Q2. Only respondents who select “Yes” to Q1.)
Response: _____________________________
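In survey-builder terms, the revised flow is just two features: skip logic (Q2 and Q3 only appear after a “Yes” to Q1) and piping (Q3 reuses the Q2 answer). Here is a minimal sketch in plain Python, with hypothetical question IDs and no particular platform’s API:

```python
def run_survey(answers):
    """Return the questions a respondent sees, given their answers so far.

    `answers` is a hypothetical dict mapping question IDs to responses.
    """
    questions = ["Q1: Do you have a rug in your home?"]

    # Skip logic: respondents without a rug see nothing further.
    if answers.get("Q1") != "Yes":
        return questions

    questions.append("Q2: What room is your largest rug in?")

    # Piping: reuse the Q2 response inside the Q3 wording.
    room = answers.get("Q2", "home")
    questions.append(f"Q3: What would you say your rug does for your {room}?")
    return questions

# A "Yes" respondent who answered "living room" sees all three questions.
for q in run_survey({"Q1": "Yes", "Q2": "living room"}):
    print(q)
```

The point of the sketch is that the language fix and the logic fix travel together: the branching keeps non-owners out, and the piping lets the question speak in the respondent’s own words.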
Why the verb “to do”?
It encourages respondents to think about the full range of roles their rug serves, functional and aesthetic alike: adding warmth, protecting floors, tying the room together.
Now, I could get even more pedantic, but I think you get the point.
Platforms like SurveyMonkey and Typeform tout the countless templates available to users.
But have you ever noticed how detached they sound, as if the questions were written by a chatbot?
The key to fixing them is recognizing this sterile, analytical tone and replacing it with better language.
If you've ever found that challenging and wanted some help, then consider booking a Roast.
Just click the link below.
I’d love to help.
https://www.sammcnerney.com/45-dollar-survey-roast
Cheers,
Sam