When Surveys Go Wrong
- john90345

How Bad Questionnaires Damage Trust and Drive Bad Decisions
A survey feels harmless.
A link in an email. A handful of rating scales. Maybe a “tell us more” box at the end.
But a customer feedback survey is not a neutral research tool. It’s a brand touchpoint. It’s a moment where you either reinforce trust (“they’re listening”) or you break it (“they don’t understand me, or worse, they’re testing me”).
This week, a spa guest experience survey in Winnipeg became a cautionary tale: it began with service-related questions and then moved to values-based statements, one of which framed immigration as a threat to “the purity of the country.” The company withdrew the survey and took responsibility, but not before suffering significant brand damage, including negative national press.

That’s an extreme example, but the underlying risk is not rare.
Because bad survey questions don’t just create awkward moments. They create bad decisions, the expensive kind, because leaders will still treat the output as “data.”
Here’s the hard truth: A really bad survey can cost you twice:
Reputation and trust, and
Strategy and money, because you act on flawed insights.
Let’s unpack exactly how that happens, why it’s so common, and what “survey design best practices” look like when you’re protecting both your decisions and your brand.
A survey isn’t just research. It’s customer experience.
Most organizations separate “research” from “experience.”
They’ll invest heavily in brand, service, and training, and then treat the survey as an operational add-on: a templated email, a standard questionnaire design, a vendor-produced link.
But customers don’t experience it that way.
To the customer, the survey is part of the experience. It arrives immediately after an interaction—when emotions are still warm, or frustration is still fresh. In that moment, the survey becomes a test of your tone, your intent, and your respect.
And the stakes are real.
PwC has reported that 32% of customers would stop doing business with a brand they love after just one bad experience. If you send a survey that feels disrespectful, manipulative, or out of place, you’re not just measuring experience—you’re creating the bad experience.
That’s the brand-side cost.
Now let’s talk about the decision-side cost, which is often worse.
The hidden economics of bad research: “false certainty” is expensive
When a survey is poorly designed, the problem is not just “some error.” The problem is false confidence.
Leadership teams love clean numbers:
“NPS is down 6 points.”
“80% of customers agree.”
“This is the top driver.”
“The data says we should…”
But surveys can generate figures that look precise while being deeply unreliable, especially when the questionnaire design introduces bias or the sample is not representative.
And in a business context, flawed data has a measurable cost.
Gartner has cited research showing that poor data quality costs organizations an average of $12.9 million per year. If you’re tempted to “save money” by rushing a survey, it’s worth pressure-testing that assumption first. I break down realistic research cost ranges, and where teams underinvest (and pay later), in How Much Does Consumer Research Cost?
Not all of that is “survey data,” obviously, but surveys are one of the easiest ways to inject bad data into decision-making because they’re widely used, quickly fielded, and often lightly governed.
This is the decision trap:
A team needs direction.
They run a quick customer feedback survey.
The instrument is flawed (biased questions, forced answers, irrelevant items).
The results come back fast, in a tidy dashboard.
The organization makes changes with confidence.
The changes don’t work, or they create new problems, because the “insight” was misleading.
That’s how a survey becomes a strategic liability.
This isn’t just a “survey wording” issue—it’s an insight-foundation issue. If you want the upstream view of what breaks strategy before the first campaign brief, read Great strategy only begins with effective data and research.
Why “bad survey questions” do more damage than you think
Most people assume survey risk looks like this:
“We asked a weird question. Some people got annoyed.”
In reality, bad surveys create a chain reaction with three forms of damage:
1) Brand damage (the visible damage)
Customers feel judged, manipulated, or disrespected.
The survey gets screenshotted and shared.
Trust erodes—especially among the customers you most need to retain.
2) Data damage (the invisible damage)
Bad surveys distort the dataset in predictable ways:
Nonresponse bias: the people most put off don’t respond.
Breakoff bias: the people most offended exit mid-survey.
Satisficing: respondents rush, straight-line, or click random answers to “get it over with.”
Defensive responding: sensitive or loaded wording makes people answer strategically, not honestly.
Now the dataset is no longer “what customers think.” It’s “what the remaining, self-selected customers were willing to tolerate.”
3) Decision damage (the expensive damage)
Leaders act on the data anyway.
And when you combine thin response rates with biased measurement, you get the worst outcome: high conviction, low validity.
Response rates are often lower than teams realize, and that magnifies bias
Here’s another uncomfortable reality: many customer surveys rely on limited participation.
SurveyMonkey’s benchmark guidance has long indicated that a “good” online survey response rate typically falls within the 10%–30% range (varies by audience and channel). And in “always-on” customer experience programs, response rates can be even lower.
Delighted (which runs customer experience surveys at scale) has published benchmark data showing average response rates (in their dataset) around 6% for email, 8% for web, and 16% for iOS SDK surveys.
Why does this matter?
When the response is thin, any bias in the questions becomes more pronounced. A small wording issue isn’t a rounding error—it can become the story you tell yourself about the market.
This is how a survey goes from “a listening tool” to “a steering tool” even when it shouldn’t.
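To make that concrete, here’s a minimal sketch in Python, using entirely made-up numbers, of how a low response rate combined with uneven willingness to respond can quietly inflate a satisfaction score:

```python
# Illustrative only: hypothetical numbers showing how nonresponse skews results.
invited = 10_000                 # customers who received the survey link
true_satisfied_share = 0.70      # what the full customer base actually feels

# Assume satisfied customers respond at 8%, unhappy customers at 4%
# (the put-off group quietly ignores the email).
resp_rate_satisfied = 0.08
resp_rate_unsatisfied = 0.04

satisfied_responses = invited * true_satisfied_share * resp_rate_satisfied
unsatisfied_responses = invited * (1 - true_satisfied_share) * resp_rate_unsatisfied

total_responses = satisfied_responses + unsatisfied_responses
observed_satisfied_share = satisfied_responses / total_responses

print(f"Overall response rate: {total_responses / invited:.1%}")   # 6.8%
print(f"True satisfaction:     {true_satisfied_share:.0%}")        # 70%
print(f"Observed satisfaction: {observed_satisfied_share:.1%}")    # ~82.4%
```

In this toy scenario the dashboard reports roughly 82% satisfaction on a 6.8% response rate, while the true figure across the customer base is 70%, and nothing in the report itself reveals the gap.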
The #1 questionnaire design mistake: “agree/disagree” batteries
If you’ve ever seen a survey that says:
“Please indicate how much you agree or disagree with the following statements…”
…you’ve seen one of the most common sources of survey bias.
Pew Research has documented how “agree/disagree” formats can create acquiescence bias—a tendency among some respondents (often less informed or less engaged) to agree with statements regardless of content.
That matters in business because those batteries can:
inflate positivity (“people agree more than they actually feel”),
mask nuance (“agree” doesn’t tell you the trade-off),
and create false differences between groups.
Better survey design replaces “agree/disagree” with:
forced choices between alternatives,
frequency and behavior measures,
or construct-specific scales (“how satisfied were you with X?” rather than “I was satisfied with X”).
This is not academic. It changes decisions.
If your survey is telling you customers “agree” with a vague statement, you’re not learning what to fix. You’re collecting polite noise.
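To illustrate (the items and scale labels below are hypothetical, not taken from any standard instrument), here’s the difference between an agree/disagree item and a construct-specific or behaviour-anchored rewrite, expressed as simple question definitions:

```python
# Hypothetical question definitions, for illustration only.

# Agree/disagree battery item: invites acquiescence bias.
biased_item = {
    "text": "I was satisfied with the speed of service.",
    "scale": ["Strongly disagree", "Disagree", "Neither", "Agree", "Strongly agree"],
}

# Construct-specific rewrite: asks about the construct directly,
# with a balanced, symmetric scale and clear anchors.
rewritten_item = {
    "text": "How satisfied or dissatisfied were you with the speed of service?",
    "scale": [
        "Very dissatisfied",
        "Somewhat dissatisfied",
        "Neither satisfied nor dissatisfied",
        "Somewhat satisfied",
        "Very satisfied",
    ],
}

# A behaviour-anchored alternative sidesteps opinion inflation entirely.
behaviour_item = {
    "text": "How long did you wait before being served?",
    "scale": ["Under 5 minutes", "5-10 minutes", "10-20 minutes", "Over 20 minutes"],
}
```

The rewrite asks about the thing you can act on (speed), not about agreement with a statement, and the behaviour-anchored version gives you something you can compare against operational data.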
The 7 most common “bad survey question” patterns (and how they break decisions)
Here are the mistakes I see repeatedly across CPG, retail, nonprofit, and service businesses (a small screening sketch after this list shows how to catch a few of them automatically):
1) Irrelevant questions (scope drift)
A guest experience survey should measure the experience. When it drifts into unrelated values questions, politics, or ideology, it creates “why are you asking me this?” whiplash.
Impact: brand distrust + higher breakoff + contaminated dataset.
Fix: ruthlessly tie every question to a decision you’re willing to make.
2) Loaded language (hidden persuasion)
Words like “purity,” “threat,” “too much,” “obviously,” or “should” are not neutral. They create a frame that pushes respondents.
Impact: defensive responding + reputational risk + unreliable insight.
Fix: neutral, plain language. If you can’t make it neutral, you probably shouldn’t ask it.
3) Forced answers with no “prefer not to answer”
For sensitive or identity-related items, forcing responses doesn’t improve accuracy. It increases falsification and resentment.
Impact: lower completion, worse data quality, more backlash.
Fix: allow skip patterns, include “prefer not to answer” where appropriate, and be explicit about why you’re asking.
4) Double-barreled questions
“Rate the friendliness and speed of service.” Two concepts. One answer.
Impact: unusable insight and false clarity.
Fix: split it. Or choose the one that matters most.
5) Unbalanced scales
If your scale options subtly push positivity (“Good / Very Good / Excellent”), your data is biased.
Impact: inflated satisfaction; wrong priorities.
Fix: balanced, symmetric scales; clear anchors.
6) Missing the behaviour layer
Opinion surveys without behaviour are dangerous. People will tell you what they wish they did, not what they did.
Impact: You optimize for stated preferences rather than revealed preferences.
Fix: include behaviour questions: “What did you do last time?” “What did you choose instead?” “What almost stopped you?”
7) No governance
A third party can help you run research. They cannot own your brand risk.
Impact: “We didn’t write it” becomes irrelevant the moment it’s your logo in the email.
Fix: establish governance, with review standards, red-line topics, a sign-off chain, and a pilot requirement.
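For the wording-level mistakes above (loaded language, forced sensitive items, double-barreled phrasing), a lightweight automated screen can catch obvious offenders before human review. The sketch below is illustrative only; the word lists and heuristics are my own crude assumptions, and it supplements, rather than replaces, the review and governance steps described later:

```python
# Minimal question-wording screen: flags obvious wording problems for human review.
# Word lists and heuristics are illustrative and deliberately crude, not exhaustive.

LOADED_WORDS = {"purity", "threat", "obviously", "should", "too much"}
SENSITIVE_HINTS = {"income", "religion", "immigration", "ethnicity", "politics"}

def screen_question(text: str, options: list[str], required: bool) -> list[str]:
    flags = []
    lowered = text.lower()

    # 2) Loaded language: framing words that push respondents.
    hits = [w for w in LOADED_WORDS if w in lowered]
    if hits:
        flags.append(f"loaded language: {', '.join(hits)}")

    # 4) Double-barreled: two constructs joined in one rating (crude check).
    if " and " in lowered:
        flags.append("possible double-barreled question (contains 'and')")

    # 3) Sensitive item forced with no opt-out.
    sensitive = any(w in lowered for w in SENSITIVE_HINTS)
    has_opt_out = any("prefer not" in o.lower() for o in options)
    if sensitive and required and not has_opt_out:
        flags.append("sensitive item is required with no 'prefer not to answer'")

    return flags

# Example usage with a hypothetical item:
print(screen_question(
    "How would you rate the friendliness and speed of service?",
    ["Poor", "Fair", "Good", "Excellent"],
    required=True,
))
```

A screen like this won’t judge relevance or tone for you, but it forces a second look at exactly the items most likely to embarrass the brand or distort the data.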
Why bad surveys sometimes create the very backlash they’re trying to avoid
There’s a modern misconception:
“We need to measure what people really think, even if it’s sensitive.”
Sometimes that’s true: in public opinion research, policy contexts, or academic work with rigorous ethics and framing.
But in a customer feedback survey, the customer did not opt into a civic debate. They opted into service, value, or a product experience.
When a brand inserts values-based statements into a post-visit survey, customers ask:
“Are you profiling me?”
“Are you trying to normalize a viewpoint?”
“Why is this here?”
“Is this what the company believes?”
And that’s where brand trust erodes quickly.
To make things even more complicated, your survey may not capture the most honest responses in the first place. Qualtrics has noted that consumers are becoming less willing to give companies direct feedback about bad experiences, prompting brands to seek insight where customers already are (social media, reviews, etc.).
So, if your survey experience feels off, customers may not complain to you. They may complain about you elsewhere.
The practical fix: treat survey design like product design
If you want surveys that produce valid insight and protect your brand, you need a repeatable “quality pre-flight checklist.”
Here’s the version I use because it’s fast enough for real organizations.
The 10-step Survey Quality Checklist (use this before anything goes live)
1) Define the decision. Write one sentence: “If the data shows X, we will do Y.”
If you can’t define the decision, don’t run the survey.
2) Define the audience. Who are you surveying and why? “All customers” is rarely the right answer.
3) Define the moment. Right after the visit? A week later? After issue resolution? Timing changes sentiment and recall.
4) Remove scope drift. Every question must earn its place. If it doesn’t inform the decision, delete it.
5) Ban loaded language. Rewrite anything that suggests “correct” answers.
6) Replace agree/disagree batteries. Use construct-specific measures.
7) Add the behaviour layer. At least 2 questions about real actions, not only opinions.
8) Allow non-response where appropriate. Use skip logic; include “prefer not to answer” when sensitive.
9) Pilot with 10–20 real humans. Not internal colleagues. Watch confusion, irritation, dropoff.
10) Governance sign-off. Insights + Brand + Legal (when needed). One owner. One final approval.
If you institutionalize this, you prevent 90% of “survey went wrong” scenarios.
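One way to institutionalize it is to encode the ten steps as a launch gate the survey owner must clear before anything is fielded. This is a minimal sketch under my own naming and wording, not a standard tool:

```python
# A simple pre-flight gate: the survey cannot be fielded until every check passes.
# Checks mirror the 10-step list above; wording and structure are illustrative.

PREFLIGHT_CHECKS = [
    "Decision defined ('If the data shows X, we will do Y')",
    "Audience defined (who and why)",
    "Timing/moment defined",
    "Every question tied to the decision (no scope drift)",
    "No loaded language",
    "No agree/disagree batteries",
    "At least 2 behaviour questions",
    "'Prefer not to answer' on sensitive items",
    "Piloted with 10-20 real customers",
    "Governance sign-off recorded (owner + approver)",
]

def ready_to_field(completed: set[str]) -> bool:
    missing = [c for c in PREFLIGHT_CHECKS if c not in completed]
    for item in missing:
        print(f"BLOCKED: {item}")
    return not missing

# Example: a survey that skipped the pilot and the sign-off does not go out.
done = set(PREFLIGHT_CHECKS) - {
    "Piloted with 10-20 real customers",
    "Governance sign-off recorded (owner + approver)",
}
print("Launch approved:", ready_to_field(done))
```

The point isn’t the code; it’s that “did we do the checklist?” becomes a recorded yes/no gate with a named owner, rather than a memory.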
The governance piece: outsourcing research is not outsourcing accountability
In the case above, the survey was developed “in collaboration with” a third-party research firm. That’s common.
But here’s what leaders need to internalize:
If the survey damages trust, the public doesn’t blame the vendor. They blame the brand.
This is why “survey governance” matters:
standards for language and tone,
a list of red-line topics for certain survey contexts,
a review chain that includes brand risk awareness,
and a policy for demographic or values-related questions (when they are appropriate, how they’re framed, and why they’re asked).
If your company doesn’t have this, your survey program is effectively operating with unmanaged reputational exposure.
How to ask sensitive questions without creating brand risk (when you truly must)
There are legitimate situations where sensitive questions are appropriate:
equity and access measurement (e.g., are we serving diverse communities fairly?),
safety and inclusion monitoring,
program design for underserved segments,
research with informed consent.
But the rules change in a customer experience survey. If you truly must ask sensitive questions, do it with:
context: why you’re asking, how it will be used,
choice: “prefer not to answer,”
separation: don’t embed it inside a service survey without explanation,
ethics: avoid ideological framing, loaded statements, or dog-whistle language.
The goal is to reduce harm and increase validity. If you can’t do that, don’t ask.
A better approach: move from “survey as event” to Voice of Customer as system
One reason surveys fail is that organizations treat them as a single instrument rather than one input into a broader VoC (Voice of Customer) system.
A robust VoC approach includes:
1) Transactional feedback (short, immediate)
2–5 questions
focused on the experience and recovery opportunities
fast triage
2) Relational feedback (quarterly/biannual)
deeper drivers
segment comparisons
trend tracking
3) Behavioral signals
repeat rate, churn, basket, bookings, cancellations
complaint themes and call reasons
4) Unstructured feedback
reviews, social, open text, frontline notes
coded into themes
When you combine these, surveys don’t have to carry the full burden of insight.
And importantly: when survey response is low, your system still sees what’s happening.
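As a rough illustration of what that means in practice, here’s a toy example (all numbers invented) that puts a low-response survey score next to behavioural signals from the same period, so a flattering survey number can’t stand on its own:

```python
# Toy VoC triangulation: compare a survey score with behavioural signals
# for the same month. All numbers are invented for illustration.

monthly_signals = {
    "2025-09": {"survey_csat": 4.5, "responses": 38, "repeat_rate": 0.61, "complaints": 12},
    "2025-10": {"survey_csat": 4.6, "responses": 29, "repeat_rate": 0.52, "complaints": 31},
}

for month, s in monthly_signals.items():
    # A rising CSAT from a shrinking respondent pool, alongside falling repeat
    # purchase and rising complaints, is a cue to distrust the survey number.
    warning = (
        s["responses"] < 50
        and s["repeat_rate"] < 0.55
        and s["complaints"] > 20
    )
    status = "INVESTIGATE: survey and behaviour disagree" if warning else "ok"
    print(month, s, "->", status)
```

The thresholds here are arbitrary; the design point is that survey scores, behavioural metrics, and complaint volumes are read together, not in separate reports.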
The multicultural / inclusion lens: surveys fail faster in diverse markets
Most companies underestimate how quickly surveys break down in diverse markets, particularly where multiple languages, cultural norms, and trust factors vary.
A few practical realities:
Certain formats (“agree/disagree”) can amplify bias across educational levels and language comfort.
A question that sounds neutral in one cultural frame can sound accusatory or coded in another.
Translation is not localization. Direct translations can create unintended meaning.
Demographic questions require care. Without context, they can read like profiling.
If you want valid insight across diverse audiences, “questionnaire design” has to include:
plain-language writing,
localized phrasing review,
culturally aware pilots,
and multiple completion modes (mobile-friendly, accessible, short).
This isn’t about being “politically correct.” It’s about data quality and decision accuracy.
In Canada, cultural nuance isn’t a “nice-to-have” in research—it’s table stakes for accuracy. I go deeper on why culture needs to sit at the core (not the sidelines) in Canada’s New Ad Reality: Culture at the Core.
A simple rule for leaders: if it can be screenshotted, it must be defensible
Here’s a rule I use with executive teams:
If any single survey screen can be screenshotted and shared, it must stand on its own as respectful, relevant, and defensible.
That rule prevents a surprising number of issues, because it forces you to ask:
“Is this question necessary?”
“Is it neutral?”
“Does it match the customer context?”
“Would we feel comfortable defending it publicly?”
If the answer is “no,” fix it before it ships.
Checklist: survey design best practices you can implement this week
If you want a short, actionable starting point, do these five things:
Cut your survey length by 30%.
Every extra question increases breakoff and satisficing.
Replace agree/disagree with better formats.
Use alternatives, behaviours, and clear anchors.
Add a “Why are we asking this?” line for any sensitive item.
Context increases trust and improves accuracy.
Pilot with real customers.
A 10-person pilot is the cheapest insurance you can buy.
Create a one-page governance policy.
Owner, sign-off chain, red-line topics, and an escalation rule.
Do those, and you’ll immediately reduce brand risk and improve the reliability of your insight.
FAQs
What are examples of bad survey questions?
Leading questions (“Don’t you agree…”), loaded language (“threat,” “purity”), double-barreled items, forced answers without opt-out, and irrelevant questions that don’t match the survey’s purpose.
How do bad surveys affect decision making?
They introduce measurement bias and nonresponse bias, producing results that look precise but don’t represent reality—leading teams to prioritize the wrong fixes, waste budget, and miss real drivers.
What is acquiescence bias in surveys?
It’s the tendency of some respondents to agree with statements regardless of content, especially in agree/disagree formats. Pew Research explains why this can distort results.
What is a typical survey response rate for customer feedback?
It varies widely by channel and audience. Benchmarks often cite ranges like 10–30% for online surveys, while some always-on CX programs may see lower.
How do I reduce survey bias?
Use neutral language, avoid agree/disagree batteries, add behavior questions, offer “prefer not to answer” when appropriate, pilot test, and apply governance.
Closing: the goal isn’t “more data.” It’s better decisions.
A customer feedback survey should do two things:
help customers feel heard, and
help leaders make better decisions.
When a survey fails, it often fails on both fronts.
If you only remember one line from this piece, make it this:
Surveys don’t just measure trust. They create it, or destroy it.


