You invest a good few hours into writing and designing your NPS survey, carefully picking the questions and wording them just how you want them.
When you’re finally done, you choose who to send it to and fire it off, crossing your fingers for positive responses. Then you wait…
…And you wait, and you wait some more. Hardly anyone fills it out. Why did this happen?
When sending out surveys, most companies worry about being inundated with reports of bad customer experiences. Despite your service team’s best efforts, the post-purchase part of the customer journey is always a blind spot unless the customer reaches out again or reconverts.
But behind the unpleasant possibility of negative responses from customers lies a potentially more disappointing outcome: a lack of responses altogether. If only a very small subset of your customer base responds, whether they report a good, bad, or neutral experience, you can’t do much with that data.
How can you structure your surveys to get a higher rate of good-quality responses, so your survey results actually reflect the general sentiment of your customer base? What is a good survey response rate to aim for? What’s a good NPS score? And what are some reasons why your surveys may not be performing as expected? Let’s dive in.
Jump to:
What is a good survey response rate?
Why are survey response rate and quality important?
How to improve survey response rate
Why do I have a low survey response rate?
Why is NPS score important?
How to get the most accurate picture of customer sentiment
What is a good survey response rate?
When it comes to survey response rates, there is no one number everyone should shoot for; your NPS response rate goals should be based on your industry, audience, and survey delivery method. Survey response rate benchmarks can vary depending on who you ask and where you look, and are influenced by a number of factors, so it’s best to view your benchmark as a range or a minimum percentage to hit.
Also, it’s important to keep in mind that response rate alone doesn’t fully reflect the quality of survey data. Surveys are meant to gauge insights from your customer base in its entirety, so responses should be representative of your customer demographics. More on that in a minute.
According to CustomerGauge, the average NPS survey response rate for B2B brands is 12.4%, but rates can vary anywhere between 4.5% and 39.3%. And Delighted, a feedback program by Qualtrics, reports that companies sending out NPS surveys should aim for a response rate of at least 20%. The average survey response rate - for all surveys, not just NPS surveys - reported by Delighted users ranges between 6% and 16% depending on the survey channel.
Factors to consider when setting an NPS survey response rate benchmark include:
- Survey delivery method. Email surveys and SMS surveys tend to perform better than website or in-app surveys.
- Your target audience. Younger people (those under 65) may be more likely to respond to online surveys than older groups.
- Whether the survey is internal to the company or external. Internal surveys tend to have much higher response rates.
- B2B vs. B2C. B2B response rates may be generally higher than B2C, but this is likely because B2C surveys tend to have larger sample sizes.
- The size of your audience. If you’re surveying a large group of people, your response rate may be a bit lower than if you had a smaller sample size. You may be trying to reach as many people as possible, but as we’ll get into later, a larger sample size may not always be better.
- Your industry. Customers may be more likely to engage with some industries more than others.
Why are survey response rate and quality important?
Most of the time, a low number of survey responses relative to the number of sends means the data is less accurate. The point of surveys is to use a smaller group to represent the opinions of a larger one - i.e., your entire customer base or a significant subset of it. Too few responses mean the data may not be statistically valid.
The lower your response rate, the higher your margin of error will be. Simply put, a margin of error represents how closely your survey results reflect the views of your customer base as a whole, and offers a range for how much your calculations may differ from reality.
Without getting too into the weeds with statistical formulas, the margin of error is calculated using:
- Your confidence level - How confident you are that the data accurately reflects the opinion of the customer base. This is usually set at 95%, and is represented in the calculation by a value known as a z-score.
- Standard deviation - How dispersed the values in your data are, or how much individual responses deviate from the average across your full dataset.
- Sample size - The number of people who have completed your survey.
A small sample size relative to your audience size is unlikely to be representative of your entire customer base, so your margin of error will have to be larger.
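To see how the three components interact, here is a minimal sketch in Python using the standard margin-of-error formula (z-score times standard deviation, divided by the square root of the sample size). The 1.96 z-score corresponds to a 95% confidence level; the standard deviation and sample sizes below are illustrative numbers, not benchmarks:

```python
import math

def margin_of_error(z_score: float, std_dev: float, sample_size: int) -> float:
    """Margin of error = z-score * (standard deviation / sqrt(sample size))."""
    return z_score * (std_dev / math.sqrt(sample_size))

# Illustrative example: NPS responses on a 0-10 scale with an
# assumed standard deviation of 2.5, at 95% confidence (z = 1.96).
moe_50_responses = margin_of_error(1.96, 2.5, sample_size=50)    # ~0.69 points
moe_500_responses = margin_of_error(1.96, 2.5, sample_size=500)  # ~0.22 points
```

Note how the margin of error shrinks as the number of completed surveys grows: with 500 responses instead of 50, the same survey pins down the average score roughly three times more tightly.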
The following is the formula for calculating margin of error: