Conversational interfaces are especially well-suited for gathering feedback. In any other interface, a user satisfaction question disrupts the experience by taking the user to another window or opening a new channel of communication. Keeping people in their current conversation with your chatbot reduces the friction of giving input, so you can expect higher response rates. It would be an oversight not to take advantage of the conversational interface to get user feedback, so here are three questions worth asking at the end of every customer interaction.

“How easy was it to interact with Company X?”

Customer Effort Score (CES) measures how easily someone was able to get their support issue addressed. We all want our problems solved with minimal effort on our part, and if a customer has a hard time with a company’s support, it’s unlikely they will stay a customer. Though it won’t provide any definitive conclusions, it’s a good indicator of loyalty.

The question is posed on a scale from “very difficult” to “very easy.” If a customer replies that they faced extreme difficulty while interacting with the chatbot, that session is worth examining to uncover the pain points they ran into.

“How would you rate your satisfaction with your experience?”

The results from this question make up your Customer Satisfaction score (CSAT). CSAT is one of the most straightforward methods of gauging how customers are responding to your chatbot. Customers are asked to rate their experience on a scale, usually 1-5 or 1-10. Since it directly measures satisfaction levels in a very simple way, it is one of the most commonly used satisfaction metrics. It’s also easily benchmarked since the American Customer Satisfaction Index provides benchmarks among various categories.

CSAT allows active customers to give a quick evaluation, whether good or bad, at the end of their interaction with the chatbot. To calculate the score for the most accurate reading, follow a “top-2 box” measure, which only takes into account the two highest possible ratings. Divide the number of top-2 responses by the total number of responses; the result is the percentage of satisfied customers.
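The top-2 box calculation above can be sketched in a few lines of Python. This is a minimal illustration, assuming a 1-5 rating scale; the `ratings` list is hypothetical sample data, not real survey results:

```python
def csat_top2(ratings, scale_max=5):
    """Return the percentage of respondents who gave one of the
    two highest possible ratings (the "top-2 box" measure)."""
    top2 = [r for r in ratings if r >= scale_max - 1]
    return 100 * len(top2) / len(ratings)

# Hypothetical batch of 1-5 ratings collected at the end of chat sessions
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(csat_top2(ratings))  # 70.0 -> 70% of customers are "satisfied"
```

Note that only 4s and 5s count toward the score here; a middling 3 is treated the same as a 1, which is exactly what makes top-2 box a stricter read than a simple average.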

If a large portion of customers report low satisfaction with their experience, the chatbot is not working as it should to solve their problems. To see what went wrong, look into the sessions behind these low ratings to identify and fix the obstacles.

“How likely are you to recommend us to a friend?”

Otherwise known as the Net Promoter Score (NPS), this metric follows the idea that people are more honest when it comes to sharing recommendations with a friend than when asked directly whether they liked their experience. After all, when someone shares a recommendation, they are staking their reputation on it. They have to really believe it’s good in order to vouch for it, so it’s a good measure of customer loyalty.

NPS is measured on a scale of 0-10 and, like CSAT, it is another top-2 measure. Those top-2 responses (9s and 10s) are called “Promoters”; everyone else is either a “Detractor” or a “Passive.” Passives like the chatbot, but not enough to recommend it, and their ratings don’t affect the score. Detractors, on the other hand, did not like their experience, and theirs are the ratings that lower the score. To find your NPS, subtract the number of Detractors from the number of Promoters, divide by the total number of respondents, and multiply by 100.
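As a sketch, the NPS formula can be written out in Python. This assumes the conventional 0-10 NPS scale, with Promoters at 9-10 and Detractors at 0-6; the `scores` list is hypothetical example data:

```python
def nps(scores):
    """Net Promoter Score: percentage of Promoters (9-10)
    minus percentage of Detractors (0-6). Passives (7-8)
    count in the total but neither add to nor subtract from the score."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical 0-10 responses from end-of-chat surveys
scores = [10, 9, 7, 8, 6, 10, 3, 9, 8, 10]
print(nps(scores))  # 30.0 -> five Promoters, two Detractors, ten respondents
```

Because the score is Promoters minus Detractors, it ranges from -100 (all Detractors) to 100 (all Promoters), which is why a score above zero already counts as good.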

A “good” NPS is also relative. Generally, anything above zero is considered good and anything above 50 is excellent. However, if you have an NPS of 50 and your competition has an NPS of 60, it might be a good time to think about how to improve the customer experience.

Leverage data for a better experience

Measuring a chatbot’s performance is vital to determining where it’s succeeding and where it needs more attention, and there’s no better source than the customers. Knowing how much they like the chatbot is the first step, but you also need to use that data to inform future iterations. Dashbot’s User Satisfaction Reports allow you to dig into every conversation that resulted in a positive or negative rating.

Pinpointing which conversations to examine expedites the discovery of pain points, so it takes less time to optimize the problem areas. The User Satisfaction Report catches obstacles to good ratings faster and shows a clearer path to happy customers.

About Dashbot

Dashbot is an analytics platform for conversational interfaces that enables enterprises to increase satisfaction, engagement, and conversions through actionable insights and tools.

In addition to traditional analytics like engagement and retention, we provide chatbot specific metrics including NLP response effectiveness, sentiment analysis, conversational analytics, and the full chat session transcripts.

We also have tools to take action on the data, like our live-person takeover of chat sessions and push notifications for re-engagement.

We support DialogFlow, Alexa, Google Assistant, Facebook Messenger, Slack, Twitter, Kik, SMS, web chat, and any other conversational interface.

Contact us for a demo