The KPIs of Language Part 1: 3 Customer-Centric Metrics to Track Today

March 24, 2021

Often, when customer service (CS) teams embark on a multilingual translation effort, they want to know how to measure success. Some of the clearest indicators of a CS organization’s success as a whole are customer-centric metrics such as:

  • Customer Satisfaction (CSAT) scores
  • Net Promoter Scores (NPS)
  • Customer Effort Scores (CES)

When these KPIs are measured in the context of language, CS leaders get a clearer picture of how native-language translations impact customer happiness and willingness to recommend the brand to others. The subjectivity of translations makes them difficult to measure in terms of translation quality alone. (However, there are many efforts underway to improve the quality measurement process, including our own COMET framework for automated machine translation evaluation.)

Even so, quality metrics must be viewed in the context of the customer’s actual experience. What’s the best way to find out? Turns out, it’s as easy as asking.

CSAT as a tool to collect language insights

CSAT scores are the gold standard of customer-centric metrics, and they can be incredibly useful in the context of language. Specifically, CSAT can be a powerful data-collection tool that yields actionable insights to shape your CS strategy. Here are a few examples of how to examine CSAT (a small data-grouping sketch follows the list):

  • Average CSAT per language: When CSAT is broken down by language, you can start to see trends at a broader level. For example, certain language teams may be handling larger backlogs, so their CSAT scores could suffer due to longer wait times. In many cases, multilingual machine translation can help a single expert agent cover a variety of languages, improving these per-language scores.
  • Average CSAT scores per agent: Measuring average CSAT scores per agent can help identify improvements over time and home in on team members who may need additional training or performance-improvement recommendations. Since hiring expert agents is such a challenge, evaluating CSAT before and after implementing a new tool can show how an agent’s performance improves when previously manual tasks are automated; just make sure you benchmark scores before the new tool is in place. Benchmarking can help you replicate successful processes across agents, or identify outliers and understand their behaviors. For example, you may find that after you implement a translation tool, Japanese CSAT scores are still low. From there, you could implement a training program to ensure that English-speaking agents understand how to respond to Japanese-language queries so the machine translation tool works more effectively.
  • CSAT per location: CSAT per location can show you which BPO providers are performing the strongest in terms of language. For example, say you have one team in Mexico and one in Portugal. Even though both teams work primarily in English and use the same machine translation tools, the Portuguese BPO may have better CSAT scores. You can then compare both BPO centers in terms of cost vs. CSAT and decide how much of a tradeoff you’re willing to make.
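
To make that segmentation concrete, here is a minimal sketch of how these averages could be pulled from exported survey data. It assumes a hypothetical CSV with language, agent, location, and csat_score columns; the file and column names are illustrative, not tied to any particular helpdesk tool.

```python
import pandas as pd

# Hypothetical export of post-interaction CSAT surveys. Column names are
# illustrative: language, agent, location, and csat_score (e.g. on a 1-5 scale).
surveys = pd.read_csv("csat_surveys.csv")

# Average CSAT per language, per agent, and per location.
by_language = surveys.groupby("language")["csat_score"].mean().round(2)
by_agent = surveys.groupby("agent")["csat_score"].mean().round(2)
by_location = surveys.groupby("location")["csat_score"].mean().round(2)

print("Average CSAT per language:", by_language.sort_values(), sep="\n")
print("Average CSAT per agent:", by_agent.sort_values(), sep="\n")
print("Average CSAT per location:", by_location.sort_values(), sep="\n")
```

The same grouping, run before and after a tool rollout, gives you the benchmark comparison described above.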

NPS to measure team performance with language

The NPS has been used in the business world for over a decade to understand how likely a customer is to recommend your product or service to a friend or peer. NPS can be used to identify customers who may spend more with your organization (Promoters), as well as those who have a negative impression of your brand and can potentially be turned around (Detractors). For CS, NPS can be a powerful indicator of team performance.
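
For reference, NPS comes from a single 0–10 “how likely are you to recommend us?” question: respondents who answer 9–10 count as Promoters, 0–6 count as Detractors, and the score is the percentage of Promoters minus the percentage of Detractors. Here is a minimal sketch of that calculation, assuming nothing more than a plain list of survey responses:

```python
def net_promoter_score(responses: list[int]) -> float:
    """Compute NPS from 0-10 'likelihood to recommend' survey responses.

    Promoters score 9-10 and Detractors score 0-6; the result is the
    percentage of Promoters minus the percentage of Detractors.
    """
    if not responses:
        raise ValueError("No survey responses to score.")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Example: 5 Promoters, 3 Passives (7-8), and 2 Detractors -> NPS of 30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))
```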

A little-known fact is that 40 percent of customers will not buy from a brand in another language. Another 74 percent say they’re more likely to buy from a brand a second time if they’re offered post-sales support in their native language. Something as simple as offering native-language support could be a major boon to your NPS and turn those Detractors into Promoters.

However, NPS isn’t a catch-all metric for every brand, and measurement frequency really matters. For example, a behind-the-scenes B2B software company may deal with the same customers again and again, so measuring NPS too frequently may not make sense. A direct-to-consumer company, by contrast, should measure NPS after the customer has had time to adjust to the product. Some companies make the mistake of measuring NPS directly after purchase, which may not give the customer enough time to weigh in accurately.

CES: a powerful metric when combined with CSAT

According to Gartner, effort is the strongest driver of customer loyalty. Nearly all customers who have a high-effort service interaction (96 percent) become more disloyal, compared to just 9 percent of those who have a low-effort experience. High-effort experiences include things like channel switching, having to repeat information, generic service, and transfers.

Measuring CES is as easy as asking customers to rate how easy the company made it to handle their request. Getting service seamlessly in their native language through a low-friction website helpdesk, email, or chat interaction helps reduce that effort. Even better, low effort can lift other customer-centric metrics such as NPS: according to the same Gartner study cited above, NPS is 65 points higher for top-performing, low-effort companies than for high-effort companies.
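
As one illustration, here is a minimal sketch of scoring those responses, assuming the common 1–7 agreement scale for a statement like “The company made it easy to handle my request” (the scale and the low-effort cutoff are assumptions about how a team might report this, not a prescribed standard):

```python
from statistics import mean

# Hypothetical CES responses on a 1-7 agreement scale
# (7 = strongly agree that the company made it easy).
responses = [7, 6, 7, 5, 6, 4, 7, 6]

ces = mean(responses)
print(f"Customer Effort Score: {ces:.2f} / 7")

# Many teams also track the share of low-effort interactions (scores of 5-7),
# which pairs naturally with CSAT and NPS trend reporting.
low_effort_share = sum(1 for r in responses if r >= 5) / len(responses)
print(f"Low-effort interactions: {low_effort_share:.0%}")
```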

CES and CSAT can be measured together to understand the impact of language on customer satisfaction. For example, measuring CES at the outset establishes a baseline that can show how native-language support reduces customer effort. CSAT, on the other hand, can be reviewed on a monthly basis, depending on the type of product or service you offer; both B2B and consumer companies should collect it after every customer interaction (and then filter by agent, language, and location to gain additional insights). B2B companies can take it a step further by measuring end-user satisfaction in addition to these two KPIs.

Mastering both customer-centric and operational KPIs

When measuring the success of a CS team’s language operations, it’s important to weigh these customer-centric KPIs alongside operational metrics that focus on language flexibility, performance improvements, and cost savings. In part two of this blog series, we’ll focus on the internal operational metrics that drive team performance and success, such as first-call resolution and average handle time.

About the Authors
Sophia Malina is a Customer Success Manager, and Diana Afonso is Head of Customer Happiness.

Diana Afonso is Director of Technical Support at Unbabel. With over eight years of experience in the customer support field, she has a strong history of success in empowering and engaging customer support teams, driving quality, and acting as the voice of the customer to deliver a rewarding customer experience.