Today’s data-driven business world knows one thing well: what doesn’t get measured doesn’t get done. And vice versa: what gets measured, gets managed. So how do we apply those popular truisms to content quality management in a global, multilingual scenario? And what are the common pitfalls to watch out for?
In day-to-day conversations at the workplace, terms like “data”, “metrics”, “information”, and “KPIs” are often used interchangeably. However, understanding the distinctions between the concepts behind those terms pays off big time when trying to manage and improve content quality and content performance across multiple teams, companies, languages, and markets. So let’s first set the terminology straight (as we always should whenever working on global content quality), and then see how to apply a data-driven approach to your global content Quality Management practice.
The map of the data-driven world is shaped like a pyramid
One way to interpret the idea of being data-driven is through the model called a “DIKW pyramid” (short for Data – Information – Knowledge – Wisdom):
The raw signals we gather from the outside world are data. They are meaningless unless context is provided. For example, what does the number 3576 mean for your content quality management program? Hard to tell.
Now, what if I told you that 3576 is the number of visitors to your corporate website? That’s more structured and has a specific context. However, it’s still far from being useful if you’re trying to make sense of how your global content performs and whether it’s good or not. So let’s agree that this is still data, trying hard to get to the next level.
OK, taking that one step further still: 3576 unique visitors have signed up for our free product trial in France during the last 4 weeks from the product web page. Now we finally have some information. It gives you a solid basis from which to draw further insights. For example, by asking questions like “How does that number compare across geographies?”.
So now you take your top 4 target markets with their respective languages – Spanish (Argentina), English (United States), German (Austria), French (France) – and compare this information for each of them. You notice that the first two locales (both based in the Americas) have brought in a larger absolute number of trial users, while the latter two locales (both based in Europe) have a higher % increase of trial users month-over-month, despite having lower absolute numbers. At this point, we have detailed knowledge about that particular situation.
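The absolute-vs-relative comparison above can be sketched in a few lines. The locale codes and sign-up counts below are purely illustrative, not real data:

```python
# Hypothetical trial sign-ups per locale for two consecutive months
# (previous month, current month); all numbers are made up for illustration.
signups = {
    "es-AR": (2900, 3100),
    "en-US": (3400, 3576),
    "de-AT": (800, 960),
    "fr-FR": (900, 1080),
}

# Month-over-month % increase for each locale
growth = {
    locale: (curr - prev) / prev * 100
    for locale, (prev, curr) in signups.items()
}

for locale, (prev, curr) in signups.items():
    print(f"{locale}: {curr} sign-ups this month ({growth[locale]:+.1f}% MoM)")
```

With numbers like these, the Americas locales win on absolute volume while the European locales win on relative growth – exactly the pattern described above.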
Finally, you realize that you don’t have to do anything at all about this difference between Americas conversion rates and Europe conversion rates – because it is, in fact, not statistically significant. As you also know that all other regions and languages have smaller traffic when compared to the top 4, you decide to stop tracking this metric entirely for the next 6 months and focus on something more useful instead. That’s best described as wisdom.
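How would you check that the difference between two conversion rates is not statistically significant? One standard approach is a two-proportion z-test. The counts below are hypothetical, and the stdlib-only implementation is a minimal sketch:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference between two
    conversion rates (conversions / visitors) significant?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic: Americas vs Europe trial sign-up conversions
z, p = two_proportion_z_test(3576, 120_000, 1810, 62_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers, p comes out well above the conventional 0.05 threshold, so you would conclude, as in the story above, that the regional difference is noise rather than signal.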
Note: In the business world, an applied version of the DIK(W) pyramid is data – metrics – indicators. Here, metrics roughly correspond to information, and indicators roughly correspond to knowledge. The most important 1-3 indicators for a business area are often called Key Performance Indicators, or KPIs for short. Why do we want to select and prioritize just a small handful of indicators? Because each measurement has its own cost, and because it helps us avoid “death by dashboards”.
Leading metrics and lagging metrics
Suppose we have knowledge about how well our multilingual digital content matches our pre-defined requirements. In other words, we’ve defined indicators of our content quality. Those indicators can be based on various models for atomistic and holistic quality measurement, or even a combination of such models. We can actually find out the value of quality indicators before we make our content public and expose it to our audiences. In fact, that’s what most companies do as part of their Quality Management strategy for digital globalization programs.
We also have knowledge about the business impact that the very same multilingual content has achieved after being published and read by our end users. In other words, we’ve defined indicators of our content performance. In the world of digital globalization, those indicators can be based on various web content marketing metrics (for example, bounce rates, clickthrough rates, and conversion rates). However, we can only find the values of those indicators post factum: once the global content has been pushed out to the big wide world, there’s no turning back.
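The performance metrics named above all derive from raw event counts. As a quick sketch (the event counts are hypothetical, and the exact definitions vary by analytics platform):

```python
# Hypothetical raw event counts for one localized email campaign
impressions, clicks, signups, bounces = 50_000, 1_250, 180, 400

ctr = clicks / impressions * 100      # clickthrough rate
conversion = signups / clicks * 100   # click-to-signup conversion rate
bounce_rate = bounces / clicks * 100  # landing-page bounce rate

print(f"CTR {ctr:.2f}% | conversion {conversion:.1f}% | bounce {bounce_rate:.1f}%")
```

The key point stands regardless of the exact formulas: none of these values exist until real users have already seen the published content.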
Now comes the big question: how do those two indicators, content quality and content performance, relate to each other? For example, does a better quality score for Spanish (Argentina) localized email content always come with an increase in the clickthrough rates for those email campaigns? In other words, does content quality predict content performance?
If yes, we say that our content quality indicator is the leading metric. The content performance indicator then becomes the lagging metric. We’re using the values of the leading metric (quality) to get a notion of what the lagging metric (performance) will likely be in the future.
Word of caution, though: correlation doesn’t equal causation. The fact that content performance always goes up (or down) together with content quality does not yet mean that one is the direct result of the other. There might be other independent factors that influence both quality AND performance (just like with ice cream sales and deaths by drowning). So finding out that, for your multilingual content, quality and performance are indeed correlated is just the first step on the long road toward discovering the real nature of this intricate, yet strategically important, relationship.
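Measuring that correlation is the easy part. A minimal sketch, using made-up monthly quality scores and clickthrough rates for a single locale, might compute a Pearson coefficient like this:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical monthly data for one locale: LQA scores vs clickthrough rates (%)
quality = [82, 85, 78, 90, 88, 93]
ctr = [2.1, 2.4, 1.9, 2.9, 2.6, 3.1]
print(f"r = {pearson_r(quality, ctr):.2f}")
```

Even a coefficient close to 1.0 here would only tell you the two series move together, not that quality causes the clickthroughs: a confounding variable (say, seasonal campaign budgets) could be driving both.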
Capture and analyze all data on quality, not just pieces of it
So you might ask: how do I make this journey shorter? How do I get to the bottom of what’s influencing my content performance and understand whether content quality is indeed the culprit? That becomes especially hard if you can’t read & understand most of the languages that your team localizes and publishes your global content in.
Unfortunately, there is no universal answer to this question. However, one useful piece of advice is to approach content quality from a holistic perspective. Focusing on just one aspect of multilingual content quality (e.g. only the translation quality, or only internal review feedback, or only human expert judgment) and ignoring everything else is highly hazardous because this is NOT how your end users and readers will perceive your content in the real world.
Instead, try to get the whole 360-degree picture by capturing the entire range of sources from which you already get, or can get, any data on quality of your multilingual content. This gives you a better chance of spotting any lurking variables affecting the quality-performance relationship. Here are some ways to do this:
- If your global content is a software app and you’re localizing the user experience (and the UI in particular), blend the software testing results with linguistic quality inspection results.
- If you’re producing technical content that gets translated into several languages, combine the source language quality measurement with the target language quality measurements.
- If you’re crowdsourcing translations for your customer support portal, merge your senior translators’ or language moderators’ feedback with your end user translation quality ratings (e.g. 1-5 stars).
- If you’re applying Machine Translation for your user-generated content, combine automatic metrics (including quality estimation) with human assessment.
- If you’re doing a third-party evaluation of content localized by another Language Service Provider, juggle your editor’s review results with the output from automatic translation QA tools.
- If you’re using in-country reviewers to revise & approve your multilingual copy, make sure you’re capturing every single piece of their feedback (even if it was sent by text message, in the middle of the night, to your boss’s mobile phone that was offline at the time). Then compare their feedback to sentiment analysis that captures what your customers say.
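One common way to blend such heterogeneous signals is a weighted composite score. The sources, weights, and the 0-100 scale below are all illustrative assumptions, not an established standard:

```python
# Toy sketch: blend several quality signals into one composite score.
# Every signal is assumed to be rescaled to 0-100 before blending.
WEIGHTS = {
    "linguistic_review": 0.40,   # human LQA score
    "automated_qa": 0.20,        # automatic translation QA pass rate
    "user_ratings": 0.25,        # 1-5 stars rescaled to 0-100
    "reviewer_feedback": 0.15,   # in-country reviewer approval rate
}

def composite_quality(signals: dict) -> float:
    """Weighted average of whichever known quality signals are present,
    renormalizing the weights so missing sources don't drag the score down."""
    present = {k: v for k, v in signals.items() if k in WEIGHTS}
    total_weight = sum(WEIGHTS[k] for k in present)
    return sum(WEIGHTS[k] * v for k, v in present.items()) / total_weight

score = composite_quality({
    "linguistic_review": 88,
    "automated_qa": 95,
    "user_ratings": 4.2 / 5 * 100,  # 4.2 stars -> 84
    "reviewer_feedback": 76,
})
print(f"Composite quality: {score:.1f}")
```

The exact weights are a business decision; the point is that a single number per locale, fed by all the sources above, is far easier to correlate against performance metrics than six disconnected spreadsheets.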
How do you currently compare your content quality measurements with your content performance metrics? What are some of the results that you’ve recently seen? Does quality correlate with performance, or does each of them live its own separate life? Are there other variables besides quality that influence content performance? Share your experience in the comments section!