Driving content performance is just like washing your apples

The Apple Analogy (the fruit, not the company)

An apple a day keeps the doctor away. But is that always the case?

Imagine a day at the office. A colleague of yours, say, Helen, has brought in some apples from the grocery store. They are on the coffee table near the water cooler. Some are green, and some are red. All are shiny and glossy and look real yummy.

As you walk by, you reach out and grab an apple. It’s still an hour until lunchtime, but you’re already starving. Your first instinct is to start eating it immediately. As your hand is moving toward your mouth, a thought suddenly strikes you: did anyone wash these apples yet? You have no idea, and the colleague who brought them is nowhere to be found. The apple surely looks clean, but should you really risk it?

Grudgingly, you head into the kitchen, turn on the water and rinse the apple thoroughly. Just in case.

As you’re heading back to your cubicle, you bump into Helen. She notices the apple in your hand and the water dripping from it. “My, you shouldn’t have bothered. I washed them all this morning,” she says.

Bummer. You’ve just wasted 10 minutes of your life doing work that has already been done. But then again, how could you know?

Comparing Apples to… Global Content!

Exactly the same thing regularly happens in global content supply chains due to a lack of transparency. Our extended content authoring, copywriting, translation, and localization teams wash the same “apple” over and over again. It’s very easy for people to overlook the fact that their “apple” has already been washed multiple times before, because they have little visibility into each other’s information, processes, and added value. Here are some typical scenes of “re-washing the apple” from the daily life of internationalization and localization projects:

  • Didn’t we already check the brand terminology for our latest German email promo campaign on the translation vendor’s side? Who cares; we’ll just have our in-country office check it again. They’ll do it manually, of course, for some extra highly-paid wasted time. Sure, they’re probably taking this time away from supporting field sales in their daily jobs, but that’s not my problem, is it?
  • Did someone already review the right-to-left layout for that 171-page Arabic product datasheet? I’m not sure; let’s just send it to our LQA vendor and they will take care of it. Only 10 linguistic hours and we’re done. It’ll take a week, you say? National holidays? No problem. Let’s just delay the product launch in the MENA region that we’ve been preparing for the last 4 months.
  • Has anybody tested the latest build of the Swahili mobile app for truncated UI strings? What, Linguistic Testing already covered that and found no bugs? I’m not sure I trust those outsourcers; they’re based in China, after all. Let’s have Brian, our senior test engineer, go through all 13,571 screens once again before we push the final build to the App Store. You know, just to make sure they didn’t miss anything.

Sensitivity towards Quality Attributes, or: Should We Always Wash Them Again?

So why aren’t global content professionals around the world, especially managers, concerned about so much money and time being wasted on their global content just like that?

That’s because of perceived risk, which in turn is caused by a lack of trust. Just as you won’t eat your apple until you’re 100% sure it’s clean (in effect, high quality), they don’t want to publish their global content until they’re 100% sure it’s high quality according to some explicit (or, more frequently, implicit) requirements.

But what if you always knew up front whether your apple had been washed or not? Whether it had been washed completely, or from one side only? Whether a special washing liquid was used, or just regular tap water? Whether your apple was washed by a certified expert, or by a random passer-by? That’s what real supply chain transparency should enable you (and anyone else) to learn effortlessly.

And here’s an even more interesting question: what if you knew that eating an unwashed apple has, say, only a 0.0001% chance of negatively affecting your health?

Exactly the same concept can be applied to the connection between content quality and content performance. Let’s call it sensitivity. Here’s how it works:

  • If your readers and users are sensitive to a certain aspect of content quality, that may affect content performance to an extent, and you will see your global content business KPIs change.
  • If they are not sensitive to this aspect at all, content performance and KPIs are likely to stay the same no matter how much money you invest into improving content quality in this aspect.
  • The Kano model is an excellent way to think about customer sensitivity to the various attributes, features, and aspects of your products, services, and content.
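To make the Kano idea concrete, here is a minimal sketch in Python based on the standard Kano evaluation table, which maps a pair of survey answers (how a user feels when a quality attribute is present, and when it is absent) to a Kano category. The answer scale and example attributes are illustrative, not tied to any specific survey tool.

```python
# Answer scale for both the functional question ("How do you feel if the
# content HAS this quality attribute?") and the dysfunctional question
# ("... if it does NOT have it?").
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

# Rows: functional answer; columns: dysfunctional answer (standard Kano table).
KANO_TABLE = {
    "like":     ["questionable", "attractive", "attractive", "attractive", "performance"],
    "expect":   ["reverse", "indifferent", "indifferent", "indifferent", "must-be"],
    "neutral":  ["reverse", "indifferent", "indifferent", "indifferent", "must-be"],
    "tolerate": ["reverse", "indifferent", "indifferent", "indifferent", "must-be"],
    "dislike":  ["reverse", "reverse", "reverse", "reverse", "questionable"],
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map a pair of Kano survey answers to a Kano category."""
    return KANO_TABLE[functional][ANSWERS.index(dysfunctional)]

# A user who likes flawless terminology but merely tolerates its absence:
print(classify("like", "tolerate"))   # attractive
# A user who expects correct grammar and dislikes its absence:
print(classify("expect", "dislike"))  # must-be
```

In Kano terms, readers are highly sensitive to “must-be” attributes (their absence hurts KPIs) and largely insensitive to “indifferent” ones, no matter how much you invest in them.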

Knowing all that, perhaps you might not care THAT much about re-washing that apple anymore. Right? Let us know in the comments.

4 ways for global content experts to slice & dice quality

A tool for slicing & dicing content quality methods into convenient bite-sized pieces

Today, we are venturing into more technical territory than usual. This article will likely be most interesting to globalization experts, as well as those looking to deepen their knowledge of applied global content quality management. What happens to quality evaluations, and how do they transform, as we go into the nitty-gritty of multilingual content quality management frameworks? How do we make sense of these frameworks and their interrelations? And how do we come back to the level where it all makes sense to people uninitiated in localization quality matters (that is, to 99.9% of the world’s population), and talk to them in their language?

Over the years, the localization industry has come up with several well-structured methodologies to define, categorize, and measure the various aspects of quality related to multilingual translated content. Notable frameworks in this space include the TAUS Dynamic Quality Framework (DQF), Multidimensional Quality Metrics (MQM), and the Logrus Quality Triangle. However, it’s easy to lose sight of the forest for the trees. How do those frameworks fit together? How do established localization industry processes relate to them? And what connection does all of the above have to the Ultimate Content Quality Question: does our global content actually impact the desired business KPIs?

Here is a method we might use to further structure and refine our thinking about some of the global content quality management approaches, processes, methods, and techniques. It relies on 4 categories:

  1. Contextuality*: bilingual versus monolingual
  2. Technology: machine versus human
  3. Expertise: untrained versus professional
  4. Granularity: atomistic versus holistic

Note that these categories are not mutually exclusive. That is, each quality method or technique belongs to all 4 categories at once; the only question is where exactly it sits within each one. If we imagine each category as a horizontal scale, is a given technique closer to the left end or to the right end? You’ll find details and examples below.

Those of us who are mathematically inclined may want to think of each category as an individual dimension in a 4-dimensional space. A four-coordinate vector (c, t, e, g) then represents the position of a technique or method within that space.
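For the mathematically inclined, here is a minimal sketch of that idea in Python: each technique becomes a point (c, t, e, g) in [0, 1]⁴, where 0 is the left end of each scale (monolingual, human, untrained, atomistic) and 1 the right end (bilingual, machine, professional, holistic). The coordinate values below are my own illustrative guesses, not canonical assignments.

```python
from math import dist  # Euclidean distance between two points

# (contextuality, technology, expertise, granularity) for a few techniques;
# the exact numbers are illustrative estimates only.
techniques = {
    "proofreading":       (0.0, 0.0, 1.0, 0.1),  # monolingual, human, professional, atomistic
    "bleu_metric":        (1.0, 1.0, 1.0, 0.2),  # bilingual, machine, atomistic
    "crowd_star_ratings": (0.0, 0.0, 0.0, 0.9),  # monolingual, human, untrained, holistic
    "usability_testing":  (0.0, 0.0, 0.0, 1.0),
}

def most_similar(name: str) -> str:
    """Find the technique closest to `name` in the 4-D space."""
    others = {k: v for k, v in techniques.items() if k != name}
    return min(others, key=lambda k: dist(techniques[name], others[k]))

print(most_similar("crowd_star_ratings"))  # usability_testing
```

One practical use of such a representation: when a process step is dropped (say, for budget reasons), nearby points suggest which other techniques cover similar ground.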

Normal people, on the other hand, should just read on. I promise it will all make sense eventually 🙂

Contextuality*: Bilingual versus Monolingual

*OK, I admit it: I’ve just made this word up. My text editor underlines it in bright red as I’m typing this article. I’d welcome your suggestions on what to call it.

This spectrum is about the volume and nature of information that’s taken into account when making a judgment about content quality.

On the monolingual end of the spectrum, we consider just the content itself, in the language it appears in at the moment of evaluation. Proofreading for spelling & grammar mistakes is a good example of a monolingual process.

On the bilingual end of the spectrum, we can also refer to the original version of the content in its native language (the all-mighty source!) and are able to compare the translation with the source. Editing for translation accuracy is a good example of a bilingual process.

Technology: Machine versus Human

This spectrum is, surprisingly, exactly what it sounds like.

Some types of content quality evaluations or assessments are produced by actual people working with your content (for instance, stylistic copyediting, revision, end user feedback, or usability testing).

Others are produced by software that analyzes your content (for instance, MT quality metrics, automatic translation quality checks, readability statistics, or sentiment analysis).

There are also certain quality procedures that may be performed either way, with varying efficiency, reliability, and costs (for example, manual vs automated software localization testing or manual vs automated spelling and grammar checks).
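To illustrate the machine end of this spectrum, here is a minimal sketch of an automated translation quality check: verifying that numbers and {placeholders} in a source segment survive intact in its translation. The function name and segment strings are made up for illustration; real QA tools run dozens of such checks.

```python
import re

def qa_check(source: str, target: str) -> list[str]:
    """Return a list of issue descriptions; an empty list means the check passed."""
    issues = []
    for pattern, label in [(r"\d+(?:[.,]\d+)?", "number"), (r"\{[^}]+\}", "placeholder")]:
        # Sort so that ordering differences (common across languages) don't flag.
        src_items = sorted(re.findall(pattern, source))
        tgt_items = sorted(re.findall(pattern, target))
        if src_items != tgt_items:
            issues.append(f"{label} mismatch: {src_items} vs {tgt_items}")
    return issues

# Clean translation: numbers and placeholders match, so no issues are reported.
print(qa_check("Save {count} files in 5 seconds",
               "{count} Dateien in 5 Sekunden speichern"))  # []
# Dropped placeholder: the check reports a mismatch.
print(qa_check("Save {count} files", "Dateien speichern"))
```

Checks like this are cheap, fast, and tireless, which is exactly why they sit at the machine end of the scale; judging whether a translation actually reads well stays at the human end.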

Expertise: Untrained versus Professional

This spectrum is applicable mostly to the human end of the above Technology spectrum. However, with some creativity one can find a way to apply it to machines as well. For the mathematically inclined among us, let’s agree to leave this as an exercise for the reader 🙂

Here, on one side, we have methods relying on dedicated, educated, well-trained evaluators (for example, Error Typology reviews where professional linguists, typically after undergoing extra training, classify atomistic issues according to some metric).

On the other side, we find approaches that rely on evaluators without any particular expertise or training (for example, crowdsourced quality evaluation methods or votes, as well as acceptance testing techniques and end-user feedback). In a global setting, it’s usually implied that these individuals possess the relevant language skills.

Granularity: Atomistic versus Holistic

This spectrum is the most developed and the most popular in the global content quality management domain. Feel free to skip ahead to the Content Performance subsection if you are already familiar with the atomistic vs holistic dichotomy.

Atomistic Quality

On the atomistic end of this spectrum, we operate at the microscopic level of “content atoms”. That is, individual sentences, words, and characters that make up a piece of content in a particular language.

  • The negative impact of any quality issues on this level is usually limited to the confines of the sentence (or, at most, the paragraph) where the issue has occurred.
    • An important exception to the above rule is showstopper issues, best captured by the Logrus Quality Square model. Showstopper errors actually impact the holistic level (see below), despite being atomistic by nature.
  • Example process steps operating mostly on this level:
    • Language Quality Inspection/Error Typology reviews (e.g. using MQM taxonomy, DQF, older models like LISA QA Model, or arbitrary translation error taxonomies)
    • proofreading for spelling, grammar, and style
    • editing (certain types)
    • software localization testing (in many cases)
    • Machine Translation quality metrics, e.g. BLEU and METEOR
    • DTP QA
  • How people might talk about this level:
    • “This translated term doesn’t fit the context of this sentence.”
    • “This comma is not needed here”
    • “Completely ignored the grammar. Should be future tense, not the past”
    • “The word A was mistranslated as B”

Holistic Quality

On the spectrum’s opposite, holistic end, we operate with the overall perceptions and impressions that a piece of content as a whole leaves on the end reader or end user.

  • The holistic level relates to people acquiring desired knowledge, performing desired actions, or changing their attitudes in the desired way after coming in touch with your content.
  • In other words, this is all about user and reader experience, not the “atoms” that make it up. Think of it as a whole that exceeds the sum of its individual parts.
  • Example process steps operating mostly on this level:
    • Accuracy (Adequacy) and Fluency reviews of the entire text (as opposed to individual sentences)
    • ratings (e.g. 1-5 stars)
    • end user feedback (in some cases)
    • in-country review (in some cases)
    • usability testing
    • “overall feedback” sections of Error Typology reviews
  • How people might talk about this level:
    • “This doesn’t sound like a native speaker.”
    • “Was this translated by a robot or something?”
    • “I don’t understand the point they are trying to make”
    • “So many errors I couldn’t find the right button to click and deleted the app in disgust”
    • “It. Just. Doesn’t. Work.”

Content Performance: the Pinnacle of Holistic Quality Evaluations

Out of all holistic metrics, some are actually “more holistic” than others. Content performance metrics evaluate the overall, ultimate success of global content in any given language. They provide a sense of whether the content has actually reached the desired outcomes for the business that commissioned it, as well as for the people who have consumed it or interacted with it. Examples include conversion rates, customer satisfaction, learning outcomes, Return on Investment, and many others.
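Here is a minimal sketch of one such metric: per-locale conversion rate of a localized landing page. The locales, visit counts, and sign-up counts are made up for illustration; real numbers would come from your analytics platform.

```python
# Hypothetical analytics data: page visits and sign-ups per localized version.
visits = {"de-DE": 12_400, "ja-JP": 8_150, "ar-SA": 3_020}
signups = {"de-DE": 372, "ja-JP": 489, "ar-SA": 30}

# Conversion rate = sign-ups / visits, computed per locale.
conversion = {loc: signups[loc] / visits[loc] for loc in visits}

for loc, rate in sorted(conversion.items(), key=lambda kv: kv[1]):
    print(f"{loc}: {rate:.2%}")
```

A locale that converts far below the others (ar-SA in this made-up data) is a signal to investigate holistic quality for that locale — layout, cultural fit, fluency — rather than any single “atom”.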

Content performance metrics are, thus, among the most important ones to measure wherever technically possible and practically feasible. They are also rarely made available to the entire global content supply chain, which significantly lowers transparency. This opacity should be very concerning to all managers: the very people whose work strongly drives those key metrics (for instance, individual authors and linguists) often don’t have any access to this powerful form of feedback.

Note: While quality factors (such as atomistic and holistic quality) are not the only ones influencing content performance, they obviously play an important role in it and are best viewed in the same context.


Hopefully, this overview was helpful to structure some of your thoughts around the many methods in the localization quality evaluation toolkit. What made a lot of sense for you, and what didn’t? What do you agree with? What do you find controversial? We’d LOVE to read your comments!

1 Thing That Truly Matters for Global Content Quality

A Very Personal Analogy: Holistic Approach to Coffee

Only from above can one gain a holistic view of things

I have a small confession to make: I love drinking coffee. It’s a rare day for me that goes by without a cappuccino, a latte, or a “Raf coffee” (a surprisingly good local variety which features cream instead of milk, mixed with espresso and then whipped together). I’ve been to a lot of coffee joints around my city, and I’m always eager to try coffee from a new place every time I go out, as well as during my travels in other parts of the world. My most recently uncovered world coffee treasures happened to be in Greece (where they like their coffee particularly strong and flavorful) and in Latvia (where they like adding a healthy dose of Riga Black Balsam, a famous local herbal liqueur, to their cups o’ joe).

Of course, I can’t avoid subconsciously comparing every new cup with all the previous cups I’ve had in my life. They have all strongly shaped my “user experience” as a coffee “user”. Is this new cup good or bad? Is it better or worse than the ones before it? Should I ever buy coffee here again, or should I spit it out in utter disgust and pour it away? However, until recently I never really pondered what it is, exactly, that makes my coffees good or bad. Why do I like some and dislike others?

I had, at best, a vague notion that it has something to do with variables such as the coffee beans, grinding, roasting, water, milk, temperature, pressure, the espresso machine in use, and the skill of each individual barista. I had also heard that beans are planted, grown, and harvested by one group of people, transported by another, and roasted, ground, and brewed by others still. In other words, coffee making depends on a rather long and complicated supply chain with a sophisticated multi-step process (not unlike creating global content, mind you).

But you know what? I’ve realized that I don’t really care that much about WHY my cup of coffee is good, and HOW to make it good – as long as I have an easy way to get the type of coffee that I like for a fair price. As a consumer, I have no intention of becoming a professional barista – heck, I don’t even intend to brew my own coffee at home! So understanding the “why” and “how” of making good coffee is far from the top of my priority list.

For all I know, it might have been prepared for me in a myriad of different ways. Maybe there’s just one omnipotent wizard behind my perfect cup, or maybe there’s a small army of 100 highly specialized workers across interconnected global organizations; it doesn’t matter much to me at the point of consumption. In other words, I care only about the holistic experience of drinking an enjoyable cup of coffee and don’t really think about all the work that went into it while I drink it. Maybe that seems somewhat morally misguided, but that’s just how our human brains are wired to deal with the inherent complexity of the universe.

Quality of Global Content is also Holistic

Now, it’s not hard to see that any global content, from an end user’s (reader’s) perspective, is very much like coffee (and, by the way, it doesn’t really matter whether your content is a mobile app, a website, a marketing newsletter, a complex enterprise software product, a user manual, or a brochure). Your readers usually don’t stop to analyze content as they experience it (unlike us industry professionals). Your readers cannot easily deduce what components, processes, or supply chain elements were put together to deliver that app or that blog article, especially when your content is available in 15 different languages and your readers are accessing a localized version (which layers an extra ton of complexity on top of the original process for your source locale). Nor, should I add, would they ever want to.

What readers and end users do care about, and do perceive, is the overall, holistic user experience they get from using your globalized product and consuming your global content. That, and only that, is their true measure of content quality. That, and only that, determines whether your products and content will succeed in fulfilling their purpose and contributing to your business goals. They don’t care if you have proofread your content. They don’t care if you have done your localization testing and fixed all the major bugs. They don’t care if your content ever went to an in-country review. They don’t care if your multilingual DTP/layout process was carried out. They don’t care if there are zero accuracy and fluency errors detected in the content by a third-party linguistic QA. They don’t even care whether you have engaged translators or transcreators or copywriters.

To sum it up: your readers and end users don’t have ANY use for ANY individual part of your global content delivery process, even if it has been perfectly executed. They simply wouldn’t realize that there’s more than one part to start with. Your readers and users want it all, together, in a nicely wrapped and timely delivered package.

Sounds obvious, doesn’t it?

If it does, though, could anyone please tell me this: why is the Globalization, Internationalization, Localization, and Translation industry still so obsessed with separately measuring various types of atomic-level quality attributes? Why do entire localization programs (and even entire service providers) run on the myopic notion that ensuring just one aspect of content quality (e.g. linguistic quality, functional quality, or cosmetic/visual quality) is all it takes for content to truly succeed globally?

What if your copy’s language is perfect, but a simple layout error makes the entire web page unreadable? What if your language-agnostic Software QA team has reported perfect results on a test run for your localized app, but the entire text in it is in a different language than intended? What if your layout and visuals are as stunning as a gift from the ancient Greek gods to humanity, but the content inside that perfect layout is culturally irrelevant and downright offensive to your audience in a particular geography?

How much longer can we afford to focus on just one part of content quality at a time, while ignoring all the others? Maybe it’s time for us to take a holistic view, combine all the aspects and types of quality (not just linguistic) into a single big picture, and make better decisions as a result?

After all, that’s what our readers and users do on a daily basis with our content and our products. Their emotions and their actions are our most important quality evaluation of all. So let’s make sure our content always scores a “PASS” with flying colors.