Build a scalable localisation workflow with robust quality control processes to ensure consistent, high-quality content across all markets.
In 2019, a global hotel chain launched a multilingual website with translations produced by an automated system and lightly reviewed by a junior contractor. Within days of going live, social media users in several markets began posting screenshots of egregious errors: room descriptions that promised "free dead animal" instead of "free mini bar," concierge services that offered "funeral arrangements" rather than "event planning," and a spa menu that listed "skin removing treatment" for what was meant to be "exfoliation." The mistakes were embarrassing, widely shared, and damaging to the brand's reputation across half a dozen countries simultaneously. The root cause was not translation quality in isolation: it was a broken localisation workflow that lacked proper quality control gates, subject-matter expertise in the review process, and any mechanism for catching contextual errors before content went live. A systematic approach to localisation workflow and QC could have prevented every one of those errors.
A well-designed localisation workflow is a repeatable pipeline that moves content from creation to publication in multiple languages without sacrificing quality, consistency, or speed. The workflow typically begins with source content creation, where writers follow guidelines that anticipate localisation — avoiding idioms, cultural references, and ambiguous phrasing that create unnecessary translation difficulty. From there, content enters the translation management system (TMS), where it is assigned to linguists based on language pair and subject-matter expertise. The TMS should leverage translation memories — databases of previously translated segments — to automatically reuse approved translations and reduce both cost and inconsistency over time.
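The translation-memory lookup described above can be sketched in a few lines. This is a simplified illustration, not a real TMS: the `TM` dictionary, segments, and the 0.75 fuzzy-match threshold are all hypothetical, and production systems use far more sophisticated matching than `difflib`.

```python
from difflib import SequenceMatcher

# Hypothetical translation memory: approved source -> target segments (en -> fr).
TM = {
    "Free mini bar in every room.": "Minibar gratuit dans chaque chambre.",
    "Book your spa treatment today.": "Réservez votre soin spa dès aujourd'hui.",
}

def tm_lookup(segment, fuzzy_threshold=0.75):
    """Return (match_type, translation) for a new source segment.

    Exact matches are reused as-is; fuzzy matches above the threshold
    are surfaced to the linguist for editing; anything else is new work.
    """
    if segment in TM:
        return "exact", TM[segment]
    best_score, best_source = 0.0, None
    for source in TM:
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_score, best_source = score, source
    if best_score >= fuzzy_threshold:
        return "fuzzy", TM[best_source]
    return "new", None

print(tm_lookup("Free mini bar in every room."))   # exact match, reused directly
print(tm_lookup("Free mini bar in every suite."))  # fuzzy match, sent for editing
```

Even this toy version shows why leverage matters: the fuzzy match hands the linguist a near-complete translation instead of a blank segment.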
The translation stage should include explicit briefs for each linguist that specify audience, tone, content type, and any market-specific guidance. Linguists should be native speakers of the target language who live or have lived in the target market, and they should be assigned to content categories that match their expertise. Translating a medical product page requires different linguistic skills than translating a fashion e-commerce campaign, and a TMS that routes work to the right linguists is a key quality differentiator. After initial translation, the workflow should include at minimum a bilingual review (checking accuracy against the source) and a unilingual review (reading the target text on its own terms for naturalness, fluency, and brand voice alignment). These two steps catch different types of errors and should always be performed by separate reviewers.
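The routing logic a TMS applies when assigning work can be illustrated with a minimal sketch. The linguist names, language pairs, and domain tags below are invented for the example; the point is the fallback order: match the language pair first, then prefer subject-matter experts.

```python
from dataclasses import dataclass, field

@dataclass
class Linguist:
    name: str
    language_pair: tuple              # e.g. ("en", "ja")
    domains: set = field(default_factory=set)  # e.g. {"medical", "fashion"}

def route_job(job_pair, job_domain, linguists):
    """Pick linguists matching the language pair, preferring domain experts."""
    candidates = [l for l in linguists if l.language_pair == job_pair]
    experts = [l for l in candidates if job_domain in l.domains]
    return experts or candidates  # fall back to generalists if no expert exists

pool = [
    Linguist("A. Tanaka", ("en", "ja"), {"medical"}),
    Linguist("K. Sato", ("en", "ja"), {"fashion"}),
    Linguist("M. Dubois", ("en", "fr"), {"medical"}),
]
print([l.name for l in route_job(("en", "ja"), "medical", pool)])  # ['A. Tanaka']
```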
The workflow does not end at linguistic review. In-context review — where translated content is previewed in its final layout, on the actual platform or device — is essential for catching formatting, truncation, and visual-context errors that are invisible in a spreadsheet or translation editor. Many translation management systems offer in-context preview capabilities, and for web content, staging environments should be set up for each locale so reviewers can navigate the site as a user would. The final gate is in-market review by a local stakeholder who can verify cultural relevance, regulatory compliance, and brand fit. This five-stage pipeline — translation, bilingual review, unilingual review, in-context validation, and in-market review — is the minimum viable workflow for professional localisation at scale.
Quality assurance in localisation requires both process-level controls and automated tooling. Process-level controls include style guides and glossaries that ensure consistent treatment of terminology, brand names, and voice across all languages and content types. A style guide for each language should address grammar conventions, punctuation rules, formality levels (particularly important for languages like Japanese, Korean, and Thai that have complex honorific systems), date and number formats, and treatment of proper nouns. A central glossary defines how key brand terms, product names, and industry-specific vocabulary should be rendered in each target language, reducing inconsistency and eliminating debates with linguists about preferred translations.
Automated quality checks catch the errors that human reviewers can miss, especially at scale. Translation management systems offer a range of QA checks including: missing or inconsistent translations, tag mismatches (e.g., an HTML tag that was accidentally duplicated or removed during translation), incorrect or inconsistent number formatting, punctuation errors, length limit violations, and terminology compliance against the glossary. These automated checks should run after every translation stage and must be configured to match your specific quality standards, not the tool's default settings. A typical configuration will flag any issue that requires human judgment to resolve, while automatically accepting purely mechanical corrections such as punctuation alignment.
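A few of the checks listed above can be sketched as a single segment-level function. This is a simplified stand-in for real TMS QA profiles: the regexes, the example segments, and the `{"mini bar": "minibar"}` glossary are all assumptions for illustration.

```python
import re

def qa_check(source, target, glossary, max_len=None):
    """Run basic automated QA checks on one segment; return a list of issues."""
    issues = []
    if not target.strip():
        issues.append("missing translation")
    # Tag mismatch: HTML tags in the source must survive translation intact.
    src_tags = sorted(re.findall(r"</?\w+[^>]*>", source))
    tgt_tags = sorted(re.findall(r"</?\w+[^>]*>", target))
    if src_tags != tgt_tags:
        issues.append("tag mismatch: %s vs %s" % (src_tags, tgt_tags))
    # Number consistency: every number in the source should appear in the target.
    for num in re.findall(r"\d+(?:[.,]\d+)?", source):
        if num not in target:
            issues.append("number '%s' missing or reformatted in target" % num)
    # Length limit, e.g. for UI strings that truncate.
    if max_len is not None and len(target) > max_len:
        issues.append("target exceeds %d characters" % max_len)
    # Terminology compliance against the glossary (source term -> required target term).
    for src_term, tgt_term in glossary.items():
        if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
            issues.append("glossary term '%s' not rendered as '%s'" % (src_term, tgt_term))
    return issues

print(qa_check("Enjoy the <b>mini bar</b> with 12 drinks.",
               "Profitez du <b>bar</b> avec 12 boissons.</b>",
               {"mini bar": "minibar"}, max_len=60))
```

The example segment trips both the tag-mismatch check (a stray closing tag) and the terminology check, exactly the class of mechanical error that slips past a human reader scanning for fluency.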
Measuring localisation quality requires a formal evaluation framework. The most widely used model is the Multidimensional Quality Metrics (MQM) framework, which categorises errors by type (accuracy, fluency, terminology, style, locale convention, formatting) and severity (critical, major, minor). Establish a target quality score — for example, a maximum of one major error per 1,000 words — and track scores over time by linguist, language pair, and content type. Regular quality audits, conducted quarterly, provide trending data that helps identify training needs, linguist performance issues, and process bottlenecks. Quality measurement is not about blame — it is about building a system that systematically improves over time.
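MQM-style scoring reduces to a weighted error count normalised per 1,000 words. The severity weights below (1/5/25) follow common practice but are assumptions — calibrate them, and the target threshold, to your own programme.

```python
# Assumed severity weights; calibrate to your quality programme.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 25}

def mqm_score(errors, word_count, per_words=1000):
    """Weighted MQM error score normalised per 1,000 words (lower is better).

    `errors` is a list of (category, severity) tuples logged by reviewers.
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _category, severity in errors)
    return penalty * per_words / word_count

errors = [("accuracy", "major"), ("style", "minor"), ("terminology", "minor")]
print(mqm_score(errors, word_count=3500))  # 2.0 weighted error points per 1,000 words
```

Tracked by linguist, language pair, and content type, this single number makes quarterly audits comparable across the whole programme.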
The most mature localisation operations treat workflow and QC as a continuous improvement cycle, not a static process. Key performance indicators should cover speed, cost, quality, and business impact. On the speed side, track turnaround time from content submission to publication for each language and content type. On cost, track cost per word by language pair and vendor, and monitor your translation memory leverage rate — the percentage of new content that is matched against previously translated segments. A high leverage rate (above 50% for established localisation programs) directly reduces cost and speeds up delivery. On quality, track MQM error rates and first-pass approval rates for each stage in the workflow.
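The cost effect of leverage can be made concrete with a per-band rate card. The rates and word counts below are invented for illustration; real programmes negotiate their own match-band discounts.

```python
# Hypothetical per-word rates by TM match band (the discounts are assumptions).
RATES = {"100%": 0.03, "fuzzy_75_99": 0.07, "new": 0.12}

def project_cost(word_counts):
    """Return (cost, leverage_rate_percent) for one project's word-count breakdown."""
    total = sum(word_counts.values())
    matched = total - word_counts.get("new", 0)
    cost = sum(word_counts[band] * RATES[band] for band in word_counts)
    return cost, 100.0 * matched / total

cost, leverage = project_cost({"100%": 4000, "fuzzy_75_99": 2500, "new": 3500})
print("$%.2f at %.0f%% leverage" % (cost, leverage))  # $715.00 at 65% leverage
```

At the same rate card with zero leverage, those 10,000 words would cost $1,200 — the 65% leverage rate cuts the bill by roughly 40%.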
Business impact metrics connect localisation activity to commercial outcomes. Measure engagement metrics — page views, time on page, bounce rate — for each locale and compare them against the source-language baseline. Measure conversion rates for localised landing pages, product pages, and marketing campaigns. Track customer support ticket volumes in each language; a spike in support tickets after a localisation release often signals a quality problem that automated checks missed. These business metrics provide the justification for localisation investment and the data needed to prioritise which content, markets, or process improvements will deliver the highest return.
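The support-ticket signal mentioned above lends itself to a simple automated alert: compare post-release daily volume against the pre-release baseline. The ticket counts and the two-standard-deviation threshold below are illustrative assumptions.

```python
from statistics import mean, stdev

def spike_after_release(baseline_counts, post_release_counts, z_threshold=2.0):
    """Flag a locale if peak post-release ticket volume exceeds the
    baseline mean by more than `z_threshold` standard deviations."""
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    peak = max(post_release_counts)
    return sigma > 0 and (peak - mu) / sigma > z_threshold

# Hypothetical daily ticket counts for one locale around a release.
baseline = [14, 12, 15, 13, 16, 14, 12]  # week before release
post = [15, 31, 29, 27]                  # days after release
print(spike_after_release(baseline, post))  # True -> trigger a quality review
```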
Finally, build feedback loops from the field back into the process. In-market teams, customer support agents, and sales representatives interact with localised content every day and often spot issues before anyone else. Create a simple mechanism — a shared tracker, a Slack channel, a regular review meeting — for capturing their feedback and routing it into the localisation workflow. When an in-market sales rep reports that a translated product page is confusing prospects, that feedback should trigger a review that updates the translation in the TMS and prevents the same error from recurring. A localisation workflow that learns from every market interaction becomes faster, cheaper, and more effective with every cycle, turning a cost centre into a competitive advantage for global growth.
You need a formal QA process from the very first language you localise into. Even a single-language localisation project can produce errors that damage brand credibility and waste the entire localisation investment. The complexity of QA scales with the number of languages, but the basic principle — verify accuracy, fluency, and cultural relevance before publishing — applies regardless of scale. Start with a simple bilingual review and in-context check for your first language, then add automated QA and formal frameworks like MQM as you expand to additional markets.
Industry best practice suggests a ratio of roughly three translators for every two reviewers, assuming the translators are experienced professionals. The key principle is that reviewers must be separate from translators — a translator who self-reviews rarely catches their own blind spots. For high-stakes content (marketing, legal, customer-facing product copy), consider adding an additional in-market reviewer. For lower-stakes content (internal communications, administrative pages), a single bilingual reviewer may suffice. The ratio should also account for reviewer qualifications: a reviewer with subject-matter expertise is worth more than a generalist reviewer, even if the generalist is faster.
Translation memories should be updated after every translation project so that new, approved translations are immediately available for reuse. Set up automated TM updates in your TMS so that content that passes QC is automatically added to the memory with the correct metadata (client, project, domain, date). Glossaries should be reviewed and updated quarterly, or whenever brand terminology changes. A common mistake is treating the glossary as a static document created at the start of the localisation program and never revisited. Active glossary management — adding new terms, deprecating outdated ones, resolving ambiguities flagged by linguists — is essential for maintaining consistency as your brand and markets evolve.
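The automated TM update step can be sketched as an append with metadata attached. Real TMSs store entries in TMX or a database rather than JSON lines, and the file name and field values here are hypothetical — but the metadata fields (client, project, domain, date) are the ones that matter for later reuse and filtering.

```python
from datetime import date
import json

def add_to_tm(tm_path, source, target, client, project, domain):
    """Append a QC-approved segment to the TM file with its metadata."""
    entry = {
        "source": source,
        "target": target,
        "client": client,
        "project": project,
        "domain": domain,
        "date": date.today().isoformat(),
    }
    with open(tm_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Called automatically by the pipeline once a segment passes the final QC gate.
add_to_tm("tm_en_fr.jsonl", "Free mini bar in every room.",
          "Minibar gratuit dans chaque chambre.",
          client="HotelCo", project="web-relaunch", domain="hospitality")
```

Because the write happens at the QC gate rather than as a manual export, the memory never drifts out of date and every approved segment is available for the very next project.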