TranslateGemma: Introducing the Open AI Translation Model


Translate faster. Reach the world.

TranslateGemma is a new collection of open translation models based on Gemma 3, available in 4B, 12B, and 27B parameter sizes. It enhances communication across 55 languages, making it accessible to users regardless of their location or device.
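As a rough illustration of how such a model is typically used, the sketch below builds a simple translation prompt. The prompt template and function names here are assumptions for illustration, not the official TranslateGemma input format.

```python
# Hypothetical sketch of the kind of prompt a translation model such as
# TranslateGemma might consume. The template is an illustrative assumption;
# consult the official model card for the exact format.
def build_prompt(text: str, src_lang: str, tgt_lang: str) -> str:
    """Build a simple source-to-target translation prompt."""
    return f"Translate the following text from {src_lang} to {tgt_lang}:\n{text}"

prompt = build_prompt("Hello, world!", "English", "French")
print(prompt)
```

The same template works for any of the 55 supported language pairs by swapping the language names.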


Notably, Google has developed a suite of compact, high-performance open models that maintain quality without sacrificing efficiency by leveraging knowledge from advanced large models.


Outperforming Models Twice Its Size

On the WMT24++ benchmark, evaluated with MetricX, the 12B TranslateGemma model outperforms the much larger Gemma 3 27B baseline.


For developers, the new model offers high-fidelity translation quality with fewer than half the parameters of the baseline, resulting in increased throughput and reduced latency while maintaining accuracy.


Moreover, the 4B model matches the performance of the larger 12B baseline, proving effective for mobile inference.


TranslateGemma was tested on the WMT24++ dataset, which includes 55 languages from diverse families. It significantly lowered the error rate compared to the baseline Gemma model across all languages, demonstrating enhanced quality and efficiency.
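The kind of per-language comparison described above can be sketched as a relative error-rate reduction (MetricX is an error score, so lower is better). The scores below are made-up placeholders, not reported results.

```python
# Illustrative computation of relative error reduction versus a baseline,
# the comparison used when contrasting TranslateGemma with the Gemma 3
# baseline on MetricX. All numbers here are placeholder values.
def relative_reduction(baseline: float, model: float) -> float:
    """Percentage by which `model` lowers the error score vs `baseline`."""
    return (baseline - model) / baseline * 100

# lang -> (baseline error, new model error); placeholder figures only
scores = {"de": (3.2, 2.4), "hi": (5.0, 3.5)}
for lang, (base, new) in scores.items():
    print(f"{lang}: {relative_reduction(base, new):.1f}% lower error")
```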


Based on Gemini

Achieving this density of intelligence involves a specialised two-stage fine-tuning process that distils the intuition of Gemini models into an open architecture.


  • Supervised Fine-Tuning (SFT) enhanced the Gemma 3 models by utilising a diverse parallel dataset of human-translated texts and synthetic translations from advanced Gemini models, focusing on broad language coverage and fidelity, especially for low-resource languages.
  • A novel Reinforcement Learning (RL) phase then improved translation quality further, using an ensemble of reward models, including MetricX-QE and AutoMQM, to produce more contextually accurate and natural translations.
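The reward ensemble in the RL phase can be sketched as a weighted combination of per-model scores. The weights and score scales below are illustrative assumptions, not the values used in training.

```python
# Sketch of combining an ensemble of reward models (e.g. MetricX-QE and
# AutoMQM) into a single reward for a candidate translation. Weights and
# score ranges are illustrative assumptions.
def ensemble_reward(scores, weights):
    """Weighted average of per-reward-model scores for one translation."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

weights = {"metricx_qe": 0.6, "automqm": 0.4}      # assumed weights
candidate = {"metricx_qe": 0.8, "automqm": 0.7}    # assumed scores in [0, 1]
print(f"{ensemble_reward(candidate, weights):.2f}")
```

Averaging several quality signals this way makes the reward harder to game than optimising any single metric alone.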




Unrivalled Linguistic Coverage

TranslateGemma has been thoroughly trained and tested on 55 language pairs, demonstrating reliable, high-quality performance in major languages such as Spanish, French, Chinese, and Hindi, as well as in a range of low-resource languages.


Beyond the core set, TranslateGemma has also been trained on nearly 500 additional language pairs, serving as a robust foundation for adaptation. It is designed for researchers to fine-tune state-of-the-art models for specific language pairs or to improve quality for low-resource languages.
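Fine-tuning on a specific language pair starts with preparing a parallel corpus. The sketch below turns sentence pairs into prompt/completion records; the record format is an assumption for illustration, not the official training schema.

```python
# Hypothetical preparation of a parallel corpus for fine-tuning on one
# language pair. The prompt/completion record shape is an assumption.
def to_training_examples(pairs, src_lang, tgt_lang):
    """Turn (source, target) sentence pairs into prompt/completion records."""
    return [
        {
            "prompt": f"Translate from {src_lang} to {tgt_lang}:\n{src}",
            "completion": tgt,
        }
        for src, tgt in pairs
    ]

examples = to_training_examples(
    [("Good morning.", "Bonjour.")], "English", "French"
)
print(examples[0]["prompt"])
```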


While confirmed evaluation metrics for the extended set are not yet available, the full list is included in the technical report to promote community exploration and research.


Superb Multimodal Skills

TranslateGemma models maintain the robust multimodal capabilities of Gemma 3. Tests on the Vistra image translation benchmark indicate that enhancements in text translation beneficially influence image text translation, despite the absence of specific multimodal fine-tuning in the training process.




Runs Everywhere

TranslateGemma establishes a new benchmark for open translation models, effectively combining high performance with notable efficiency, and is offered in three sizes for various deployment scenarios.


  • 4B Model: Designed for mobility and edge deployment.
  • 12B Model: Runs easily on consumer laptops, providing research-grade power for local development environments.
  • 27B Model: Designed for optimal fidelity, this model can operate in the cloud on a single H100 GPU or TPU.
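Choosing among the three sizes mostly comes down to available accelerator memory. The helper below sketches that decision; the memory thresholds are rough assumptions based on typical footprints for 4B/12B/27B models, not official requirements.

```python
# Illustrative helper that picks a TranslateGemma size from available
# accelerator memory. Thresholds are rough assumptions, not official specs.
def pick_model_size(vram_gb: float) -> str:
    """Return the largest model size that plausibly fits in `vram_gb`."""
    if vram_gb >= 60:
        return "27B"  # cloud: single H100 GPU or TPU, optimal fidelity
    if vram_gb >= 24:
        return "12B"  # consumer laptop / workstation, research-grade power
    return "4B"       # mobile and edge deployment

print(pick_model_size(80))
```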


Ultimately, the release of TranslateGemma offers researchers and developers robust tools for a wide range of translation tasks, aiming to enhance cross-cultural understanding. The community is encouraged to explore and build upon these models.



Posted On: January 21, 2026 at 06:00:19 PM

Last Update: January 31, 2026 at 09:46:37 PM


