
TranslateGemma is a new collection of open translation models built on Gemma 3, available in 4B, 12B, and 27B parameter sizes. It supports translation across 55 languages and is designed to run on a wide range of devices, making high-quality translation broadly accessible.
By leveraging knowledge distilled from larger, more advanced models, Google has produced a suite of compact, high-performance open models that maintain translation quality without sacrificing efficiency.
On the WMT24++ benchmark, evaluated with MetricX, the 12B TranslateGemma model is markedly more efficient than the Gemma 3 27B baseline.
For developers, this means high-fidelity translation quality with fewer than half the parameters of the baseline, yielding higher throughput and lower latency with no loss in accuracy.
The 4B model, meanwhile, matches the performance of the larger Gemma 3 12B baseline, making it well suited to mobile inference.
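As a concrete illustration of putting one of these models to work, the sketch below builds a simple translation prompt. The plain-instruction format and the checkpoint name in the comment are assumptions for illustration; the official prompt template may differ.

```python
def build_translation_prompt(text: str, source_lang: str, target_lang: str) -> str:
    # The exact prompt template TranslateGemma expects is not documented
    # here; this plain instruction format is an illustrative assumption.
    return (
        f"Translate the following text from {source_lang} to {target_lang}.\n"
        f"Text: {text}"
    )

# Hypothetical usage with the Hugging Face transformers library
# (the checkpoint name "google/translategemma-4b" is an assumption):
#
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="google/translategemma-4b")
#   result = generator(build_translation_prompt("Good morning!", "English", "French"),
#                      max_new_tokens=64)

prompt = build_translation_prompt("Good morning!", "English", "French")
```

A small model such as the 4B variant could serve the same prompt on-device, trading a little quality headroom for latency.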
TranslateGemma was evaluated on the WMT24++ dataset, which covers 55 languages from diverse language families; it lowered the error rate relative to the corresponding Gemma 3 baseline across all of them, demonstrating gains in both quality and efficiency.
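MetricX assigns each translation an error score where lower is better, so a system-level comparison reduces to the mean score per system and the relative drop versus the baseline. A minimal sketch of that arithmetic follows; the per-sentence scores are made-up placeholders, not reported WMT24++ results.

```python
def mean_score(scores):
    # MetricX-style error scores: lower means fewer translation errors.
    return sum(scores) / len(scores)

def relative_reduction(model_scores, baseline_scores):
    # Fractional drop in mean error score versus the baseline.
    base = mean_score(baseline_scores)
    return (base - mean_score(model_scores)) / base

# Placeholder per-sentence scores for one language (illustrative only).
baseline_scores = [4.0, 5.2, 3.8]
translategemma_scores = [2.9, 3.6, 2.5]
reduction = relative_reduction(translategemma_scores, baseline_scores)
```

Repeating this per language gives the "lower error rate across all languages" comparison described above.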
This density of intelligence is achieved through a specialised two-stage fine-tuning process that distils the translation capabilities of Gemini models into an open architecture.

TranslateGemma has been thoroughly trained and evaluated on 55 languages, delivering reliable, high-quality performance in major languages such as Spanish, French, Chinese, and Hindi, as well as in a range of low-resource languages.
Beyond that core set, TranslateGemma has been trained on nearly 500 additional language pairs, providing a robust foundation for adaptation: researchers can fine-tune state-of-the-art models for specific language pairs or improve quality for low-resource languages.
While confirmed evaluation metrics for the extended set are not yet available, the full list is included in the technical report to promote community exploration and research.
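For researchers adapting the models to a specific pair, the first step is shaping parallel data into supervised fine-tuning examples. The sketch below does this for a hypothetical low-resource pair; the prompt wording and the {"prompt", "completion"} schema are illustrative assumptions, not TranslateGemma's official training format.

```python
def make_sft_examples(pairs, source_lang, target_lang):
    """Turn parallel sentences into supervised fine-tuning records.

    `pairs` is a list of (source_text, target_text) tuples. The prompt
    wording and record schema here are illustrative assumptions.
    """
    examples = []
    for src, tgt in pairs:
        examples.append({
            "prompt": f"Translate from {source_lang} to {target_lang}: {src}",
            "completion": tgt,
        })
    return examples

# A toy parallel corpus for a low-resource pair (illustrative data).
corpus = [("Bonjou", "Hello"), ("Mèsi anpil", "Thank you very much")]
examples = make_sft_examples(corpus, "Haitian Creole", "English")
```

Records in this shape can then be fed to a standard supervised fine-tuning loop, such as a Hugging Face-style trainer.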
TranslateGemma models retain the robust multimodal capabilities of Gemma 3. Tests on the Vistra image translation benchmark indicate that improvements in text translation carry over to translating text in images, even though the training process included no multimodal-specific fine-tuning.
TranslateGemma establishes a new benchmark for open translation models, effectively combining high performance with notable efficiency, and is offered in three sizes for various deployment scenarios.
Ultimately, the release of TranslateGemma gives researchers and developers robust tools for a wide range of translation tasks, with the aim of deepening cross-cultural understanding. The community is encouraged to explore and build on these models.
Posted On: January 21, 2026 at 06:00:19 PM
Last Update: January 31, 2026 at 09:46:37 PM