Tencent Unveils Hunyuan-MT-7B and Hunyuan-MT-Chimera-7B
Tencent’s Hunyuan team has made a significant stride in multilingual machine translation with the open-source release of two powerful models: Hunyuan-MT-7B, a high-performance translation model, and Hunyuan-MT-Chimera-7B, an innovative ensemble model.
These state-of-the-art models were officially announced in late August and open-sourced on September 1, 2025, earning broad acclaim for delivering competitive results against both open- and closed-source systems on global benchmarks.
Key Features
Hunyuan-MT-7B
Parameter Size: 7 billion parameters, offering higher efficiency than typical large AI models, which often exceed 10B parameters.
Language Support: Mutual translation across 33 languages, including major world languages and less-resourced Chinese minority languages (Tibetan, Mongolian, Uyghur, Kazakh).
Task Range: Optimized for both high- and low-resource translation tasks, ensuring robust, accurate results across common and rare languages.
Deployment Efficiency: Designed for speed and low computing costs, suitable for everything from cloud servers to resource-constrained edge devices.
Integration: Already in use across Tencent products like Tencent Meeting, WeChat Work, and QQ Browser.
Hunyuan-MT-Chimera-7B
Ensemble Model: The first open-source translation ensemble optimizer. It combines outputs from multiple translation engines and refines them through reinforcement learning and aggregation to produce a single, higher-quality translation.
Specialization: Particularly strong in scenarios that demand the highest translation fidelity, such as professional and enterprise use.
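To make the ensemble idea concrete, the sketch below shows one simple, generic way to aggregate candidate translations: score each candidate by its average character n-gram overlap with the others (a chrF-style measure) and keep the consensus. This is only an illustrative stand-in; Tencent's actual Chimera model uses a learned, RL-trained refiner, not this heuristic.

```python
from collections import Counter

def _ngrams(text: str, n: int) -> Counter:
    """Multiset of character n-grams of a string (chrF-style units)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def overlap_score(a: str, b: str, n: int = 3) -> float:
    """Symmetric character n-gram overlap between two strings, in [0, 1]."""
    ga, gb = _ngrams(a, n), _ngrams(b, n)
    shared = sum((ga & gb).values())          # Counter & = multiset min
    total = max(sum(ga.values()) + sum(gb.values()), 1)
    return 2 * shared / total

def pick_consensus(candidates: list[str]) -> str:
    """Return the candidate most similar, on average, to all the others."""
    def avg_sim(c: str) -> float:
        others = [o for o in candidates if o is not c]
        return sum(overlap_score(c, o) for o in others) / max(len(others), 1)
    return max(candidates, key=avg_sim)

candidates = [
    "The weather is nice today.",
    "The weather is fine today.",
    "Today weather nice is.",      # a weaker, scrambled candidate
]
print(pick_consensus(candidates))  # -> "The weather is nice today."
```

A learned aggregator like Chimera's can go further than this consensus pick, rewriting the output rather than merely selecting one candidate.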
Benchmark Performance
Academic Competitions
WMT2025 Results: Tencent's models ranked first in 30 out of 31 language pairs at the world's largest annual machine translation competition.
Performance gains over Google Translate ranged from 15% to 65%, and the models also outperformed larger proprietary systems such as GPT-4.1, Claude 4 Sonnet, and Gemini 2.5 Pro, especially on low-resource languages.
Industry Benchmarks
On the Flores200 test set, Hunyuan-MT-7B delivered results on par with closed-source giants, achieving industry-leading scores on the BLEU and chrF++ evaluation metrics.
Model Efficiency
Compression and Optimization: with FP8 model quantization, inference performance improves by roughly 30%, allowing rapid translation even under constrained resources.
Deployment Frameworks: Natively compatible with TensorRT-LLM, vLLM, SGLang, and provides ready-to-use Docker containers for developer convenience.
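As a hedged illustration of calling the model through one of these frameworks, the sketch below builds the plain-text translation prompt and shows how it would be passed to vLLM. The repo id `tencent/Hunyuan-MT-7B` and the exact prompt template are assumptions drawn from the public model card; verify both before relying on them.

```python
def build_prompt(text: str, target_language: str) -> str:
    """Build a plain-text translation prompt.

    Template assumed from the public Hunyuan-MT model card; confirm
    against the official documentation before production use.
    """
    return (
        f"Translate the following segment into {target_language}, "
        f"without additional explanation.\n\n{text}"
    )

prompt = build_prompt("El modelo admite 33 idiomas.", "English")
print(prompt)

# With vLLM (assumed usage -- requires a GPU and the downloaded weights):
#   from vllm import LLM, SamplingParams
#   llm = LLM(model="tencent/Hunyuan-MT-7B")
#   outputs = llm.generate([prompt], SamplingParams(temperature=0.0))
```

The same prompt string works unchanged against a vLLM or SGLang server exposing an OpenAI-compatible completion endpoint.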
Deployment & Efficiency
Its lightweight size allows fast inference and efficient translation processing on limited hardware, enabling deployment on both servers and edge devices.
Tencent's AngelSlim compression tool reportedly improves inference speed by roughly 30% compared to the uncompressed model.
Background
The two new models were revealed ahead of Tencent’s leading performance at the WMT2025 General Machine Translation shared task, an event recognized as the world’s premier evaluation platform for translation systems.
Hunyuan-MT-7B ranked first in 30 out of 31 language pairs in this competition, outclassing widely-used systems from Google, OpenAI, and other top providers.
Accessibility and Pricing
Open Source Access: Both Hunyuan-MT-7B and Hunyuan-MT-Chimera-7B are freely downloadable from Hugging Face, GitHub, and ModelScope.
Licensing: The Tencent Hunyuan Community License, based on Apache 2.0, allows research and commercial use, with restrictions on deployments exceeding 100 million monthly users and in certain regions (e.g., the EU, UK, and South Korea).
News Gist
Tencent’s Hunyuan team has launched Hunyuan-MT-7B and Hunyuan-MT-Chimera-7B, open-source translation models excelling at WMT2025.
With support for 33 languages, low-resource coverage, RL-based optimization, and enterprise-ready deployment, they outperform models from Google, OpenAI, and Anthropic on multilingual translation benchmarks.
FAQs
Q1. What did Tencent recently release?
A1. Tencent unveiled Hunyuan-MT-7B, a 7B parameter translation model, and Hunyuan-MT-Chimera-7B, an ensemble translation model.
Q2. When were these models announced?
A2. Both models were announced in late August and open-sourced on September 1, 2025, ahead of Tencent's WMT2025 competition success.
Q3. What makes Hunyuan-MT-7B unique?
A3. It delivers high efficiency with only 7B parameters, supports 33 languages (including minority Chinese languages), and is optimized for edge and cloud deployment.
Q4. How does Hunyuan-MT-Chimera-7B differ?
A4. It’s the first open-source ensemble translation model, combining multiple outputs using reinforcement learning to deliver higher fidelity translations.
Q5. How do these models perform compared to competitors?
A5. They ranked first in 30 of 31 WMT2025 language pairs, outperforming Google Translate, GPT-4.1, Claude 4 Sonnet, and Gemini 2.5 Pro.
Q6. Are the models free to use?
A6. Yes, both are open-source and downloadable from Hugging Face, GitHub, and ModelScope, with licensing restrictions for extremely large-scale deployments.