- Qwen3 introduces a new hybrid reasoning system
- Qwen3 demonstrates outstanding performance in reasoning, instruction following, agent integration, and human alignment, and supports 119 languages and dialects
- All Qwen3 models (ranging from small to massive, including Mixture-of-Experts architectures) are open-sourced and freely available to the global developer community via platforms like Hugging Face and GitHub
- Over 100,000 derivative models have already been created, with more than 300 million downloads to date
Alibaba has launched Qwen3, the latest generation of its open-source large language model (LLM) family, setting a new benchmark for AI innovation.
The Qwen3 series features six dense models and two Mixture-of-Experts (MoE) models, offering developers flexibility to build next-generation applications across mobile devices, smart glasses, autonomous vehicles, robotics and beyond.
All Qwen3 models – including dense models (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and MoE models (30B with 3B active, and 235B with 22B active) – are now open sourced and available globally.
Hybrid Reasoning Combining Thinking and Non-thinking Modes
Qwen3 marks Alibaba’s debut of hybrid reasoning models, combining traditional LLM capabilities with advanced, dynamic reasoning. Qwen3 models can seamlessly switch between thinking mode, for complex multi-step tasks such as mathematics, coding, and logical deduction, and non-thinking mode, for fast, general-purpose responses.
For developers accessing Qwen3 through its API, the model offers granular control over thinking duration (up to 38K tokens), enabling an optimised balance between reasoning performance and compute efficiency.
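In thinking mode, Qwen3 emits its reasoning trace inside `<think>…</think>` tags before the final answer. A minimal sketch of how a client might separate the two when post-processing responses (the helper name and sample strings are illustrative, not part of any Qwen3 SDK):

```python
import re


def split_thinking(response: str) -> tuple[str, str]:
    """Separate the reasoning trace from the final answer.

    Assumes the model wraps its chain of thought in <think>...</think>
    tags, as Qwen3 does in thinking mode. In non-thinking mode no such
    tags appear and the whole response is the answer.
    """
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        # Non-thinking mode: the response is the answer itself.
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer


# Example: a thinking-mode response (contents are illustrative)
raw = "<think>2 + 2 equals 4.</think>The answer is 4."
reasoning, answer = split_thinking(raw)
```

Applications can then show or log the reasoning trace separately while surfacing only the final answer to end users.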
Internal testing indicates that the Qwen3-235B-A22B MoE model significantly lowers deployment costs compared with similar models currently on the market, delivering on Alibaba’s commitment to accessible, high-performance AI.
Breakthroughs in Multilingual Skills, Agent Capabilities, Reasoning and Human Alignment
Trained on a massive dataset of 36 trillion tokens, double that of its predecessor Qwen2.5, Qwen3 delivers significant advancements in reasoning, instruction following, tool use, and multilingual tasks.
Key capabilities include:
- Multilingual Mastery: Supports 119 languages and dialects, with leading performance in translation and multilingual instruction-following.
- Advanced Agent Integration: Natively supports the Model Context Protocol (MCP) and robust function-calling, leading open-source models in complex agent-based tasks.
- Superior Reasoning: Surpasses previous Qwen models (QwQ in thinking mode and Qwen2.5 in non-thinking mode) in mathematics, coding, and logical reasoning benchmarks.
- Enhanced Human Alignment: Delivers improved creative writing, role-playing, and multi-turn dialogue for more natural, engaging conversations.
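As an illustration of the function-calling support listed above, here is a minimal sketch of a tool definition in the OpenAI-compatible schema that Qwen3 deployments commonly accept. The `get_weather` function, its parameters, and the model name in the comment are hypothetical, chosen only for illustration:

```python
# A hypothetical tool definition in the OpenAI-compatible function-calling
# schema commonly accepted by Qwen3 deployments. The get_weather name and
# its parameters are illustrative, not part of Qwen3 itself.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Hangzhou'",
                },
            },
            "required": ["city"],
        },
    },
}

# A tool list like [get_weather_tool] would be passed alongside the chat
# messages in a request to an OpenAI-compatible endpoint serving Qwen3,
# e.g. via the `tools` parameter of a chat completions call.
```

When the model decides a tool is needed, it returns a structured tool call (function name plus JSON arguments) instead of free text, which the calling application executes and feeds back into the conversation.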
Thanks to advancements in model architecture, increased training data, and more effective training methods, Qwen3 models achieve top-tier results in internal testing across industry benchmarks such as AIME25 (mathematical reasoning), LiveCodeBench (coding proficiency), BFCL (tool and function-calling capabilities), and Arena-Hard (a benchmark for instruction-tuned LLMs).
To develop the hybrid reasoning model, a four-stage training process was implemented, comprising long chain-of-thought (CoT) cold start, reasoning-based reinforcement learning (RL), thinking-mode fusion, and general RL.
Open Access to Drive Innovation
Qwen3 models are now freely available for download on Hugging Face, GitHub, and ModelScope, and can be explored on chat.qwen.ai. API access will soon be available through Alibaba’s AI model development platform Model Studio. Qwen3 also powers Alibaba’s flagship AI super assistant application, Quark.
Since its debut, the Qwen model family has attracted over 300 million downloads worldwide. Developers have created more than 100,000 Qwen-based derivative models on Hugging Face, making Qwen one of the world’s most widely adopted open-source AI model series.