
Qwen

Text

Qwen3 Next 80B A3B Thinking

Qwen3-Next uses a highly sparse MoE design: 80B total parameters, but only ~3B activated per inference step. Experiments show that, with global load balancing, increasing total expert parameters while keeping activated experts fixed steadily reduces training loss. Compared to Qwen3's MoE (128 total experts, 8 routed), Qwen3-Next expands to 512 total experts, combining 10 routed experts + 1 shared expert, maximizing resource usage without hurting performance. Qwen3-Next-80B-A3B-Thinking excels at complex reasoning tasks, outperforming higher-cost models like Qwen3-30B-A3B-Thinking-2507 and Qwen3-32B-Thinking, outperforming the closed-source Gemini-2.5-Flash-Thinking on multiple benchmarks, and approaching the performance of our top-tier model Qwen3-235B-A22B-Thinking-2507.
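For intuition, a minimal sketch of the routing pattern described above: each token is dispatched to its top-k routed experts plus one shared expert that always fires. This is illustrative only (toy dimensions, naive per-token loop), not the production implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Toy top-k routed MoE with one always-on shared expert."""
    def __init__(self, d_model=64, d_ff=128, n_experts=512, top_k=10):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.shared = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, d_model)
        gate_logits = self.router(x)               # (tokens, n_experts)
        w, idx = gate_logits.topk(self.top_k, -1)  # pick 10 routed experts per token
        w = F.softmax(w, dim=-1)                   # normalize over the chosen experts
        out = self.shared(x)                       # the shared expert sees every token
        for t in range(x.size(0)):                 # naive per-token dispatch, for clarity
            for weight, e in zip(w[t], idx[t]):
                out[t] = out[t] + weight * self.experts[int(e)](x[t])
        return out

y = SparseMoE()(torch.randn(4, 64))  # 11 of 512 experts fire per token
```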

Text

Qwen3 Coder 480B A35B Instruct

Qwen3-Coder-480B-A35B-Instruct is a cutting-edge open-source coding model from Qwen, matching Claude Sonnet's performance in agentic programming, browser automation, and core development tasks. With native 256K context (extendable to 1M tokens via YaRN), it excels at repository-scale analysis and offers specialized function-call support for platforms like Qwen Code and Cline, making it ideal for complex, real-world development workflows.
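If you extend the context via YaRN as mentioned above, the usual transformers-style recipe looks roughly like the sketch below. The rope_scaling keys follow the common transformers convention, and the factor of 4.0 over the native 262144-token window is an assumption to verify against the official model card.

```python
from transformers import AutoConfig, AutoModelForCausalLM

MODEL_ID = "Qwen/Qwen3-Coder-480B-A35B-Instruct"

# Enable YaRN RoPE scaling: factor 4.0 over the native 256K (262144-token)
# window targets roughly 1M tokens. Check exact values on the model card.
config = AutoConfig.from_pretrained(MODEL_ID)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 262144,
}

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, config=config)
```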

Text

Qwen3 235B A22b Thinking 2507

Qwen3-235B-A22B-Thinking-2507 is the newest thinking-enabled model in the Qwen3 series, delivering groundbreaking improvements in reasoning capabilities. It demonstrates significantly enhanced performance across logical reasoning, mathematics, scientific analysis, coding tasks, and academic benchmarks, matching or even surpassing human-expert-level performance and achieving state-of-the-art results among open-source thinking models. Beyond its exceptional reasoning skills, the model shows markedly better general capabilities, including more precise instruction following, sophisticated tool usage, highly natural text generation, and improved alignment with human preferences. It also features enhanced 256K long-context understanding, allowing it to maintain coherence and depth across extended documents and complex discussions.
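Thinking-enabled Qwen3 models emit their chain of thought inside <think>...</think> tags before the final answer. A minimal sketch of separating the two, assuming an OpenAI-compatible endpoint (the base URL, key variables, and model slug are placeholders):

```python
import os
import re
from openai import OpenAI

# Placeholder endpoint and key: any OpenAI-compatible server hosting the model.
client = OpenAI(base_url=os.environ["QWEN_BASE_URL"], api_key=os.environ["QWEN_API_KEY"])

resp = client.chat.completions.create(
    model="qwen3-235b-a22b-thinking-2507",  # assumed slug; check your provider's catalog
    messages=[{"role": "user", "content": "Is 9.11 larger than 9.9? Explain briefly."}],
)
text = resp.choices[0].message.content

# Separate the <think> reasoning block from the final answer. Some servers
# instead return the reasoning in a separate reasoning_content field.
m = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, flags=re.DOTALL)
reasoning, answer = m.groups() if m else ("", text)
print(answer)
```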

Text

Qwen3 235B A22B Instruct 2507

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.

Text

Qwen 2.5 72B Instruct

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.

Text

Qwen3 235B A22B

Qwen3-235B-A22B effectively integrates reasoning and non-reasoning modes, enabling seamless switching between them during conversations. Its reasoning capability significantly surpasses that of QwQ, and its general capabilities exceed those of Qwen2.5-72B-Instruct, reaching the state-of-the-art (SOTA) level among models of the same scale.
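Mode switching is driven through the chat template. A rough sketch of the two mechanisms described in the Qwen3 model cards, the enable_thinking flag and the per-turn /think and /no_think soft switches; verify both against the specific checkpoint before relying on them:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-235B-A22B")
messages = [{"role": "user", "content": "Summarize the Pythagorean theorem."}]

# Hard switch: the Qwen3 chat template accepts an enable_thinking flag.
prompt_thinking = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
prompt_direct = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)

# Soft switch: per-turn /think and /no_think tags inside a user message,
# which is how mid-conversation switching works.
messages.append({"role": "user", "content": "Now prove it. /think"})
```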

Text

Qwen2.5 VL 72B Instruct

Qwen2.5-VL, the latest vision-language model in the Qwen2.5 series, delivers enhanced multimodal capabilities including advanced visual comprehension for object/text recognition, chart/layout analysis, and agent-based dynamic tool orchestration. It processes long-form videos (>1 hour) with key event detection while enabling precise spatial annotation through bounding boxes or coordinate points. The model specializes in structured data extraction from scanned documents (invoices, tables, etc.) and achieves state-of-the-art performance across multimodal benchmarks encompassing image understanding, temporal video analysis, and agent task evaluations.
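As a concrete example of the structured-extraction use case, a hedged sketch of sending a scanned invoice to the model through an OpenAI-compatible multimodal endpoint (base URL, key variables, model slug, and image URL are all placeholders):

```python
import os
from openai import OpenAI

# Placeholder endpoint and key: any OpenAI-compatible server hosting the model.
client = OpenAI(base_url=os.environ["QWEN_BASE_URL"], api_key=os.environ["QWEN_API_KEY"])

resp = client.chat.completions.create(
    model="qwen2.5-vl-72b-instruct",  # assumed slug; check your provider's catalog
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
            {"type": "text", "text": "Extract the invoice number, date, and total as JSON."},
        ],
    }],
)
print(resp.choices[0].message.content)
```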

Text

Qwen3 32B

Qwen3-32B effectively integrates reasoning and non-reasoning modes, allowing seamless switching between them during conversations. Its reasoning capability matches that of QwQ-32B with a smaller parameter size, and its general capabilities significantly surpass those of Qwen2.5-14B, reaching the state-of-the-art (SOTA) level among models of the same scale.

Text

Qwen3 30B A3B

Qwen3-30B-A3B effectively integrates reasoning and non-reasoning modes, allowing seamless switching between them during conversations. Its reasoning capability matches that of QwQ-32B with a smaller parameter size, and its general capabilities significantly surpass those of Qwen2.5-14B, reaching the state-of-the-art (SOTA) level among models of the same scale.

Text

Qwen3 Next 80B A3B Instruct

Qwen3-Next uses a highly sparse MoE design: 80B total parameters, but only ~3B activated per inference step. Experiments show that, with global load balancing, increasing total expert parameters while keeping activated experts fixed steadily reduces training loss. Compared to Qwen3's MoE (128 total experts, 8 routed), Qwen3-Next expands to 512 total experts, combining 10 routed experts + 1 shared expert, maximizing resource usage without hurting performance. Qwen3-Next-80B-A3B-Instruct performs comparably to our flagship model Qwen3-235B-A22B-Instruct-2507, and shows clear advantages in tasks requiring ultra-long context (up to 256K tokens).
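A quick back-of-the-envelope check of the activation ratio implied by the numbers above; the expert counts come from the text, everything else is rough:

```python
# Sparsity check: 10 routed + 1 shared of 512 experts fire per token.
TOTAL_EXPERTS = 512
ACTIVE_EXPERTS = 10 + 1
frac = ACTIVE_EXPERTS / TOTAL_EXPERTS
print(f"{frac:.1%} of experts fire per token")            # ~2.1%

# If experts dominated all 80B parameters, ~1.7B expert params would be
# active; dense attention/embedding layers push the activated total to ~3B.
print(f"~{80e9 * frac / 1e9:.1f}B expert params active")  # ~1.7B
```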

Text

Qwen MT Plus

Qwen-MT is a large language model optimized for machine translation, built upon the foundation of the Tongyi Qianwen model. It supports translation across 92 languages — including Chinese, English, Japanese, Korean, French, Spanish, German, Thai, Indonesian, Vietnamese, Arabic, and more — enabling seamless multilingual communication.
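A minimal sketch of calling Qwen-MT through DashScope's OpenAI-compatible mode. The translation_options payload follows the pattern in the DashScope documentation, but treat the exact field names and the qwen-mt-plus slug as assumptions to check against the current API reference.

```python
import os
from openai import OpenAI

# DashScope exposes an OpenAI-compatible endpoint; URL and key are illustrative.
client = OpenAI(
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    api_key=os.environ["DASHSCOPE_API_KEY"],
)

resp = client.chat.completions.create(
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "我看过不少海滩，但这里的风景仍然让我震撼。"}],
    # Translation settings; verify field names against the API reference.
    extra_body={"translation_options": {"source_lang": "auto", "target_lang": "English"}},
)
print(resp.choices[0].message.content)
```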

Text

Qwen3 8B

Qwen3-8B effectively integrates reasoning and non-reasoning modes, allowing seamless mode switching during conversations. Its reasoning capability reaches state-of-the-art (SOTA) performance among models of the same scale, and its general capabilities significantly outperform those of Qwen2.5-7B.

Text

Qwen2.5 7B Instruct

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly more knowledge, and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON (see the sketch after this list). More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
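To make the structured-output point concrete, a hedged sketch of asking the model for JSON through an OpenAI-compatible endpoint. The model slug, endpoint variables, and response_format support are assumptions about the serving stack:

```python
import json
import os
from openai import OpenAI

# Placeholder endpoint and key: any OpenAI-compatible server hosting the model.
client = OpenAI(base_url=os.environ["QWEN_BASE_URL"], api_key=os.environ["QWEN_API_KEY"])

resp = client.chat.completions.create(
    model="qwen2.5-7b-instruct",
    messages=[
        {"role": "system", "content": "Reply with JSON only: {\"city\": str, \"population\": int}."},
        {"role": "user", "content": "Give me the largest city in France."},
    ],
    response_format={"type": "json_object"},  # supported by many OpenAI-compatible servers
)
print(json.loads(resp.choices[0].message.content))
```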

Embedding

qwen/qwen3-embedding-0.6b

Embedding

Qwen3 Embedding 8B

The Qwen3 Embedding 8B model is the latest proprietary embedding model from the Qwen family, specifically optimized for text embedding tasks. Built upon the dense foundational architecture of the Qwen3 series, it fully inherits the base model's exceptional multilingual capabilities, long-context comprehension, and advanced reasoning skills. The Qwen3 Embedding series delivers groundbreaking performance across multiple embedding applications, including text retrieval, code search, text classification, document clustering, and bitext mining.
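A minimal retrieval sketch using a standard OpenAI-compatible embeddings endpoint plus cosine similarity; the endpoint variables and model slug are placeholders:

```python
import os
import numpy as np
from openai import OpenAI

# Placeholder endpoint and key: any OpenAI-compatible server hosting the model.
client = OpenAI(base_url=os.environ["QWEN_BASE_URL"], api_key=os.environ["QWEN_API_KEY"])

docs = ["The capital of China is Beijing.", "Gravity makes apples fall."]
query = "What is the capital of China?"

out = client.embeddings.create(model="qwen3-embedding-8b", input=[query] + docs)
vecs = np.array([d.embedding for d in out.data])
q, d = vecs[0], vecs[1:]

# Cosine similarity ranks the Beijing sentence first.
scores = d @ q / (np.linalg.norm(d, axis=1) * np.linalg.norm(q))
print(docs[int(scores.argmax())])
```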

Rerank

Qwen3 Reranker 8B

The Qwen3 Reranker 8B model is the latest proprietary reranking model from the Qwen family, specifically designed for ranking tasks. Built upon the dense foundational models of the Qwen3 series, it fully inherits the exceptional multilingual capabilities, long-context understanding, and reasoning skills of its base model. The Qwen3 Reranker series achieves significant breakthroughs in ranking tasks, supporting 100+ languages, with 8 billion parameters and a 32K context length.
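A sketch of the reranking pattern: score query-document pairs, then sort by relevance. The /v1/rerank route and payload shape here follow the convention used by some serving stacks (e.g. vLLM's rerank API), but treat the exact endpoint and field names as assumptions:

```python
import requests

# Hypothetical rerank endpoint: route and payload vary by serving stack.
URL = "http://localhost:8000/v1/rerank"

payload = {
    "model": "qwen3-reranker-8b",
    "query": "What is the capital of China?",
    "documents": [
        "Beijing is the capital of China.",
        "Apples are rich in fiber.",
    ],
}
results = requests.post(URL, json=payload, timeout=30).json()["results"]

# Sort candidate documents by relevance score, best first.
for r in sorted(results, key=lambda r: r["relevance_score"], reverse=True):
    print(r["relevance_score"], payload["documents"][r["index"]])
```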
