
Qwen3 235B A22B Instruct 2507

qwen/qwen3-235b-a22b-instruct-2507
Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.

Features

Pay-as-you-go

$0.15 / $0.80
per 1M tokens (input / output)
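
As a rough illustration of the rate above, the sketch below estimates the cost of a single request. The token counts are hypothetical; in practice they would be read from the usage field of the API response.

# A minimal sketch of the pay-as-you-go arithmetic above.
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.80  # USD per 1M output tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (prompt_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (completion_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000700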

Use the following code example to integrate with our API:

from openai import OpenAI

# Point the OpenAI-compatible client at the jiekou.ai endpoint.
client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.jiekou.ai/openai"
)

# Send a simple chat completion request to the model.
response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-instruct-2507",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=16384,
    temperature=0.7
)

print(response.choices[0].message.content)

Information

Provider
Qwen
Quantization
fp8

Supported Features

Context Length
131072
Max Output
16384
Structured Output
Supported
Function Calling
Supported
Input Capabilities
text
Output Capabilities
text
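
The list above marks function calling and structured output as supported through the same OpenAI-compatible endpoint. The sketch below shows how a tool definition might be passed with the standard chat.completions API; the get_weather tool and its schema are illustrative assumptions, not part of this page.

from openai import OpenAI
import json

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.jiekou.ai/openai"
)

# Hypothetical tool definition; the name and schema are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}]

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-instruct-2507",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools
)

# If the model decides to call the tool, the arguments arrive as a JSON string.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    call = tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(response.choices[0].message.content)

Structured output would be requested in a similar way, for example by passing response_format={"type": "json_object"} on the same call, assuming the endpoint mirrors the corresponding OpenAI parameter.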