Qwen: Qwen3 235B A22B Instruct 2507
Reliability
20%
qwen/qwen3-235b-a22b-2507
Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code,...
🧠 Intelligence & Data
Knowledge Cutoff:
2025-06-30
Tokenizer:
Qwen3
Moderation:
No
📅 Lifecycle
Added on:
21/07/2025
Specifications
- Provider & Modality: qwen, text->text
- Context window: 262,144 tokens
- Max Output Tokens: 0
- Tool support: ✔️ Function Calling
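Since the model advertises function-calling support, a minimal sketch of what a tool-enabled request payload could look like, assuming an OpenAI-compatible chat-completions API (the `get_weather` tool and its schema are illustrative assumptions, not taken from this page):

```python
import json

# Hypothetical function-calling request for this model. The tool name,
# schema, and message are made-up examples; only the model slug comes
# from the listing above.
payload = {
    "model": "qwen/qwen3-235b-a22b-2507",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Return current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}

# Serialize to the JSON body that would be POSTed to the API.
body = json.dumps(payload)
```

The model is expected to answer either with plain text or with a `tool_calls` entry naming `get_weather` and its arguments, which the caller then executes and feeds back.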
🔍 Similar models
| Model | Provider | Input | Output | Context |
|---|---|---|---|---|
| Google: Gemma 4 26B A4B (`google/gemma-4-26b-a4b-it`) | google | $0.1300 | $0.4000 | 262,144 |
| Google: Gemma 4 31B (`google/gemma-4-31b-it`) | google | $0.1400 | $0.4000 | 262,144 |
| Qwen: Qwen3.6 Plus (free) (`qwen/qwen3.6-plus:free`) | qwen | $0.0000 | $0.0000 | 1,000,000 |
| Z.ai: GLM 5V Turbo (`z-ai/glm-5v-turbo`) | z-ai | $1.2000 | $4.0000 | 202,752 |
| Arcee AI: Trinity Large Thinking (`arcee-ai/trinity-large-th...`) | arcee-ai | $0.2200 | $0.8500 | 262,144 |