Inception: Mercury 2
Reliability: 20%
inception/mercury-2
Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. Mercury 2 is 5x+ faster than leading speed-optimized LLMs l...
🧠 Intelligence & Data
Knowledge Cutoff:
Unknown
Tokenizer:
Other
Moderation:
No
📅 Lifecycle
Added:
04/03/2026
Specifications
- Provider & Modality: inception, text→text
- Context window: 128,000 tokens
- Max output tokens: 50,000
- Tool support: ✔️ Function Calling
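Since the model advertises function calling, a request can carry a `tools` array alongside the messages. The sketch below builds such a payload in Python, assuming an OpenAI-compatible chat-completions schema (the `get_weather` tool and its parameters are hypothetical, purely for illustration); consult the provider's API docs for the actual endpoint and field names.

```python
import json

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion payload with one illustrative tool.

    Assumes an OpenAI-compatible request schema; field names may differ
    on the actual endpoint.
    """
    return {
        "model": "inception/mercury-2",
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 50_000,  # the model's max output, per the specs above
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool, not part of any real API
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

The model decides at inference time whether to answer directly or emit a tool call naming `get_weather`; the client then executes the tool and returns its result in a follow-up message.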
🔍 Similar models
| Model | Provider | Input | Output | Context |
|---|---|---|---|---|
| OpenAI: GPT-5.3 Chat (`openai/gpt-5.3-chat`) | openai | $1.7500 | $14.0000 | 128,000 |
| Google: Nano Banana 2 (Gemini 3.1 Flash Image Preview) (`google/gemini-3.1-flash-i...`) | google | $0.5000 | $3.0000 | 65,536 |
| AionLabs: Aion-2.0 (`aion-labs/aion-2.0`) | aion-labs | $0.8000 | $1.6000 | 131,072 |
| Z.ai: GLM 5 (`z-ai/glm-5`) | z-ai | $0.7200 | $2.3000 | 80,000 |
| Arcee AI: Trinity Large Preview (free) (`arcee-ai/trinity-large-pr...`) | arcee-ai | $0.0000 | $0.0000 | 131,000 |
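The table's Input/Output columns can be turned into a rough per-request cost estimate. The sketch below assumes the prices are USD per 1M tokens (a common listing convention, but an assumption here; verify against each provider's pricing page).

```python
# Input/output prices from the table above, assumed to be USD per 1M tokens.
PRICES = {
    "openai/gpt-5.3-chat": (1.75, 14.00),
    "aion-labs/aion-2.0": (0.80, 1.60),
    "z-ai/glm-5": (0.72, 2.30),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request under the per-1M-token assumption."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# e.g. a 10k-token prompt with a 2k-token reply on GLM 5
cost = estimate_cost("z-ai/glm-5", 10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0118
```

Note the asymmetry: output tokens are typically several times more expensive than input tokens, so long completions dominate the bill.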