Inception: Mercury
Reliability:
20%
inception/mercury
Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude 3.5 Haiku while matching their performance. Mercury's speed enables developers to provide respons...
🧠 Intelligence & Data
Knowledge Cutoff:
2025-01-31
Tokenizer:
Other
Moderation:
No
📅 Lifecycle
Added on:
2025-06-26
Specifications
- Provider & Modality: inception, text->text
- Context window: 128,000 tokens
- Max output tokens: 32,000
- Tool support: ✔️ Function calling
🔍 Similar models
| Model | Provider | Input | Output | Context |
|---|---|---|---|---|
| Inception: Mercury 2 (`inception/mercury-2`) | inception | $0.25 | $0.75 | 128,000 |
| OpenAI: GPT-5.3 Chat (`openai/gpt-5.3-chat`) | openai | $1.75 | $14.00 | 128,000 |
| Google: Nano Banana 2 (Gemini 3.1 Flash Image Preview) (`google/gemini-3.1-flash-i...`) | google | $0.50 | $3.00 | 65,536 |
| AionLabs: Aion-2.0 (`aion-labs/aion-2.0`) | aion-labs | $0.80 | $1.60 | 131,072 |
| Z.ai: GLM 5 (`z-ai/glm-5`) | z-ai | $0.72 | $2.30 | 80,000 |
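As a rough sketch, assuming the Input/Output columns above are USD per 1,000,000 tokens (a common pricing convention, not stated explicitly on this page), the cost of a single request can be estimated like this:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost in USD for one request, with prices given per 1M tokens.

    Note: the per-million-token convention is an assumption here,
    not something this page confirms.
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: 10,000 prompt tokens + 2,000 completion tokens on
# inception/mercury-2 ($0.25 in / $0.75 out per 1M tokens):
cost = request_cost(10_000, 2_000, 0.25, 0.75)
print(f"${cost:.4f}")  # → $0.0040
```

The same helper works for any row of the table; only the two price arguments change per model.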