Mistral: Mixtral 8x7B Instruct
Reliability
20%
mistralai/mixtral-8x7b-instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model by Mistral AI for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters. The Instruct variant is fine-tuned by Mistral. #moe
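In a sparse Mixture of Experts layer, a gating network routes each token to a small subset of the expert feed-forward networks and mixes their outputs, so only a fraction of the 47B parameters is active per token. A minimal sketch of top-2 gating over 8 experts follows; the dimensions, ReLU feed-forward experts, and function names are illustrative assumptions, not Mixtral's actual implementation:

```python
import numpy as np

def moe_layer(x, gate_w, experts, top_k=2):
    """Route each token to its top_k experts and mix their outputs.

    x: (tokens, d_model); gate_w: (d_model, n_experts);
    experts: list of (w_in, w_out) feed-forward weight pairs.
    """
    logits = x @ gate_w                               # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]     # indices of the top_k experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                      # softmax over selected experts only
        for w, e in zip(weights, top[t]):
            w_in, w_out = experts[e]
            out[t] += w * (np.maximum(x[t] @ w_in, 0) @ w_out)  # simple ReLU FFN expert
    return out

rng = np.random.default_rng(0)
d, n_experts = 16, 8
x = rng.normal(size=(4, d))
gate_w = rng.normal(size=(d, n_experts))
experts = [(rng.normal(size=(d, 32)), rng.normal(size=(32, d)))
           for _ in range(n_experts)]
y = moe_layer(x, gate_w, experts)
print(y.shape)  # (4, 16)
```

Each token's output depends on only 2 of the 8 expert FFNs, which is why active compute per token is far below the total parameter count.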
🧠 Intelligence & Data
Knowledge Cutoff:
2023-12-31
Tokenizer:
Mistral
Moderation:
No
📅 Lifecycle
Added on:
2023-12-10
Specifications
- Provider & Modality: mistralai, text->text
- Context window: 32,768 tokens
- Max output tokens: 16,384
- Tool support: ✔️ Function Calling
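Function-calling support means the model accepts tool definitions alongside the chat messages and can respond with a structured call instead of plain text. A minimal sketch of an OpenAI-style request payload for this model; the `get_weather` tool and its schema are hypothetical examples, not part of this listing:

```python
import json

# Hypothetical tool definition in the OpenAI-style "tools" schema.
payload = {
    "model": "mistralai/mixtral-8x7b-instruct",
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool name
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
print(json.dumps(payload, indent=2))
```

If the model decides to use the tool, the response carries the function name and JSON arguments for the caller to execute, rather than a text completion.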
🔍 Similar models
| Model | Provider | Input | Output | Context |
|---|---|---|---|---|
| Reka Edge `rekaai/reka-edge` | rekaai | $0.1000 | $0.1000 | 16,384 |
| Mistral: Mistral Small 4 `mistralai/mistral-small-2...` | mistralai | $0.1500 | $0.6000 | 262,144 |
| LiquidAI: LFM2-24B-A2B `liquid/lfm-2-24b-a2b` | liquid | $0.0300 | $0.1200 | 32,768 |
| LiquidAI: LFM2.5-1.2B-Thinking (free) `liquid/lfm-2.5-1.2b-think...` | liquid | $0.0000 | $0.0000 | 32,768 |
| LiquidAI: LFM2.5-1.2B-Instruct (free) `liquid/lfm-2.5-1.2b-instr...` | liquid | $0.0000 | $0.0000 | 32,768 |