Test your Mistral API key.
Validate a Mistral API key, list Mistral Large / Codestral / embedding models, and benchmark latency against the Paris-hosted endpoint.
Stateless proxy: keys are never logged or persisted. What happens to your key →
What this key does
A Mistral API key authenticates requests to La Plateforme's REST API: chat completions, embeddings, fine-tuning, and Codestral. The wire protocol is OpenAI-compatible (Bearer auth, the /v1/chat/completions request shape), so SDKs that target OpenAI usually work after a base-URL swap.
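The base-URL swap described above can be sketched with nothing but the standard library. This is a minimal sketch, not an official client: the model name, prompt, and `MISTRAL_API_KEY` variable are illustrative assumptions.

```python
# Sketch: the OpenAI-compatible wire shape, pointed at Mistral's base URL.
# Stdlib only; model name and prompt are illustrative.
import json
import urllib.request

BASE_URL = "https://api.mistral.ai/v1"

def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Assemble a POST /v1/chat/completions request with Bearer auth."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    import os
    req = build_chat_request(
        os.environ["MISTRAL_API_KEY"],          # never hard-code the key
        "mistral-small-latest",                  # illustrative model id
        [{"role": "user", "content": "Say hello in one word."}],
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

The same shape is why the OpenAI Python/Node SDKs work here: point their `base_url` at `https://api.mistral.ai/v1` and pass the Mistral key as the API key.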
How to get a Mistral API key
- Sign in at console.mistral.ai.
- Add a billing source if you haven't (free trial credits exist but burn fast).
- Open API Keys → Create new key.
- Copy the key and paste it here to validate.
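The validation step above boils down to one authenticated GET against /v1/models: a 200 means the key works, a 401 means it doesn't. A stdlib sketch (function names are our own; other status codes are deliberately re-raised, since 403/429/5xx are not a verdict on the key itself):

```python
# Sketch: check a key by listing models. 200 => valid, 401 => invalid.
import urllib.error
import urllib.request

def models_request(api_key: str, base_url: str = "https://api.mistral.ai/v1") -> urllib.request.Request:
    """GET /v1/models with Bearer auth."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def validate_key(api_key: str) -> bool:
    try:
        with urllib.request.urlopen(models_request(api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 401:
            return False
        raise  # 403/429/5xx: the key may be fine; see the error table below
```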
Common errors and fixes
- 401 Unauthorized: Key is invalid, revoked, or pasted with extra whitespace. Generate a new key from the provider console and try again.
- 403 Forbidden: Key is valid but lacks permission for this resource. Check project / org / workspace scope, or that billing is set up for this key.
- 429 Too Many Requests: You hit the per-minute or per-day rate limit. Wait a moment and retry, or upgrade your tier.
- 404 Not Found: The endpoint or model id changed. Check the provider docs for the current path and model identifier.
- 5xx: The provider is having issues. Check their status page before assuming the bug is yours.
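The split in the table above is between transient errors (429, 5xx) worth retrying and auth/shape errors (401, 403, 404) that retrying can't fix. A hedged backoff sketch, with our own helper names and illustrative delay values:

```python
# Sketch: exponential backoff on 429 and 5xx; surface everything else at once.
import time
import urllib.error
import urllib.request

def is_retryable(status: int) -> bool:
    """429 and 5xx are transient; 401/403/404 will not improve on retry."""
    return status == 429 or 500 <= status < 600

def call_with_backoff(req: urllib.request.Request, attempts: int = 5, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            return urllib.request.urlopen(req, timeout=30)
        except urllib.error.HTTPError as e:
            if not is_retryable(e.code) or attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s between tries
```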
Security best practices
- Store keys in an env var or secret manager — never commit them to a repo, even a private one.
- Restrict scope: prefer per-project or per-deployment keys over a single root key shared across services.
- Rotate on a schedule (90 days is a sane default) and immediately on suspected leak.
- Audit usage in the provider console after rotation to confirm the old key has zero traffic.
- Set per-key spend limits where the provider supports them, so a leaked key has a bounded blast radius.
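The first rule above, keys in env vars rather than source, is worth failing fast on: a missing variable should be a clear error, not an empty Bearer header and a confusing 401. A small sketch (the helper name is ours; stripping whitespace also covers the pasted-key 401 from the table above):

```python
# Sketch: read the key from the environment, fail fast if it is missing.
import os

def require_key(name: str = "MISTRAL_API_KEY") -> str:
    """Return the key from the environment, stripped of pasted whitespace."""
    key = os.environ.get(name, "").strip()
    if not key:
        raise RuntimeError(f"{name} is not set; export it or use a secret manager")
    return key
```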
Pricing at a glance
Mistral Small / Medium / Large are tiered roughly an order of magnitude apart. Codestral is priced for code workloads. Free tier exists but is heavily rate-limited.
FAQ
- Does Mistral support streaming?
- Yes, via Server-Sent Events on /v1/chat/completions with stream:true.
- Is Mistral OpenAI-compatible?
- Mostly. Swap the base URL to https://api.mistral.ai/v1 and the OpenAI Python/Node SDKs work for chat completions. Tool use shapes are slightly different.
- Is data sent to Mistral used for training?
- By default, paid-tier requests are not used for training. Free-tier requests may be — check the current data-usage policy.
- What's the cheapest Mistral model?
- mistral-small-latest, then mistral-tiny / open-mistral-nemo if available on your account.
- Where is Mistral inference hosted?
- Inference is currently EU-hosted, which is useful for GDPR-bound workloads.
- Can I use Mistral on Vertex / Bedrock?
- Yes. Several Mistral models are mirrored on AWS Bedrock and Google Cloud Vertex AI; those use the host platform's auth, not a Mistral API key.
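The streaming answer in the FAQ can be made concrete: with `stream: true`, the endpoint returns Server-Sent Events, each a `data: <json>` line carrying a content delta, terminated by `data: [DONE]`. A parsing sketch under those assumptions (the chunk shape mirrors the OpenAI-compatible streaming format; the function name is ours):

```python
# Sketch: extract content deltas from one SSE line of a streamed completion.
import json

def parse_sse_line(line: str):
    """Return the content delta from a "data: ..." line, or None for
    comments, blank keep-alives, and the [DONE] sentinel."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")
```

In practice you would iterate the response body line by line, feed each line through a parser like this, and concatenate the non-None deltas into the final message.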
Related reading
- API key security best practices for LLMs: How to store, scope, rotate, and revoke LLM API keys without leaking them through git, logs, or shared environments.
- Free LLM API keys for testing in 2026: Which providers offer free credits, how long they last, and how to stretch them for prototyping without a credit card.