Test your Fireworks AI API key.

Validate a Fireworks AI key and see which open-source chat / embedding / image models you can call.

Stateless proxy — keys are never logged, stored, or persisted.


What this key does

Fireworks serves open-source models with custom inference optimizations. The API is OpenAI-compatible, and model ids live under accounts/fireworks/models/...

How to get a Fireworks AI API key

  1. Sign in at fireworks.ai.
  2. Open Account → API Keys.
  3. Generate a fw_... key.
  4. Paste it here.
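Once you have a key, you can sanity-check it yourself before pasting it anywhere: verify the fw_ prefix locally (catching the extra-whitespace paste error from the list below), then hit an authenticated endpoint. A sketch — the /models path is assumed from OpenAI compatibility:

```python
import urllib.error
import urllib.request

def looks_like_fireworks_key(key: str) -> bool:
    """Cheap local check before any network call: Fireworks keys
    start with 'fw_'. Strip whitespace first -- a common paste error."""
    return key.strip().startswith("fw_")

def validate_key(key: str) -> bool:
    """Call an authenticated endpoint: 200 means the key works,
    401 means it is invalid or revoked."""
    req = urllib.request.Request(
        "https://api.fireworks.ai/inference/v1/models",
        headers={"Authorization": f"Bearer {key.strip()}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```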

Common errors and fixes

  • 401 Unauthorized: Key is invalid, revoked, or pasted with extra whitespace. Generate a new key from the provider console and try again.
  • 403 Forbidden: Key is valid but lacks permission for this resource. Check project / org / workspace scope, or that billing is set up for this key.
  • 429 Too Many Requests: You hit the per-minute or per-day rate limit. Wait a moment and retry, or upgrade your tier.
  • 404 Not Found: The endpoint or model id changed. Check the provider docs for the current path and model identifier.
  • 5xx: The provider is having issues. Check their status page before assuming the bug is yours.
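For 429s (and transient 5xx errors), "wait a moment and retry" usually means bounded exponential backoff with jitter. A minimal sketch of that pattern, with hypothetical helper names:

```python
import random
import time

def backoff_delays(retries: int = 4, base: float = 1.0, cap: float = 30.0):
    """Exponential backoff schedule: base * 2^n seconds, capped."""
    return [min(cap, base * (2 ** n)) for n in range(retries)]

def call_with_retry(fn, is_retryable):
    """Run fn(); on a retryable error (e.g. HTTP 429), sleep and retry
    per the schedule. Jitter keeps many clients from retrying in sync.

    fn           -- zero-arg callable making the API request
    is_retryable -- predicate: should this exception trigger a retry?
    """
    for delay in backoff_delays():
        try:
            return fn()
        except Exception as exc:
            if not is_retryable(exc):
                raise
            time.sleep(delay + random.uniform(0, 0.5))
    return fn()  # final attempt: let any remaining error propagate
```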

Security best practices

  • Store keys in an env var or secret manager — never commit them to a repo, even a private one.
  • Restrict scope: prefer per-project or per-deployment keys over a single root key shared across services.
  • Rotate on a schedule (90 days is a sane default) and immediately on suspected leak.
  • Audit usage in the provider console after rotation to confirm the old key has zero traffic.
  • Set per-key spend limits where the provider supports them, so a leaked key has a bounded blast radius.
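The first bullet in practice: read the key from the environment at startup and fail fast if it is missing, rather than sending an empty Authorization header later. A sketch — the variable name FIREWORKS_API_KEY is a common convention, not mandated:

```python
import os

def load_api_key(var: str = "FIREWORKS_API_KEY") -> str:
    """Read the API key from the environment, failing fast with a
    clear message if it is absent or blank."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it or add it to your secret manager"
        )
    return key
```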

Pricing at a glance

Fireworks competes with Together / Groq on open-source pricing.

FAQ

Is Fireworks OpenAI-compatible?
Yes — base URL https://api.fireworks.ai/inference/v1.
Why are model ids so long?
They're scoped to accounts: accounts/fireworks/models/llama-v3p1-8b-instruct.
Does Fireworks support fine-tuning?
Yes, including LoRAs.
Free tier?
Promotional credits, no permanent free tier.
How fast is Fireworks vs Groq?
Groq generally wins on Llama at low concurrency. Fireworks scales smoothly to higher concurrency.
Can I deploy my own model?
Yes, via on-demand deployment.