Test your OpenAI API key.
Verify an OpenAI key in seconds, see which GPT-4o, o1, embedding, and DALL·E models you can call, and benchmark how fast OpenAI is responding to you right now.
Stateless proxy — keys never logged, stored, or persisted.
What this key does
An OpenAI API key authenticates requests to OpenAI's REST API — chat completions, the Responses API, embeddings, image generation, audio transcription, and the moderation endpoint. Keys can be project-scoped or attached to a service account; both work the same way over the wire (Authorization: Bearer ...). Calling /v1/models is the cheapest way to confirm a key is alive without spending tokens.
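That `/v1/models` probe is easy to script. Here is a minimal sketch using only the Python standard library; the `OPENAI_API_KEY` environment variable and the `check_key` helper name are our own conventions, not part of any OpenAI SDK:

```python
import json
import os
import urllib.error
import urllib.request

# Read the key from the environment; the fallback is a placeholder, never a real key.
API_KEY = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

# GET /v1/models is the cheapest liveness probe: it authenticates but spends no tokens.
req = urllib.request.Request(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

def check_key(request: urllib.request.Request) -> bool:
    """Return True if the key authenticates; a 401 means it is invalid or revoked."""
    try:
        with urllib.request.urlopen(request, timeout=10) as resp:
            models = json.load(resp)["data"]
            print(f"key is live; {len(models)} models visible")
            return True
    except urllib.error.HTTPError as err:
        print(f"HTTP {err.code}: key rejected or out of scope")
        return False
```

Because the probe is a plain GET with a Bearer header, the same sketch works unchanged for project, service account, and legacy user keys.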
How to get an OpenAI API key
- Sign in at platform.openai.com and switch to the project you want this key tied to.
- Open Settings → API keys and click Create new secret key.
- Choose a permission scope (Restricted is safer than All) and a project.
- Copy the key — you only see it once — and paste it here to confirm it works.
- Set a usage cap on the project so a leaked key has a bounded blast radius.
Common errors and fixes
- 401 Unauthorized: Key is invalid, revoked, or pasted with extra whitespace. Generate a new key from the provider console and try again.
- 403 Forbidden: Key is valid but lacks permission for this resource. Check project / org / workspace scope, or that billing is set up for this key.
- 429 Too Many Requests: You hit the per-minute or per-day rate limit. Wait a moment and retry, or upgrade your tier.
- 404 Not Found: The endpoint or model id changed. Check the provider docs for the current path and model identifier.
- 5xx: The provider is having issues. Check their status page before assuming the bug is yours.
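Those five cases collapse into a small lookup table for client-side error handling. The wording paraphrases the list above, and `suggest_fix` is a hypothetical helper name:

```python
# Likely remedies for API error statuses, paraphrased from the list above.
ERROR_FIXES = {
    401: "Invalid or revoked key, or stray whitespace; generate a fresh key.",
    403: "Key is valid but lacks permission; check project/org scope and billing.",
    429: "Rate limited; wait and retry, or upgrade your usage tier.",
    404: "Endpoint or model id changed; check the docs for current names.",
}

def suggest_fix(status: int) -> str:
    """Map an HTTP status code from the API to the likely remedy."""
    if status >= 500:
        return "Provider-side outage; check the status page before debugging."
    return ERROR_FIXES.get(status, f"Unexpected status {status}; inspect the response body.")
```

The `>= 500` branch comes first so every 5xx variant gets the same "check the status page" answer without enumerating codes.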
Security best practices
- Store keys in an env var or secret manager — never commit them to a repo, even a private one.
- Restrict scope: prefer per-project or per-deployment keys over a single root key shared across services.
- Rotate on a schedule (90 days is a sane default) and immediately on suspected leak.
- Audit usage in the provider console after rotation to confirm the old key has zero traffic.
- Set per-key spend limits where the provider supports them, so a leaked key has a bounded blast radius.
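The first rule above can be enforced in code by refusing to run without the variable set. A sketch assuming the conventional `OPENAI_API_KEY` name:

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the key from the environment; fail fast rather than fall back to a literal."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it or use a secret manager.")
    # Strip leading/trailing whitespace from sloppy pastes, a common cause of 401s.
    return key.strip()
```

Failing fast at startup beats a hardcoded fallback: a missing variable surfaces immediately instead of as a confusing 401 deep in a request path.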
Pricing at a glance
OpenAI bills per million tokens for chat models and per image for DALL·E. GPT-4o mini is the cheapest general-purpose chat model in the catalog. Batch API requests are billed at 50% of list price, and cached input tokens are billed at a steep discount. See the live pricing page for current numbers.
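Under that billing model, cost estimation is simple arithmetic. A sketch with placeholder rates; the dollar figures below are illustrative, not OpenAI's current prices:

```python
# Hypothetical per-million-token rates for illustration only.
# Check the live pricing page for real numbers.
RATES_PER_MILLION = {"input": 0.60, "output": 2.40}  # placeholder USD figures

def estimate_cost(input_tokens: int, output_tokens: int, batch: bool = False) -> float:
    """Estimate request cost in USD; Batch API requests are billed at 50% of list."""
    cost = (
        (input_tokens / 1_000_000) * RATES_PER_MILLION["input"]
        + (output_tokens / 1_000_000) * RATES_PER_MILLION["output"]
    )
    return cost * 0.5 if batch else cost
```

Note that input and output tokens are priced separately, so a summarization job (large input, small output) and a generation job (small input, large output) can cost very different amounts at the same total token count.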
FAQ
- How do I know if my OpenAI key is valid?
- Paste it above. We call /v1/models with your key and report status, the list of models the key can access, and round-trip latency.
- Does my key get logged anywhere?
- No. The proxy forwards the key once, returns the response, and discards it. We do not log headers, request bodies, or the key value.
- Can I tell if my key is a project key vs a user key?
- Project keys start with sk-proj-. Service account keys start with sk-svcacct-. Plain sk-... keys are legacy user keys (still supported for now).
- Why don't I see GPT-4o in my models list?
- GPT-4o access is gated by usage tier. New accounts can call gpt-4o-mini immediately; gpt-4o and o1 unlock once you've spent a small amount or topped up your balance.
- Is there a free tier?
- OpenAI gives short-lived starter credits to new accounts; they expire after a few months, and after that it's pay-as-you-go.
- How fast should an OpenAI key respond?
- GET /v1/models typically returns under 300ms from a healthy region. Our latency benchmark runs 5 pings and reports p50 / p95 / p99 so you can spot regional slowdowns.
- Why do I see a 401 even though the key is correct?
- Whitespace. Paste-from-Slack and paste-from-1Password sometimes wrap the key in a zero-width character. Try retyping the last few chars or pasting into a plain text editor first.
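The prefix rules from the FAQ can be checked mechanically. In this sketch, `classify_key` is a hypothetical helper, and the zero-width stripping addresses the paste problem described in the last answer:

```python
def classify_key(key: str) -> str:
    """Classify an OpenAI key by its documented prefix."""
    # Drop surrounding whitespace and zero-width characters from sloppy pastes,
    # which otherwise cause spurious 401s with a visually correct key.
    key = key.strip().replace("\u200b", "")
    if key.startswith("sk-proj-"):
        return "project key"
    if key.startswith("sk-svcacct-"):
        return "service account key"
    if key.startswith("sk-"):
        return "legacy user key"
    return "not an OpenAI secret key"
```

The `sk-proj-` and `sk-svcacct-` checks must come before the plain `sk-` check, since both longer prefixes also match it.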
Related reading
- How to get an OpenAI API key: Step-by-step walkthrough to create, scope, and verify an OpenAI API key, including project keys and service accounts.
- API key security best practices for LLMs: How to store, scope, rotate, and revoke LLM API keys without leaking them through git, logs, or shared environments.
- OpenAI vs Anthropic pricing in 2026: A side-by-side breakdown of OpenAI and Anthropic per-token pricing, batch discounts, and prompt-caching savings.