OpenAI-compatible API · Zero data retention · ~$5 per 1M tokens

Choose your own AI safety.

Standard LLMs decide what's allowed, refusing fiction with conflict, security research, medical questions, and more. Abliteration.ai puts you in charge with an OpenAI-compatible API, built-in chat, and a Policy Gateway that lets you define your own rules. Set your own limits and stop getting blocked on legitimate work.

Why developers choose Abliteration

OpenAI-compatible

Drop-in replacement for OpenAI API. Change the base URL and keep your existing code. Works with all major SDKs.
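As a sketch of the drop-in claim, here is a stdlib-only Python request builder: the only provider-specific piece is the base URL. The bearer-key auth and endpoint path match the curl example later on this page; the helper name and example key are ours, not part of any SDK.

```python
import json
import urllib.request

def chat_request(base_url, api_key, messages, model="abliterated-model", stream=False):
    """Build an OpenAI-style chat completion request; only base_url
    differs from a request aimed at OpenAI itself."""
    payload = {"model": model, "messages": messages, "stream": stream}
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Switching providers means changing only the first argument.
req = chat_request(
    "https://api.abliteration.ai",
    "sk-example",  # hypothetical key for illustration
    [{"role": "user", "content": "Hello"}],
)
# Send it with: urllib.request.urlopen(req)
```

The same payload works unchanged with any client that speaks this wire format.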

Uncensored Models

Access less-restricted models that don't refuse legitimate requests. Perfect for creative writing, research, and specialized applications.

Policy Gateway

Define your own safety rules with policy-as-code. Choose rewrite, redact, or escalate outcomes instead of blanket refusals.

Audit & Compliance

Export audit logs to Splunk, Datadog, Elastic, and more. Track every decision with structured metadata for compliance.

Transparent Pricing

~$5 per 1M tokens with no hidden fees. Prepaid credits never expire. Simple, predictable billing for your AI workloads.

Instant Migration

Migrate from OpenAI, Azure, Anthropic, or any provider in minutes. Our migration tool patches your code automatically.

Trusted by developers
No data retention by default
99.9% uptime SLA
Enterprise security

Live Console

Try the model in real time

Stream responses, attach images, and inspect the request body.

Self-harm and sexual content involving minors are always blocked.

Key Features: Developer-Controlled LLM API + Policy Gateway

Developer-controlled models (uncensored options)

A developer-controlled, less-censored model delivered via API. We do not apply provider-side refusal filters; you control outputs and policy enforcement. Uncensored models are available for teams that need them. You are responsible for keeping usage lawful in your jurisdiction; do not generate or distribute illegal content.

Model ID: abliterated-model · View model specs

Enterprise Policy Gateway (policy-as-code)

Apply one policy across apps, models, and agents with policy-as-code rules, rollout controls, per-project keys, quotas, and audit logs. Export audits to Splunk, Datadog, Elastic, S3, and Azure Monitor. Built for enterprise AI governance.

OpenAI-Compatible /v1/chat/completions API

Full /v1/chat/completions parity. Works with most clients that speak this format; just point them at our base URL.

Image understanding (vision) for screenshots & documents

Send images alongside text prompts to extract information, summarize what’s on screen, and answer questions about photos, charts, and UI screenshots. Use the same /v1/chat/completions interface with multimodal message parts for a single, developer-friendly image understanding API.
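The multimodal message parts described above can be sketched as a plain payload builder; the content-parts shape is the OpenAI-style format this page claims parity with, and the helper name is ours.

```python
def image_message(text, image_url):
    """Build one user message mixing a text part and an image part
    (OpenAI-style multimodal content parts)."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = image_message(
    "What's in this image?",
    "https://abliteration.ai/stonehenge.jpg",  # example image used elsewhere on this page
)
```

Drop the resulting message into the `messages` array of a normal /v1/chat/completions call.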

Usage-based token pricing for LLMs

Effective price: ~$5 per 1M tokens. Billing uses total tokens (input + output), and image inputs count as input tokens. Subscription credits reset monthly; prepaid credits do not expire. Credits are simply the billing unit; see API pricing for details.
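A back-of-envelope estimate under the metering described above (total tokens at an effective ~$5 per 1M; the function name is ours and actual bills may differ):

```python
RATE_PER_MILLION_TOKENS = 5.00  # effective ~$5 per 1M tokens, per the pricing above

def estimate_cost(input_tokens, output_tokens):
    """Estimate billing in dollars; image inputs are metered as input tokens."""
    total = input_tokens + output_tokens
    return total * RATE_PER_MILLION_TOKENS / 1_000_000

# 800k input + 200k output tokens = 1M total, roughly $5.00
cost = estimate_cost(800_000, 200_000)
```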

Zero Data Retention Policy

Default policy: no prompt/output retention. Payloads are processed transiently and never used for training. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.

Rate limits and retries

Requests are rate-limited per API key. If you exceed limits, you will receive a 429 response with a Retry-After header. Use backoff and retries, and upgrade for priority throughput when you need more capacity.
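The backoff-and-retry advice above can be sketched as a small helper that honors Retry-After and falls back to exponential delays. The `send` callable and every name here are illustrative, not part of the API.

```python
import time

def with_retries(send, max_attempts=4, sleep=time.sleep):
    """Call send() until it stops returning 429, honoring Retry-After
    when present and doubling a fallback delay otherwise."""
    delay = 1.0
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        wait = float(headers.get("Retry-After", delay))
        if attempt < max_attempts - 1:
            sleep(wait)
        delay *= 2  # exponential fallback when Retry-After is absent
    return status, body

# Demo with a fake transport that rate-limits twice, then succeeds.
calls = {"n": 0}
def fake_send():
    calls["n"] += 1
    if calls["n"] < 3:
        return 429, {"Retry-After": "0"}, None
    return 200, {}, "ok"

status, body = with_retries(fake_send, sleep=lambda s: None)
```

In production, `send` would perform the real HTTP call and return the status, headers, and parsed body.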

Account Authentication and API Integration

Sign in or create account

Create an account to save credits and API keys. No phone verification required.


API Integration & Code Examples

Generate keys for programmatic access.

curl https://api.abliteration.ai/v1/chat/completions \
  -H "Authorization: Bearer $ABLIT_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "abliterated-model",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://abliteration.ai/stonehenge.jpg"}}
      ]
    }],
    "stream": true
  }'
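With "stream": true, responses arrive as server-sent events. Assuming the standard OpenAI SSE framing (`data: {...}` lines ending with a `data: [DONE]` sentinel, which the compatibility claim implies), a minimal parser looks like this; the function name and sample chunks are illustrative.

```python
import json

def parse_sse_chunks(lines):
    """Yield parsed JSON payloads from OpenAI-style 'data:' SSE lines,
    stopping at the [DONE] sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        yield json.loads(data)

# Reassemble the streamed text deltas into the full reply.
chunks = list(parse_sse_chunks([
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]))
text = "".join(c["choices"][0]["delta"]["content"] for c in chunks)
```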
Browse API examples on GitHub

Support

Frequently Asked Questions

Quick answers about billing, integrations, and Policy Gateway governance.

What is an uncensored LLM API?

An uncensored LLM API gives developers access to less-censored models without provider-side refusal filters. Uncensored models are available when needed, and you control prompts, outputs, and policy enforcement; content must still comply with your local laws and policies.

What is the Policy Gateway?

Policy Gateway is an enterprise AI governance layer for abliteration.ai. It applies policy-as-code rules, quotas, rollout controls, and audit logs across apps, models, and agents.

How does Policy Gateway enforce policy?

Send requests to /policy/chat/completions with your policy_id, policy_user, and optional project ID. The gateway enforces your rules and returns decision metadata for audits.
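A sketch of the request body described above. The `policy_id` and `policy_user` names come from this FAQ; placing them in the JSON body (rather than headers) and the `project_id` field name are assumptions for illustration.

```python
def policy_request_body(policy_id, policy_user, messages,
                        project_id=None, model="abliterated-model"):
    """Build a /policy/chat/completions request body.
    The project_id field name is hypothetical."""
    body = {
        "model": model,
        "messages": messages,
        "policy_id": policy_id,
        "policy_user": policy_user,
    }
    if project_id is not None:
        body["project_id"] = project_id  # assumed field name for the optional project ID
    return body

body = policy_request_body(
    "pol_example",  # illustrative policy ID
    "user-42",      # illustrative end-user tag
    [{"role": "user", "content": "Hello"}],
    project_id="proj_demo",
)
```

The gateway's decision metadata comes back alongside the normal completion for audit export.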

Do I need a separate subscription for Policy Gateway?

Yes. Policy Gateway is an enterprise add-on billed monthly. It layers on top of your base token bundles.

What data does Policy Gateway store?

It stores policy configuration and enforcement metadata (decision, reason code, policy ID, project/user tags) for audits. Prompt/output retention remains off by default.

Is abliteration.ai OpenAI-compatible?

Yes. We expose an OpenAI-compatible /v1/chat/completions endpoint with the same request/response shape, so most OpenAI clients work by changing the base URL and key.

Do you retain prompts or chat logs?

No. We do not retain prompts or outputs by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.

How does usage-based token pricing work?

You’re billed on total tokens processed (input + output). Image inputs are metered as token equivalents and count as input tokens. Effective pricing is ~$5 per 1M tokens; use monthly subscription credits or one-time prepaid credits (subscriptions reset monthly with no rollover).