Bring your own LLM
Use your own OpenAI-compatible API key for the assistant and document generation.
Last updated May 8, 2026
On Scale and above, you can point Attestly at your own OpenAI-compatible endpoint instead of the platform key. This is the right fit for tenants who:
- Already have an OpenAI Enterprise contract and want their data to flow through that contract (zero-retention, no training).
- Want to use a regional Azure OpenAI deployment for data-residency reasons.
- Run an internal LLM gateway (Portkey, OpenRouter, LiteLLM) and want Attestly to honor their internal routing/billing.
Enabling it
- Go to Dashboard → Settings → AI provider.
- Paste your API key (and base URL, if it's not `api.openai.com`).
- We call `models.list` against the provided endpoint to validate the credentials before persisting anything.
- The key is encrypted with AES-256-GCM before being written to the database. Decryption only happens inside the server-only AI client at request time. The dashboard never displays the plaintext value back to you, only the masked prefix and the date you connected.
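The encrypt-then-mask step above can be sketched with Node's built-in crypto module. This is a minimal illustration, not Attestly's actual implementation; the helper names (`encryptKey`, `decryptKey`, `maskKey`) and the storage layout are assumptions:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

// Hypothetical helper: encrypt the tenant's API key with AES-256-GCM
// before it touches the database. The IV and auth tag are stored
// alongside the ciphertext, since both are required to decrypt.
function encryptKey(plaintext: string, masterKey: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

// Hypothetical helper: decryption would only ever run inside the
// server-only AI client at request time, never in the dashboard.
function decryptKey(stored: string, masterKey: Buffer): string {
  const raw = Buffer.from(stored, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28); // GCM auth tag is 16 bytes
  const ciphertext = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}

// Hypothetical helper: the only form of the key the dashboard ever sees.
function maskKey(key: string): string {
  return key.slice(0, 7) + "…"; // e.g. "sk-abc1…"
}
```

GCM is authenticated encryption, so a tampered ciphertext or wrong master key fails loudly at decrypt time rather than yielding garbage.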
What it controls
| Surface | Honors BYO-LLM? |
|---|---|
| AI assistant (dashboard) | Yes |
| Document generation (every regenerate, every drift retry) | Yes — threaded via tenant context through generateStructured |
| Background workers (Inngest doc/generate) | Yes |
| Public trust-center rendering | N/A (deterministic) |
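A minimal sketch of how the tenant key could be threaded through these surfaces. `resolveProvider` and `TenantAiConfig` are hypothetical names for illustration; the real code routes tenant context through `generateStructured`:

```typescript
// Hypothetical shape of the decrypted per-tenant config.
interface TenantAiConfig {
  apiKey?: string;  // decrypted tenant key, if BYO-LLM is connected
  baseURL?: string; // custom endpoint, if not api.openai.com
}

interface ResolvedProvider {
  apiKey: string;
  baseURL: string;
  source: "tenant" | "platform";
}

const PLATFORM_KEY = process.env.PLATFORM_OPENAI_KEY ?? "sk-platform";
const DEFAULT_BASE_URL = "https://api.openai.com/v1";

// If a tenant key is connected, every LLM surface uses it;
// otherwise everything falls back to the platform key.
function resolveProvider(tenant?: TenantAiConfig): ResolvedProvider {
  if (tenant?.apiKey) {
    return {
      apiKey: tenant.apiKey,
      baseURL: tenant.baseURL ?? DEFAULT_BASE_URL,
      source: "tenant",
    };
  }
  return { apiKey: PLATFORM_KEY, baseURL: DEFAULT_BASE_URL, source: "platform" };
}
```

Resolving the provider once per request, rather than per surface, is what keeps the assistant, document generation, and background workers consistent with each other.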
What we don't do
- We do not call out to your provider for any non-tenant work (e.g. marketing pages, help articles, anonymised metrics).
- We do not try to run our own scanner with your key. The scanner is deterministic; LLM calls are reserved for document generation and the assistant.
- We do not retry against the platform key if your provider 401s. You'll get a clear error in the dashboard, and the audit log records the failure.
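The no-fallback behaviour on a provider failure might look like this sketch. `withTenantProvider` and the audit-log shape are assumptions, not the real code:

```typescript
const auditLog: string[] = [];

// Hypothetical wrapper: if the tenant's provider rejects the call
// (e.g. a 401), record it and surface the error. Deliberately no
// retry against the platform key.
async function withTenantProvider<T>(
  tenantId: string,
  call: () => Promise<T>,
): Promise<T> {
  try {
    return await call();
  } catch (err) {
    auditLog.push(`ai.provider.error tenant=${tenantId} ${(err as Error).message}`);
    throw err;
  }
}
```

Failing loudly here is a deliberate design choice: silently falling back to the platform key would route tenant data through a contract the tenant explicitly opted out of.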
Compatible providers
Anything that implements `POST /v1/chat/completions` and `GET /v1/models` should work. Confirmed configurations:
| Provider | baseURL |
|---|---|
| OpenAI | leave blank (https://api.openai.com/v1) |
| Azure OpenAI | https://<resource>.openai.azure.com/openai/deployments/<deployment> |
| OpenRouter | https://openrouter.ai/api/v1 |
| Portkey | https://api.portkey.ai/v1 |
| LiteLLM (self-hosted) | your gateway URL |
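For reference, a small sketch of how each baseURL row maps onto the two required endpoints. The helper names are illustrative, and note that Azure OpenAI additionally requires an `api-version` query parameter, which this sketch omits:

```typescript
// Hypothetical helper: the URL a credential check would hit (GET /v1/models).
// A blank baseURL means the OpenAI default.
function modelsEndpoint(baseURL?: string): string {
  const root = (baseURL ?? "https://api.openai.com/v1").replace(/\/+$/, "");
  return `${root}/models`;
}

// Hypothetical helper: the URL generation traffic would hit
// (POST /v1/chat/completions).
function chatEndpoint(baseURL?: string): string {
  const root = (baseURL ?? "https://api.openai.com/v1").replace(/\/+$/, "");
  return `${root}/chat/completions`;
}
```

For example, `modelsEndpoint("https://openrouter.ai/api/v1")` yields `https://openrouter.ai/api/v1/models`, which is the request used to validate credentials at connect time.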
Disconnecting
Settings → AI provider → Disconnect removes the key from the database and reverts to the platform key on the next request. The audit log records both the connect and the disconnect.