Compliance copilot — overview
The in-product assistant: what it knows, how it cites, and how it stays grounded in your workspace.
Last updated May 10, 2026
Attestly's compliance copilot is the assistant pane that opens with the
Ask Attestly button on any dashboard page (Cmd/Ctrl + J). It
answers questions about your workspace and turns those answers into
actions you can take inside the product.
It is not a generic chatbot. It only operates on facts we can prove:
your scan findings, your published documents, your subprocessor
inventory, your drift alerts, and (in expert mode) the framework
references documented under assistant/expert-mode.
How it stays grounded
Every assistant turn loads three things into the model context:
- Workspace snapshot — counts, your detected subprocessors, the most recent scan summary, every document version's status, every open drift alert. This is rebuilt per turn so cached state can't go stale.
- Document RAG — the assistant retrieves the highest-scoring
sections of your published documents that match your question
(heading + keyword + phrase scoring, capped at 3.5 KB of context). So a question like "does my Privacy Policy cover CCPA opt-out?" retrieves the relevant section verbatim instead of paraphrasing.
- Citation registry — every claim in the workspace context, every retrieved doc section, and every framework reference is registered under a stable [ref:<id>] token. The model is instructed to attach the relevant token to every claim; the client renders each token as a clickable chip beneath the message.
If a fact is missing, the model is required to say so explicitly rather than guess. We never fall back to "I think it's probably…" prose.
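As a rough illustration of how the client could turn `[ref:<id>]` tokens into chips, the sketch below splits a reply into text runs and citation references. The token format comes from the docs above; the `Segment` shape and function name are hypothetical, not Attestly's actual client code.

```typescript
// Split a model reply into plain-text runs and [ref:<id>] citation chips.
// Segment and splitCitations are illustrative names, not product API.
type Segment =
  | { kind: "text"; value: string }
  | { kind: "chip"; refId: string };

function splitCitations(message: string): Segment[] {
  const TOKEN = /\[ref:([A-Za-z0-9._-]+)\]/g; // stable [ref:<id>] token
  const segments: Segment[] = [];
  let last = 0;
  let match: RegExpExecArray | null;
  while ((match = TOKEN.exec(message)) !== null) {
    if (match.index > last) {
      segments.push({ kind: "text", value: message.slice(last, match.index) });
    }
    segments.push({ kind: "chip", refId: match[1] }); // rendered as a clickable chip
    last = match.index + match[0].length;
  }
  if (last < message.length) {
    segments.push({ kind: "text", value: message.slice(last) });
  }
  return segments;
}
```

Because every claim carries its own token, the client can render chips inline or collect them beneath the message without re-parsing the prose.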
Standard mode vs expert mode
| Mode | When to use | What changes in the prompt |
|---|---|---|
| Standard | Day-to-day operator questions ("what's blocking me from publishing?", "summarise my AI subprocessors"). | Workspace context + RAG over published docs. |
| Expert | Framework-specific questions ("does GDPR Art. 28(2) require advance notice?", "which SOC 2 TSC do my repo controls map to?"). | Above plus the eleven-framework reference index. |
You can toggle expert mode per conversation from the header inside the
assistant pane, set a workspace-wide default under
Settings → Compliance copilot, or simply ask a question that mentions
a framework — we auto-detect mentions of GDPR articles, AI Act annexes,
SOC 2 TSCs, CCPA / CPRA, HIPAA, ISO 27001, NIST AI RMF, US state
privacy laws, DORA, NIS 2, and the EU DSA, and load the matching
reference module for that turn.
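The auto-detection described above can be pictured as a per-turn pattern scan. This is a minimal sketch: the pattern list is abbreviated to four of the eleven frameworks, and the module names are hypothetical.

```typescript
// Abbreviated sketch of per-turn framework detection. Module names are
// hypothetical; the real index covers all eleven frameworks.
const FRAMEWORK_PATTERNS: Array<[module: string, pattern: RegExp]> = [
  ["gdpr",      /\bGDPR\b/i],
  ["eu-ai-act", /\bAI Act\b|\bAnnex\s+[IVX]+\b/i],
  ["soc2",      /\bSOC\s*2\b|\bTSC\b/i],
  ["ccpa-cpra", /\bCCPA\b|\bCPRA\b/i],
];

// Returns the reference modules to load for this turn.
function detectFrameworks(question: string): string[] {
  return FRAMEWORK_PATTERNS
    .filter(([, pattern]) => pattern.test(question))
    .map(([module]) => module);
}
```

Only the matching modules are loaded, so a GDPR question doesn't pay the context cost of the other ten reference sets.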
Tool / action calling
When you ask for something that mutates state, the assistant will propose a tool call rather than instruct you. You see a confirmation card with the exact arguments; clicking Run executes the action under your role. Read-only lookups (find a finding by name, open a specific document section, list open drift) auto-execute.
See assistant/tools for the full tool catalogue, role gating, and
how each call appears in the audit log.
Threads, history, and the sidebar
Every conversation is a persistent thread scoped to (tenant, user). The pane has a left rail with starring, archiving, and one-click delete. Threads remember their expert-mode setting at the moment of creation, so toggling the workspace default doesn't retroactively change a saved conversation.
Privacy and budget
- Every user prompt and every model response passes through a redaction
pass that strips emails, phone numbers, SSNs, Luhn-validated card
numbers, common API-token prefixes, PEM private keys, and IPv4 / IPv6
addresses before the LLM call. See
assistant/privacy-and-budget.
- Each tenant has a monthly message budget by plan, with a per-user rolling rate limit (12 messages / minute) on top. Budget remaining is visible at Settings → Compliance copilot.
- Every turn writes an entry to your tamper-evident audit log (assistant.message). Tool calls write assistant.tool.<name>. The Ed25519-signed audit export covers all of it.
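To make the redaction pass concrete, here is an abbreviated sketch covering three of the listed categories: emails, SSN-formatted numbers, and Luhn-validated card numbers. The function names and replacement markers are illustrative; the production pass also strips phone numbers, API-token prefixes, PEM private keys, and IP addresses.

```typescript
// Standard Luhn checksum: doubles every second digit from the right,
// so random 13-19 digit runs are mostly left alone.
function luhnValid(digits: string): boolean {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) { d *= 2; if (d > 9) d -= 9; }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

// Abbreviated pre-LLM redaction pass (illustrative patterns only).
function redact(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[redacted:email]")
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[redacted:ssn]")
    .replace(/\b\d{13,19}\b/g, (m) => (luhnValid(m) ? "[redacted:card]" : m));
}
```

The Luhn check matters: it keeps order numbers and other long digit runs intact while still catching real card numbers before they reach the model.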
Honoring BYO-LLM
If your workspace is on Scale or Enterprise and has BYO-LLM configured
(Settings → AI provider), the assistant uses your OpenAI-compatible
endpoint — same key Attestly uses for document generation. There is no
"OpenAI-only" path on the assistant; whatever provider you've
configured is what answers questions and runs tools.