Documentation
AI Trust Center
AIBOM + EU AI Act-aligned disclosures, oversight, evaluation, and incident-response.
Last updated May 8, 2026
The AI Trust Center is the most important document Attestly generates. It's the document your enterprise prospects ask for in their security review and the document your EU regulator may eventually ask to see.
What it contains
| Section | What's in it |
|---|---|
| Executive summary | A 2–3 sentence overview, generated from your stack. |
| AI systems | One row per AI system: provider, model, inputs, outputs, risk class, human-in-the-loop. |
| Annex IV documentation | General description, data and inputs, monitoring/oversight, post-market monitoring, change management — per system. |
| Human oversight | Concrete oversight measures, opt-out path, override mechanism. |
| Evaluation | Bias and fairness, robustness, accuracy metrics, red-teaming posture. |
| Upstream provider terms | Training opt-out, data residency, provider SLA. |
| Data flows | Source → AI provider → output, with category labels. |
| Training-data policy | A statement on whether customer data is used for training (defaults to "no"). |
| Acceptable use | Prohibited inputs/outputs, automated abuse detection. |
| Incident response | Classification (P0–P3), detection, response SLA, customer-notification commitment. |
| End-user disclosure | What users are told about AI processing. |
| Child safety | Statement on minors' data. |
| Conformity | Whether Annex IV technical documentation applies. |
Every row carries a source field — see Source citations.
Risk classification
Attestly assigns each AI system a default risk class using the following heuristics:
- If the system processes `special_category` data (Article 9 / Annex III) → High.
- If the system is in the credit, hiring, or law-enforcement domain (heuristic on file paths) → High.
- If the system generates user-facing content with no human review → Limited.
- Everything else → Minimal.
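The heuristics above can be sketched as a single fall-through function. This is an illustrative sketch, not Attestly's implementation: the `DetectedSystem` shape, `classifyRisk` name, and the file-path regex are assumptions.

```typescript
// Hypothetical input shape for a detected AI system.
type RiskClass = "minimal" | "limited" | "high" | "unacceptable";

interface DetectedSystem {
  processesSpecialCategoryData: boolean; // Article 9 / Annex III data
  filePath: string;                      // used by the domain heuristic
  userFacingOutput: boolean;
  humanInTheLoop: boolean;
}

// Assumed pattern for the credit / hiring / law-enforcement heuristic.
const HIGH_RISK_DOMAINS = /credit|hiring|law[-_]?enforcement/i;

function classifyRisk(s: DetectedSystem): RiskClass {
  if (s.processesSpecialCategoryData) return "high";
  if (HIGH_RISK_DOMAINS.test(s.filePath)) return "high";
  if (s.userFacingOutput && !s.humanInTheLoop) return "limited";
  return "minimal";
}
```

Note that the checks are ordered most- to least-severe, so a hiring system that also generates unreviewed content still lands on High.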
You can override the default per-system in the dashboard. The override is preserved across regenerations as long as the underlying detector key doesn't change.
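The preservation rule can be sketched as a merge over the regenerated systems, keyed by detector key. `detectorKey`, `applyOverrides`, and the `Map` of persisted overrides are assumed names for illustration; Attestly's storage model may differ.

```typescript
type RiskClass = "minimal" | "limited" | "high" | "unacceptable";

interface GeneratedSystem {
  detectorKey: string; // stable identifier emitted by the detector
  riskClass: RiskClass; // default assigned by the heuristics
}

// Overrides persisted from the dashboard, keyed by detector key.
function applyOverrides(
  systems: GeneratedSystem[],
  overrides: Map<string, RiskClass>,
): GeneratedSystem[] {
  return systems.map((s) =>
    overrides.has(s.detectorKey)
      ? { ...s, riskClass: overrides.get(s.detectorKey)! }
      : s, // no override, or the detector key changed → keep the default
  );
}
```

A system whose detector key changes between scans simply misses the map lookup, which is why a changed key silently drops the override back to the default.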
Schema
The generator output is constrained to this Zod schema:

```typescript
import { z } from "zod";

// `dataCategorySchema` is defined alongside this schema (not shown here).
const aiTrustCenterSchema = z.object({
  intro: z.string(),
  aiSystems: z.array(
    z.object({
      id: z.string(),
      purpose: z.string(),
      modelProvider: z.string(),
      modelName: z.string(),
      inputs: z.array(z.string()),
      outputs: z.array(z.string()),
      riskClass: z.enum(["minimal", "limited", "high", "unacceptable"]),
      riskRationale: z.string(),
      humanInTheLoop: z.boolean(),
      source: z.object({ filePath: z.string(), line: z.number().nullable() }),
    }),
  ),
  dataFlows: z.array(
    z.object({
      from: z.string(),
      to: z.string(),
      categories: z.array(dataCategorySchema),
      purpose: z.string(),
    }),
  ),
  trainingDataPolicy: z.string(),
  conformity: z.object({ annexIVApplicable: z.boolean(), notes: z.string() }),
});
```
The model cannot introduce a field that isn't in the schema. This is the single most important safety primitive in Attestly — it guarantees that every scan produces a document with exactly this structure, with no invented sections or fields.