The requirement to disclose when users are interacting with an AI agent rather than a human, and to make the agent's decision-making process observable.
AI transparency has two dimensions. First, disclosure: users must know they are talking to an AI, not a human. This is a legal requirement under the EU AI Act and increasingly expected by users everywhere.
Second, observability: operators need to understand why the agent made specific decisions. Which tools did it call? What data did it consider? Why did it choose one action over another? Audit trails provide this observability.
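The three observability questions above can be sketched as a minimal audit-trail logger. This is an illustrative sketch only; the class, method, and field names are assumptions, not Clawctl's actual API.

```python
import json
import time

class AuditTrail:
    """Chronological record of agent actions (illustrative sketch)."""

    def __init__(self):
        self.entries = []

    def log_tool_call(self, tool, arguments, result, reason):
        # Each entry answers the three observability questions:
        # which tool was called, what data it considered, and why
        # the agent chose this action over another.
        self.entries.append({
            "timestamp": time.time(),
            "tool": tool,
            "arguments": arguments,
            "result": result,
            "reason": reason,
        })

    def export(self):
        # Serialize the full trail for compliance review or debugging.
        return json.dumps(self.entries, indent=2)

trail = AuditTrail()
trail.log_tool_call(
    tool="search_orders",
    arguments={"customer_id": 42},
    result="3 orders found",
    reason="User asked about recent order status",
)
print(trail.export())
```

Keeping the `reason` field alongside the raw call data is what turns a plain log into an audit trail an operator can actually debug against.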
Transparency builds trust. Users who know they are talking to AI set appropriate expectations. Operators who can observe agent reasoning can improve and debug it.
Deceptive AI — agents that pretend to be human — erodes public trust and is increasingly illegal. Opaque AI — agents whose decisions cannot be explained — is a liability and compliance risk.
Clawctl supports transparency through audit trails (full observability into agent actions), configurable AI disclosure messages, and tool call logging that shows exactly why the agent took each action.
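As a rough illustration of the disclosure half, the sketch below prepends an AI notice to an agent's first reply. The message text and function name are assumptions for illustration, not Clawctl's configuration format.

```python
# Illustrative sketch of up-front AI disclosure; the wording and
# function name are assumptions, not part of Clawctl.
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def open_conversation(first_reply: str) -> str:
    # Prepend the disclosure so the user knows they are talking to an AI
    # before any substantive exchange takes place.
    return f"{DISCLOSURE}\n\n{first_reply}"

print(open_conversation("Hi! How can I help with your order?"))
```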
Is AI disclosure legally required?
Yes, under the EU AI Act for AI systems interacting with people. Other jurisdictions are adopting similar rules.
How does Clawctl handle disclosure?
Configure disclosure messages in the agent SOUL. The audit trail provides full observability into agent decision-making.
Can I see why an agent took a specific action?
Yes. The audit trail logs tool calls, LLM reasoning, and action decisions for every interaction.
Responsible AI
The practice of deploying AI agents with intentional safeguards for fairness, transparency, accountability, and safety.
EU AI Act
The European Union's comprehensive regulation on artificial intelligence, establishing risk-based rules for AI systems including transparency, human oversight, and documentation requirements.
Audit Trail
A chronological record of every action an AI agent takes, providing accountability, compliance evidence, and forensic capability.
AI Compliance
Meeting regulatory and organizational requirements for deploying AI agents in production — including audit trails, data handling, and accountability.