The system prompt that defines an OpenClaw agent's personality, behavior, knowledge, and constraints — effectively its identity.
The SOUL is the instruction manual for your AI agent. It tells the agent who it is, how to behave, what it knows, and what it should never do. A well-crafted SOUL transforms a generic LLM into a specialized assistant with a consistent personality.
In OpenClaw, the SOUL is defined in the agent configuration. It includes the system prompt, behavioral guidelines, knowledge context, and safety constraints. The SOUL is sent as the system message in every LLM call.
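Because the SOUL is sent as the system message on every call, the runtime effectively prepends it to each request. A minimal sketch of that pattern (the `build_messages` helper, the `SOUL` text, and the "Acme" examples are illustrative, not OpenClaw's actual API):

```python
# Illustrative sketch: a SOUL-style system prompt injected into every LLM call.
# The message shape follows the common {"role", "content"} convention.

SOUL = (
    "You are a support agent for Acme Co. "      # personality
    "Answer only questions about Acme products. " # boundaries
    "Never share internal pricing data."          # constraint
)

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Prepend the SOUL as the system message on every request."""
    return (
        [{"role": "system", "content": SOUL}]
        + history
        + [{"role": "user", "content": user_input}]
    )

messages = build_messages([], "What plans do you offer?")
```

The key point: the SOUL is not stored in conversation history; it is re-attached to every request, which is why every token of it counts against the context window each time.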
Writing a good SOUL is one of the most important tasks when deploying an agent. It determines the quality of every interaction. Too vague and the agent behaves inconsistently. Too rigid and it cannot handle edge cases.
The SOUL is the single biggest factor in agent quality. Two agents with the same LLM but different SOULs will behave completely differently. It is the closest thing to "programming" an AI agent.
Clawctl supports SOUL configuration through the dashboard and openclaw.json. Version your SOUL alongside your agent configuration. Test changes before deploying to production.
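A hedged sketch of what SOUL settings might look like in openclaw.json. The field names (`soul`, `systemPrompt`, `guidelines`, `constraints`) are illustrative assumptions, not OpenClaw's documented schema; check the Clawctl dashboard or OpenClaw docs for the actual keys.

```json
{
  "agent": {
    "model": "<model-id>",
    "soul": {
      "systemPrompt": "You are a concise, friendly support agent for Acme Co.",
      "guidelines": [
        "Stay on topic",
        "Escalate billing disputes to a human"
      ],
      "constraints": [
        "Never reveal internal tooling",
        "Never give legal advice"
      ]
    }
  }
}
```

Keeping the SOUL in a versioned file like this lets you review prompt changes in pull requests the same way you review code changes.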
How long should a SOUL be? As long as needed, but typically 500-2000 tokens. Keep it focused: every token of SOUL consumes context window space.
Can I change an agent's SOUL after deployment? Yes. Update the system prompt in the Clawctl dashboard or openclaw.json, and the agent picks up the change.
What makes a good SOUL? A specific personality, clear boundaries, relevant knowledge context, and explicit constraints on what the agent should and should not do.
AI Agent Runtime
The execution environment that hosts an AI agent, managing its lifecycle, tool access, memory, and communication with LLM providers.
OpenClaw Architecture
The technical design of OpenClaw: a gateway that routes messages from 23+ channels to an LLM-powered agent with MCP tool integrations, running in a Docker sandbox.
AI Guardrails
Safety boundaries that constrain what an AI agent can and cannot do, preventing harmful or unintended actions.
Context Window
The maximum amount of text an LLM can process in a single request — including the conversation history, system prompt, and tool results.