Security

What Is an AI Agent Sandbox?

An isolated execution environment that constrains what an AI agent can access — files, network, processes — preventing it from affecting the host system or other agents.

In Plain English

A sandbox is a cage for your AI agent. It can do its job inside the cage, but it cannot reach outside. If the agent tries to access files outside its workspace, make network calls to unapproved domains, or spawn unauthorized processes — the sandbox blocks it.
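The two checks described above — paths confined to a workspace, network calls confined to an allowlist — can be sketched in a few lines of Python. This is illustrative logic only, not OpenClaw's implementation; the names `WORKSPACE` and `ALLOWED_DOMAINS` are hypothetical.

```python
from pathlib import Path

# Hypothetical policy values, for illustration only.
WORKSPACE = Path("/workspace/agent-1").resolve()
ALLOWED_DOMAINS = {"api.example.com", "pypi.org"}

def path_allowed(requested: str) -> bool:
    """True only if the path resolves inside the agent's workspace."""
    resolved = Path(requested).resolve()
    return resolved == WORKSPACE or WORKSPACE in resolved.parents

def domain_allowed(domain: str) -> bool:
    """True only if the destination domain is on the allowlist."""
    return domain.lower() in ALLOWED_DOMAINS

print(path_allowed("/workspace/agent-1/data.csv"))          # inside the cage
print(path_allowed("/workspace/agent-1/../../etc/passwd"))  # traversal blocked
print(domain_allowed("evil.example.net"))                   # not allowlisted
```

Note that the path check runs on the *resolved* path, so `..` traversal tricks are caught before comparison.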

OpenClaw uses Docker-based sandboxing. Each agent runs in its own container with a restricted file system, controlled network access, and limited process capabilities. The sandbox is the first line of defense against both agent mistakes and adversarial attacks.
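As a rough sketch of what such a restricted container looks like at the Docker CLI level (illustrative flags only — the image and network names are hypothetical, and this is not Clawctl's actual invocation):

```shell
# Read-only root filesystem; only the workspace mount is writable.
# Custom network carries the egress rules; capabilities and process
# count are capped; privilege escalation is disabled.
docker run --rm \
  --read-only \
  --mount type=bind,src=/srv/agents/agent-1,dst=/workspace \
  --network agent-net \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 64 \
  agent-img
```

Each flag maps to one of the restrictions above: file system, network, and process capabilities.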

Sandboxing is especially important when agents execute code. Without a sandbox, an agent running a Python script can read or write anything the user running it can reach on the host. The sandbox ensures it can only access its own workspace.

Why It Matters for OpenClaw

An unsandboxed agent with code execution capability has the same access as the user running it. If that user is root, the agent can do anything — install malware, exfiltrate data, delete the server. Sandboxing prevents this.

How Clawctl Helps

Clawctl deploys every agent in a Docker sandbox with per-agent file system isolation, network policies, and process restrictions. The Docker socket proxy ensures agents can only manage their own containers.
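The socket-proxy pattern mentioned above is commonly built with a filtering proxy in front of the Docker socket, so agents never touch the raw socket. A minimal sketch, assuming the `tecnativa/docker-socket-proxy` image as the filter (this is an illustration of the general pattern, not Clawctl's actual configuration):

```yaml
# Illustrative Compose sketch: the agent talks to a filtered proxy,
# never to /var/run/docker.sock directly.
services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1        # allow read-only container queries
      POST: 0              # deny mutating requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  agent:
    image: agent-img       # hypothetical agent image
    environment:
      DOCKER_HOST: tcp://docker-proxy:2375
```

The proxy holds the only reference to the real socket, mounted read-only, and rejects any API call outside its allowlist.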

Try Clawctl — 60-Second Deploy

Common Questions

What does the sandbox restrict?

File system access (workspace only), network calls (allowlisted domains only), process creation (limited), and host system access (blocked).

Can the agent break out of the sandbox?

No isolation is absolute — containers share the host kernel, so a kernel exploit could in principle escape. In practice Docker provides strong isolation, and Clawctl adds defense in depth: the socket proxy, network policies, and egress filtering each have to fail before an escape matters.

Does sandboxing affect performance?

Minimal overhead. Docker containers add less than 1% performance cost compared to bare-metal execution.