GPT-5.4 + OpenClaw: Complete Setup Guide for Production (2026)
OpenAI just shipped GPT-5.4. It's their most capable model yet — 1M token context, built-in computer use, and 33% fewer errors than GPT-5.2.
This guide walks through setting up OpenClaw with GPT-5.4 from initial configuration to production deployment.
Why GPT-5.4 for OpenClaw
GPT-5.4 was built for agentic work. Here's what matters for OpenClaw users:
- 1M token context window — Analyze entire codebases, long document collections, or extended agent trajectories in a single request
- Built-in computer use — First mainline OpenAI model that can interact directly with software in a build-run-verify-fix loop
- 33% fewer errors — Individual claims are 33% less likely to be wrong compared to GPT-5.2
- Token efficiency — Accomplishes identical tasks while consuming substantially fewer tokens than earlier models
- Compaction support — Trained to compress long agent trajectories while preserving key context
For most OpenClaw use cases, GPT-5.4 provides an excellent balance of capability and cost. GPT-5.4 Pro is the higher-end option for complex enterprise workflows where maximum accuracy justifies the price.
Prerequisites
Before starting:
- OpenAI API key — Get one at platform.openai.com
- OpenClaw installed — Follow the official installation guide
- Node.js 22+ — Required for OpenClaw
Step 1: Configure Your API Key
Add your OpenAI API key to the OpenClaw configuration.
Option A: Environment variable (recommended)
```shell
export OPENAI_API_KEY="sk-..."
```
Option B: Auth profile via CLI
```shell
openclaw configure --section model
```
The interactive wizard will prompt for your OpenAI credentials. You can also set it directly:
```shell
openclaw config set auth.profiles.openai:default '{"provider":"openai","mode":"api_key"}' --strict-json
```
Option C: Configuration file
In your ~/.openclaw/openclaw.json:
```json
{
  "env": {
    "OPENAI_API_KEY": "sk-..."
  }
}
```
Security note: For production, use environment variables or a secrets manager. Don't commit API keys to version control.
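One pattern that keeps the key out of shell history and dotfiles is a dedicated, permission-restricted secrets file. The path below is an assumption for illustration, not an OpenClaw convention:

```shell
# Sketch: store the key in a mode-600 file and load it at shell startup.
# The path and filename are assumptions, not OpenClaw conventions.
keyfile="$HOME/.config/openclaw/openai_key"
mkdir -p "$(dirname "$keyfile")"
printf '%s\n' "sk-example" > "$keyfile"   # placeholder value
chmod 600 "$keyfile"                       # readable only by you
export OPENAI_API_KEY="$(cat "$keyfile")"
echo "$OPENAI_API_KEY"
```

A secrets manager (Vault, AWS Secrets Manager, etc.) is the sturdier choice for shared or cloud deployments.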
Step 2: Select Your Model
Configure GPT-5.4 as your primary model:
```json
{
  "agent": {
    "workspace": "~/.openclaw/workspace",
    "model": {
      "primary": "openai/gpt-5.4"
    }
  }
}
```
Or via the CLI:
```shell
openclaw config set agent.model.primary "openai/gpt-5.4"
```
Model options:
| Model | Best For | Context | Speed | Cost |
|---|---|---|---|---|
| openai/gpt-5.4 | General use, agentic tasks, coding | 1M | Fast | $$ |
| openai/gpt-5.4-pro | Complex enterprise workflows, max accuracy | 1M | Slower | $$$$ |
Recommendation: Start with openai/gpt-5.4. Upgrade to Pro only if you need maximum accuracy on high-stakes tasks. The standard model handles the vast majority of agent workloads.
Want fallbacks? Set a fallback model in case OpenAI has an outage:
```json
{
  "agent": {
    "model": {
      "primary": "openai/gpt-5.4",
      "fallbacks": ["anthropic/claude-sonnet-4-5"]
    }
  }
}
```
Step 3: Configure Your Agent
Set your agent's identity and workspace:
```json
{
  "identity": {
    "name": "Atlas",
    "theme": "senior software engineer",
    "emoji": "🤖"
  },
  "agent": {
    "workspace": "~/.openclaw/workspace",
    "model": { "primary": "openai/gpt-5.4" }
  }
}
```
OpenClaw uses bootstrap files in your workspace to shape agent behavior:
- `AGENTS.md` — Operating instructions and memory
- `SOUL.md` — Persona, boundaries, tone
- `TOOLS.md` — Tool usage notes and conventions
Edit these files to define your agent's personality and expertise. They're injected into the agent context at the start of every session.
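For example, a minimal `AGENTS.md` might start like this (illustrative content, not an OpenClaw requirement):

```markdown
# Operating Instructions

- You are Atlas, a senior software engineer.
- Prefer small, verifiable changes; run checks before declaring success.
- Keep long-term notes in this file and update them as conventions change.
```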
Step 4: Configure Tools
Control which tools your agent can use:
```json
{
  "tools": {
    "allow": ["exec", "read", "write", "edit"],
    "deny": ["browser"]
  }
}
```
Or via the CLI:
```shell
openclaw config set tools.allow '["exec","read","write","edit"]' --strict-json
openclaw config set tools.deny '["browser"]' --strict-json
```
Security principle: Only enable tools your agent actually needs. Disable everything else.
GPT-5.4's built-in computer use capabilities make it especially effective at tool-heavy workflows — it can interact with software directly, verify results, and fix issues in a loop. But more capability means tighter controls matter even more.
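As a mental model, the allow/deny lists act like a simple filter. The sketch below assumes deny entries win over allow entries and that everything else is denied by default; it illustrates the principle above, not OpenClaw's internals:

```shell
# Space-separated stand-ins for the config's allow/deny arrays.
allow="exec read write edit"
deny="browser"

# Returns success (0) only when a tool is allowed and not denied.
is_enabled() {
  tool="$1"
  for d in $deny;  do [ "$d" = "$tool" ] && return 1; done
  for a in $allow; do [ "$a" = "$tool" ] && return 0; done
  return 1   # default-deny anything not explicitly allowed
}

is_enabled exec    && echo "exec: enabled"
is_enabled browser || echo "browser: disabled"
```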
Step 5: Set Up Model Allowlist
Control which models are available for switching via the /model command:
```json
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5.4": { "alias": "gpt" },
        "openai/gpt-5.4-pro": { "alias": "gpt-pro" },
        "anthropic/claude-sonnet-4-5": { "alias": "sonnet" }
      }
    }
  }
}
```
This lets you switch between models on the fly without editing config.
Step 6: Test Locally
Start the OpenClaw gateway:
```shell
openclaw gateway run
```
Test with a message:
```shell
openclaw message send "What model are you running?"
```
The response should confirm that GPT-5.4 is active.
Check gateway health:
```shell
openclaw status
```
Verify tools work by asking the agent to list files or run a command.
Step 7: Production Configuration
For production, lock down the gateway:
```json
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "loopback",
    "auth": {
      "mode": "token",
      "token": "your-secure-gateway-token"
    },
    "controlUi": { "enabled": false }
  },
  "agent": {
    "workspace": "~/.openclaw/workspace",
    "model": {
      "primary": "openai/gpt-5.4",
      "fallbacks": ["anthropic/claude-sonnet-4-5"]
    }
  },
  "channels": {
    "whatsapp": {
      "allowFrom": ["+15555550123"]
    }
  }
}
```
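For the `token` value, any long high-entropy random string works. One way to mint one, assuming `openssl` is available (any CSPRNG-backed generator is equivalent):

```shell
# 32 random bytes, hex-encoded: a 64-character token.
token="$(openssl rand -hex 32)"
echo "${#token}"   # 64
```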
Key production settings:
- Bind to loopback only — no external access without a reverse proxy
- Token auth on the gateway — prevents unauthorized API access
- Disable Control UI — reduces attack surface
- Set `allowFrom` on channels — only authorized senders can interact
Common Issues
"Model not found" Error
Model refs use provider/model format (split on first /):
- ✅ `openai/gpt-5.4`
- ❌ `gpt5.4`
- ❌ `openai:gpt-5.4`
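You can reproduce the split with plain shell parameter expansion:

```shell
# Model refs split on the FIRST slash: provider on the left,
# model id (which may itself contain dots and dashes) on the right.
ref="openai/gpt-5.4"
provider="${ref%%/*}"   # openai
model="${ref#*/}"       # gpt-5.4
echo "$provider $model"
```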
"Invalid API key" Error
Check that:
- Your key starts with `sk-`
- The key has not been revoked
- The environment variable is set: `echo $OPENAI_API_KEY`
- Or your auth profile is configured: `openclaw config get auth`
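The presence and prefix checks are easy to script. A hedged sketch with a hypothetical helper (it mirrors the checklist above and cannot detect a revoked key):

```shell
# Hypothetical helper: classifies the key by presence and sk- prefix only.
check_key() {
  case "${OPENAI_API_KEY:-}" in
    "")   echo "missing" ;;
    sk-*) echo "ok" ;;
    *)    echo "bad-prefix" ;;
  esac
}

OPENAI_API_KEY="sk-test"
check_key   # ok
```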
High Latency
If responses are slow:
- Check if you're sending >272K input tokens (higher-tier pricing and potentially different routing)
- Reduce context size by limiting workspace scope
- Use `openai/gpt-5.4` instead of `openai/gpt-5.4-pro` for faster responses
- Check your network connection to OpenAI's API
Token Limit Errors
If you hit token limits:
- GPT-5.4 supports up to 1M context — increase input limits if analyzing large codebases
- Consider breaking tasks into smaller steps for cost efficiency
- Use the `/model` command to switch to a cheaper model for simpler tasks
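To gauge input size before sending, a common rule of thumb is roughly 4 characters per token for English text (an estimate only, not the model's real tokenizer):

```shell
# ~4 characters per token is a rough heuristic for English prose.
text="Analyze this repository and summarize the build system."
chars=${#text}
echo "$(( chars / 4 )) tokens (approx.)"   # 13 tokens (approx.)
```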
Cost Management
Estimated costs (as of March 2026):
| Model | Input (<272K) | Output (<272K) | Input (>272K) | Output (>272K) |
|---|---|---|---|---|
| GPT-5.4 | $2.50/M | $15/M | $5/M | $22.50/M |
| GPT-5.4 Pro | $30/M | $180/M | $60/M | $270/M |
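Using the sub-272K GPT-5.4 rates from the table, a single request costs input_tokens at $2.50 per million plus output_tokens at $15 per million:

```shell
# Cost of one sub-272K GPT-5.4 request at the rates in the table above.
input_tokens=120000
output_tokens=4000
awk -v i="$input_tokens" -v o="$output_tokens" \
  'BEGIN { printf "$%.2f\n", i / 1e6 * 2.50 + o / 1e6 * 15 }'
# $0.36
```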
Cost optimization tips:
- Use GPT-5.4 by default — Pro only when you need maximum accuracy
- Stay under 272K input tokens — above that threshold input pricing doubles and output pricing rises 50%
- Leverage token efficiency — GPT-5.4 uses fewer tokens than GPT-5.2 for equivalent tasks
- Use cached input — Repeated prompts cost only $0.25/M (10x cheaper)
- Set fallbacks to cheaper models — Use Claude Sonnet or Gemini Flash as fallbacks
- Monitor usage — Check OpenAI dashboard regularly
With Clawctl
If you're using Clawctl, GPT-5.4 configuration is simpler. Sign up at clawctl.com/checkout, and the dashboard setup wizard walks you through it:
- Select OpenAI as your provider
- Paste your API key
- Clawctl validates the connection and deploys
No config files. No environment variables. Just pick your model and go.
Clawctl handles:
- Secure credential storage (AES-256-GCM encrypted)
- Production gateway security
- Token usage monitoring
- Cost alerts