Run Ollama and OpenClaw on One Machine (5-Minute Setup)

Your GPU runs the model. OpenClaw runs the tools. One machine, full control. Step-by-step setup for local agents that actually do things.

Clawctl Team

Product & Engineering

You've got a GPU. You're running Ollama. You want your local model to do things—run code, hit APIs, send messages—without opening the door to every script on the internet.

OpenClaw is the execution layer. Same box. One setup.

Why This Combo Works

Ollama                                   | OpenClaw
-----------------------------------------|---------------------------------------------
Runs the LLM (Llama, Mistral, etc.)      | Runs the tools (shell, HTTP, approvals)
OpenAI-compatible API on localhost:11434 | Validates, sandboxes, and logs every action
You own the weights                      | You own the guardrails

Your model thinks. OpenClaw acts—and only when it's allowed.

Step 1: Ollama (If You Don't Have It Yet)

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
ollama serve   # Exposes API at http://localhost:11434/v1

Leave it running. Ollama's OpenAI-compatible API already supports tool calling for models like Llama 3.2.
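
Worth a quick smoke test before moving on. Ollama's OpenAI-compatible endpoint answers standard chat completion requests, so one curl confirms the model is pulled and responding:

# Should return a JSON chat completion from llama3.2
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Say hello"}]}'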

Step 2: OpenClaw on the Same Machine

curl -fsSL https://openclaw.ai/install.sh | bash
# Or with Docker: docker run -d -p 8080:8080 openclaw/openclaw

OpenClaw will talk to Ollama over localhost. No extra network exposure.
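If you used the Docker route, confirm the container is listening before wiring anything up. The exact endpoint depends on your OpenClaw build; the /health path below is an assumption, so swap in whatever your version exposes:

# Hypothetical health check; adjust the path for your OpenClaw version
curl http://localhost:8080/health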

Step 3: Point OpenClaw at Ollama

In your OpenClaw config:

llm:
  name: ollama-local
  type: openai-compatible
  base_url: http://localhost:11434/v1
  model: llama3.2
  timeout_ms: 60000

That's it. OpenClaw sends prompts to Ollama; Ollama returns tool_calls; OpenClaw executes them in a sandbox and sends results back.
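Under the hood this is the standard OpenAI tools protocol. You can watch the raw exchange by calling Ollama directly with a tool definition; the run_shell schema below is a hand-written illustration for this test, not something OpenClaw emits verbatim:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "What files are in /tmp?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "run_shell",
        "description": "Run a read-only shell command",
        "parameters": {
          "type": "object",
          "properties": {"command": {"type": "string"}},
          "required": ["command"]
        }
      }
    }]
  }'

If the model decides to use the tool, the response carries a tool_calls array instead of plain text. That array is exactly what OpenClaw intercepts, validates, and executes.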

Step 4: Define a Few Tools

tools:
  - name: run_shell
    type: shell
    sandbox: true
    permissions: [read_only]
  - name: send_slack
    type: http
    method: POST
    url: https://hooks.slack.com/services/YOUR/WEBHOOK

Your local Llama can now ask to run a command or post to Slack. OpenClaw decides yes/no, runs it, logs it.
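Slack incoming webhooks are easy to test on their own, which separates "the webhook is broken" from "the agent never called it". A plain POST with a text payload should land in your channel:

# Test the webhook directly, outside the agent loop
curl -X POST -H 'Content-Type: application/json' \
  -d '{"text": "Hello from my local agent stack"}' \
  https://hooks.slack.com/services/YOUR/WEBHOOK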

One Machine, Full Stack

  • Development: Ollama + OpenClaw on your laptop. No cloud bills. (See the Compose sketch after this list.)
  • Production: Same config, but deploy OpenClaw with Clawctl for SSL, auth, and audit—Ollama can stay on your GPU server.
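
If you'd rather run both halves as containers, here's a minimal docker-compose sketch. The openclaw/openclaw image comes from the install step above; the volume name and service wiring are assumptions to adapt:

# docker-compose.yml: a minimal sketch; adjust images, volumes, and config mounts
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-models:/root/.ollama
  openclaw:
    image: openclaw/openclaw
    ports:
      - "8080:8080"
    depends_on:
      - ollama
    # Inside Compose, "localhost" is the container itself, so point
    # OpenClaw's base_url at http://ollama:11434/v1 instead.
volumes:
  ollama-models: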

Connect your GPU-hosted LLM →
Deploy with guardrails →

Ready to deploy OpenClaw securely?

Get OpenClaw running in production with Clawctl's enterprise-grade security.