Core Concepts

What Is a Local LLM?

Running a large language model on your own hardware instead of calling a cloud API, giving you full data privacy and zero per-token costs.

In Plain English

A local LLM runs on your machine or private server. Your data never leaves your infrastructure. There are no per-token API costs — just hardware and electricity.

OpenClaw supports local LLMs through Ollama integration. Configure Ollama as your LLM provider, point it at your local model, and the agent uses it just like any cloud provider. Same MCP tools, same channels, same security controls.
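To make "point it at your local model" concrete: Ollama serves an HTTP API on localhost:11434 by default, and a provider integration ultimately sends requests like the one sketched below. The endpoint and payload shape follow Ollama's documented /api/chat format; the helper function and the model choice are illustrative, not OpenClaw's actual internals.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming chat request against a local Ollama server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON response instead of a token stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3.1", "Summarize our refund policy in one sentence.")
# urllib.request.urlopen(req) would send it -- that requires `ollama serve` running.
```

Because the request targets your own machine, the prompt and any retrieved documents never cross the network boundary.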

Local models are smaller than cloud models (7B-70B parameters vs 100B+), so they trade some intelligence for privacy and cost savings. For many tasks — customer FAQ, document search, basic reasoning — local models perform well.

Why It Matters for OpenClaw

Some data cannot leave your network. Healthcare records, legal documents, financial data — if cloud API calls violate your data policy, local LLMs are the answer. They also eliminate per-token costs for high-volume use cases.

How Clawctl Helps

Clawctl supports Ollama as a first-class LLM provider. Configure it in the setup wizard just like Anthropic or OpenAI. All Clawctl security features — audit trails, approval workflows, egress filtering — work identically with local models.

Common Questions

Which local models work with OpenClaw?

Any model supported by Ollama — Llama, Mistral, Phi, Gemma, and more. Check Ollama's model library for the full list.

What hardware do I need?

At least 16GB of RAM for 7B models and 32GB or more for 13B models. A GPU is recommended for faster inference but is not required.
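These figures can be derived with back-of-envelope arithmetic: weight memory is roughly parameter count times bytes per weight, plus runtime overhead for the KV cache and the inference engine. A minimal sketch, where the 20% overhead factor is an assumed fudge factor, not a measured value:

```python
def estimate_model_ram_gb(params_billion: float, bits_per_weight: int = 4,
                          overhead: float = 1.2) -> float:
    """Rough RAM needed for a quantized model: weights plus an assumed
    20% overhead for KV cache and runtime."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # decimal GB

# A 7B model at 4-bit quantization needs roughly 4.2 GB,
# which fits comfortably in a 16GB machine alongside the OS.
print(round(estimate_model_ram_gb(7), 1))   # → 4.2
print(round(estimate_model_ram_gb(13), 1))  # → 7.8
```

Running the same model unquantized at 16-bit weights roughly quadruples the requirement, which is why the RAM guidance scales faster than parameter count alone would suggest.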

Can I mix local and cloud models?

Yes. Use local models for routine tasks and cloud models for complex reasoning in a multi-agent setup.
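One way to sketch that routing idea: classify each task and dispatch routine categories to the local model, everything else to the cloud. The task categories, model names, and routing function below are illustrative assumptions, not Clawctl's actual mechanism.

```python
LOCAL_MODEL = "llama3.1"        # served by Ollama on this machine
CLOUD_MODEL = "cloud-default"   # placeholder for whichever cloud model you configure

# Illustrative set of task types considered cheap enough for a local model.
ROUTINE_TASKS = {"faq", "summarize", "document_search"}

def pick_model(task_type: str) -> str:
    """Route routine tasks to the local model; send the rest to the cloud."""
    return LOCAL_MODEL if task_type in ROUTINE_TASKS else CLOUD_MODEL

print(pick_model("faq"))       # → llama3.1
print(pick_model("planning"))  # → cloud-default
```

The practical effect is that high-volume, low-stakes traffic incurs no per-token cost, while occasional complex reasoning still gets a frontier model.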