Run OpenClaw Securely in Docker Sandboxes
Docker Sandboxes is a new ecosystem feature that lets AI agents and other workloads run inside isolated micro VMs. It delivers robust isolation, a developer-friendly experience, and security boundaries with a configurable network proxy that can restrict agent connections to specific internet hosts. The network proxy automatically injects API credentials (such as ANTHROPIC_API_KEY or OPENAI_API_KEY) so agents cannot access or leak them.
Docker Sandboxes also lets you install the tools an agent needs: a JDK for Java projects, custom CLIs, and so on. This post shows how to run OpenClaw, an open-source AI coding agent, on local models via Docker Model Runner: no API keys, no cloud costs, fully private. And you can do it in 2-ish commands.
Quick Start
Pull a model and spin up a sandbox:
docker model pull ai/gpt-oss:20B-UD-Q4_K_XL
docker sandbox create --name openclaw -t olegselajev241/openclaw-dmr:latest shell .
docker sandbox network proxy openclaw --allow-host localhost
docker sandbox run openclaw
Once inside the sandbox, launch OpenClaw:
~/start-openclaw.sh
You now have OpenClaw’s terminal interface talking to a local gpt-oss model. The model runs in Docker Model Runner on the host, while OpenClaw stays fully isolated: it can only read and write workspace files, and all network access goes through the sandbox proxy.
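If the TUI can't reach a model, it helps to confirm from the host that Docker Model Runner is actually serving. A quick check, using the default port and OpenAI-compatible path described later in this post (adjust if your setup differs):

```shell
# Default Docker Model Runner endpoint on the host.
DMR_URL="http://localhost:12434/engines/llama.cpp/v1"
# Should return a JSON list that includes the model you pulled.
curl -s "$DMR_URL/models" || echo "Model Runner not reachable on 12434"
```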
Using Cloud Models
The sandbox proxy automatically injects API keys from the host environment. If you have ANTHROPIC_API_KEY or OPENAI_API_KEY set, OpenClaw can use cloud models specified in its settings. The proxy ensures credentials stay protected inside the sandbox, so you can seamlessly switch between free local models for experimentation and cloud models for production work.
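Concretely, assuming the proxy reads the host environment as described above, exporting a key before creating the sandbox is all it takes (the key value below is a placeholder):

```shell
# Placeholder value; use your real key. The sandbox proxy injects it into
# outbound API requests, so the agent never sees the raw credential.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
docker sandbox create --name openclaw -t olegselajev241/openclaw-dmr:latest shell .
```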
Choosing a Model
List the models available to Docker Model Runner:
~/start-openclaw.sh list
Use a specific model:
~/start-openclaw.sh ai/qwen2.5:7B-Q4_K_M
Any model you’ve pulled with docker model pull becomes available.
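To see what's already on the host, the Model Runner CLI has a list command (a sketch; command names and output may vary by Docker version):

```shell
# Show locally pulled models, pull another, then point OpenClaw at it.
MODEL="ai/qwen2.5:7B-Q4_K_M"
docker model ls
docker model pull "$MODEL"
~/start-openclaw.sh "$MODEL"
```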
How It Works
The pre-built image (olegselajev241/openclaw-dmr:latest) bundles Node.js 22, OpenClaw, and a networking bridge based on the shell sandbox template.
The bridge exists because of how localhost resolves inside a sandbox. Docker Model Runner listens on localhost:12434 on the host, but inside the sandbox localhost refers to the sandbox itself. Sandboxes expose an HTTP proxy at host.docker.internal:3128 for reaching host services. Since Node.js's built-in HTTP client doesn't honor the HTTP_PROXY environment variable, a small (~20-line) bridge script listens on 127.0.0.1:54321 and explicitly forwards requests through the proxy:
OpenClaw → bridge (localhost:54321) → proxy (host.docker.internal:3128) → Model Runner (host localhost:12434)
The start-openclaw.sh script starts the bridge, launches OpenClaw’s gateway (with proxy variables cleared), and runs the TUI.
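You can exercise each hop of that chain by hand from inside the sandbox, which is handy when debugging connectivity (paths follow the Model Runner defaults used in this post):

```shell
PROXY="http://host.docker.internal:3128"
BRIDGE="http://127.0.0.1:54321"
# Hops 2+3: ask the sandbox proxy to forward to Model Runner on the host.
curl -s -x "$PROXY" http://localhost:12434/engines/llama.cpp/v1/models
# Hop 1: once start-openclaw.sh has started the bridge, the same request
# works against the bridge directly.
curl -s "$BRIDGE/engines/llama.cpp/v1/models" || echo "bridge not running yet"
```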
Build Your Own
Want to customise the image? Here’s how to build one from scratch.
Step 1 – Create a Base Sandbox and Install OpenClaw
docker sandbox create --name my-openclaw shell .
docker sandbox network proxy my-openclaw --allow-host localhost
docker sandbox run my-openclaw
Inside the sandbox, install Node.js 22 and OpenClaw:
# Install Node 22 (OpenClaw requirement)
npm install -g n && n 22
hash -r
# Install OpenClaw
npm install -g openclaw@latest
# Run initial setup
openclaw setup
Step 2 – Create the Model Runner Bridge
A minimal Node.js HTTP server forwards requests through the sandbox proxy to Docker Model Runner on the host:
cat > ~/model-runner-bridge.js << 'EOF'
const http = require("http");
const { URL } = require("url");
const PROXY = new URL(process.env.HTTP_PROXY || "http://host.docker.internal:3128");
const TARGET = "localhost:12434";
// Forward each incoming request through the sandbox proxy to Model Runner.
http.createServer((req, res) => {
  const proxyReq = http.request({
    hostname: PROXY.hostname,
    port: PROXY.port,
    // Absolute-form request line tells the proxy the real destination.
    path: "http://" + TARGET + req.url,
    method: req.method,
    headers: { ...req.headers, host: TARGET }
  }, proxyRes => {
    res.writeHead(proxyRes.statusCode, proxyRes.headers);
    proxyRes.pipe(res);
  });
  proxyReq.on("error", e => { res.writeHead(502); res.end(e.message); });
  req.pipe(proxyReq);
}).listen(54321, "127.0.0.1");
EOF
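Before wiring the bridge into OpenClaw, you can smoke-test it on its own (assuming the proxy and Model Runner defaults above):

```shell
BRIDGE_PORT=54321
# Run the bridge in the background, give it a moment, then query it.
node ~/model-runner-bridge.js & BRIDGE_PID=$!
sleep 1
curl -s "http://127.0.0.1:$BRIDGE_PORT/engines/llama.cpp/v1/models" \
  || echo "no response; check the proxy and Model Runner"
kill $BRIDGE_PID 2>/dev/null || true
```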
Step 3 – Configure OpenClaw for Docker Model Runner
Merge the Docker Model Runner provider into OpenClaw’s configuration:
python3 -c "
import json
p = '$HOME/.openclaw/openclaw.json'
with open(p) as f: cfg = json.load(f)
cfg['models'] = cfg.get('models', {})
cfg['models']['mode'] = 'merge'
cfg['models']['providers'] = cfg['models'].get('providers', {})
cfg['models']['providers']['docker-model-runner'] = {
    'baseUrl': 'http://127.0.0.1:54321/engines/llama.cpp/v1',
    'apiKey': 'not-needed',
    'api': 'openai-completions',
    'models': [{
        'id': 'ai/qwen2.5:7B-Q4_K_M',
        'name': 'Qwen 2.5 7B (Docker Model Runner)',
        'reasoning': False, 'input': ['text'],
        'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0},
        'contextWindow': 32768, 'maxTokens': 8192
    }]
}
cfg['agents'] = cfg.get('agents', {})
cfg['agents']['defaults'] = cfg['agents'].get('defaults', {})
cfg['agents']['defaults']['model'] = {'primary': 'docker-model-runner/ai/qwen2.5:7B-Q4_K_M'}
cfg['gateway'] = {'mode': 'local'}
with open(p, 'w') as f: json.dump(cfg, f, indent=2)
"
Step 4 – Save and Share
Exit the sandbox and save it as a reusable image:
docker sandbox save my-openclaw my-openclaw-image:latest
Push to a registry so others can use it:
docker tag my-openclaw-image:latest yourname/my-openclaw:latest
docker push yourname/my-openclaw:latest
Anyone can then spin up your environment:
docker sandbox create --name openclaw -t yourname/my-openclaw:latest shell .
Wrapping Up
Docker Sandboxes make it easy to run any AI coding agent in an isolated, reproducible environment. With Docker Model Runner, you get a fully local AI coding setup: no cloud dependencies, no API costs, and complete privacy.