Air-gapped deployment (Ollama)
For regulated environments, Autopilot can run with local models via Ollama. This keeps all reasoning on-prem while preserving the same approval and evidence-pack workflow.
Approach
- Run Ollama on a host reachable from the OpenClaw gateway
- Configure OpenClaw to use an Ollama model endpoint
- Install Autopilot with `--skip-tailscale` if VPN isn’t available
- Keep the runtime service bound to localhost
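Before wiring up the gateway, it is worth confirming the Ollama API is actually reachable. A minimal sketch, assuming the default Ollama API port (11434); `OLLAMA_HOST` and `OLLAMA_PORT` are placeholders for your environment:

```shell
# Connectivity check from the gateway host. Host/port values here are
# assumptions for your environment; 11434 is Ollama's default API port.
OLLAMA_HOST="${OLLAMA_HOST:-127.0.0.1}"
OLLAMA_PORT="${OLLAMA_PORT:-11434}"

# /dev/tcp is a bash redirection feature, so no curl is required.
if (exec 3<>"/dev/tcp/${OLLAMA_HOST}/${OLLAMA_PORT}") 2>/dev/null; then
  echo "Ollama reachable at ${OLLAMA_HOST}:${OLLAMA_PORT}"
else
  echo "Ollama NOT reachable at ${OLLAMA_HOST}:${OLLAMA_PORT}"
fi
```

If the check fails, fix firewall rules or the Ollama bind address before continuing; everything downstream depends on this link.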
Tip: start with a smaller local model to keep hardware demands down, then upgrade the model once your workflow is stable.
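In practice this means pulling a small model first with the standard `ollama pull` command. The model tags below are examples, not recommendations; in a fully air-gapped environment, point the CLI at an internal registry mirror or import a pre-downloaded model with `ollama create` instead:

```shell
# Example model tags -- substitute whatever your mirror actually hosts.
SMALL_MODEL="llama3.1:8b"
LARGE_MODEL="llama3.1:70b"

# Pull the small model first; swap in "$LARGE_MODEL" once the workflow
# is stable. Guarded so a missing CLI reports cleanly instead of aborting.
ollama pull "$SMALL_MODEL" || echo "ollama CLI not on PATH; install it first"
```
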
Install without Tailscale
sudo ./install/install.sh --skip-tailscale
Then set your local provider details in the OpenClaw config.
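The exact OpenClaw config schema is not reproduced here, so the keys below are purely illustrative; the point is that the provider base URL targets the local Ollama API (default port 11434), so no traffic leaves the host:

```yaml
# Hypothetical OpenClaw config fragment -- key names are illustrative only;
# consult your OpenClaw version's reference for the real schema.
model_provider:
  type: ollama
  base_url: http://127.0.0.1:11434   # Ollama's default API port
  model: llama3.1:8b                 # example tag; use what you pulled locally
```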
Still safe
- Plans remain proposals
- Policies guard what is allowed
- Humans approve any action
- Evidence packs keep auditability