How to Run OpenClaw on a Raspberry Pi
Turn a £75 computer into an always-on AI agent. Here's how to set up OpenClaw on a Raspberry Pi 5.

Quick Answer
A Raspberry Pi 5 with 8GB RAM and NVMe SSD makes a solid always-on agent platform. Install Node.js, add OpenClaw, configure your integrations, and use Tulip for cloud inference rather than running local LLMs. Total setup time: 30 minutes. Annual electricity cost: around £5.
Why Run an Agent on a Raspberry Pi?
A Raspberry Pi sitting in your home is incredibly appealing as an always-on agent platform. Three reasons stand out:
Always On
Unlike your laptop, a Pi runs 24/7 without draining your battery. Perfect for background tasks: monitoring incoming emails, polling APIs, sending reminders, or processing work that doesn't need immediate responses.
Cost
The Pi 5 costs around £75 new. Electricity is negligible—roughly £5 per year. No cloud subscription needed for the compute. This is durable infrastructure that pays for itself quickly.
Privacy and Control
Everything runs on your network. Sensitive data stays local. You control what the agent can access. No vendor lock-in, no surprise pricing changes.
Hardware You'll Need
The Bare Minimum
Pi 5 with at least 4GB RAM. You can technically use a Pi 4, but it will be noticeably slower.
Recommended
Pi 5 8GB (around £60-70). NVMe SSD attached via an M.2 HAT (the Pi 5 exposes a PCIe connector, not a native M.2 slot); a modest drive runs about £20-30. Skip the SD card; it's too slow for anything serious. USB-C power supply rated for at least 5A. A case with a heatsink or fan (optional, but the Pi 5 runs warm under load).
Network
Wired Ethernet is preferred for reliability. WiFi works but introduces latency and disconnection risk.
Step-by-Step Setup
1. Install the OS
Download Raspberry Pi OS (the Lite variant, to save space). Use the official Raspberry Pi Imager to write it to your NVMe SSD; the Imager's advanced settings let you preconfigure SSH, WiFi, and a hostname before first boot. Boot the Pi, then run `sudo raspi-config` to enable SSH (if you didn't already) and expand the filesystem.
2. Install Node.js
OpenClaw runs on Node.js. Grab the ARM64 build from nodejs.org or install via apt (note that the apt version may lag behind). Check the version: `node --version` should show v18 or higher. npm ships with Node; confirm it with `npm --version`.
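On Raspberry Pi OS, the NodeSource apt repository is one common way to get a current ARM64 build. The Node 20 setup URL below is an assumption; swap in whichever major version you want:

```shell
# Add the NodeSource repo for Node.js 20 and install it.
# The version in the URL is a choice, not a requirement -- anything v18+ works.
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Sanity checks: both should print a version number.
node --version
npm --version
```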
3. Install OpenClaw
Clone the OpenClaw repository or install via npm: `npm install -g openclaw`. Initialize a new project: `openclaw init my-agent`. This creates a config folder and example files.
4. Configure Your Integrations
OpenClaw connects to external services via plugins. Common ones: Slack (incoming webhooks), Gmail (OAuth), Twilio (SMS), Zapier (bridging to 5000+ apps). Edit the config file to add API keys and secrets. Store sensitive keys in environment variables, never in version control.
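A minimal sketch of the secrets-out-of-version-control pattern. The variable names and the `my-agent` path are placeholders; use whatever your plugin config actually expects:

```shell
# Keep API keys in an env file the agent reads at startup, locked down to
# your user, and make sure it can never be committed.
mkdir -p "$HOME/my-agent"
cat > "$HOME/my-agent/.env" <<'EOF'
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/changeme
INFERENCE_API_KEY=changeme
EOF
chmod 600 "$HOME/my-agent/.env"        # readable by you only
echo ".env" >> "$HOME/my-agent/.gitignore"
```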
5. Set Up Inference
Don't run a local LLM on the Pi. It's too slow and memory-hungry. Instead, point OpenClaw to a cloud API: Tulip, OpenAI, Anthropic, or any OpenAI-compatible endpoint. Add your API key to the config. Test with a simple query.
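What that part of the config might look like, sketched in YAML. The actual file format and key names depend on your OpenClaw version, so treat every name here as illustrative; check the example files `openclaw init` generates:

```
inference:
  provider: openai-compatible          # Tulip, OpenAI, Anthropic, or any compatible endpoint
  base_url: https://api.example.com/v1
  model: small-fast-model              # pick per latency and cost needs
  api_key: ${INFERENCE_API_KEY}        # resolved from the environment, never hard-coded
```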
6. Test Locally
Run `openclaw start --dev` to test your agent. Send a message through Slack or email to trigger it. Watch the logs for errors. Debug configuration issues before moving to production.
7. Set Up Autostart
Create a systemd service file at `/etc/systemd/system/openclaw.service`:
```
[Unit]
Description=OpenClaw Agent
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/my-agent
ExecStart=/usr/bin/node /usr/local/lib/node_modules/openclaw/bin/start.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it: `sudo systemctl enable openclaw`. Start it: `sudo systemctl start openclaw`. Check status: `sudo systemctl status openclaw`.
Managing Remote Access
You likely won't always be at home. Tailscale VPN is the simplest way to securely access your Pi from anywhere without exposing it to the internet.
Install Tailscale
Download the ARM64 package and install. Run `sudo tailscale up` and authenticate. Your Pi gets a private IP on your Tailscale network.
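The sequence looks like this; Tailscale's official install script detects the ARM64 build for you, and `up` is interactive (it prints a login URL to open in a browser):

```shell
# Install Tailscale via the official script, then join your tailnet.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up       # prints an auth URL; open it and log in
tailscale ip -4         # shows the Pi's private Tailscale IPv4 address
```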
Access Anywhere
Connect any device to the same Tailscale network. SSH into your Pi using its Tailscale IP. Check agent logs, restart services, update configurations—all securely.
Limitations and Trade-offs
Compute Power
The Pi is not a server. Don't expect it to handle 1000 concurrent users or process huge datasets. It's perfect for an agent managing your personal workflow or a small team.
Internet Dependency
Your agent needs reliable internet access. A brief outage pauses your agent's work. If that's unacceptable, add offline queuing to buffer requests.
Storage
A 60GB SSD fills up fast if you're storing logs and data. Implement log rotation and archive old data to cloud storage.
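If the agent writes its own log files, a logrotate drop-in keeps them bounded. The paths below assume logs land in `/home/pi/my-agent/logs/`; adjust to wherever yours actually go:

```
# /etc/logrotate.d/openclaw -- rotate weekly, keep four compressed generations
/home/pi/my-agent/logs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```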
When to Upgrade
As your agent grows—more integrations, more complex logic, higher volume—you'll hit the Pi's ceiling. At that point, Tulip (or a similar managed platform) handles the infrastructure for you. You keep your OpenClaw config; we host it.
Troubleshooting Common Issues
Agent Crashes on Startup
Check the systemd logs: `sudo journalctl -u openclaw -n 50`. Usually a configuration error or missing API key.
Slow Inference
If inference takes >10 seconds, your API provider is overloaded or the model is too large. Switch to a faster model or a different provider.
Network Timeouts
Use Tailscale for stability. If using direct internet, ensure your router isn't throttling or dropping long-lived connections.
High CPU Usage
Node.js might be spinning on something. Profile with `node --prof` and `node --prof-process` to identify hot paths.
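Concretely, that looks like the following. The entry-point path is illustrative; profile whatever your service's `ExecStart` actually runs:

```shell
# Run the agent under V8's sampling profiler; this writes an isolate-*.log
# file in the current directory while the process runs.
node --prof /usr/local/lib/node_modules/openclaw/bin/start.js

# Turn the raw log into a readable report and inspect the hot paths
# (see the "Summary" and "Bottom up (heavy) profile" sections).
node --prof-process isolate-*.log > profile.txt
less profile.txt
```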
Next Steps: When to Move to Tulip
Your Pi agent is great for learning and personal workflows. But if your agent needs to serve a team, handle higher throughput, or run critical business logic, Tulip offers:
- Automatic scaling and redundancy
- Built-in monitoring and alerting
- Managed inference (no API key juggling)
- Team collaboration and audit logs
You can export your OpenClaw config and deploy it on Tulip in minutes. Your Pi can stay as a development and testing environment.
Frequently Asked Questions
Q: Can I run multiple agents on one Pi?
A: Yes, but they'll share resources. Start with one; add more only if performance is acceptable.
Q: Is the Pi secure enough for production?
A: For personal use or small teams, yes. Use Tailscale to avoid exposing SSH. Keep OpenClaw and Node.js updated. For public-facing APIs, use a server instead.
Q: What's the difference between Pi 4 and Pi 5?
A: Pi 5 is 2-3x faster and has better thermals. Pi 4 still works but runs slower. Both need the same setup steps.
Q: Can I run the LLM locally instead of using an API?
A: Technically yes with Ollama, but the Pi is too slow. A 7B model takes 30+ seconds per response. Cloud APIs are 10-100x faster and usually cheaper.
Q: How do I back up my agent's configuration?
A: Use Git to version control your config files. Store secrets in a separate, encrypted file. Back up to GitHub or a private repository.
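A sketch of that workflow, with placeholder file names; the key point is that the secrets file is ignored before the first commit:

```shell
# Initialise a config repo in a scratch directory and make an initial commit.
cd "$(mktemp -d)"
git init -q agent-config && cd agent-config
echo ".env" > .gitignore                # secrets never enter history
echo "name: my-agent" > openclaw.yml    # illustrative config file
git add .gitignore openclaw.yml
git -c user.email=pi@example.com -c user.name=pi commit -qm "initial agent config"
```

From here, add a GitHub or private remote with `git remote add` and push as usual.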
Q: Can I run this agent offline?
A: No, it relies on cloud APIs for inference and external integrations. Design your agent to queue requests and retry when internet returns.