April 10, 2026

How to Set Up OpenClaw on Mac, Windows, and Linux

A straightforward setup guide for every major operating system, from installation to running your first agent.

Author
Team Tulip

Quick Answer

OpenClaw runs on Mac, Windows, and Linux. The fastest way to install it on any platform is with Docker — one command pulls the container and you're running in minutes. Mac and Linux also support direct installation via Git. Windows works best through Docker Desktop or WSL2. Once installed, the setup process is identical across platforms: configure your model, install skills from ClawHub, and start your first agent.

Before You Start

OpenClaw needs two things regardless of your operating system: a way to run the framework itself, and a language model to power your agent. For the model, you can either connect to a cloud API like Tulip or run one locally with Ollama.

If you just want to get started quickly, Docker is the universal answer. It works identically on every platform and avoids most of the "it works on my machine" problems. If you prefer a native installation, each platform has its own path.

Setting Up on Mac

Option 1: Docker (Recommended)

Install Docker Desktop from docker.com if you don't already have it. Once Docker is running, open Terminal and pull the OpenClaw image. The container includes everything you need — no dependency management, no version conflicts. Start the container and OpenClaw's web interface will be available at localhost on the default port.
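One way to keep the Docker setup reproducible is a small Compose file. The sketch below is illustrative only: the image name, port, and volume path are assumptions, not confirmed OpenClaw values, so check the project's own documentation for the real ones.

```yaml
# docker-compose.yml — a sketch; image name, port, and volume path
# are assumptions, not confirmed OpenClaw values.
services:
  openclaw:
    image: openclaw/openclaw:latest   # hypothetical image name
    ports:
      - "3000:3000"                   # map to OpenClaw's web interface port
    volumes:
      - ./data:/app/data              # persist configuration across updates
    restart: unless-stopped
```

With a file like this, `docker compose up -d` starts the container and `docker compose pull` fetches the latest image when you want to update.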

This is the recommended approach because it's clean, isolated, and easy to update. When a new version of OpenClaw comes out, you just pull the latest image.

Option 2: Direct Installation

If you prefer running OpenClaw natively, you'll need Git and Node.js (version 18 or higher). Clone the OpenClaw repository, install dependencies, and start the server. Mac handles this well since it ships with a Unix-based terminal and most development tools work out of the box.
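Before cloning, it's worth confirming your Node.js version meets the minimum. A minimal shell sketch of that check follows; the version string is hardcoded here for illustration, and live you'd use the output of `node -v` instead:

```shell
# A minimal sketch: verify Node.js is version 18 or higher before a
# native install. Live, replace the hardcoded string with: version=$(node -v)
version="v20.11.1"
major=${version#v}      # strip the leading "v"
major=${major%%.*}      # keep only the major version number
if [ "$major" -ge 18 ]; then
  echo "Node.js $version is new enough"
else
  echo "Node.js $version is too old; install 18 or higher"
fi
```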

You'll also want Homebrew for managing packages. If you plan to run models locally, install Ollama through Homebrew as well — it's a single command.

Mac-Specific Tips

Apple Silicon Macs (M1, M2, M3, M4) are actually excellent for running local models. The unified memory architecture means your GPU and CPU share RAM, so a MacBook Pro with 32GB can run surprisingly large models. Ollama is optimised for Apple Silicon and you'll get good inference speeds even with 14B parameter models.

Setting Up on Windows

Option 1: Docker Desktop (Recommended)

Download and install Docker Desktop for Windows. During installation, make sure WSL2 integration is enabled — Docker performs much better with it. Once Docker is running, open PowerShell or Windows Terminal and pull the OpenClaw image. The process from here is identical to Mac.

Option 2: WSL2

Windows Subsystem for Linux gives you a full Linux environment inside Windows. Install WSL2 through PowerShell, choose Ubuntu as your distribution, and then follow the Linux installation instructions below. This gives you the best of both worlds — native Linux performance with Windows convenience.

WSL2 is particularly good if you plan to do any development work with OpenClaw, since most of the community's scripts and tools assume a Unix-like environment.

Windows-Specific Tips

If you have an NVIDIA GPU, install the NVIDIA Container Toolkit to give Docker access to your GPU for local model inference. AMD GPU support is improving but NVIDIA remains the smoother experience on Windows. Make sure Windows Defender isn't blocking Docker's network access — this is a common gotcha that causes OpenClaw to fail silently.

Setting Up on Linux

Option 1: Docker (Recommended)

Install Docker through your distribution's package manager. On Ubuntu and Debian, this means installing docker.io and docker-compose. On Fedora and Arch, the packages have slightly different names but the process is the same. Pull the OpenClaw image and you're running.

Add your user to the docker group so you don't need sudo for every command. This is a small quality-of-life improvement that saves a lot of typing.
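The usual commands are `sudo usermod -aG docker "$USER"` followed by logging out and back in. You can confirm the change took effect by inspecting your groups listing; the sketch below uses a sample string so the logic is visible, and live you'd substitute the output of `id -nG`:

```shell
# A minimal sketch: check whether "docker" appears in a groups listing.
# The sample string stands in for live output from: groups_out=$(id -nG)
groups_out="user adm cdrom docker sudo"
case " $groups_out " in
  *" docker "*) in_docker=yes ;;
  *)            in_docker=no ;;
esac
echo "in docker group: $in_docker"
```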

Option 2: Direct Installation

Linux is the most straightforward platform for native installation. Install Git and Node.js through your package manager, clone the repository, install dependencies, and start the server. If you're on a server or VPS, this is often the preferred approach since Docker adds a small overhead.

Linux-Specific Tips

Linux has the best GPU support for local models. NVIDIA GPUs work with the standard NVIDIA drivers and CUDA toolkit. AMD GPUs work through ROCm, which has improved significantly. If you're setting up a dedicated machine for running agents, Ubuntu 22.04 LTS is the most commonly used and best-supported distribution.

For headless server setups, OpenClaw runs perfectly without a desktop environment. You can access it remotely through the web interface or connect it to messaging platforms like Telegram and WhatsApp.
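On a headless server you'll usually want OpenClaw to start on boot and restart if it crashes. A systemd unit is the conventional way to do that; the sketch below assumes a hypothetical native install under `/opt/openclaw` started with `npm start`, so adjust the user, paths, and start command to match your actual setup.

```ini
# /etc/systemd/system/openclaw.service — a sketch; the user, paths, and
# start command are assumptions for a hypothetical native install.
[Unit]
Description=OpenClaw agent framework
After=network-online.target

[Service]
User=openclaw
WorkingDirectory=/opt/openclaw
ExecStart=/usr/bin/npm start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now openclaw` once the file is in place.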

Connecting a Model

With OpenClaw installed on any platform, the next step is identical: connect a language model. You have two main options.

Local Models with Ollama

Install Ollama (available for all three platforms), pull a model like Qwen 3.5 14B, and point OpenClaw to your local Ollama endpoint. This gives you a completely free, completely private agent. The downside is performance depends on your hardware.
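Pointing OpenClaw at Ollama typically comes down to a few configuration values. The variable names below are hypothetical, so check OpenClaw's documentation for the exact keys; Ollama's API does listen on `http://localhost:11434` by default, and the model tag should match whatever you pulled with `ollama pull`.

```shell
# .env fragment — variable names are hypothetical; only Ollama's default
# endpoint (localhost:11434) is a known value. Model tag is illustrative.
MODEL_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
MODEL_NAME=qwen-3.5-14b
```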

Cloud Models with Tulip

Sign up for Tulip, grab your API key, and configure OpenClaw to use Tulip as its model provider. You get access to every major open model — Llama 4, Qwen 3.5, DeepSeek R1 — with optimised inference and no hardware requirements. This is the best option for production agents or when you need more power than your local hardware can provide.
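The cloud setup mirrors the local one, with an API key in place of a local endpoint. Again, the key names below are hypothetical placeholders rather than confirmed configuration keys, and the API key shown is obviously not real:

```shell
# .env fragment — key names are hypothetical placeholders; substitute
# your real Tulip API key and the model identifier you want to use.
MODEL_PROVIDER=tulip
TULIP_API_KEY=sk-your-key-here
MODEL_NAME=llama-4
```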

Installing Your First Skills

Skills are what give your agent its abilities. OpenClaw connects to ClawHub, which has over 13,700 skills available. Every skill is an MCP server — a standardised way to connect your agent to external tools and services.

Start with a few essential skills: web browsing for research, file management for working with documents, and a communication skill for whatever messaging platform you use. Install them through the OpenClaw interface or command line, and your agent immediately gains those capabilities.

Running Your First Agent

With OpenClaw installed, a model connected, and a few skills added, you're ready to go. Open the OpenClaw interface, create a new agent, and give it a task. Start simple — ask it to research a topic, summarise a document, or check the weather. As you get comfortable, you can build more complex workflows using SOUL.md files to define your agent's personality, goals, and tool-use patterns.
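A SOUL.md file is plain Markdown, so a first version can be very short. The structure below is purely illustrative; adapt the sections to however your OpenClaw build reads SOUL.md.

```markdown
You are a research assistant. Be concise and cite your sources.

## Goals
- Answer questions with verified information
- Summarise documents on request

## Tool use
- Prefer the web-browsing skill for anything time-sensitive
- Ask before writing files outside the workspace
```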

Frequently Asked Questions

Which operating system is best for OpenClaw?

Linux gives the best performance, especially for local models with GPU acceleration. Mac with Apple Silicon is excellent for local development. Windows works well through Docker or WSL2. All three are fully supported.

Do I need a powerful computer?

Not if you use cloud models through Tulip. For local models, you'll want at least 16GB RAM and ideally a GPU. For cloud-only use, any modern computer with a web browser works.

Can I run OpenClaw on a Raspberry Pi?

Yes, with limitations. A Raspberry Pi 5 with 8GB RAM can run OpenClaw with a small local model or connected to a cloud model via Tulip. We have a dedicated Raspberry Pi guide with full instructions.

How do I update OpenClaw?

With Docker, pull the latest image. With a native installation, pull the latest changes from Git and reinstall dependencies. OpenClaw's configuration and data persist across updates.

Can I run multiple agents on one machine?

Yes. OpenClaw supports running multiple agents simultaneously. Each agent gets its own configuration and skill set. On Tulip, you can scale to as many agents as you need without worrying about local resources.

Get Started

Deploy an agent today

Run your first agent on Tulip in a few clicks
Deploy Agent