Setup Guide

OpenClaw on GPU

Running OpenClaw on GPU is one of the best ways to unlock faster performance, local AI models, multi-agent workflows, and heavy automation capabilities.

8 min read
Mar 25, 2026
Ampere Team

Running OpenClaw on a GPU improves inference speed, supports larger local models, and enables smoother, more reliable automation.

This guide explains system requirements, GPU setup, local model configuration, and best practices.

System Requirements for OpenClaw on GPU

| Component | Minimum | Recommended |
| --- | --- | --- |
| GPU | 6GB VRAM (GTX 1660 / RTX 2060) | 8GB–16GB VRAM (RTX 3060+) |
| RAM | 8GB | 16GB+ |
| CPU | 4 cores / 4 threads | 8 cores / 8+ threads |
| Storage | 20GB SSD free | 40GB–60GB SSD |
| OS | Linux / Windows 11+ / Ubuntu 22.04+ | Ubuntu 22.04+ |
| Node.js | v22+ | Latest version |
| Model Backend | Ollama | Ollama (recommended) |
| GPU Drivers | NVIDIA CUDA / AMD ROCm | Latest drivers |

If your system doesn't meet these GPU requirements, you can skip the hardware setup and run OpenClaw instantly on Ampere.sh.

Recommended Models by VRAM

| VRAM | Model | Pull Command |
| --- | --- | --- |
| 4–6 GB | Llama3.2 3B / Gemma3 4B | `ollama pull llama3.2:3b` |
| 6–8 GB | Qwen2.5 7B / Mistral 7B | `ollama pull qwen2.5:7b` |
| 12–16 GB | Llama3.1 8B / DeepSeek-R1 8B | `ollama pull llama3.1:8b` |
| 20–24 GB | GPT-OSS 20B / Qwen2.5 32B | `ollama pull gpt-oss:20b` |
| 48 GB+ | DeepSeek-R1 70B / Llama3.1 70B | `ollama pull deepseek-r1:70b` |

RTX GPUs provide the most stable and fastest OpenClaw experience.
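If you want to pick a model programmatically, the VRAM tiers above can be expressed as a small shell helper. This is an illustrative sketch, not part of the OpenClaw or Ollama tooling; the thresholds and model tags simply mirror the table:

```shell
# Suggest an Ollama model tag from available VRAM (in GiB).
# Thresholds and tags follow the table above; adjust to taste.
suggest_model() {
  vram_gb="$1"
  if   [ "$vram_gb" -ge 48 ]; then echo "deepseek-r1:70b"
  elif [ "$vram_gb" -ge 20 ]; then echo "gpt-oss:20b"
  elif [ "$vram_gb" -ge 12 ]; then echo "llama3.1:8b"
  elif [ "$vram_gb" -ge 6 ];  then echo "qwen2.5:7b"
  else                             echo "llama3.2:3b"
  fi
}

# On NVIDIA systems, total VRAM in MiB can be read with:
#   nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
suggest_model 12   # prints llama3.1:8b
```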

How to Install OpenClaw on GPU — Step by Step

Step 1: Install WSL (Windows Only)

If you are on Windows, open PowerShell as Administrator and run:

wsl --install

Restart your PC, then open WSL to confirm it is working:

wsl --version
wsl

Linux and Ubuntu users can skip this step entirely.

Step 2: Install NVIDIA Drivers

Check if your GPU is already detected by running:

nvidia-smi

If the command is not found, install the NVIDIA drivers and reboot:

sudo apt update
sudo apt install nvidia-driver-535 -y
sudo reboot

After reboot, run nvidia-smi again. You should see your GPU name, VRAM, and driver version:


nvidia-smi confirming GPU detected — RTX 3060, 12GB VRAM, CUDA 12.2
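If you prefer to verify the driver from a script rather than by eye, the major version can be compared against the 535 minimum installed above. The parsing below is a sketch, and the sample version string is illustrative:

```shell
# Check whether an NVIDIA driver version string meets a minimum major version.
# Get the real value with: nvidia-smi --query-gpu=driver_version --format=csv,noheader
driver_ok() {
  version="$1"
  major="${version%%.*}"   # keep everything before the first dot
  if [ "$major" -ge 535 ]; then echo yes; else echo no; fi
}

driver_ok "535.104.05"   # prints yes
```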

Step 3: Install Ollama

Run the official Ollama installer:

curl -fsSL https://ollama.com/install.sh | sh

Verify the installation completed successfully:

ollama --version

Ollama detects your NVIDIA GPU and CUDA version automatically during install

Step 4: Pull a Local Model

Choose a model based on your available VRAM:

# 4–6 GB VRAM
ollama pull llama3.2:3b
# 6–8 GB VRAM
ollama pull qwen2.5:7b
# 12–16 GB VRAM
ollama pull llama3.1:8b
# 20–24 GB VRAM
ollama pull gpt-oss:20b

Run the model to confirm it loads correctly:

ollama run llama3.1:8b

In a second terminal, confirm your GPU is being used during inference:

watch -n 1 nvidia-smi

Model download completes layer by layer — model responds immediately after

Step 5: Install OpenClaw

Run the OpenClaw install script:

curl -fsSL https://openclaw.ai/install.sh | bash

The installer detects your OS, installs Node.js 22, and sets up OpenClaw automatically.

Step 6: Configure OpenClaw to Use Ollama

Run the onboarding wizard — this is the easiest way to connect Ollama:

openclaw onboard

When prompted, select Ollama as the model provider and choose your mode:

  • Local — uses only your GPU models, no cloud calls
  • Cloud + Local — combines your GPU models with cloud providers

Or configure it manually if you prefer:

openclaw config set models.providers.ollama.apiKey "ollama-local"
openclaw models set ollama/llama3.1:8b

To see all available models at any time:

openclaw models list

Step 7: Start OpenClaw

Start the OpenClaw gateway:

openclaw gateway start

Check that it is running:

openclaw gateway status

Open the dashboard to confirm your local Ollama model is the active provider:

openclaw dashboard

Connect a Messaging Channel

Inside the dashboard, go to Channels and connect Telegram, WhatsApp, Discord, or any other supported platform. Your messages will now be handled by the local AI model running on your GPU.

Common Issues and Fixes

| Issue | Fix |
| --- | --- |
| GPU not detected | Run `nvidia-smi`. If not found, install drivers: `sudo apt install nvidia-driver-535 -y`, then reboot |
| Out of VRAM or model crash | Use a smaller model: `ollama pull llama3.2:3b` |
| Slow performance | Check GPU usage: `watch -n 1 nvidia-smi` |
| Ollama not running | Start Ollama: `ollama serve` |
| OpenClaw not starting | Start the gateway: `openclaw gateway start` |
| No AI reply from OpenClaw | Check status: `openclaw status` |
| Channel not connected | Probe channels: `openclaw channels status --probe` |
| Wrong model responding | List models: `openclaw models list` |
| Change model | Set model: `openclaw models set ollama/llama3.1:8b` |
| Permission denied error | Fix permissions: `chmod +x install.sh` |
| Installation failed | Update packages: `sudo apt update && sudo apt upgrade -y` |
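The first thing most of these fixes check is whether a tool is installed at all. That can be strung together into a quick diagnostic; this is only a sketch, assuming the command names used throughout this guide:

```shell
#!/bin/sh
# Report which tools in the OpenClaw GPU chain are installed.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

check nvidia-smi
check ollama
check node
check openclaw
```

Anything reported as missing points you at the step above that installs it.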

GPU drivers, CUDA errors, VRAM limits, and model compatibility can slow down your setup. Instead of troubleshooting hardware problems, you can run OpenClaw instantly using Ampere.sh.

Frequently Asked Questions

Can I run OpenClaw on GPU?
Yes. OpenClaw integrates with Ollama, which uses CUDA (NVIDIA) or ROCm (AMD) for GPU-accelerated local inference. This lets you run local LLMs faster, reduce API costs, and keep your data private.
What is the minimum GPU requirement for OpenClaw?
A minimum of 6GB VRAM is needed to run smaller 3B–4B models like Llama3.2 3B or Gemma3 4B. For a smoother experience with larger models, 12GB+ VRAM (RTX 3060 Ti or better) is recommended.
Can I run OpenClaw on NVIDIA RTX GPUs?
Yes. NVIDIA RTX GPUs provide the best performance for OpenClaw using CUDA and Tensor Cores. RTX 3060 and above offer the most stable and fastest local LLM experience.
Can I run OpenClaw without local models?
Yes. You can connect OpenClaw to cloud AI providers like OpenAI, Anthropic, or Gemini using API keys. A GPU is only needed if you want to run local models through Ollama.
Can I use a gaming PC to run OpenClaw?
Yes. A gaming PC with an RTX 3060 or better can act as a full local AI server for OpenClaw. Keep the PC running for 24/7 availability — or use a VPS for always-on uptime.
Can I run OpenClaw on a cloud GPU?
Yes. OpenClaw works well on cloud GPU servers running Ubuntu. Install Ollama, pull your model, then install and configure OpenClaw using the same steps as a local GPU setup.
Why is my GPU memory full?
Large models require more VRAM than your GPU has available. Switch to a smaller or quantized model — for example: ollama pull llama3.2:3b for 4–6GB VRAM cards.
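As a rough rule of thumb (an approximation, not an exact figure), the weights of a model need about parameters-in-billions × bits-per-weight ÷ 8 GB of VRAM, before KV-cache and runtime overhead:

```shell
# Rough VRAM needed for model weights alone, in GB:
#   params (billions) * bits per weight / 8
# Real usage is higher due to KV cache and runtime overhead.
estimate_weights_gb() {
  echo $(( $1 * $2 / 8 ))
}

estimate_weights_gb 8 4    # 4-bit 8B model: prints 4
estimate_weights_gb 70 4   # 4-bit 70B model: prints 35
```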

Don't Have a Powerful GPU?

Running OpenClaw on GPU requires high VRAM, drivers, and setup. If your system doesn't meet the requirements, run OpenClaw instantly on Ampere.sh without any hardware setup.

Get Started on Ampere.sh →