OpenClaw Raspberry Pi 5 & 4 Setup Guide | 2026 Private AI Server
The complete guide to hosting your personal AI agent 24/7 on Raspberry Pi 5 (8GB/4GB) and Raspberry Pi 4. Transform your Pi into a dedicated MCP Server with always-on availability, complete privacy, and agentic workflow automation—all for just $1/month in electricity costs.
Why Raspberry Pi?
Running OpenClaw on Raspberry Pi offers incredible advantages for your private AI hosting setup:
- 24/7 Uptime: Always available AI assistant
- Low Power: ~5W average power consumption
- Cost Effective: ~$0.43-1.01/month in electricity
- 100% Private: Your data never leaves your home
- Compact Size: Fits anywhere
Value at a Glance
| Benefit | Detail |
|---|---|
| Uptime | 24/7 Available |
| Electricity Cost | ~$1/Month |
| Privacy | 100% Private |
| Initial Cost | ~$105 (hardware) |
Recommended Hardware Specs for 24/7 Stability
Raspberry Pi Model Comparison
| Specification | Raspberry Pi 5 (8GB) | Raspberry Pi 4 (8GB) | Raspberry Pi 4 (4GB) |
|---|---|---|---|
| Performance | Best (2.4x faster) | Excellent | Good |
| RAM | 8GB LPDDR4X-4400 | 8GB LPDDR4-3200 | 4GB LPDDR4-3200 |
| Use Case | Heavy local LLM workloads | Multi-user setups | Single-user basic tasks |
| Recommended | ✅ Yes | ✅ Yes | ⚠️ Minimum viable |
Why 8GB RAM is Best for OpenClaw
The 8GB RAM configuration is ideal for running advanced OpenClaw skills and local LLM models because:
- Model Context Handling: 8GB allows loading models up to 7B parameters with efficient context processing
- Concurrent Skills: Run multiple AI agents simultaneously without memory pressure
- Future-Proofing: 2026-era AI models are growing—8GB provides headroom for updates
- Swap Performance: More RAM means less swap usage, reducing wear on MicroSD/NVMe storage
For Raspberry Pi 4 4GB users: You can run OpenClaw successfully, but stick to lighter models and single-agent workflows for optimal stability.
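Before committing to heavier models, check the actual memory headroom on your Pi. MemAvailable in /proc/meminfo is a better signal than the plain "free" column because it includes reclaimable caches:

```shell
# Human-readable summary of total/used/available memory
free -h

# MemAvailable counts reclaimable page cache, so it reflects real headroom
grep MemAvailable /proc/meminfo
```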
⚠️ Critical: Power Supply Requirements for 24/7 AI Servers
Using the correct power supply is essential for preventing OpenClaw crashes during high CPU/NPU load. Many stability issues stem from inadequate power delivery.
Raspberry Pi 5 Power Supply (Required)
Official Raspberry Pi 5 Power Supply: 5V 5A USB-C (white PSU)
🔌 Don't skimp on the PSU! The Raspberry Pi 5 requires significantly more power than Pi 4, especially during AI inference operations. Third-party power supplies often can't deliver sustained 5A output, causing random reboots when OpenClaw runs heavy workloads.
Why the official 5V 5A matters:
- AI workloads spike power draw to 15-20W temporarily
- Under-voltage causes immediate service crashes
- Official PSU delivers stable power even during agentic workflow execution
- Includes a dedicated power cable (not just a generic USB-C cable)
Raspberry Pi 4 Power Supply
Official Raspberry Pi 4 Power Supply: 5V 3A USB-C (black PSU)
While the Pi 4 is more power-efficient, we still recommend the official supply for 24/7 operation. Generic 5V 2.5A supplies may work for desktop use but often fail under continuous AI server loads.
Additional Hardware Requirements
- MicroSD card: 32GB+ (A1 or A2 rating recommended)
- Case: With active cooling for Pi 5 (recommended for 24/7 operation)
- NVMe SSD (Optional but recommended): For best I/O performance and reduced wear
Pro Tip: NVMe SSD for Production Deployments
For serious private AI hosting, consider running Raspberry Pi OS from an NVMe SSD instead of MicroSD. This significantly improves:
- I/O Performance: 3-5x faster read/write speeds
- Storage Reliability: No SD card corruption risks
- Model Loading: Faster local LLM initialization
Running AI models causes frequent disk I/O. A high-quality MicroSD card works for testing, but for stable 24/7 agentic workflows an NVMe SSD on the Pi 5's PCIe interface is strongly recommended.
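To confirm which storage your system actually boots from, check the device backing the root filesystem (a quick sketch using standard util-linux tools):

```shell
# Prints the device backing / : nvme0n1p2 means NVMe, mmcblk0p2 means SD card
findmnt -n -o SOURCE /
```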
MCP Hub Capabilities: Your 24/7 Home AI Gateway
Your Raspberry Pi running OpenClaw becomes a centralized Model Context Protocol (MCP) gateway for your entire home network. This 2026 architecture pattern enables:
What MCP Hub Enables
- Universal AI Access: Any device on your network can query your Pi's AI capabilities via MCP protocol
- Skill Orchestration: Chain multiple AI agents across different services (home automation, documents, APIs)
- Local Privacy: All inference happens on your Pi—nothing leaves your network
- Standardized Interface: MCP compatibility ensures your setup works with future AI tools
Real-World MCP Use Cases
[Smart Home Device] → [OpenClaw MCP Hub on Pi 5] → [Local LLM Inference]
↓
[Home Automation Execution]
Example workflows:
- Natural Language Queries: Ask your smart speaker to query documents stored on your NAS
- Automated Workflows: Trigger agentic workflow sequences based on sensor data
- Multi-Device Coordination: Your phone, laptop, and smart home all share the same AI brain
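Under the hood, MCP traffic is JSON-RPC 2.0. As a purely illustrative sketch (the host, port, and path below are hypothetical placeholders, not OpenClaw's documented endpoint), a request for MCP's standard tools/list method would look like:

```shell
# Build the JSON-RPC 2.0 body for MCP's standard tools/list method
body='{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
echo "$body"

# Hypothetical transport: POST it to an HTTP-exposed MCP server
# curl -s http://moltbot-pi.local:8080/mcp -H 'Content-Type: application/json' -d "$body"
```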
Quick Start: One-Line Installer
🚀 The Fastest Path to Your Local AI Server
This single command transforms your Raspberry Pi into a fully-functional MCP Server with agentic workflow capabilities. No manual configuration required—just paste and run.
Security Best Practice
Always inspect scripts before running them. You can view the installer source at github.com/moltbot/install.
One-Line Installer for OpenClaw
curl -sSL https://get.moltbot.org/install-pi.sh | bash
What happens next:
- Clones the OpenClaw repository
- Installs all dependencies
- Configures the service
- Sets up systemd for auto-start
- Provides pairing instructions
Step-by-Step Setup Guide
Step 1: Install Raspberry Pi OS (64-bit)
Important: Use 64-bit Lite Version
Download Raspberry Pi OS (64-bit) Lite from the official Raspberry Pi website.
Flashing the OS:
- Insert MicroSD card into your computer
- Open Raspberry Pi Imager
- Choose OS: Raspberry Pi OS (64-bit) Lite
- Choose Storage: Select your MicroSD card
- Click ⚙️ (Advanced Settings):
  - Set hostname: moltbot-pi
  - Enable SSH: Use password authentication
  - Set username and password
- Write and wait for flashing to complete
Headless Setup
By configuring SSH in the imager settings, you can complete the entire setup remotely via SSH—perfect for server deployments without a monitor.
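Your first login is simply ssh youruser@moltbot-pi.local, using the hostname and user you set in the imager. For a long-running server it is worth switching to key-based authentication afterward; a minimal sketch (the id_pi key name is just an example):

```shell
# Create the key pair once (Ed25519, no passphrase here; add one for real use)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_pi ] || ssh-keygen -t ed25519 -f ~/.ssh/id_pi -N "" -C "moltbot-pi"

# Install the public key on the Pi (enter your password one last time):
# ssh-copy-id -i ~/.ssh/id_pi.pub youruser@moltbot-pi.local

# From then on, log in without a password:
# ssh -i ~/.ssh/id_pi youruser@moltbot-pi.local
```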
Step 2: Install Dependencies for Your Private AI Hosting
Connect to your Raspberry Pi via SSH and run:
# Update package lists and upgrade system
sudo apt update && sudo apt upgrade -y
# Install Node.js 20.x (using NodeSource repository)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# Verify installations
node --version # Should be v20.x.x or higher
npm --version # Should be 10.x.x or higher
# Install Git
sudo apt install -y git
# Install build essentials
sudo apt install -y build-essential python3
Why Node.js 20+? OpenClaw requires Node.js 20 or later for optimal performance and access to the latest JavaScript features essential for modern local LLM operations.
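If you want your own scripts to fail fast on an old runtime, a small POSIX-shell version gate works (a sketch; the message wording is arbitrary):

```shell
# Extract the major version from "vXX.Y.Z" and compare against 20
ver=$(node --version 2>/dev/null || echo v0)
major=${ver#v}; major=${major%%.*}
if [ "$major" -ge 20 ]; then
  echo "Node.js OK ($ver)"
else
  echo "Node.js 20+ required (found $ver)"
fi
```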
Step 3: Install OpenClaw - Your One-Line Installer for Agentic Workflow
Run the one-line installer to set up your complete MCP Server environment:
curl -sSL https://get.moltbot.org/install-pi.sh | bash
What the installer does:
- Clones the OpenClaw repository
- Installs dependencies
- Sets up the configuration file
- Creates a systemd service for auto-start
- Provides pairing instructions
Step 4: Gateway Pairing for Your Private AI Hosting
Complete the pairing process via SSH to activate your local LLM gateway:
# After installation completes, start the OpenClaw service
sudo systemctl start moltbot
# Check service status
sudo systemctl status moltbot
# View logs to see the pairing code
sudo journalctl -u moltbot -f
# You should see output like:
# "Your pairing code: ABC-123-DEF"
# "Visit https://get.moltbot.org/pair to complete setup"
# Keep the service running and visit the pairing URL on your computer
# Enter the code displayed in the logs
# After successful pairing, verify connection
moltbot status
# Enable OpenClaw to start on boot
sudo systemctl enable moltbot
Troubleshooting Pairing Issues
If you see "Gateway Pairing Failed" errors:
- Ensure your Raspberry Pi has a stable internet connection
- Verify Node.js version is 20+
- Check logs with: sudo journalctl -u moltbot -n 50
- Visit our troubleshooting guide for more help
Migration Guide: Updating from Clawdbot to OpenClaw
📦 Legacy Migration Box
If you have an existing Clawdbot installation on Linux/Debian, update to the new OpenClaw CLI with these steps:
# Stop the old OpenClaw service
sudo systemctl stop clawdbot
# Navigate to the installation directory
cd ~/clawdbot
# Pull the latest OpenClaw updates
git fetch origin
git checkout main
git pull origin main
# Reinstall dependencies
npm install
# Update systemd service (if service file changed)
sudo cp systemd/moltbot.service /etc/systemd/system/
sudo systemctl daemon-reload
# Start the new OpenClaw service
sudo systemctl start moltbot
# Verify the migration
moltbot --version
# Enable OpenClaw to start on boot
sudo systemctl enable moltbot
What's preserved:
- ✅ All your existing configurations
- ✅ Installed skills and extensions
- ✅ Gateway pairings and credentials
- ✅ Custom settings and preferences
What's new:
- 🆕 Enhanced MCP Hub capabilities
- 🆕 2026-era agentic workflow engine
- 🆕 Improved local LLM performance
- 🆕 Expanded hardware compatibility
Performance Optimization
1. Use NVMe SSD (Recommended)
Running Raspberry Pi OS from an NVMe SSD instead of MicroSD significantly improves I/O performance for local LLM operations.
2. Enable Performance Mode
# Edit boot config
sudo nano /boot/firmware/config.txt
# Add these lines for Pi 4:
# arm_freq=1800
# over_voltage=6
# gpu_mem=16
# For Pi 5, the performance mode is enabled by default
# Save and reboot: sudo reboot
3. Configure Swap Space
Increase swap space to 2GB or 4GB for better memory management during AI workloads.
# Edit swapfile size
sudo dphys-swapfile swapoff
sudo nano /etc/dphys-swapfile
# Change CONF_SWAPSIZE=100 to CONF_SWAPSIZE=2048
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
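After re-enabling swap, confirm the new size actually took effect:

```shell
# Lists active swap devices/files with their sizes (empty output = no swap)
swapon --show

# The Swap line should now report the size you configured
free -h | grep -i '^Swap'
```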
4. Monitor System Resources
# Install htop for monitoring
sudo apt install htop
# Run htop to see CPU, memory, and process usage
htop
# Check temperature
vcgencmd measure_temp
# Check clock speeds
vcgencmd measure_clock arm
Why Raspberry Pi 5 Over Mac mini M4?
Choosing the right hardware for your private AI hosting setup depends on your specific needs. Here's why Raspberry Pi 5 excels for dedicated local LLM servers:
| Feature | Raspberry Pi 5 (8GB) | Mac mini M4 |
|---|---|---|
| Initial Cost | ~$105 (full setup) | ~$599+ (base model) |
| Monthly Power | ~$0.43-1.01/month | ~$5-8/month |
| 24/7 Operation | Designed for always-on | General-purpose desktop |
| Dedicated Device | Runs your AI stack exclusively | Shares resources with macOS |
| Form Factor | Compact (credit card sized) | Desktop footprint |
| Upgradability | Expandable storage | Limited |
| Noise | Silent (passive cooling possible) | Fan noise possible |
| Use Case | Local LLM server appliance | Daily driver + occasional AI |
The Sweet Spot: Raspberry Pi 5 is purpose-built for agentic workflow hosting—low power, dedicated resources, and zero maintenance overhead. Mac mini M4 shines as a primary workstation that can also run AI workloads, but you pay significantly more for that flexibility.
When to Choose Each
- Pick Raspberry Pi 5 if: You want a dedicated, always-on MCP Server for pure AI workloads with minimal ongoing costs
- Pick Mac mini M4 if: You need a powerful daily workstation that can also run heavy AI models when needed
Cost Analysis
One-Time Hardware Costs
| Component | Raspberry Pi 5 | Raspberry Pi 4 |
|---|---|---|
| Board (8GB) | ~$80 | ~$75 |
| Official Power Supply | ~$12 | ~$8 |
| MicroSD card (64GB, A2) | ~$12 | ~$12 |
| Case with cooling | ~$15 | ~$10 |
| Total | ~$119 | ~$105 |
Monthly Operating Costs
| Metric | Value |
|---|---|
| Power consumption | ~5W average |
| Monthly usage | 3.6 kWh/month |
| @ $0.12/kWh (avg US) | $0.43/month |
| @ $0.28/kWh (high rate) | $1.01/month |
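The monthly figures in the table follow from straightforward arithmetic, reproduced here so you can plug in your own wattage and rate:

```shell
# Reproduce the table: 5 W average draw, 24 h/day, 30 days/month
awk 'BEGIN {
  kwh = 5 * 24 * 30 / 1000          # = 3.6 kWh/month
  printf "%.1f kWh/month\n", kwh
  printf "$%.2f/month at $0.12/kWh\n", kwh * 0.12
  printf "$%.2f/month at $0.28/kWh\n", kwh * 0.28
}'
```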
Incredible Value
Compared to running a Mac mini 24/7 (~$5-8/month), a Raspberry Pi costs just $0.43-1.01/month in electricity. Over a year, that's $5-12 vs $60-96, saving you $48-91 annually. At those savings, the hardware pays for itself in energy savings within roughly one to two and a half years.
Frequently Asked Questions
Does OpenClaw require a cooling fan on Pi 5?
While Raspberry Pi 5 can run passively for light workloads, we strongly recommend active cooling for 24/7 OpenClaw operation. Here's why:
- AI Inference Generates Heat: Running local LLM models continuously spikes CPU temperature
- Thermal Throttling: Without active cooling, Pi 5 throttles from 2.4GHz to ~1.5GHz under sustained AI loads
- Performance Impact: Throttled CPUs process agentic workflow tasks 40% slower
- 24/7 Reliability: Active cooling ensures consistent performance during long-running operations
Recommended cooling solutions:
- Official Raspberry Pi Active Cooler (PID controlled fan)
- Case with built-in 5V fan (40mm or larger)
- Ensure adequate case ventilation for airflow
Can I run OpenClaw on Raspberry Pi 4 4GB?
Yes, OpenClaw can run on Raspberry Pi 4 with 4GB RAM, making it an excellent entry-level option for private AI hosting. However, for optimal performance with heavier local LLM workloads and complex agentic workflow tasks, we recommend 8GB RAM. The 4GB model works well for:
- Basic MCP Server operations
- Text-based AI interactions
- Single-user scenarios
- Lightweight skill execution
Tip: If you're planning to run multiple concurrent AI agents or process larger contexts, the 8GB model provides significantly better headroom and future-proofs your setup for 2026-era AI models.
What is the best power supply for a 24/7 AI server?
The best power supply for running OpenClaw 24/7 on Raspberry Pi is the official Raspberry Pi foundation power supply:
- Raspberry Pi 5: Official 5V 5A USB-C power supply (white)
- Raspberry Pi 4: Official 5V 3A USB-C power supply (black)
⚠️ Critical Warning: Avoid third-party or generic USB-C power supplies for 24/7 AI server duty. Many can't sustain the amperage needed during AI inference spikes, causing random reboots and service crashes that are difficult to diagnose.
Why official supplies matter for AI workloads:
- AI model inference causes brief power draw spikes (15-20W on Pi 5)
- Under-voltage triggers immediate CPU throttling or system shutdown
- Official PSUs deliver stable, consistent power even under variable loads
- Dedicated power cables (not generic USB-C) ensure proper current delivery
Cost vs. Reliability: At ~$12 for the official Pi 5 PSU vs ~$8 for generic, the $4 difference prevents hours of troubleshooting stability issues.
Is Pi 5 faster than M4 for simple automations?
For simple automations and basic agentic workflow tasks, Raspberry Pi 5 offers surprising advantages over Mac mini M4:
Where Pi 5 Wins:
- Dedicated Resources: Pi 5 runs only your AI stack, while M4 shares resources with macOS and background processes
- Zero Contention: No competing applications stealing CPU cycles during automation execution
- Instant Availability: Always-on, no wake-from-sleep delays
- Network Isolation: Dedicated device means predictable response times
Benchmark Comparison (Simple Text Automation):
- Raspberry Pi 5: ~200-400ms response time (dedicated)
- Mac mini M4: ~150-300ms response time (when awake, not under load)
The Reality: While M4 has faster raw specs, real-world simple automation performance is comparable because:
- Network latency often dominates the response time
- Pi 5's dedicated architecture eliminates resource contention
- Most automations are I/O bound, not CPU bound
Recommendation: Choose Pi 5 for dedicated 24/7 automation duty, M4 if you need a workstation that can also run automations during active use.
Next Steps
Once your OpenClaw setup is complete, expand your Pi's capabilities:
- Browse 338+ AI Skills - Add powerful new capabilities to your MCP Server
- Basic Configuration - Customize your OpenClaw settings
- Troubleshooting Guide - Get help with common issues
Need Help?
Having trouble with your Raspberry Pi setup?
Quick Reference
Essential Commands
# Start OpenClaw
sudo systemctl start moltbot
# Check status
sudo systemctl status moltbot
# View logs
sudo journalctl -u moltbot -f
# Restart service
sudo systemctl restart moltbot
# Enable auto-start on boot
sudo systemctl enable moltbot
# Check version
moltbot --version
Performance Monitoring
# Check CPU temperature
vcgencmd measure_temp
# Check clock speed
vcgencmd measure_clock arm
# Monitor resources
htop
# Check disk space
df -h
Power Supply Quick Check
# Check for under-voltage events (indicates weak PSU)
vcgencmd get_throttled
# Output meanings (bitmask):
# 0x0 = no throttling events (healthy PSU)
# bit 0 (0x1) = under-voltage right now; bit 16 (0x10000) = under-voltage has occurred
# e.g. 0x50005 = under-voltage and throttling, both now and since boot (replace the PSU)
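Because get_throttled packs several flags into one hex value, a small helper makes the bitmask readable. This is a sketch based on the publicly documented bit positions for the Raspberry Pi firmware:

```shell
# Decode the get_throttled bitmask (bit positions per the Raspberry Pi firmware docs)
decode_throttled() {
  val=$(( $1 ))
  [ $(( val & 0x1 )) -ne 0 ]     && echo "under-voltage detected right now"
  [ $(( val & 0x2 )) -ne 0 ]     && echo "ARM frequency capped right now"
  [ $(( val & 0x4 )) -ne 0 ]     && echo "currently throttled"
  [ $(( val & 0x10000 )) -ne 0 ] && echo "under-voltage has occurred since boot"
  [ $(( val & 0x40000 )) -ne 0 ] && echo "throttling has occurred since boot"
  return 0
}

# Example: decode the worst-case value from the warning above
decode_throttled 0x50005
```

On a Pi, feed it the live value with: decode_throttled "$(vcgencmd get_throttled | cut -d= -f2)"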
Technical Specifications
Minimum Requirements
- CPU: ARM Cortex-A72 (4 cores)
- RAM: 4GB (8GB+ recommended)
- Storage: 32GB MicroSD (NVMe SSD recommended)
- Network: Ethernet or Wi-Fi
- Power: Official 5V 5A (Pi 5) or 5V 3A (Pi 4) USB-C supply
Recommended Specifications
- CPU: ARM Cortex-A76 (4 cores) - Raspberry Pi 5
- RAM: 8GB
- Storage: 256GB NVMe SSD
- Network: Ethernet (preferred)
- Power: Official Raspberry Pi 5V 5A USB-C power supply
- Cooling: Active cooler for 24/7 operation
Last Updated: January 2026 | OpenClaw v2.0 (formerly Clawdbot)
Compatible: Raspberry Pi 4 (4GB/8GB), Raspberry Pi 5 (8GB)
OS: Raspberry Pi OS (64-bit) Lite or Desktop
Perfect for: Local LLM deployment, MCP Server hosting, Agentic Workflow automation, 24/7 AI server, Private AI hosting