- M4: multiple folks bought one “specifically to run OpenClaw”, often because it’s quiet, power-efficient, and good value for an always-on machine.
- M4 Pro (48–64GB): shows up in two main patterns:
1) “I want serious local inference / lots of local models available”
2) “Hybrid architecture” (gateway elsewhere + Mac mini serves local models over the network)
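A minimal sketch of the hybrid pattern, assuming Ollama as the Mac mini's local model server (the `OPENCLAW_MODEL_BASE_URL` variable is hypothetical; check your gateway's config docs for the actual setting name):

```
# On the Mac mini: bind the local model server to the LAN instead of loopback
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On the gateway box: point the gateway at the mini over the network
# (placeholder variable name and hostname)
export OPENCLAW_MODEL_BASE_URL=http://mac-mini.local:11434
```

The point is just the split: the lightweight gateway lives wherever it's convenient, and the mini's job is purely serving model inference to it.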
Intel Mac mini (still appears)
- At least one person reported running it on a 2011 Intel Mac mini (16GB) under Linux, a strong data point that the hardware can be ancient if you're mostly calling cloud models.
3) What people are doing with Mac minis (use-case clusters)
A) “Dedicated always-on gateway” (most common)
- Mac mini as a 24/7 home for the Gateway (quiet, low-maintenance, easy to leave running).
- Often paired with chat channels like Telegram/Discord/WhatsApp/iMessage (varies by person).
Why Mac mini works well here
- The gateway itself is relatively lightweight when you’re using cloud models.
- Stability + uptime matter more than raw CPU.
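For the always-on setup, macOS's native way to keep a process alive is a launchd agent. A minimal sketch, assuming a hypothetical `openclaw gateway` start command (the label, binary path, and log paths are all placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Any reverse-DNS label works; placeholder -->
  <key>Label</key>
  <string>local.openclaw.gateway</string>
  <!-- Hypothetical start command; substitute however you launch the gateway -->
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/openclaw</string>
    <string>gateway</string>
  </array>
  <!-- Start at login and restart automatically if the process exits -->
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/tmp/openclaw-gateway.log</string>
  <key>StandardErrPath</key>
  <string>/tmp/openclaw-gateway.err</string>
</dict>
</plist>
```

Saved as `~/Library/LaunchAgents/local.openclaw.gateway.plist` and loaded with `launchctl load ~/Library/LaunchAgents/local.openclaw.gateway.plist`, this is the "leave it running and forget about it" piece that makes the mini a good 24/7 home.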