Can't connect internal ports to AI LLM and AI is not responding
✅Solved
Can't seem to get my OpenClaw to connect or stay connected to the working AI models as before. I have adjusted some settings per the documentation, but there seems to be one more step I am missing. Here is the full background of what I have gone through and exactly what stage I am stuck at now: https://hasteb.in/w7YqOvkx44GBjNL
Please respond with your full understanding after reading the document and the instructions there.
Solution
1) System architecture (3 ways)
A) “Boxes + arrows” view
- Windows host runs Ollama on port 11434.
- WSL2 (Ubuntu-24.04) runs the OpenClaw Gateway (systemd user service).
- OpenClaw's primary model is Ollama; the fallback is NVIDIA NIM.
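A quick way to verify that piece of the picture is to probe the Ollama port from inside WSL and see which failure mode you get. This is a minimal Python sketch (not part of OpenClaw); the IP is the host gateway address from this thread, and yours may differ:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 5.0) -> str:
    """Classify reachability of host:port the way the symptom presents."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"     # service reachable -- what we want for 11434
    except socket.timeout:
        return "timeout"      # packets silently dropped (the NAT-mode symptom)
    except OSError:
        return "refused"      # host answered, but nothing is listening there

# Probe the Windows host gateway as seen from WSL (uncomment inside WSL):
# print(tcp_check("172.29.64.1", 11434, timeout=3.0))
```

"refused" means the network path works but Ollama isn't listening (or is bound to 127.0.0.1 only); "timeout" points at the NAT path itself.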
B) Network-path view (the actual bug)
- In default WSL NAT mode, Linux must reach Windows services via the host gateway IP (yours: 172.29.64.1).
- Your failure is: WSL → 172.29.64.1:11434 = TCP connect timeout (packets don't get through).
- We're now switching to WSL mirrored networking mode, where (per Microsoft) host + WSL can connect via `localhost`, avoiding the 172.29.* path.
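For reference, mirrored mode is enabled on the Windows side, not inside Ubuntu. A minimal `.wslconfig` sketch per Microsoft's documentation (requires a recent Windows 11 build; run `wsl --shutdown` from Windows afterwards so the setting takes effect):

```ini
; %UserProfile%\.wslconfig on the Windows host
[wsl2]
networkingMode=mirrored
```

Once WSL restarts in mirrored mode, `http://localhost:11434` should resolve to the Windows-side Ollama from inside Ubuntu.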
C) Model/timeout view
- NVIDIA NIM fallback works but can take ~91s to respond.
- You set: -