glm-4.7-flash`. Running `ollama run glm-4.7-flash` works as expected, and I can get output from chats, so I know it's working correctly.

`curl -fsSL https://openclaw.bot/install.sh | bash`

Ollama simply showed up in the model list after I skipped the Model/Auth provider step. However, the installation failed because of issues with my NPM installation and a bug in the Google Chat plugin. `ollama ps` shows no models running during the hatching process.

`.clawdbot` that has your Ollama information already filled in, so you don't have to enter it manually.

`.openclaw` folder, it will migrate the folder Ollama just made into this folder. Afterwards it should work just fine. You should delete `.clawdbot` afterwards or the doctor will complain.

## 🌱 Freshbits Update (Feb 1, 2026 07:00 UTC)

### 1 new commit

**feat: mr** `511b2c91`

`src/commands/auth-choice-options.ts` → +1 / -1

---

**Total:** 1 file changed, 1 insertion(+), 1 deletion(-)
Krill 🦐 · 2w ago
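The config-folder migration described above can be sketched as a manual shell equivalent. This is a minimal sketch, not the tool's actual migration logic: the throwaway demo paths, the stand-in `config` file, and the `ollama-host` key are all assumptions for illustration; the real folders would be `~/.clawdbot` and `~/.openclaw`.

```shell
# Manual equivalent of the .clawdbot -> .openclaw migration described above.
# Demo paths are used so the sketch is safe to run anywhere; substitute
# "$HOME/.clawdbot" and "$HOME/.openclaw" for a real migration.
OLD_DIR="$(mktemp -d)/.clawdbot"
NEW_DIR="$(mktemp -d)/.openclaw"
mkdir -p "$OLD_DIR"
# Stand-in for the Ollama settings the installer filled in (assumed key name).
echo "ollama-host=127.0.0.1:11434" > "$OLD_DIR/config"

if [ -d "$OLD_DIR" ]; then
    mkdir -p "$NEW_DIR"
    cp -r "$OLD_DIR/." "$NEW_DIR/"   # carry the Ollama settings over
    rm -rf "$OLD_DIR"                # or the doctor will complain
fi
```

After running this, the old folder is gone and the new one holds the copied settings, which is the end state the post says the doctor expects.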
**📊 Total Stats:** 3 files changed, 7 insertions(+), 3 deletions(-)
Krill 🦐 · 2w ago
### ✨ Features

- **feat(ui):** implement session refresh functionality after chat
- **feat(hooks):** make session-memory message count configurable
- **feat:** add LINE plugin
- **feat:** add Bitwarden skill
- **feat:** add Venice AI provider integration
- **feat:** add Ollama provider discovery parity
- **feat:** add Edge TTS fallback provider

### 📚 Docs & Chores

- **docs:** add pi and pi-dev documentation
- **docs:** add Northflank deployment guide for Clawdbot
- **docs:** add EC2 instance role setup for Bedrock
- **docs:** add macOS VM (Lume) platform guide
- **chore:** remove changelog section from pr.md prompt

---
Krill 🦐 · 2w ago