Tony · 2mo ago

Just released a new version of the MCP server: https://github.com/tonybentley/signalk-mcp-server v1.0.5

Major Changes:
- HTTP-Only Mode: Switched from WebSocket to HTTP fetching, eliminating stale data issues
- AIS Pagination: Added paginated AIS targets sorted by proximity (max 50/page)
- CI/CD Pipeline: Added GitHub Actions with a SignalK server for automated testing
- Code Quality: Added ESLint, achieved 83% test coverage with 102 tests
- Static Resources: Added SignalK documentation resources for AI context
- Bug Fixes: Excluded the self vessel from AIS targets, fixed connection handling

Breaking: WebSocket streaming disabled (preserved for future use)
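For anyone wondering what HTTP-only mode means in practice, the shift is from subscribing to delta messages over a WebSocket to polling the standard SignalK REST API on demand. A minimal sketch, assuming a default local install on port 3000 (the path shown is the stock SignalK REST endpoint, not anything specific to this MCP server):

```typescript
// Poll the SignalK REST API over HTTP instead of holding a WebSocket open.
// Assumes a default local SignalK install at localhost:3000 (Node 18+ for fetch).
const BASE = 'http://localhost:3000/signalk/v1/api';

async function getPosition(): Promise<{ latitude: number; longitude: number } | null> {
  const res = await fetch(`${BASE}/vessels/self/navigation/position`);
  if (!res.ok) return null;
  const body = await res.json();
  // SignalK wraps leaf values as { value, timestamp, $source, ... }
  return body.value ?? null;
}

// Every call returns the server's current snapshot, so there is no
// subscription state that can go stale between reads.
setInterval(async () => {
  const pos = await getPosition();
  if (pos) console.log(`lat ${pos.latitude}, lon ${pos.longitude}`);
}, 5000);
```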
David Godin · 2mo ago
Not sure how to use this yet, but I think it would be a good use case in KIP: call up a little chat interface and tell the AI what kind of widget you want, the type of path, settings, dashboard position, etc.
Tony (OP) · 2mo ago
It’s possible, but it would require the user to have an Anthropic API account and configure a key. I made a demo app that works with a local MCP and the Anthropic API. The app runs on Node.js and uses cron to send a prompt to the API on an interval. It’s a good use case for getting a system summary on a schedule, like twice per day, morning and evening.
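The demo app itself isn't linked here, but the pattern described is roughly the following sketch; the schedule, model id, and prompt are illustrative placeholders, not the demo's actual values:

```typescript
import Anthropic from '@anthropic-ai/sdk';
import cron from 'node-cron';

// Assumes ANTHROPIC_API_KEY is set in the environment.
const client = new Anthropic();

async function sendSystemSummaryPrompt(): Promise<void> {
  const msg = await client.messages.create({
    model: 'claude-sonnet-4-0', // placeholder model id
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Summarize the current state of the vessel systems.' },
    ],
  });
  console.log(msg.content);
}

// 07:00 and 19:00 daily — the "twice per day, morning and evening" cadence.
cron.schedule('0 7,19 * * *', sendSystemSummaryPrompt);
```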
David Godin · 2mo ago
So you need an internet account, or does it all run locally on the SK server? I was under the impression the MCP server ran on the SK box and that an MCP client, such as KIP, could prompt it.
Tony (OP) · 2mo ago
You need internet unless you know how to set up a local LLM; for now, just assume an internet connection. The MCP server would run on the same machine as the LLM client. You are misunderstanding the Model Context Protocol server implementation: KIP will never be an MCP client.
David Godin · 2mo ago
I read a bit. So if I get it right, to make this work without internet we would also need a local LLM running on the server:
1. KIP would probably need to have an MCP server to expose KIP tools, resources, etc.
2. KIP would need to have a prompt talking to the LLM, which in turn would query SK's and KIP's MCP servers to discover and operate.
Have I got this right?
Tony (OP) · 2mo ago
Think of the LLM client as whatever enables the user to communicate via textual prompts with the LLM. The client also has MCP servers configured so the LLM can communicate via tool calls (basically functions) with SignalK, InfluxDB, and any other context needed. If you want KIP to have a chat window for textual conversations with the LLM, then KIP would be the LLM client, and the MCP servers would be configured in KIP. My current client is Claude Desktop, and the MCP servers are configured there.
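For concreteness, wiring an MCP server into Claude Desktop is done in its claude_desktop_config.json file. A sketch of what the entry for this server might look like; the command, args, and env variable names shown are assumptions, so check the signalk-mcp-server README for the actual values:

```json
{
  "mcpServers": {
    "signalk": {
      "command": "npx",
      "args": ["-y", "signalk-mcp-server"],
      "env": {
        "SIGNALK_HOST": "localhost",
        "SIGNALK_PORT": "3000"
      }
    }
  }
}
```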
David Godin · 2mo ago
Could we run a local LLM, such as an OpenAI model, on the same SK RPi box, or are the resource constraints too demanding? I read somewhere that there are small/compressed LLM models that, if dedicated to a subject, could work well with reduced resources.
Tony (OP) · 2mo ago
Personally, I think it's better to assume the user is using a remote agent for now; local LLMs are going to hallucinate a lot more than remote agents. I get that for boats we want to do everything without internet, but that is slowly changing thanks to Starlink. Given that many boats have internet, I think it's better to limit the target users to those who have it rather than forcing an offline modality and limiting ourselves to less capable open-source LLMs. Simply put, local LLMs are pretty useless.
David Godin · 2mo ago
sad!
