What is the better approach for STT and TTS at the Mastra level with Voice on mobile?
We have a React Native CLI app (without Expo), where we send 5-10s audio chunks after the user taps the Voice mode button in the mobile app.
We are not going for STS (speech-to-speech).
Do we implement a WebSocket connection to Mastra (the backend), or do we have to go with the Node stream-only method?
Don't you think each chunk will have HTTP overhead?
Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10675
If you're experiencing an error, please provide a minimal reproducible example whenever possible to help us resolve it quickly.
Thank you for helping us improve Mastra!
Any update?
I would use WebSockets; you get a single persistent connection and lower latency.
If you're using HTTP/2, it could actually work without WebSockets.
You could probably start with HTTP, though, and see how it goes.
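For reference, here is a minimal sketch of the WebSocket route on the client side, assuming the app sends each recorded 5-10s chunk as base64 inside a JSON frame to a hypothetical `/voice` endpoint; the URL and message shape are illustrative, not part of Mastra's API.

```ts
// voiceSocket.ts — sketch of streaming recorded audio chunks over one
// persistent WebSocket from a React Native app. URL, JSON message shape,
// and base64 encoding are assumptions for illustration.

type TranscriptMessage = { type: 'transcript'; text: string };

export function openVoiceSocket(
  url: string,
  onTranscript: (text: string) => void,
) {
  // React Native ships a standard WebSocket implementation, no extra library needed.
  const ws = new WebSocket(url);

  ws.onmessage = (event) => {
    // Assume the backend pushes STT results back as JSON messages.
    const msg = JSON.parse(String(event.data)) as TranscriptMessage;
    if (msg.type === 'transcript') onTranscript(msg.text);
  };

  return {
    // Send each 5-10s chunk over the already-open connection, so there is
    // no per-chunk connection/TLS/header overhead like with plain HTTP/1.1.
    sendChunk(base64Audio: string) {
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify({ type: 'audio-chunk', data: base64Audio }));
      }
    },
    close: () => ws.close(),
  };
}
```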
You are great! I was fumbling and bumbling, and you just showed up and put me on the perfect path.
Thank you so much, Abhi!
Of course Sujay! anytime
In a few releases we will be shipping Server Adapters, which will allow you to attach the Mastra server to your own server instance.
That is going to let you control so much more, especially if you want to do WebSocket transport!!
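Until that lands, the general pattern looks something like the sketch below: a plain Node HTTP server with a WebSocket transport attached via the `ws` package. This is not the upcoming Mastra Server Adapters API, and `transcribeChunk` is a hypothetical helper standing in for whatever STT provider you wire up.

```ts
// server.ts — generic sketch of attaching a WebSocket transport to an
// existing Node HTTP server using the `ws` package. Endpoint path, message
// shape, and transcribeChunk are assumptions for illustration.
import { createServer } from 'node:http';
import { WebSocketServer } from 'ws';

const httpServer = createServer(); // your existing HTTP server (e.g. the one serving your API routes)
const wss = new WebSocketServer({ server: httpServer, path: '/voice' });

wss.on('connection', (socket) => {
  socket.on('message', async (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === 'audio-chunk') {
      // Hand the decoded chunk to your STT pipeline and push the result back
      // over the same connection.
      const text = await transcribeChunk(Buffer.from(msg.data, 'base64'));
      socket.send(JSON.stringify({ type: 'transcript', text }));
    }
  });
});

httpServer.listen(4111);

// Hypothetical STT call; replace with your voice provider's transcribe API.
async function transcribeChunk(_audio: Buffer): Promise<string> {
  return '...';
}
```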
Ohho, eagerly waiting for those!
Thank you Abhi and Team Mastra.
Mastra is my love.