How to implement text streaming (the same way it is done in ChatGPT)?
Use case: I am requesting a chat completion from an LLM (different ones, not only ChatGPT). I want to stream the response back as it arrives on the server, from the server to the client, word by word.
What is the easiest way to do it in Wasp?
Thanks!
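One common way to do this is Server-Sent Events (SSE): the server keeps the HTTP response open and flushes each token as the LLM produces it. Below is a minimal sketch of the server side, assuming a custom Express-style route (Wasp supports custom API endpoints via its `api` declaration). The handler name `getSseChat` and the `fakeLlmStream` generator are illustrative stand-ins, not Wasp or LLM-SDK APIs; in practice the generator would be replaced by your LLM client's streaming iterator.

```typescript
// Stand-in for the LLM client: any SDK that yields tokens as an async
// iterable fits this shape (hypothetical, for illustration only).
async function* fakeLlmStream(): AsyncGenerator<string> {
  for (const token of ["Hello", " ", "world", "!"]) {
    yield token;
  }
}

// Format one token as an SSE frame: "data: <payload>\n\n".
function sseFrame(token: string): string {
  return `data: ${JSON.stringify({ token })}\n\n`;
}

// Hypothetical handler for a custom server route. `res` is an
// Express-style response; the connection stays open while we write
// one SSE frame per token, then a [DONE] sentinel.
async function getSseChat(_req: any, res: any): Promise<void> {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  for await (const token of fakeLlmStream()) {
    res.write(sseFrame(token));
  }
  res.write("data: [DONE]\n\n");
  res.end();
}
```

On the client, the stream can be consumed with the browser's `EventSource` API or by reading the `fetch` response body incrementally, appending each decoded token to the displayed message.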