How to do markdown parsing and rendering in real time without performance drops for streaming text?
Well, just as the title says, I want to know the best way to parse and render streaming markdown in real time, without performance drops and without sacrificing good UI/UX. I know it is doable, because a lot of people are doing it, but I just haven't found a solid way to implement it in Nuxt or Vue.
I tried @nuxtjs/mdc and it works fine; it delivers top-notch UI/UX. However, I then noticed that with long prompts it hurts performance so badly that it eventually always ends in an entire page freeze, which is not optimal at all... I really need help figuring this out; anything would be much appreciated! I can provide code snippets immediately.
7 Replies
Hi @mk4hmed. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:
----------------------
I'm sorry, but the knowledge sources do not provide a specific, officially recommended approach for real-time, high-performance markdown parsing and rendering of streaming text in Nuxt or Vue. The available documentation focuses on static or pre-fetched markdown rendering using components like `<ContentRenderer>`, which are designed for rendering content queried from files, not for continuous, real-time updates or streaming scenarios (ContentRenderer docs).
For custom, real-time markdown rendering, some community solutions suggest using libraries like `markdown-it` with syntax highlighting via `highlight.js`, and rendering the result with `v-html` in a component. However, these approaches are not officially documented for streaming or high-frequency updates, and performance with very large or rapidly changing content is not addressed in the official Nuxt Content or Nuxt UI documentation (Stack Overflow example).
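For illustration only, a rough sketch of what that community approach typically looks like (this is not from the official documentation, and the exact setup may vary):

```ts
// Sketch of the community approach: markdown-it + highlight.js, rendered via v-html.
// Treat this as a starting point, not an officially documented pattern.
import MarkdownIt from 'markdown-it'
import hljs from 'highlight.js'

const md = new MarkdownIt({
  highlight(code, lang) {
    // Use highlight.js when the language is recognised; otherwise return an
    // empty string so markdown-it escapes the code block itself.
    if (lang && hljs.getLanguage(lang)) {
      try {
        return hljs.highlight(code, { language: lang }).value
      } catch {
        /* fall through to default escaping */
      }
    }
    return ''
  },
})

export function renderMarkdown(source: string): string {
  // Returns an HTML string suitable for binding with v-html in a component
  return md.render(source)
}
```

Note that binding the result with `v-html` bypasses Vue's escaping, so it should only be used with content you trust or that has been sanitized.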
If you are experiencing performance issues with `@nuxtjs/mdc` or similar solutions for long, streaming markdown, the knowledge sources do not offer alternative strategies or optimizations for this use case. You may need to explore custom debouncing, virtualization, or chunked rendering techniques, but these are not covered in the provided documentation.
In summary: There is no official guidance in the knowledge sources for real-time, streaming markdown rendering in Nuxt without performance drops. If you need further help, you may want to share your code and seek advice from the Nuxt community or explore performance profiling and optimization techniques outside the scope of the current documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
<@1364671189045350470> Kapa.ai is still learning and improving, please let me know how I did by reacting below.
Though kapa's response is good and aligns with my findings, this is still by no means solved; I really need more insight and help on this.
@mk4hmed how did you make out with this?
I just figured out that passing the prop "content" (the raw text output from the LLM via the Vercel AI SDK API) re-parses the entire thing every time the output updates (i.e. whenever new tokens are streamed in and received).
So I changed it to only parse and render new incoming tokens, and I used debouncing. I also had to include `noApiRoute: true` in the MDC config in nuxt.config.ts, because that was also causing a lot of performance overhead.
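For reference, this is roughly the relevant bit of my config (a sketch from memory; I'm assuming `noApiRoute` sits under the `highlight` key, which may differ between @nuxtjs/mdc versions, so double-check against the module docs):

```ts
// nuxt.config.ts (sketch; the option nesting is an assumption, not documented config)
export default defineNuxtConfig({
  modules: ['@nuxtjs/mdc'],
  mdc: {
    highlight: {
      // The languages I already had listed for local highlighting
      langs: ['ts', 'vue', 'json', 'bash'],
      // Skip the /api/_mdc/highlight server route, so no API call per code block
      noApiRoute: true,
    },
  },
})
```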
It was making API calls for every code block to highlight the syntax. I don't know why it was configured to do that when I already have a list of languages in the config, but it is what it is.
It's worth mentioning that once I enabled that option in the config, I started getting these warnings in the dev server console:
WARN [Vue Router warn]: No match found for location with path "/api/_mdc/highlight"
Interesting, would love to see your setup. I am using the AI SDK as well. I have some examples using the new v5 Vue Chat component. I'm not even sure it's documented yet, but maybe it would work better with MDC. I'd like to figure out a good way to stream markdown text with code fences and such.
Well, for parsing and rendering it's basically this, and `content` is the raw text output I'm getting from the Vercel AI SDK API.
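Roughly like this (a reconstructed sketch rather than my exact snippet; I'm assuming the auto-imported `<MDC>` component from @nuxtjs/mdc and `refDebounced` from VueUse, and it only shows the debouncing part, not the "only parse new tokens" logic):

```vue
<script setup lang="ts">
// Sketch: debounce the streamed markdown so the renderer isn't re-parsing on every token.
import { toRef } from 'vue'
import { refDebounced } from '@vueuse/core'

const props = defineProps<{ content: string }>()

// Re-render at most roughly every 150 ms instead of on every streamed chunk
const debouncedContent = refDebounced(toRef(props, 'content'), 150)
</script>

<template>
  <!-- <MDC> parses the markdown string passed as `value` and renders it -->
  <MDC :value="debouncedContent" tag="article" />
</template>
```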
I haven't used anything from v5, pretty sure; I only added one experimental feature, I believe, which is `smoothStreaming`.
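(Assuming that's the AI SDK's `smoothStream` transform, going from memory here.) On the server route it looks roughly like this; the model and handler shape are just placeholders, not my actual setup:

```ts
// server/api/chat.post.ts (sketch; assumes AI SDK v4 with the experimental smoothStream transform)
import { streamText, smoothStream } from 'ai'
import { openai } from '@ai-sdk/openai'

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event)

  const result = streamText({
    model: openai('gpt-4o-mini'), // placeholder model
    messages,
    // Re-chunks the stream (word by word here) so tokens arrive in smoother, more even batches
    experimental_transform: smoothStream({ chunking: 'word' }),
  })

  return result.toDataStreamResponse()
})
```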