Streaming Limits with Vercel
So I'm trying to stream a response from OpenAI, but the request times out at 10.01s on my Vercel deployment because of the runtime limit on free-tier serverless functions. My first thought was to move the endpoint to the Edge runtime, but that complicates things with auth. Any ideas on how I could make this work without upgrading to a paid plan? Thanks
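For context, here's a stripped-down sketch of the streaming part of my handler. I've swapped the OpenAI SDK call for a fake async generator so it runs standalone; `toStreamResponse` and `fakeModelStream` are just illustrative names, and in the real route the iterable is the SDK's `stream: true` completion:

```typescript
// Wrap an async iterable of text chunks in a ReadableStream so the
// client starts receiving bytes before the upstream finishes.
function toStreamResponse(chunks: AsyncIterable<string>): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      // Forward each chunk to the client as it arrives.
      for await (const chunk of chunks) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}

// Fake upstream standing in for the OpenAI streaming completion.
async function* fakeModelStream(): AsyncGenerator<string> {
  for (const piece of ["Hello", ", ", "world"]) {
    yield piece;
  }
}

// Drain the whole response, as a client eventually would.
toStreamResponse(fakeModelStream()).text().then((body) => {
  console.log(body); // "Hello, world"
});
```

The problem isn't the code, it's that the function itself gets killed mid-stream once the 10s cap hits.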
