How to reduce cold start times with a tRPC API?
My app was built with the T3 stack and for the most part it's great, but it has one problem that I haven't been able to solve or understand yet. Currently, my tRPC API has a ~10 s cold start time in my Vercel deployment. When the API isn't cold starting, requests take a second or two. This is ruining demos whenever I try to show people the app. Here are the things I know / have figured out:
- The cold start time only affects API requests; the UI never seems to have a problem loading quickly.
- Vercel's function logs show an Init Duration of > 7 s.
- Online I've seen people complaining about 1–2 second cold start times, so I feel like I must be doing something totally wrong.
- All of the routes listed in the `npm run build` output are red, and `/_app` has a First Load JS of 319 kB. I'm not sure whether the routes in `pages/api` use `/_app`, but I'm assuming they do because they also report a First Load JS of 319 kB.
- Vercel lists the memory used by the API request as ~320 MB, which seems high.
- I turned on the webpack bundle analyzer and am even more confused. (I'm looking at `nodejs.html` since that seems like the relevant one for API requests.)
- Files seem to have pretty random amounts of JS relative to their actual file size
- The 4th largest bundle in my app, and by far the largest 3rd-party dependency, is `next/router`.
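For context, this is roughly how I have the analyzer wired up; a minimal sketch of the standard `@next/bundle-analyzer` wrapper in `next.config.mjs` (the wrapped config object here is a placeholder, not my real config):

```typescript
// next.config.mjs — sketch of the bundle-analyzer setup; the wrapped
// config object below is a placeholder, not my actual config
import bundleAnalyzer from "@next/bundle-analyzer";

const withBundleAnalyzer = bundleAnalyzer({
  // only generate the report when explicitly requested
  enabled: process.env.ANALYZE === "true",
});

export default withBundleAnalyzer({
  reactStrictMode: true,
});
```

Running `ANALYZE=true npm run build` is what produces the `nodejs.html` and `edge.html` reports.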
I'll keep adding things I find out here I guess
I'm also using Upstash rate limiting, as per the video Theo made. The analyzer spit out an `edge.html` file that only seems to show things from `src/middleware.ts`. It lists this as 257 kB parsed.
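Since `edge.html` only shows middleware code, one thing I might try is scoping the middleware so it only runs on the routes that actually need rate limiting, using Next.js's `matcher` config. A sketch; the route pattern here is a guess, not my actual setup:

```typescript
// src/middleware.ts — sketch: restrict where the middleware executes.
// The matcher pattern is hypothetical; it should list only the routes
// that need the Upstash rate limiter.
export const config = {
  matcher: ["/api/trpc/:path*"],
};
```

(I'm not sure this helps with cold start per se, since it limits where the middleware runs rather than shrinking its bundle, but it seems worth testing.)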