Slow cold start times on Vercel with my T3 App API
I'm trying to debug some really bad cold start times with my web app's API. The UI loads quickly and I haven't had a problem with it. However, my tRPC API sometimes takes a full 10 seconds to return a response. The API performs fairly simple CRUD operations against a Cloud Firestore backend.
I'm not really sure how to debug this. I got the webpack analyzer working and the results are... confusing:
For starters, it seems like all of my pages are in the red in terms of 'First Load JS' size. However, again, the only routes I have problems with are the API ones. When I look at the block diagrams I'm even more confused.
I guess my biggest questions are:
1) How do I take the info from the webpack analyzer and actually act on it?
2) How do I know if my bundle sizes are too big?
3) Do layouts in _app.tsx add to the bundle sizes in pages/api?
4) How do I know if tree shaking isn't working?
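For reference, the usual way to wire the analyzer in is the official @next/bundle-analyzer wrapper. A minimal sketch, assuming a plain CommonJS next.config.js and that the package was installed as a dev dependency:

```javascript
// next.config.js — sketch using @next/bundle-analyzer
// Assumption: installed via `npm i -D @next/bundle-analyzer`
const withBundleAnalyzer = require("@next/bundle-analyzer")({
  // only generate the report when explicitly requested
  enabled: process.env.ANALYZE === "true",
});

module.exports = withBundleAnalyzer({
  reactStrictMode: true,
});
```

Then `ANALYZE=true next build` produces separate client and server reports, which helps answer question 3: anything showing up in the server report is shipping with your server-side code.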
8 Replies
[screenshot: analyzer output filtered for just pages/api/trpc/[trpc]]
I was curious about this and deployed a fresh T3 app to Vercel. The size of the serverless function for api/trpc/[trpc] at this point is 10.38 MB, which I assume is largely Prisma. However, after installing and loading a few packages in _app.tsx and redeploying, the size of the serverless function increased.
Does this mean that code in our layouts is increasing the size of our api routes?
Didn't get a clear answer from Google. ChatGPT says it's not included.
My app doesn't use Prisma, so I don't think it's that.
I do use Cloud Firestore though.
But either way, 10 MB for either of those libs sounds kind of nuts.
I sure hope the api directory doesn't load _app.tsx.
That would be a real bummer.
I also tested this with a clean Next.js app and the same is true: code imported in _app.tsx increased the function size of the example API route. I'd be curious whether someone with deeper knowledge of Next.js has any insight into this, and whether the increased function size due to layout code has any meaningful impact on cold start times.
It's very difficult to debug what's happening under the hood with these complicated build setups. My guess is that Vercel doesn't tree-shake packages for its serverless functions, so it's loading everything from node_modules for its lambdas.
Which is honestly depressing, but not surprising in the slightest...
This is my assumption as well, but it's very painful for anyone building APIs with heavier functions. It seems that any heavy library in your api folder, or even in your front-end layout, gets dumped into a shared bundle for API routes and impacts cold starts.
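One partial mitigation (it doesn't shrink the deployed bundle, but it defers eager evaluation at cold start): move heavy imports inside the handler with a dynamic import(), so the module is only loaded when the route actually runs. A minimal standalone sketch, with node:zlib standing in for a heavy dependency and the function name made up for illustration:

```typescript
// Sketch: defer a heavy dependency until the handler actually runs.
// node:zlib is just a stand-in for whatever large library you'd import;
// handleRequest is a hypothetical name, not from the thread.
export async function handleRequest(text: string): Promise<number> {
  // Loaded lazily at call time rather than at module evaluation,
  // so it doesn't add to the work done on a cold start.
  const zlib = await import("node:zlib");
  return zlib.gzipSync(Buffer.from(text)).length;
}
```

In a real tRPC procedure the same pattern applies: `await import(...)` inside the resolver instead of a top-level import.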
My personal solution has been to pull any heavy lambda workflows out into a separate project using the Serverless Framework, which handles code splitting and allows defining layers for your lambdas.
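For anyone unfamiliar with that setup, it looks roughly like this. A sketch only: the service, function, and path names are made up, and the key pieces are `package: individually` (each function gets its own bundle) plus a locally defined layer for shared heavy dependencies:

```yaml
# serverless.yml — illustrative sketch, names are hypothetical
service: heavy-workflows

provider:
  name: aws
  runtime: nodejs18.x

package:
  individually: true  # bundle each function separately, not one shared blob

layers:
  sharedDeps:
    path: layer  # directory holding nodejs/node_modules with the heavy deps

functions:
  generateReport:
    handler: src/report.handler
    layers:
      # locally defined layers are referenced via the generated
      # <TitleCasedName>LambdaLayer CloudFormation resource
      - { Ref: SharedDepsLambdaLayer }
```

This keeps the heavy code out of the Next.js API routes entirely, so their cold starts stay small.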