Is the backend of this stack a lambdalith (lambda monolith) when hosted serverless?

The [trpc].ts file is the only file in the /api route, but it is a dynamic catch-all, so it allows different endpoints such as api/trpc/service.get or api/trpc/service2.updated to be created. Even though the endpoints are different, I don't think a separate lambda function is created for each one; instead there is a single lambda with the entire backend application inside. I just wanted to confirm that this assumption is correct and that the default backend of t3 is a lambda monolith, as there is no documentation talking about this.

Bonus question: are there any concerns with this architecture? If the backend becomes huge and it all resides within a single lambda, it seems like there could be bigger issues with cold starts, and it might be better to split the code into multiple different lambdas. I'm not sure.
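For context, this is roughly what that catch-all file looks like in a fresh create-t3-app project (the exact import paths like ~/server/api/root depend on the template version, so treat this as a sketch):

```ts
// pages/api/trpc/[trpc].ts — the single catch-all API route
import { createNextApiHandler } from "@trpc/server/adapters/next";

import { appRouter } from "~/server/api/root";
import { createTRPCContext } from "~/server/api/trpc";

// Every procedure (api/trpc/service.get, api/trpc/service2.updated, ...)
// is resolved by this one handler, so the whole router is bundled into
// whatever function this file is deployed as.
export default createNextApiHandler({
  router: appRouter,
  createContext: createTRPCContext,
});
```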
4 Replies
Mike Pollard · 5mo ago
anybody know?
MadMax · 5mo ago
All of it is done in one lambda: the api pages, everything.
Mike Pollard · 5mo ago
Isn't that maybe problematic as the codebase grows? Bad cold starts? High traffic affecting the entire API and not just a single aspect of it? Wouldn't it be better to distribute different parts of the application into different lambdas? So maybe you would have a user service, a group service, a post service, and a comment service, for example in a social media app. Each could then scale independently but communicate with the others if needed. Thoughts? Something like the sketch below is what I have in mind.
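Just to illustrate (the router names and paths here are made up, not anything t3 generates), each sub-router could get its own catch-all file, so each file is built as its own function:

```ts
// pages/api/trpc/user/[trpc].ts — hypothetical split: user procedures only
import { createNextApiHandler } from "@trpc/server/adapters/next";

import { userRouter } from "~/server/api/routers/user";
import { createTRPCContext } from "~/server/api/trpc";

export default createNextApiHandler({
  router: userRouter,
  createContext: createTRPCContext,
});

// A second file like pages/api/trpc/post/[trpc].ts would do the same with a
// postRouter and be bundled separately. The tRPC client config would then
// need to point each group of procedures at its own URL.
```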
Alky · 5mo ago
It doesn't really make any difference. Having one or multiple API endpoints consumes the same lambda resource: the number of calls. In terms of processing power, one endpoint isn't going to interfere with another, by the nature of serverless functions. Every API call spins up (or reuses) an instance of the serverless function (lambda), and that behavior is the same whether you have one endpoint or many.

As for scaling independently, that would be useful if you want to give one group of endpoints more availability than the others. In that case, I guess you could just limit, or control in some other way, the number of calls that group of endpoints handles. As far as I know, the separation would mainly be useful for seeing the "details" in the AWS Lambda dashboard, but you can get roughly the same result by tracking the calls at /api/trpc/, since the path is different for each procedure, despite it being just one lambda.
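For example, a rough sketch of tracking or throttling per procedure path inside the one lambda, using a tRPC middleware (the in-memory counter and the "post." prefix check are purely illustrative; a real rate limit would need an external store like Redis, since lambda instances are ephemeral):

```ts
// server/api/trpc.ts (excerpt) — per-procedure tracking inside a single lambda
import { initTRPC, TRPCError } from "@trpc/server";

const t = initTRPC.create();

// Naive in-memory counter, only for illustration: it resets whenever a new
// lambda instance is created.
const callCounts = new Map<string, number>();

const trackCalls = t.middleware(async ({ path, next }) => {
  const count = (callCounts.get(path) ?? 0) + 1;
  callCounts.set(path, count);
  console.log(`trpc call #${count} for ${path}`); // e.g. "service.get"

  // Example: throttle one "group" of procedures more aggressively than others.
  if (path.startsWith("post.") && count > 100) {
    throw new TRPCError({ code: "TOO_MANY_REQUESTS" });
  }
  return next();
});

export const publicProcedure = t.procedure.use(trackCalls);
export const createTRPCRouter = t.router;
```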