getClaims seems quite slow
I just recently ran through the asymmetric JWT migration, following along with the video posted with the blog post. I'm pretty sure I have everything set up correctly, however I'm seeing a major difference in performance relative to the video. There it was showing sub-10ms times for the `getClaims` call, yet I'm seeing times that seem all over the place, but more often than not over 40-50ms (see attached screenshot; I also output the headers to show that the algorithm should be correct).
For reference, literally the code I have for fetching claims looks like this:
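(A simplified sketch of the standard supabase-js call; the client setup and env var names here are assumed, this isn't the exact original snippet:)

```ts
// Sketch: timing the standard supabase-js getClaims call.
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.SUPABASE_URL!,      // assumed env var names
  process.env.SUPABASE_ANON_KEY!
)

export async function fetchClaims(jwt: string) {
  const start = Date.now()
  const { data, error } = await supabase.auth.getClaims(jwt)
  console.log(`getClaims: ${Date.now() - start}ms`, data?.header) // header shows the signing algorithm
  if (error) throw error
  return data?.claims
}
```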
However, if I do it all manually (unclear if this is the right way to do it) I see great performance improvements:
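(A sketch of the manual path, assuming the jose library as suggested by the blog post; the JWKS URL shape is the standard Supabase one:)

```ts
// Sketch: manual verification with jose. createRemoteJWKSet caches the
// fetched keys, so repeat calls verify locally with no network round trip.
import { createRemoteJWKSet, jwtVerify } from 'jose'

const JWKS = createRemoteJWKSet(
  new URL(`${process.env.SUPABASE_URL}/auth/v1/.well-known/jwks.json`)
)

export async function getClaimsManually(jwt: string) {
  const { payload, protectedHeader } = await jwtVerify(jwt, JWKS)
  console.log('alg:', protectedHeader.alg) // should be the new asymmetric algorithm, e.g. ES256
  return payload
}
```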
This decoding method was mostly cribbed from the announcement blog post, so I'm unclear if I'm shortcutting some work here or not. But it also has me concerned that I'm missing a step somewhere and getClaims isn't working properly.

14 Replies
Maybe just some internet latency? Where are you based out of, and where is your Supabase instance hosted?
Then again, the second question is not a great question, because after the first request you should be hitting the Supabase global cache. Still, some internet latency could be affecting it.
If I have a chance here in a bit, I can test mine again. I think I was using `console.time()` and `console.timeEnd()` to measure mine; not sure what's best, but that was reflecting good times for me when I was originally testing.

I'm still seeing low latency, except for when the global cache TTL runs out.


The method of timing shouldn't matter; if anything, `Date.now()` should be faster than a high-precision timer like `console.time`/`performance.now()`, which should result in slightly better times.
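For reference, a tiny helper showing both approaches (a sketch; `timed` is just an illustrative name):

```ts
// Sketch: timing an async call both ways; either is plenty accurate
// at the 10-50ms scale being discussed here.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  console.time(label)       // high-resolution timer
  const t0 = Date.now()     // millisecond wall clock
  const result = await fn()
  console.timeEnd(label)
  console.log(`${label} (Date.now): ${Date.now() - t0}ms`)
  return result
}

// e.g. await timed('getClaims', () => supabase.auth.getClaims())
```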
All that said though, you are seeing vastly better/more consistent times than me, so that is quite concerning :thonk:
I guess one q -- do I have to fully revoke the old keys before I get these new optimizations? My brief perusal through the getClaims source code didn't seem to imply this.
And also, the headers clearly show that the JWTs being minted are the new type.
No, you don't.
What environment are you running this in?
It's a Next.js app, and I was looking at these logs on my local machine
(I ran both a dev build and a prod build on my machine).
Just wanted to make sure it wasn't some obscure thing. I know the "fast" version of getClaims also relies on access to `crypto.subtle`, and I'm not sure if there are popular environments with no access to that.

I'm using Bun for this locally, maybe that has something to do with it :thonk:
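For what it's worth, a quick way to sanity-check that the runtime exposes the Web Crypto primitives (just a sketch, not something I actually ran):

```ts
// Sketch: confirm the runtime exposes crypto.subtle, which the fast
// local-verification path needs for asymmetric signature checks.
const subtle = globalThis.crypto?.subtle
console.log('crypto.subtle available:', Boolean(subtle))

if (subtle) {
  // Exercise the asymmetric primitive getClaims would rely on.
  const keyPair = await subtle.generateKey(
    { name: 'ECDSA', namedCurve: 'P-256' },
    false,
    ['sign', 'verify']
  )
  console.log('ECDSA P-256 keygen OK:', Boolean(keyPair))
}
```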
No, I use bun as well
I'm not sure what else would be going on in this case.
all good, def appreciate the input tho, and seeing your numbers shows that something is def amiss
@amadeus are you only running this server side?
@David this is the getClaims slowness post I was referencing.
yup!
Hey @amadeus, can you tell me about the initial code: is it being run on the frontend or the backend? Where do you see those latencies? Because on the frontend they most definitely should not happen. On the backend, that's a different story: which provider do you use?
OK, so I missed one response: you run this server-side, clear!
Now, tell me about WHERE you see this: with Bun locally?
At which point in your code is the Supabase client instantiated? Can you share that?
Hey! Thanks for responding, so a couple things.

These logs and such that I shared were all just from running the Next.js server locally via Bun (`bun run dev`). I didn't do any sort of monitoring with the code in production.

I have a file with my methods for getClaims; you can see a copy of it here: https://gist.github.com/amadeus/ce02e1b43090bfad59e4eed80cc4b0de (it exports two methods: one that just uses Supabase auth (for client-side checks), and the other that does it manually (for server-side stuff)). Right now in prod on the server I'm doing it manually, but when I set up my timing logs, I was using these two functions as they were.
Here's an example server-rendered Next.js page that uses these methods: https://gist.github.com/amadeus/f96c709bc3f6bb1e514258be20d94fe8
And of course, here's what my `createServerClient` looks like: https://gist.github.com/amadeus/e45c15043595976197cc980008944e37
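(For context, it's roughly the standard @supabase/ssr pattern; a simplified sketch assuming Next.js 15's async cookies API, not my exact file:)

```ts
// Sketch: typical @supabase/ssr server client for Next.js.
import { createServerClient } from '@supabase/ssr'
import { cookies } from 'next/headers'

export async function createClient() {
  const cookieStore = await cookies()
  return createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll() {
          return cookieStore.getAll()
        },
        setAll(cookiesToSet) {
          try {
            cookiesToSet.forEach(({ name, value, options }) =>
              cookieStore.set(name, value, options)
            )
          } catch {
            // setAll can throw when called from a Server Component;
            // middleware refreshing the session covers that case.
          }
        },
      },
    }
  )
}
```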
So, generally, especially when using Bun locally, `auth-js` (used by `supabase-js`) does the exact same thing as what you're doing with your `JWT_CACHE`, but with a TTL of 10 minutes (hardcoded). So if you see a new request every 10 minutes, that's expected and correct.
If, however, you saw requests more often with Bun running, then it MUST have been something else, e.g. a hot reload that invalidated the cache. If it was a compiled build, this really should not happen more often than every 10 minutes.
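Conceptually, the internal cache behaves something like this (illustrative sketch, not the actual auth-js code):

```ts
// Illustrative sketch of a module-level, TTL-bound JWKS cache like the
// one auth-js keeps internally (not the real implementation).
const TTL_MS = 10 * 60 * 1000 // 10 minutes, hardcoded

let cachedJwks: unknown | null = null
let cachedAt = 0

async function getJwks(jwksUrl: string): Promise<unknown> {
  const now = Date.now()
  if (cachedJwks && now - cachedAt < TTL_MS) {
    return cachedJwks // in-memory hit: no network, fast verification
  }
  const res = await fetch(jwksUrl)
  cachedJwks = await res.json()
  cachedAt = now
  return cachedJwks
}
```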
On a Vercel deployment this can again be different, since Vercel can deploy new functions with new caches (which is the same problem with your manual solution as well; I'm publishing a video soon showing how to fix it).