getClaims seems quite slow

I just recently ran through the asymmetric JWT migration, following along with the video posted in the blog post. I'm pretty sure I have everything set up correctly, but I'm seeing a major difference in performance relative to the video: there it showed sub-10 ms times for the getClaims call, yet my times seem all over the place and are, more often than not, over 40-50 ms (see attached screenshot; I also output the headers to show that the algorithm should be correct). For reference, the code I have for fetching claims looks like this:
```typescript
export async function getClientSessionClaims(supabase: SupabaseClient<Database>): Promise<SessionUserResponse> {
  try {
    const start = Date.now();
    const {error, data} = await supabase.auth.getClaims();
    console.log('ZZZZZ - getClaims time', Date.now() - start, data?.header);
    return {claims: data?.claims ?? null, error};
  } catch (e: any) {
    return {claims: null, error: e};
  }
}
```
However, if I do it all manually (unclear if this is the right way to do it), I see great performance improvements:
```typescript
const SUPABASE_JWT_ISSUER = `${process.env.NEXT_PUBLIC_SUPABASE_URL}/auth/v1`;
const SUPABASE_JWT_KEYS = createRemoteJWKSet(new URL(SUPABASE_JWT_ISSUER + '/.well-known/jwks.json'));

export async function getJWT(accessToken: string) {
  const start = Date.now();
  const result = await jwtVerify(accessToken, SUPABASE_JWT_KEYS, {issuer: SUPABASE_JWT_ISSUER});
  console.log('ZZZZZ - manual jwt time', Date.now() - start, result.protectedHeader);
  return result.payload;
}
```
This decoding method was mostly cribbed from the announcement blog post, so I'm unclear whether I'm shortcutting some work here. It also has me concerned that I'm missing a step somewhere and getClaims isn't working properly.
14 Replies
j4 (3w ago)
Maybe just some internet latency? Where are you based, and where is your Supabase instance hosted? Then again, the second question isn't a great one, because after the first request you should be hitting the Supabase global cache. Still, some internet latency could be affecting it. If I have a chance in a bit, I can test mine again. I think I was using console.time() and console.timeEnd() to measure mine; not sure what's best, but that was reflecting good times for me when I was originally testing.
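For reference, a minimal sketch of the two timing approaches mentioned above (console.time/timeEnd vs. Date.now). `fakeWork` is a hypothetical stand-in for a call like `supabase.auth.getClaims()`; both timers should agree to within a millisecond or so:

```typescript
// Stand-in for an async call being measured (assumption: any promise works).
async function fakeWork(): Promise<string> {
  return new Promise<string>((resolve) => setTimeout(() => resolve('done'), 5));
}

async function measure(): Promise<number> {
  console.time('getClaims');    // starts a labeled high-resolution timer
  const start = Date.now();     // coarse millisecond timestamp
  await fakeWork();
  console.timeEnd('getClaims'); // prints the elapsed time for the label
  return Date.now() - start;    // the Date.now-based figure, for comparison
}
```

Either method is precise enough to distinguish a sub-10 ms local verification from a 40-50 ms network round trip, so the discrepancy in the thread is unlikely to be a measurement artifact.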
j4 (3w ago)
I'm still seeing low latency, except for when the global cache TTL runs out.
amadeus (OP, 3w ago)
The method of timing shouldn't matter; if anything, Date.now should be faster than a high-precision timer like console.time/performance.now, which should result in slightly better times. All that said, you are seeing vastly better and more consistent times than me, which is quite concerning :thonk: One question: do I have to fully revoke the old keys before I get these new optimizations? My brief perusal of the getClaims source code didn't seem to imply this, and the headers clearly show that the JWTs being minted are the new type.
j4 (3w ago)
No, you don't. What environment are you running this in?
amadeus (OP, 3w ago)
It's a Next.js app, and I was looking at these logs on my local machine (I ran both a dev build and a prod build there).
j4 (3w ago)
Just wanted to make sure it wasn't some obscure thing. I know the "fast" version of getClaims also relies on access to crypto.subtle, and I'm not sure whether there are popular environments without access to it.
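As a quick sanity check for the point above, a one-liner probe (my own sketch, not something from the SDK) can confirm whether the runtime exposes Web Crypto's SubtleCrypto, which local signature verification would need:

```typescript
// Probe for SubtleCrypto availability. If this logs false, a library
// could not verify JWT signatures locally and would need some fallback.
const hasSubtle = typeof globalThis.crypto?.subtle?.verify === 'function';
console.log('crypto.subtle available:', hasSubtle);
```

Modern Node.js, Bun, Deno, and browsers all expose `globalThis.crypto.subtle`, so this is unlikely to be the culprit in a local Bun setup, but it rules the environment out cheaply.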
amadeus (OP, 3w ago)
I am using Bun for this locally; maybe that has something to do with it :thonk:
j4 (3w ago)
No, I use Bun as well. I'm not sure what else would be going on in this case.
amadeus (OP, 3w ago)
All good, I definitely appreciate the input though. Seeing your numbers shows that something is definitely amiss.
j4 (2w ago)
@amadeus, are you only running this server-side? @David, this is the getClaims slowness post I was referencing.
amadeus (OP, 2w ago)
Yup!
David (2w ago)
Hey @amadeus, can you tell me whether the initial code is being run on the frontend or the backend? Where do you see those latencies? On the frontend they should not happen, most definitely not. On the backend, that's a different story: which provider do you use? OK, so I missed one response; you run this server-side, clear! Now, tell me where exactly you see this: with Bun locally? At which point in your code is the Supabase client instantiated? Can you share that?
amadeus (OP, 2w ago)
Hey! Thanks for responding. A couple of things: the logs I shared were all from running the Next.js server locally via Bun (bun run dev); I didn't do any monitoring with the code in production. I have a file with my methods for getClaims; you can see a copy of it here: https://gist.github.com/amadeus/ce02e1b43090bfad59e4eed80cc4b0de (it exports two methods: one that just uses Supabase auth, for client-side checks, and one that does it manually, for server-side use). Right now in prod on the server I'm doing it manually, but when I set up my timing logs I was using these two functions as they were. Here's an example server-rendered Next.js page that uses these methods: https://gist.github.com/amadeus/f96c709bc3f6bb1e514258be20d94fe8 And of course, here's what my createServerClient looks like: https://gist.github.com/amadeus/e45c15043595976197cc980008944e37
David (2w ago)
So, generally, and especially when using Bun locally, auth-js (used by supabase-js) does exactly what you're doing with your JWT_CACHE, but with a TTL of 10 minutes (hardcoded). If you see a new request every 10 minutes, that's expected and correct. If, however, you saw requests more often while running under Bun, then it must have been something else, e.g. a hot reload that invalidated the cache. If it was a compiled build, this should really not happen more often than every 10 minutes. On a Vercel deployment this can again be different, as Vercel can deploy new functions and new caches (which is the same problem with your manual solution as well; I'm soon publishing a video showing how to fix it).
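To make the caching behavior David describes concrete, here is a sketch of a module-level key cache with a 10-minute TTL. The 10-minute figure comes from his reply; the cache shape and the `getJwks` helper are illustrative assumptions, not auth-js's actual implementation:

```typescript
// Illustrative TTL'd cache for a JWKS (or any fetched value).
// A cache miss or an expired entry triggers one network fetch;
// hits within the TTL return the stored value with no request.
const JWKS_TTL_MS = 10 * 60 * 1000; // 10 minutes, per David's reply

interface CachedValue<T> {
  value: T;
  fetchedAt: number;
}

let cache: CachedValue<unknown> | null = null;

async function getJwks<T>(fetcher: () => Promise<T>): Promise<T> {
  const now = Date.now();
  if (cache && now - cache.fetchedAt < JWKS_TTL_MS) {
    return cache.value as T; // cache hit: no network round trip
  }
  const value = await fetcher(); // miss or expired: refetch the keys
  cache = { value, fetchedAt: now };
  return value;
}
```

The key implication for the thread: because the cache lives at module scope, anything that re-evaluates the module (a dev-server hot reload, or a serverless platform spinning up a fresh function instance) resets it and forces a refetch, which would explain intermittent 40-50 ms spikes.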
