debugging `canceling statement due to statement timeout` error
I have an RPC that has started returning
canceling statement due to statement timeout
errors. when I run the exact same function in the SQL editor using EXPLAIN ANALYZE, it executes in ~4s. it is a slow query, but I'm curious whether there are other suggestions for figuring out why it would be timing out. I have not modified statement_timeout
(and when I run select setting from pg_settings where name = 'statement_timeout'
it appears to be 120s, though I'm not sure whether client queries use a different setting).
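for context on where that 120s could be overridden: the API roles can carry their own statement_timeout that takes precedence over the database-wide setting. a quick way to check from the SQL editor (a sketch, assuming the standard Supabase role names):

```sql
-- role-level settings (if any) override the database default from pg_settings
select rolname, rolconfig
from pg_roles
where rolname in ('anon', 'authenticated', 'service_role');
```

a non-null rolconfig entry like `{statement_timeout=8s}` means requests made under that role get that timeout, regardless of what pg_settings shows for your SQL editor session.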
[editing this because I was mis-reading the memory usage, though the initial question remains]
The default timeout is 3 seconds for API access.....
https://supabase.com/docs/guides/database/timeouts
ah, it is 8 seconds for authenticated users – and this is an authenticated call.
but it could be cutting it too close regardless, so I will look into that a bit more closely. 120s seemed way too far off to make sense.
If you have RLS, that could add a lot over what you get with SQL testing. There is now the ability to run explain with the API.
didn't realize we could run explain via API, that's a huge help!! thanks very much for pointing that out.
I'm not sure it is in the docs yet. It was added very recently...
https://github.com/supabase/postgrest-js/blob/master/src/PostgrestTransformBuilder.ts#L175
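for reference, the explain() transform in supabase-js looks roughly like this (a sketch; the client setup and `my_rpc` are placeholders, and it only returns a plan once plan output is enabled server-side):

```typescript
import { createClient } from '@supabase/supabase-js'

// Placeholder URL and key; requires db_plan_enabled on the PostgREST side.
const supabase = createClient('https://YOUR_PROJECT.supabase.co', 'YOUR_ANON_KEY')

// explain() replaces the normal response with the query plan;
// analyze: true actually executes the query while planning it.
const { data, error } = await supabase
  .rpc('my_rpc', { some_arg: 123 })
  .explain({ analyze: true })

console.log(error ?? data)
```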
this blog post (linked below) on the explain command was very useful, with examples. in following those, I receive the error: "None of these media types are available: application/vnd.pgrst.plan"
the blog post also mentions, regarding the explain command:
"This is only suitable for development environments and is only enabled when the db-plan-enabled config is set to true."
it doesn't seem that the db-plan-enabled config is exposed in the UI anywhere. I've attempted to follow the PostgREST instructions, which led to me running
SET PGRST_DB_PLAN_ENABLED to true;
in the SQL editor, unsuccessfully. if there is an obvious way to turn that on, I would be very grateful for any tips.
https://supabase.com/blog/postgrest-v10#:~:text=This%20is%20only%20suitable%20for%20development%20environments%20and%20is%20only%20enabled%20when%20the%20db%2Dplan%2Denabled%20config%20is%20set%20to%20true.
(and the instructions that I attempted to follow to enable db-plan-enabled: https://postgrest.org/en/v10.0/configuration.html#environment-variables)
I'm sorry, I've not had a chance to test yet. That could be why it is not in the official docs yet. Not sure.
will contact support and see if they can enable the config for me at least – thank you for the help!
I just verified I get the same error.
You can simulate RLS in the SQL editor if you want to see the impact...
https://github.com/supabase/supabase/discussions/9311
Shows one way of doing it with explain.
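the approach in that discussion boils down to impersonating the API role inside a transaction before running explain. a sketch (the table name and UUID are placeholders, and the exact claims GUC depends on your PostgREST version):

```sql
begin;
-- run as the same role PostgREST uses for authenticated requests
set local role authenticated;
-- fake the JWT claims that auth.uid() / auth.role() read
select set_config('request.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000000", "role": "authenticated"}',
  true);
-- the plan now includes RLS overhead
explain analyze select * from my_table;  -- hypothetical table
rollback;
```

the rollback discards the role/claims changes, so nothing leaks into later SQL editor sessions.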
wow – that investigation is super helpful. our RLS isn't complicated (all tables in the query just check if the user is authenticated), so I didn't think to check the RLS for efficiency and assumed it was the joins/complexity of the query. will do some more testing to see if the RLS is the root issue using those steps – thank you!
I'm also curious whether any throttling happens at ~95% usage (noting the comments here: https://supabase.com/docs/guides/platform/database-usage#database-storage-management), so I have emailed support and will see if that has any bearing here as well.
alter database postgres set pgrst.db_plan_enabled to true;
With a reset of the database (and I also tried a restart of the instance), that does set the flag... But I still get the same error. I'm pretty sure my instance is on PostgREST 10 because geoJSON works.
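for completeness: PostgREST reads these `pgrst.*` settings from its in-database config, and it can be told to reload them without restarting the whole instance (a sketch; the `pgrst` channel is the one PostgREST listens on for config reloads):

```sql
-- enable plan output for the API (as above)
alter database postgres set pgrst.db_plan_enabled to true;
-- ask PostgREST to re-read its in-database configuration
notify pgrst, 'reload config';
```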
@abaum
OK, I got hold of the PostgREST/Supabase employee who would know...
Works!
thanks @garyaustin! I ran that, and I believe it set db_plan_enabled, though I don't know exactly how to confirm that or how to read the PostgREST config.
when you say it works, were you able to run the explain() command successfully, or did you just enable the setting? I'm still seeing an error in both curl and supabase-js: "None of these media types are available: application/vnd.pgrst.plan"
Yes, I got back explain results instead of the header error. I have paused and resumed the instance recently, though, which might be needed to upgrade PostgREST. Not sure.

I spoke too soon! I just got the curl command to work, so it may have been the pause/resume issue. appreciate the tip!
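for anyone finding this later, the working curl call looks roughly like this (the project URL, keys, and `my_rpc` are placeholders; the Accept header is what requests the plan instead of the data):

```shell
curl "https://YOUR_PROJECT.supabase.co/rest/v1/rpc/my_rpc" \
  -X POST \
  -H "apikey: YOUR_ANON_KEY" \
  -H "Authorization: Bearer YOUR_USER_JWT" \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.pgrst.plan" \
  -d '{}'
```

using an authenticated user's JWT in the Authorization header is what lets you compare the plan under RLS against the service_role plan.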
also – WOW, the results are extremely different when executing with the service_role token vs. an authenticated user. the exact same query takes 3058.813ms with service_role vs. an actual 7452.908ms in the client. all tables are public and the RLS just checks if the user is authenticated, nothing complex on reads. that's pretty huge for us – again, really appreciate the suggestion!
It is key to filter, where possible, without functions running on every row. Adding indexes on columns compared against a function is also critical. uid() and is_admin()-type functions are low overhead, but killers if run against a non-indexed column with a lot of rows.
there's definitely some optimization we can probably do with filtering, but our RLS policies were generated via the UI back in ~March of this year, so by default they were all on public, using
(role() = 'authenticated'::text)
we switched them to apply to authenticated instead, and just return true,
and it brought the client query time from 7391.312ms to 3083.451ms in our test database.
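the policy change described above, sketched in SQL (the table and policy names are hypothetical):

```sql
-- before (UI-generated): evaluates auth.role() for every candidate row
-- create policy "Enable read access" on my_table
--   for select using (auth.role() = 'authenticated'::text);

-- after: scope the policy to the authenticated role, then return true
drop policy if exists "Enable read access" on my_table;
create policy "Enable read access" on my_table
  for select
  to authenticated
  using (true);
```

with `to authenticated`, Postgres skips the policy entirely for other roles, and the using clause no longer calls a function per row.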
needless to say, we're extremely excited to ship that to production.
Role() has the same overhead as uid(). That was part of why they went to using the native Postgres way.