Canceling statement due to statement timeout

I am experiencing an issue where a query to a table is timing out, and I am not sure why. The query is:
const { data: cameras, error } = await supabase
  .from("cameras")
  .select(
    "id,location,device_name,ip_address,physical_status,online_status,last_status_change"
  );
However, when I navigate to my page /reports/dynamic/cameras, the request fails. Here are my terminal logs showing the error and request times:
cameras route time: 6794
cameras route time: 2025
cameras route time: 10341
Error fetching cameras from Supabase: {
code: '57014',
details: null,
hint: null,
message: 'canceling statement due to statement timeout'
}
cameras route time: 7912
cameras route time: 179
cameras route time: 191
cameras route time: 8757
cameras route time: 134
cameras route time: 181
cameras route time: 154
The table only has 433 rows, so the query should be pretty fast. What could the issue possibly be? I have indexes on the table and simple RLS policies, and when running
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, location, device_name, ip_address, physical_status, online_status, last_status_change
FROM public.cameras;
I get the following:
[
{
"QUERY PLAN": "Seq Scan on cameras (cost=0.00..21.33 rows=433 width=83) (actual time=0.594..2.797 rows=433 loops=1)"
},
{
"QUERY PLAN": " Buffers: shared read=17"
},
{
"QUERY PLAN": "Planning:"
},
{
"QUERY PLAN": " Buffers: shared hit=136689 read=98117"
},
{
"QUERY PLAN": "Planning Time: 8175.206 ms"
},
{
"QUERY PLAN": "Execution Time: 4.384 ms"
}
]
A large amount of time is spent in planning. Any thoughts on what may be causing the issue?
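For context, error code 57014 means Postgres canceled the query after it exceeded statement_timeout. The limit in effect, and any per-role overrides, can be checked with a query like the following (a sketch; the role names are Supabase's defaults and may differ in your project):

```sql
-- Show the statement timeout for the current session.
SHOW statement_timeout;

-- Show any per-role overrides (anon/authenticated/authenticator are
-- Supabase's default API roles; adjust to your setup).
SELECT rolname, rolconfig
FROM pg_roles
WHERE rolname IN ('anon', 'authenticated', 'authenticator');

-- If needed, the timeout can be raised per role (the '30s' value here
-- is just an illustrative assumption):
-- ALTER ROLE authenticated SET statement_timeout = '30s';
```

Raising the timeout only masks the symptom, though; the 8-second planning time is the real anomaly.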
2 Replies
garyaustin · 3w ago
Does this table get lots of updates? With only ~500 rows I doubt it is RLS, and I would expect execution time to be the reason. This is all Postgres, so you should also search the web for high planning time in Postgres. I do see dead tuples mentioned as a cause with high numbers of updates: Postgres adds a new row version for each update, and the old versions still take up table space until a vacuum is done. Have you tried the query in the SQL editor?
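A quick way to check for dead tuples along those lines (a sketch using the standard pg_stat_user_tables view):

```sql
-- Compare live vs dead tuples and see when the table was last vacuumed.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'cameras';

-- If n_dead_tup is large, reclaim space and refresh planner statistics:
VACUUM (ANALYZE) public.cameras;
```

If the table is updated every few minutes, autovacuum may be falling behind, which can bloat both the table and the statistics the planner reads.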
Hector (OP) · 3w ago
Yes, that's how I see the detailed information. The table does receive updates every 5 minutes or so. Thanks for the lead, I'm going to do a bit more research.