Message: Organization exceeds quota.

Hello, I've received a message saying: "Organization exceeds quota. You are given a grace period until 23 Sep, 2025." Does anyone know how to manage this? I can't seem to find how or where I've exceeded the quota, as I am only trying out what I create and I've deleted almost everything. Any suggestions on how to approach this? Thanks!
52 Replies
garyaustin · 5w ago
Check your usage under settings for specific warnings.
estefanie.martens
I see this but then...
(screenshot attached)
garyaustin · 5w ago
That is pretty bad. You are way over.
estefanie.martens
not sure how to get the details of the cronjob
garyaustin · 5w ago
You need to have a cron task cleaning it up regularly. https://github.com/citusdata/pg_cron?tab=readme-ov-file#viewing-job-run-details You must have cron jobs running quite frequently.
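For context, queries roughly like these should show what is scheduled and how many run records have piled up per job; the table and column names follow the pg_cron docs, so adjust if your version differs:
-- What jobs exist and on what schedule
SELECT jobid, schedule, command FROM cron.job;
-- How many run records each job has accumulated
SELECT jobid, count(*) AS runs, min(start_time) AS oldest, max(start_time) AS newest
FROM cron.job_run_details
GROUP BY jobid
ORDER BY runs DESC;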
estefanie.martens
Mmm, but it's weird because the tables don't really have that much stuff; there isn't even a gigabyte in total in the tables. OK, running an SQL query to check the jobs.
garyaustin · 5w ago
Your cron details table is way too big. The link shows a cron task to run to clean it up in the future. You could run the SQL in the SQL editor to clean up now, but then you will need to vacuum. https://supabase.com/docs/guides/platform/database-size
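For context, a catalog query along these lines (standard Postgres, nothing Supabase-specific) should confirm which tables are actually holding the space:
-- Ten largest tables across all schemas, by total size including indexes and TOAST
SELECT n.nspname AS schema, c.relname AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;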
estefanie.martens
This is what I get
(screenshot attached)
estefanie.martens
Any ideas what I can do?
garyaustin · 5w ago
What are you running and where? There is a 60-second limit for the SQL editor, it seems... https://supabase.com/docs/guides/database/postgres/timeouts
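If a long-running cleanup is what hits that limit, standard Postgres lets you raise the statement timeout for the current session; whether the hosted SQL editor honours it depends on how it wraps statements, so treat this as something to try rather than a guarantee:
-- Check the current limit, then raise it for this session only
SHOW statement_timeout;
SET statement_timeout = '30min';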
estefanie.martens
I have no clue. I don't program, so I don't understand this language at all. Last night a friend of mine was helping me out. I'm trying to figure this out as the deadline is today.
garyaustin · 5w ago
What do you do when you get that error?
estefanie.martens
That's from last night. I don't know what I need to do to move forward.
garyaustin · 5w ago
And I don't know what you did that caused the error.
estefanie.martens
I think she ran one of these
(screenshot attached)
garyaustin · 5w ago
So in the SQL editor try just running this:
DELETE FROM cron.job_run_details WHERE end_time < now() - interval '7 days';
estefanie.martens
Yes, that's what I get from that query
(screenshot attached)
garyaustin · 5w ago
I'm getting this from an AI but it looks correct....
DO $$
DECLARE
  rows_deleted integer;
BEGIN
  LOOP
    DELETE FROM cron.job_run_details WHERE end_time < now() - interval '7 days' LIMIT 10000;
    GET DIAGNOSTICS rows_deleted = ROW_COUNT;
    IF rows_deleted = 0 THEN
      EXIT;
    END IF;
    COMMIT; -- Commit after each batch
    PERFORM pg_sleep(0.1); -- Optional: Add a small delay
  END LOOP;
END;
$$;
But it could still time out in the SQL editor, not sure.
estefanie.martens
should I run that?
garyaustin · 5w ago
I would try it on my system if I had the issue. I've never had a timeout deleting rows from a table, but I don't have any tables this big.
estefanie.martens
This is what I got
(screenshot attached)
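As an aside, the first block is likely to error out regardless of table size: plain PostgreSQL does not allow LIMIT directly on a DELETE, and COMMIT inside a DO block only works when the block is not already wrapped in a transaction. A batched variant that sticks to standard syntax (with an arbitrary 10000-row batch size, still running as a single transaction) could look like this:
DO $$
DECLARE
  rows_deleted integer;
BEGIN
  LOOP
    -- Delete one batch of old run records, selected by physical row id
    DELETE FROM cron.job_run_details
    WHERE ctid IN (
      SELECT ctid FROM cron.job_run_details
      WHERE end_time < now() - interval '7 days'
      LIMIT 10000
    );
    GET DIAGNOSTICS rows_deleted = ROW_COUNT;
    EXIT WHEN rows_deleted = 0;
  END LOOP;
END;
$$;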
garyaustin · 5w ago
The AIs disagree on whether the one above should work or not.
DO $$
BEGIN
  LOOP
    WITH deleted AS (
      SELECT job_pid
      FROM cron.job_run_details
      WHERE end_time < NOW() - INTERVAL '7 days'
      LIMIT 10000
      FOR UPDATE SKIP LOCKED
    )
    DELETE FROM cron.job_run_details
    WHERE job_pid IN (SELECT job_pid FROM deleted);

    IF NOT FOUND THEN
      EXIT; -- Exit the loop when no more rows are deleted
    END IF;
    COMMIT;
  END LOOP;
END $$;
This one from another AI runs in my SQL editor, but I don't have many cron rows so not sure how it will do with yours... It did delete rows for me.
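One caveat worth flagging on that block: it keys the delete on job_pid, which is the backend process id and is not guaranteed unique across runs, whereas per the pg_cron docs runid identifies each run record. A variant keyed on runid (same 7-day cutoff, arbitrary batch size) would be:
DELETE FROM cron.job_run_details
WHERE runid IN (
  SELECT runid FROM cron.job_run_details
  WHERE end_time < now() - interval '7 days'
  LIMIT 10000
);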
estefanie.martens
I think I got the same answer... I'm trying it again
garyaustin · 5w ago
Another way, much simpler if that does not work...
Truncate table cron.job_run_details;
This will delete all rows, but unless you are debugging issues there is almost no reason to use the details table results. After either approach you will still need to vacuum, and then add the cron task to run the original query every day or so to keep this from happening again.
estefanie.martens
This is what I got now
(screenshot attached)
estefanie.martens
how do I vacuum?
garyaustin · 5w ago
So this is the warning on this...
vacuum full cron.job_run_details;
It could cause running cron jobs to fail if it takes too long. You could also try running the truncate I showed first. It will delete everything but supposedly also cleans up the space. Not sure though whether it will do that cleanup if the rows are already deleted.
(screenshot attached)
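For what it's worth, VACUUM FULL rewrites the whole table under an exclusive lock, which is why it can make running cron jobs fail, while TRUNCATE drops the data and returns the space immediately. A plain VACUUM is the lighter option after a DELETE, but it generally only marks dead rows as reusable rather than shrinking the file on disk:
-- Lighter option after a DELETE: no exclusive rewrite, but usually does not shrink the file
VACUUM (VERBOSE, ANALYZE) cron.job_run_details;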
estefanie.martens
I ran the truncate table and I got this in return
(screenshot attached)
garyaustin · 5w ago
Check the large objects report again.
estefanie.martens
which one is that?
garyaustin · 5w ago
I don't know; you showed it in one of your very first posts.
garyaustin · 5w ago
Appears to be from here:
(screenshot attached)
estefanie.martens
Oh yes
(screenshot attached)
estefanie.martens
This one. It's almost the same.
garyaustin · 5w ago
You want the report that showed the large objects.
garyaustin · 5w ago
(screenshot attached)
estefanie.martens
This one?
(screenshot attached)
garyaustin · 5w ago
It is gone, if that is the same project.
estefanie.martens
I don't see the cron anymore
garyaustin · 5w ago
There are reports for each project.
estefanie.martens
I only have these 2. Are those what you mean?
(screenshot attached)
garyaustin · 5w ago
Right. Looks like it has been reduced.
estefanie.martens
This has changed too
(screenshot attached)
garyaustin · 5w ago
Now you need to add a cron task to keep it from happening again. https://github.com/citusdata/pg_cron?tab=readme-ov-file#viewing-job-run-details
-- Delete old cron.job_run_details records of the current user every day at noon
SELECT cron.schedule('delete-job-run-details', '0 12 * * *', $$DELETE FROM cron.job_run_details WHERE end_time < now() - interval '7 days'$$);
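If it helps to double-check, the registration can be confirmed from the SQL editor; the jobname column and name-based unschedule assume a reasonably recent pg_cron version:
-- Confirm the cleanup job exists and is active
SELECT jobid, jobname, schedule, active FROM cron.job;
-- To remove it again later if needed
-- SELECT cron.unschedule('delete-job-run-details');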
estefanie.martens
I got this
(screenshot attached)
garyaustin · 5w ago
So it should be running. You want to check in a week or so and make sure your table stabilizes at the 7 day point. It should not grow much between 7 and 9 days as it keeps getting trimmed.
estefanie.martens
Where do I check?
garyaustin · 5w ago
You should also make sure you really need to be running your cron tasks as often as you are, given how big you let the table get. Go to the Database tab, then Tables, then the cron schema, and you should see the size there. Not sure if it will drift up in large objects or not over 7 days.
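If the dashboard view is hard to find, the same check can be run from the SQL editor; the table should stabilize at roughly seven days of history once the scheduled cleanup has been running for a while:
SELECT pg_size_pretty(pg_total_relation_size('cron.job_run_details')) AS table_size,
       (SELECT count(*) FROM cron.job_run_details) AS row_count;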
estefanie.martens
Ok, I will talk to my friend so she makes sure I understand, lol. Thanks
estefanie.martens
Hi, I'm having trouble with Supabase again because even though we cleaned my account on time (by the 23rd), it does not acknowledge that. I get the message that my services are restricted. Can you help me? This is extremely frustrating.
(screenshot attached)
garyaustin · 4w ago
You will have to contact support or upgrade to pro if possible. Seems like it did not get fixed in time or there is another issue.
