How to cleanly restore binary dump using pg_restore?

I believe I need to disable triggers for the restore to go smoothly, but I'm not sure what to do about permissions. With --disable-triggers I am hitting:

pg_restore: error: could not execute query: ERROR: permission denied: "RI_ConstraintTrigger_whatever" is a system trigger

For context, I am trying to:
- run npx supabase db reset first (runs all the migrations and leaves me with empty tables, no seed)
- restore from a data-only binary dump using pg_restore

Vaguely what I'm trying to do, from my bun script:
// backup
await $`pg_dump ${databaseUrl} \
--data-only \
--encoding=UTF8 \
--format=custom \
--compress=6 \
--no-acl \
--no-owner \
--no-privileges \
--quote-all-identifiers \
--verbose \
--disable-triggers \
--schema=auth \
--schema=public \
--exclude-table=auth.schema_migrations \
--exclude-table=supabase_migrations.schema_migrations \
-f ${backupFile}`;

// restore (after npx supabase db reset)
await $`pg_restore --dbname="${dbUrl}" --data-only --disable-triggers --no-owner --no-privileges --verbose ${absoluteBackupPath}`.nothrow();
vick · 2mo ago
Restoring data-only is a very custom type of problem when you have no superuser access (as you discovered). You can't just disable the constraints and triggers and then load the data, because you lack the permissions. Your only way forward here is to create your schema without such constraints, load the data, then add the constraints. That's effectively how pg_restore works when restoring a full dump into an empty database. If you do a pg_dump --schema-only you can see the order in which the schema is restored, and use that as your guide to replace all your migration files with one pre-data-load file and one post-data-load file.
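A hedged sketch of that split using pg_restore's --section flags, which do exactly this ordering for you. It assumes a full custom-format dump rather than the data-only one above; dbUrl and fullDumpPath are placeholder names:
// pre-data: table definitions, without indexes/constraints/triggers
await $`pg_restore --dbname=${dbUrl} --no-owner --no-privileges --section=pre-data ${fullDumpPath}`;
// data: rows load with no FK checks in the way yet
await $`pg_restore --dbname=${dbUrl} --no-owner --no-privileges --section=data ${fullDumpPath}`;
// post-data: indexes, foreign keys and triggers are created after the load
await $`pg_restore --dbname=${dbUrl} --no-owner --no-privileges --section=post-data ${fullDumpPath}`;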
Isitfato · 2mo ago
@sleepless did you end up finding something that works for you? I'm facing the same problem.
sleepless (OP) · 2mo ago
@vick I'm trying to build a tool to easily clone the state of one environment to another (e.g. 'restore' a backup from a prod or dev remote env onto local to test something). I want to avoid any manual steps / custom migration logic so that it's reusable for other projects etc. @Isitfato I don't have a clean solution, doing this cursed thing atm:
const result = await $`(echo "SET session_replication_role = replica;"; pg_restore --data-only --no-owner --no-privileges --file=- ${absoluteBackupPath}; echo "SET session_replication_role = DEFAULT;") | PGPASSWORD=postgres psql -h 127.0.0.1 -p 54322 -U postgres -d postgres -v ON_ERROR_STOP=0`.nothrow();
this kind of defeats the purpose of the binary format, since I'm piping it back through psql instead of letting pg_restore apply it directly, but it's somehow still faster than my old script, which did this with SQL-format dumps:
const result = await $`docker exec -i supabase_db_x env PGPASSWORD=postgres psql -q -U supabase_admin -d postgres < ${absoluteBackupPath}`.nothrow();
vick · 2mo ago
Interesting workaround. Minor note: you don't need to "unset" the replication role; any such setting only affects the current connection, and the connection gets closed as soon as psql exits. I don't see a way around this if you're not restoring the schema at the same time.
sleepless (OP) · 2mo ago
It feels like there should be an easier way to do this. How do people automate their backup/restore process?
vick · 2mo ago
If you don't have any circular FK dependencies, you can always re-order your table loads to avoid referential errors. Unfortunately there's no way to disable triggers without full superuser privileges, which Supabase does not grant you. This "hack" of setting the role to replica seems like the way to suppress the triggers and rules from firing.

@sleepless I just had a thought about this... create a new role, "postgresrestore" or some such name, which inherits from the postgres role but has the replication setting set on it: ALTER USER postgresrestore SET session_replication_role = replica;. Now do your pg_restore using this user instead of the postgres user. I think this should work; I have not tested it.
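A minimal sketch of that idea, untested as said above. "postgresrestore" is a hypothetical name and restoreRoleUrl a placeholder connection string for the new role; note that ALTER ROLE ... SET for session_replication_role may itself be refused on hosted instances if the setting is superuser-only there:
// untested sketch of the suggestion above; "postgresrestore" is a hypothetical role name
await $`psql ${dbUrl} -c "CREATE ROLE postgresrestore LOGIN PASSWORD 'change-me' IN ROLE postgres"`;
await $`psql ${dbUrl} -c "ALTER ROLE postgresrestore SET session_replication_role = replica"`;
// restore while connected as the new role, so every session starts in replica mode
await $`pg_restore --dbname=${restoreRoleUrl} --data-only --no-owner --no-privileges --verbose ${absoluteBackupPath}`.nothrow();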
sleepless (OP) · 2mo ago
thanks, will look into it when I get a chance. Do you know why supabase doesn't have some kind of 'unsafe' option to grant superuser if you really want it? Feels very weird that it's restricted like this even when self-hosting.
vick · 2mo ago
They used to give you full superuser, but slowly they have been clawing back some freedom in the name of being able to manage the hosted instances without an undue support burden. A recent such change was disallowing altering the auth schema, for example.

For self-hosting, you should have access to the supabase_admin role. You have access to it in the CLI dev environment too, with the same password as the postgres user. I guess for your use case you could just use that user 🙂 Totally slipped my mind that this exists on local dev... I even have scripts that use it. 🤦‍♂️
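On local dev that would look something like this, a sketch assuming the CLI defaults already used in the workaround above (port 54322, password postgres):
// local dev only: supabase_admin is a superuser there, so --disable-triggers works
await $`pg_restore --dbname="postgresql://supabase_admin:postgres@127.0.0.1:54322/postgres" --data-only --disable-triggers --no-owner --no-privileges --verbose ${absoluteBackupPath}`.nothrow();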
sleepless (OP) · 2mo ago
well to be fair, ideally I want to be able to 'restore' my remote dev env too, so maybe it's better to find something that works everywhere
vick · 2mo ago
Let me know if the alter user approach works. I think that's the way to make it work on hosted instances.
garyaustin · 2mo ago
On hosted I've seen two reasons. One is what Vick mentioned, plus users expect to be saved from themselves when they destroy the Supabase predefined tables/functions/grants. The other is something about causing interference or performance problems with other instances on the same shared AWS server (the first several instance levels all run on a shared server).
