Railway6mo ago
candide

NocoDB: deployment crash, migration failure, current transaction is aborted

Hi! I'm suffering from a migration failure and a crashed NocoDB deployment on Railway. It says "current transaction is aborted, commands ignored until end of transaction block". Project ID: 611ca5b0-4e20-46b7-bbc8-993d9ad7024d
41 Replies
Percy
Percy6mo ago
Project ID: 611ca5b0-4e20-46b7-bbc8-993d9ad7024d
candide
candide6mo ago
Here are the Deploy logs:
Migration from 0108002 to 0111005 failed

Error
-----
error: create index "nc_goya___nc_m2m_rv6ot3evwh_table2_id_index" on
"nc_goya___nc_m2m_rv6ot3evwh" ("table2_id") - current transaction is
aborted, commands ignored until end of transaction block
    at Parser.parseErrorMessage (/usr/src/app/node_modules/pg-protocol/dist/parser.js:287:98)
    at Parser.handlePacket (/usr/src/app/node_modules/pg-protocol/dist/parser.js:126:29)
    at Parser.parse (/usr/src/app/node_modules/pg-protocol/dist/parser.js:39:38)
    at Socket.<anonymous> (/usr/src/app/node_modules/pg-protocol/dist/index.js:11:42)
    at Socket.emit (node:events:517:28)
    at Socket.emit (node:domain:489:12)
    at addChunk (node:internal/streams/readable:335:12)
    at readableAddChunk (node:internal/streams/readable:308:9)
    at Readable.push (node:internal/streams/readable:245:10)
    at TCP.onStreamRead (node:internal/stream_based
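For context on what this Postgres message means: once any statement inside a transaction fails, Postgres marks the whole transaction as aborted and rejects every subsequent command (SQLSTATE 25P02) until the block ends. A minimal sketch of the failure mode; the table and index names here are illustrative, not NocoDB's actual migration statements:

```sql
BEGIN;

-- Suppose an earlier migration statement errors out, e.g. a duplicate
-- index name or a constraint violation:
CREATE INDEX demo_idx ON demo_table (id);   -- assume this one fails

-- From here on the transaction is "aborted"; any further command,
-- including the migration's own CREATE INDEX, is rejected with:
--   ERROR: current transaction is aborted, commands ignored until
--   end of transaction block
CREATE INDEX demo_idx2 ON demo_table (other_col);

ROLLBACK;  -- the only way out is to end the transaction block
```

So the "current transaction is aborted" line is a symptom: the statement that actually failed is an earlier one in the same migration transaction.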
Brody
Brody6mo ago
did the migration itself complete?
candide
candide6mo ago
I found these other errors too
[screenshots attached]
candide
candide6mo ago
How can I verify this?
Brody
Brody6mo ago
did the migration modal tell you the migration went off without issues?
candide
candide6mo ago
I'm a bit of an amateur here: where can I find the migration modal?
Brody
Brody6mo ago
you would have opened it when you went to migrate your database
candide
candide6mo ago
Ooooooh, yes. I migrated it last year after the email announcement was sent, and it migrated successfully.
candide
candide6mo ago
[screenshot attached]
Brody
Brody6mo ago
is noco still using the legacy database?
candide
candide6mo ago
I've checked the variables, and they use the db credentials for the new migrated service
Brody
Brody6mo ago
have you confirmed all data is in place in the new database?
candide
candide6mo ago
Yes, just checked again now
Brody
Brody6mo ago
what do you have for your raw NC_DB variable? if it contains sensitive information, that means it's wrong
candide
candide6mo ago
Which one is raw NC_DB?
[screenshots attached]
Brody
Brody6mo ago
would you be interested in deploying a new nocodb? your deployment is very old
candide
candide6mo ago
I'm not opposed to it. How can I deploy a new one while maintaining my db data?
Brody
Brody6mo ago
you would first need to deploy a new nocodb template, then deploy a migration template to copy the data from the current postgres to the new postgres database. if that's something you want to do, i could write up a clear guide
candide
candide6mo ago
That would be wonderful, thank you 🙏🏿
Brody
Brody6mo ago
okay i will get to that when i have some time! would you like a ping when that's ready?
candide
candide6mo ago
Ouf, I see what you mean now by my deployment being very old... The dockerfile from Railway isn't even the same as the one I have in my repo... Do you think it would be enough to just edit my dockerfile to match it to the new one? If not, a ping would be perfect.
Brody
Brody6mo ago
I think you are using variables that nocodb has since renamed, so a migration to a new noco deploy would be best. will ping with guide once I've done that!
@canxmedia - didn't expect it to be such a long guide, take it slow and if you need any clarification, please don't hesitate to ask. hit the character count for my message, here is a gist instead: https://gist.github.com/brody192/7689605241b25100135753029fc9484a
candide
candide6mo ago
Thank you so much, Brody! I'm currently at the step "Monitor the logs of the postgres migration service". The deployment crashed and was looping through the following error:
==== Validating environment variables ====
[ OK ] PLUGIN_URL correctly set
[ OK ] NEW_URL correctly set
==== Checking if NEW_URL is empty ====
psql: error: connection to server at "roundhouse.proxy.rlwy.net" (35.212.138.205), port 45695 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
An error occurred. Exiting...
[screenshots attached]
Brody
Brody6mo ago
looks like you are either using the DATABASE_URL variable from the new postgres database as the NEW_URL variable for the migration, or you got the new and old variables mixed up. now remember, railway is not responsible for any data loss that may occur
candide
candide6mo ago
You're right. I used DATABASE_URL for both old and new. Let me try again. Should I delete the Postgres Plugin Migration, or just edit the variables?
Brody
Brody6mo ago
you can just edit the variables
candide
candide6mo ago
Okay, I just want to make sure I understand the following passage from your guide:
Copy the DATABASE_URL variable from the Postgres database in your old NocoDB project into the note. making sure denote that it is the old database's URL Copy the DATABASE_PRIVATE_URL variable from the Postgres database in your new NocoDB project into the note. making sure denote that it is the old database's URL
I wasn't sure if the last sentence was a typo, so I noted down:
OLD DATABASE_URL: A OLD DATABASE_PRIVATE_URL: B NEW DATABASE_URL: C NEW DATABASE_PRIVATE_URL: D
Am I correct to place the following values? :
PLUGIN_URL: A NEW_URL: D
Brody
Brody6mo ago
100% a typo, my bad (i've fixed the gist)
old DATABASE_URL -> PLUGIN_URL
new DATABASE_PRIVATE_URL -> NEW_URL
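Spelled out as the migration service's variables, the mapping looks like this (variable names are from the thread; the angle-bracket values are placeholders, not real URLs):

```
# Postgres plugin migration service - variable mapping (placeholders)
PLUGIN_URL = <DATABASE_URL from the OLD project's Postgres>
NEW_URL    = <DATABASE_PRIVATE_URL from the NEW project's Postgres>
```

Note the asymmetry: the old value is the public DATABASE_URL, while the new value is the private-network DATABASE_PRIVATE_URL.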
candide
candide6mo ago
Awesome, so far so good...
candide
candide6mo ago
Well.... Good news: the db migrated once I put the right credentials. Bad news: the deployment failed with a similar issue. I think there's something wrong inside the database's data, but I just don't know what...
[Nest] 10 - 01/12/2024, 3:22:50 AM ERROR [ExceptionHandler] create index "nc_goya___nc_m2m_rv6ot3evwh_table2_id_index" on "nc_goya___nc_m2m_rv6ot3evwh" ("table2_id") - current transaction is aborted, commands ignored until end of transaction block

error: create index "nc_goya___nc_m2m_rv6ot3evwh_table2_id_index" on "nc_goya___nc_m2m_rv6ot3evwh" ("table2_id") - current transaction is aborted, commands ignored until end of transaction block
[screenshot attached]
Brody
Brody6mo ago
unfortunately there's not too much i can do if the data in the database is malformed. might just have to use a fresh nocodb
candide
candide6mo ago
Well, I really appreciate the guide and all the help! The updated dockerfile was definitely something I would've missed if it wasn't for your help, and I'm sure the guide will help others. I'm going to try flagging down a NocoDB dev to see if the data error can be fixed, and report back in case someone else has the same error.
Brody
Brody6mo ago
i wish you the best of luck! thank you for the trains!
candide
candide6mo ago
So, update on this issue: I've been troubleshooting with NocoDB for the last little bit, and here's what was said so far. I was asked to run select * from pg_indexes. The NocoDB dev looked over the results and said, "Somehow the index list query returns an empty list even though there are indexes, which is causing this issue." I was then asked to do the following:
Please run the following query as well .... replace __schema_name__ with the schema name used in your connection. And you should connect to the database you are using as well.
SELECT
f.attname AS cn,
i.relname as key_name,
ix.indnatts, ix.indkey, f.attnum as seq_in_index,
pg_catalog.format_type(f.atttypid,f.atttypmod) AS type,
f.attnotnull as rqd,
p.contype as cst,
p.conname as cstn,
CASE
WHEN i.oid<>0 THEN true
ELSE false
END AS is_index,
CASE
WHEN p.contype = 'p' THEN true
ELSE false
END AS primarykey,
CASE
WHEN p.contype = 'u' THEN 0
WHEN p.contype = 'p' THEN 0
ELSE 1
END AS non_unique_original,
CASE
WHEN p.contype = 'p' THEN true
ELSE false
END AS primarykey,
CASE
WHEN p.contype = 'u' THEN 0
WHEN p.contype = 'p' THEN 0
ELSE 1
END AS non_unique,
CASE
WHEN f.atthasdef = 't' THEN pg_get_expr(d.adbin, d.adrelid)
END AS default FROM pg_attribute f
JOIN pg_class c ON c.oid = f.attrelid
JOIN pg_type t ON t.oid = f.atttypid
LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
LEFT JOIN pg_class AS g ON p.confrelid = g.oid
LEFT JOIN pg_index AS ix ON f.attnum = ANY(ix.indkey) and c.oid = f.attrelid and c.oid = ix.indrelid
LEFT JOIN pg_class AS i ON ix.indexrelid = i.oid
WHERE
c.relkind = 'r'::char
AND n.nspname = '__schema_name__'
and i.oid<>0
AND f.attnum > 0
ORDER BY i.relname, f.attnum;
My question is: how do I find the schema name to replace and run this query?
candide
candide6mo ago
I tried to look it up online and ran the following two queries, but got a bunch of results (I'm using Dbgate):
select schema_name
from information_schema.schemata;
and
select nspname
from pg_catalog.pg_namespace;
[screenshot: select schema_name query results]
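The two queries above list every schema in the database, including Postgres system schemas, which is why they return many rows. A sketch for narrowing that down from inside the same connection NocoDB uses (assuming no custom search_path; for a stock Railway Postgres setup the answer is typically public, but that is an assumption, not something the thread confirms):

```sql
-- The schema this connection writes to by default:
SELECT current_schema();

-- Only the non-system schemas, to shrink the candidate list:
SELECT nspname
FROM pg_catalog.pg_namespace
WHERE nspname NOT IN ('information_schema')
  AND nspname NOT LIKE 'pg_%';
```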
Brody
Brody6mo ago
I'm sorry, but I have no clue here. I have never done any database management ever, only migrations
candide
candide6mo ago
No problem! I'll do some more digging and troubleshooting with them, and report the solution here.
Brody
Brody6mo ago
sounds good, I again wish you luck!
candide
candide6mo ago
Good news: the issue has been resolved. I didn't need to run that other query after all. I just needed to deploy the following image provided by a NocoDB dev, which skips the transaction so the deployment works as expected: nocodb/nocodb-timely:0.204.0-pr-7425-20240113-1241. Then, once that deployed successfully, I just needed to switch back to nocodb:latest. Thank you again for your support, Brody! I've been dealing with this issue for months, and I'm sure the fresh deploy also made a difference, somehow. I'll probably refer to that guide in the future, and hopefully it helps someone else.
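In other words, the fix amounts to swapping the service's Docker image twice (image tags as given in the thread; how you set the image depends on your Railway service settings):

```
# 1. temporary dev-provided image that skips the failing transaction
nocodb/nocodb-timely:0.204.0-pr-7425-20240113-1241

# 2. after one successful deploy, switch back to the regular image
nocodb/nocodb:latest
```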
Brody
Brody6mo ago
oh wow, i would not have ever come to that solution in a million years, very happy you were able to get it working!