Railway5mo ago
Smash

Migration failed at "Migrate data" step

I'm trying to migrate my Postgres database to the latest version, but I'm getting this error. Project ID: a7345c66-bdc3-498c-a8eb-04d21132af95
31 Replies
Percy
Percy5mo ago
Project ID: a7345c66-bdc3-498c-a8eb-04d21132af95
Brody
Brody5mo ago
what do the logs of the migration service say?
Smash
Smash5mo ago
==== Dumping database from PLUGIN_URL ====
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump: detail: hypertable
pg_dump: hint: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: hint: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump: detail: chunk
pg_dump: hint: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: hint: Consider using a full dump instead of a --data-only dump to avoid this problem.
[ OK ] Successfully saved dump to plugin_dump.sql
Dump file size: 15G
==== Restoring database to NEW_URL ====
DO
psql:plugin_dump.sql:1530063: ERROR: could not extend file "base/16384/23528.3": No space left on device
HINT: Check free disk space.
CONTEXT: COPY strategy_reports, line 53127
psql:plugin_dump.sql:1530063: STATEMENT: COPY "public"."strategy_reports" ("id", "avg_bars_in_loss_trade", "avg_bars_in_trade", "avg_bars_in_win_trade", "avg_los_trade", "avg_los_trade_percent", "avg_trade", "avg_trade_percent", "avg_win_trade", "avg_win_trade_percent", "commission_paid", "gross_loss", "gross_loss_percent", "gross_profit", "gross_profit_percent", "largest_los_trade", "largest_los_trade_percent", "largest_win_trade", "largest_win_trade_percent", "margin_calls", "max_contracts_held", "net_profit", "net_profit_percent", "number_of_losing_trades", "number_of_wining_trades", "percent_profitable", "profit_factor", "ratio_avg_win_avg_loss", "total_open_trades", "total_trades", "timeframe", "long_only", "short_only", "max_strategy_draw_down", "open_pl", "buy_hold_return", "sharpe_ratio", "sortino_ratio", "max_strategy_draw_down_percent", "max_strategy_run_up", "buy_hold_return_percent", "open_pl_percent", "max_strategy_run_up_percent", "from_date", "to_date", "trades", "history_buy_hold", "history_draw_down", "history_draw_down_percent", "history_equity", "history_equity_percent", "history_buy_hold_percent", "commission_value", "commission_type", "from_date_trading", "to_date_trading", "default_quantity_type", "default_quantity_value", "last_100_trades", "last_60_days_profit_factor", "last_60_days_total_trades", "use_bar_magnifier", "last_60_days_net_profit_percent", "created_at", "updated_at", "created_by_id", "updated_by_id", "t_statistic", "p_value", "standard_deviation_of_returns", "statistical_relevancy_score", "excess_return_percent", "annualized_rate_of_return", "avg_trade_duration_ms", "strategy_type", "history_cumulative_returns_percent_timed", "history_draw_down_percent_timed", "history_cumulative_buy_hold_returns_percent_timed", "alpha", "unique_key", "public", "pyramiding") FROM stdin;
[ ERROR ] Failed to restore database to postgresql://postgres:dc44GDaCFb1b65d1Ad2EAC3dGd1bAe3e@postgres.railway.internal:5432/railway.
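(Side note on the pg_dump warnings: they suggest two workarounds for the circular foreign-key constraints — the hypertable/chunk tables are TimescaleDB catalog tables. A minimal sketch if you ever need to run the dump/restore yourself; `$PLUGIN_URL` and `$NEW_URL` stand in for the connection strings from the migration log, and `--disable-triggers` requires superuser on the target:)

```sh
# Full dump in custom format (schema + data), as the hint recommends,
# instead of a --data-only dump:
pg_dump --format=custom --no-owner --file=plugin_dump.dump "$PLUGIN_URL"

# Restore with triggers (and therefore FK checks) disabled during COPY,
# per the --disable-triggers hint:
pg_restore --disable-triggers --no-owner --dbname="$NEW_URL" plugin_dump.dump
```

(The actual failure above is "No space left on device", though — the circular-constraint warnings are a separate issue.)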
CiaranSweet
CiaranSweet5mo ago
==== Restoring database to NEW_URL ====
DO
psql:plugin_dump.sql:1530063: ERROR: could not extend file "base/16384/23528.3": No space left on device
HINT: Check free disk space.
CONTEXT: COPY strategy_reports, line 53127
Brody
Brody5mo ago
how big is your legacy database?
Smash
Smash5mo ago
If you mean the Memory metric, it's 6GB
Brody
Brody5mo ago
nope, im talking about the size of the data you have stored in the database. new databases are limited to 5gb on the hobby plan, do you think you have more data than that stored in your legacy database?
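(If you're not sure, the on-disk size of the stored data can be checked with a standard Postgres query, run via psql against the legacy database's connection URL:)

```sql
-- total on-disk size of the current database, human-readable
SELECT pg_size_pretty(pg_database_size(current_database()));
```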
Smash
Smash5mo ago
Yes it's at least 8-10 GB
Brody
Brody5mo ago
then you would need to upgrade to pro for access to 50gb volumes, then you could rerun the migration
Smash
Smash5mo ago
Ok thanks. The migration completed successfully, but I now have two Postgres Legacy services, is this normal?
Brody
Brody5mo ago
it's not unheard of. delete the postgres legacy that has the volume, the actual legacy database will not have a volume
Smash
Smash5mo ago
Sorry, is this the 3rd one starting from the left?
Smash
Smash5mo ago
[screenshot attached]
Brody
Brody5mo ago
tbh not what I thought it would look like, is that postgres legacy service with a volume the newest database with a 50gb volume?
Smash
Smash5mo ago
Yes it wasn't here before
Smash
Smash5mo ago
[screenshot attached]
Brody
Brody5mo ago
but is that postgres legacy service with a volume the newest database with a 50gb volume?
Smash
Smash5mo ago
I'm not sure to be honest, how can I check?
Brody
Brody5mo ago
did someone else run the migration for you?
Smash
Smash5mo ago
No, I did. I checked by clicking on the volume, and I guess it's the newest database, because the other Postgres Legacy service has logs going back 7+ days, if that's what you're asking. I don't have any other Postgres DB in this project
Brody
Brody5mo ago
is all your data in the new postgres legacy service? is the volume on that postgres legacy service a 50gb volume?
Smash
Smash5mo ago
Yes, it seems like it's complete, the number of rows is correct. It's a 50GB volume, yes
Brody
Brody5mo ago
okay then you can just rename it to Postgres
Smash
Smash5mo ago
Ok
Brody
Brody5mo ago
make sure you are using variable references to connect your apps to the new database. once that is done and you are absolutely sure everything made it into the new database, you are free to delete the old one
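(For reference, a variable reference in Railway looks like this in the app service's variables, assuming the renamed database service is called Postgres:)

```
DATABASE_URL=${{Postgres.DATABASE_URL}}
```

(That way the value follows the referenced service automatically instead of being a hard-coded connection string.)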
Smash
Smash5mo ago
Okay, thank you. This current month I had crazy egress costs that I think are due to my scraping service sending data to Strapi, which then communicates with Postgres. Will private networking solve this issue? I activated it and replaced the old host with the private host in my scraping script (using the private network to communicate with Strapi)
Brody
Brody5mo ago
there are no egress fees on the private network when you do service to service data transfer, so using it wherever possible will definitely reduce your egress costs
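(Concretely, that means pointing each service at its `.railway.internal` hostname instead of the public domain. A sketch of the env-var change in the scraping service — `STRAPI_URL` is a hypothetical variable name, `strapi` assumes that's the Strapi service's name, and the port must match the one Strapi listens on, 1337 by default; note the private network is plain http:)

```
# public domain (billed egress):
STRAPI_URL=https://my-strapi.up.railway.app
# private network (no egress fees):
STRAPI_URL=http://strapi.railway.internal:1337
```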
Smash
Smash5mo ago
Ok, thanks for the fast support, it's appreciated
Brody
Brody5mo ago
no problem!