Error after updating to v1.43.2

Updated the app from 1.43.1 to 1.43.2 in Proxmox using the "update" command. The installation was done through the community script, and the app had been running fine for months until this upgrade. I now get a failed status when I run "systemctl status homarr". See attachment for the error while running the script "run_homarr.sh".

```
× homarr.service - Homarr Service
     Loaded: loaded (/etc/systemd/system/homarr.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Sun 2025-11-09 19:18:19 EST; 52min ago
   Duration: 278ms
    Process: 169 ExecStart=/opt/run_homarr.sh (code=exited, status=1/FAILURE)
   Main PID: 169 (code=exited, status=1/FAILURE)
        CPU: 311ms
homarr run_homarr.sh[212]:     at TracingChannel.traceSync (node:diagnostics_channel:328:1>
homarr run_homarr.sh[212]:     at wrapModuleLoad (node:internal/modules/cjs/loader:244:24)
homarr run_homarr.sh[212]:     at Module.executeUserEntryPoint [as runMain] (node:internal>
homarr run_homarr.sh[212]:     at node:internal/main/run_main_module:33:47 {
homarr run_homarr.sh[212]:   code: 'MODULE_NOT_FOUND',
homarr run_homarr.sh[212]:   requireStack: []
homarr run_homarr.sh[212]: }
homarr run_homarr.sh[212]: Node.js v24.11.0
homarr systemd[1]: homarr.service: Main process exited, code=exited, status=1/FAILURE
homarr systemd[1]: homarr.service: Failed with result 'exit-code'.
```
25 Replies
Cakey Bot
Cakey Bot4w ago
Thank you for submitting a support request. Depending on the volume of requests, our team should get in contact with you shortly.
⚠️ Please include the following details in your post or we may reject your request without further comment:
- Log (See https://homarr.dev/docs/community/faq#how-do-i-open-the-console--log)
- Operating system (Unraid, TrueNAS, Ubuntu, ...)
- Exact Homarr version (eg. 0.15.0, not latest)
- Configuration (eg. docker-compose, screenshot or similar. Use ``your-text`` to format)
- Other relevant information (eg. your devices, your browser, ...)
Meierschlumpf
Meierschlumpf4w ago
We got similar reports before and it always was temporary network issues. Can you try again?
.shinigami.
.shinigami.OP4w ago
Restarted the container and ran the script again. Same issue :/
Meierschlumpf
Meierschlumpf4w ago
Did you stop the running processes? It says the ports are already in use: Address already in use
.shinigami.
.shinigami.OP4w ago
I thought restarting the container would do that. I just tried restarting and then running "killall5 -9". Now, after running the script, I get this error.
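For anyone else hitting the "Address already in use" error here: rather than "killall5 -9", you can stop the systemd unit and free Homarr's port specifically. A rough sketch for the community-script setup (assumptions: the unit is named `homarr` as in the status output above, Homarr listens on its default port 7575, and `fuser` from the psmisc package is available):

```shell
# Free Homarr's port before rerunning run_homarr.sh (untested sketch).
free_homarr_port() {
    port=${1:-7575}                  # Homarr's default port; adjust if you changed it
    systemctl stop homarr            # stop the unit instead of killall5 -9
    fuser -k "${port}/tcp" 2>/dev/null || true   # kill any leftover listener on the port
}
# usage inside the LXC: free_homarr_port
```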
.shinigami.
.shinigami.OP4w ago
Not sure if I should be doing something else to kill specific processes. At this point I'm just considering spinning up a new instance and moving over the data. Is there an easy way of doing that? I only see migration from old version in the docs.
Meierschlumpf
Meierschlumpf4w ago
Can you rerun the update command instead? Otherwise, the data should be in the /opt directory.
.shinigami.
.shinigami.OP4w ago
It just says "no update available". Or maybe I'll wait for the next release. Is it the homarr_db folder under /opt?
Meierschlumpf
Meierschlumpf4w ago
Yes, but there should be some other things as well. I think you have to update a specific file to rerun it; I'll check in another post.

Maybe this post helps you: https://discord.com/channels/972958686051962910/1429156423740428399/1429217294390923448
monke
monke4w ago
I have the same issue, thankfully I have backups
.shinigami.
.shinigami.OP4w ago
I'll give it a try. I just have homarr, homarr-data-backup, homarr_db, homarr_version.txt and run_homarr.sh in the /opt folder. Hopefully the db folder will be enough. This was supposed to be a temp setup until I build my new server, but lessons learnt.
.shinigami.
.shinigami.OP4w ago
So I just tried a brand-new install, and that failed too. I had similar issues when I tried to build manually in the old container as well.
.shinigami.
.shinigami.OP4w ago
Btw, if it makes any difference, I upgraded to Proxmox 9.0.9 last month.
Manicraft1001
Manicraft10014w ago
Exit code 137 means out of memory, @.shinigami. Please allocate more memory to the LXC. See https://github.com/community-scripts/ProxmoxVE/issues/3778#issuecomment-2890660396 And please upvote https://github.com/homarr-labs/homarr/issues/3146 so we know that there is demand for a package-based distribution.
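For reference, the memory bump can be done from the Proxmox host with `pct set`. A sketch (the CTID and the 10240 MB figure are placeholder assumptions, based on the 10 GB that worked later in this thread):

```shell
# Raise an LXC's RAM allocation for the Homarr build (run on the Proxmox host; sketch).
bump_lxc_memory() {
    ctid=$1
    mem_mb=${2:-10240}               # ~10 GB was enough for the build in this thread
    pct set "$ctid" --memory "$mem_mb"
}
# usage: bump_lxc_memory 100        # 100 is a hypothetical CTID
```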
L0rd
L0rd4w ago
Hey, I have the exact same error. Is there any way to fix it? I tried rebuilding everything with pnpm. I also changed the version in the .homarr file to be able to run the update again, but nothing seems to work.
Manicraft1001
Manicraft10014w ago
See this, @L0rd
L0rd
L0rd4w ago
I did the recommended steps, but it's still not working. 12 GB disk space, 4 cores, 6 GB RAM. Disabled yarn, ran the update, did a reboot.
Manicraft1001
Manicraft10014w ago
6 GB is not enough
L0rd
L0rd4w ago
more ram!?
Manicraft1001
Manicraft10014w ago
See the post for an explanation what's going on
L0rd
L0rd4w ago
Okay, thanks, I will try with 10 I guess.

It worked with 10, thanks a lot. It will probably also work with less, but I didn't try.
Manicraft1001
Manicraft10014w ago
You're welcome. As said in the linked issue, running Homarr doesn't need this much, but building does. As of today, we do not ship a package-based distribution, therefore building is required.
.shinigami.
.shinigami.OP4w ago
I can confirm that increasing the RAM worked for me for a new build as well. I still don't have a solution to fix the current container though. I'll try to move the db over to the new one and see if the data ports over correctly.

Okay, so I was able to move the data over to the new container and my apps worked! However, all the integrations were broken, so I had to fix them manually. Luckily I didn't have that many.
Manicraft1001
Manicraft10014w ago
You probably didn't copy the encryption key; that's why they did not work.
.shinigami.
.shinigami.OP4w ago
Here's what I did for reference:
1. Create a new container with 16gb ram/4g
2. Remove the homarr_db folder under /opt
3. Mount the old container to the host using "pct mount <CTID>"
4. Copy the homarr_db folder to the host tmp using "cp -r /path/to/mount/opt/homarr_db /path/to/tmp"
5. Unmount the container using "pct unmount <CTID>"
6. Mount the new container
7. Copy from the host to the new container under /opt
8. Unmount the new container
9. Restart the container

Ohh okay. Good to know. All good for now. Time to build a proper server and set up backups! Thanks all for the help.
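The steps above can be sketched as one host-side helper. This is an untested outline under assumptions: `pct mount` exposes the container's rootfs under /var/lib/lxc/<CTID>/rootfs, and the CT IDs are hypothetical placeholders. Note Manicraft1001's point about the encryption key: it also has to move, or integrations break.

```shell
# Migrate /opt/homarr_db from an old LXC into a new one via the Proxmox host (sketch).
migrate_homarr_db() {
    old=$1; new=$2
    pct mount "$old"                                  # rootfs appears at /var/lib/lxc/$old/rootfs
    cp -r "/var/lib/lxc/$old/rootfs/opt/homarr_db" /tmp/homarr_db
    pct unmount "$old"
    pct mount "$new"
    rm -rf "/var/lib/lxc/$new/rootfs/opt/homarr_db"   # drop the fresh install's db first
    cp -r /tmp/homarr_db "/var/lib/lxc/$new/rootfs/opt/"
    pct unmount "$new"
    pct reboot "$new"
}
# usage on the Proxmox host: migrate_homarr_db 100 101   # hypothetical CT IDs
```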
