Jobs stuck on initial load/sync of large data set
Hi.
I've recently set up Immich and added an external library on the same server/disk containing 767 GB of media. Now the jobs seem stuck.
The Immich server log says:
"ReplyError: ERR Error running script (call to f_b4cd4bbdf096cd8d06246080315ae81c56e05a46): @user_script:222: @user_script: 222: -MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error."
I've attached the Redis logs (last 1k lines).
I haven't tried anything yet, not even a restart; I wanted to check here first to get some hints on what to try. Thanks in advance.
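In case it helps, this is roughly what I was planning to check on the Redis side (just a sketch; the service name "redis" assumes the default Immich docker-compose.yml, so adjust if yours differs):

# Did the last RDB background save fail, and when was the last successful save?
docker compose exec redis redis-cli INFO persistence

# A full disk or Docker volume is the most common cause of bgsave failures
df -h

# Temporary workaround only, after the underlying disk issue is fixed or ruled out:
# docker compose exec redis redis-cli CONFIG SET stop-writes-on-bgsave-error no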
Other info:
Immich v1.121.0
Docker version 24.0.2, build cb74dfc
Ubuntu 22.04.5 LTS
:wave: Hey @andstr,
Thanks for reaching out to us. Please follow the recommended actions below; this will help us be more effective in our support effort and leave more time for building Immich :immich:.
References
- Container Logs: docker compose logs (docs)
- Container Status: docker compose ps (docs)
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
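For example (a minimal sketch; the service names assume the default docker-compose.yml and may differ in your setup):

docker compose ps                    # container status
docker compose logs immich-server    # Immich server logs
docker compose logs redis            # Redis logs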
Checklist
1. :blue_square: I have verified I'm on the latest release (note that mobile app releases may take some time).
2. :blue_square: I have read applicable release notes.
3. :blue_square: I have reviewed the FAQs for known issues.
4. :blue_square: I have reviewed GitHub for known issues.
5. :blue_square: I have tried accessing Immich via local IP (without a custom reverse proxy).
6. :blue_square: I have uploaded the relevant logs, docker compose, and .env files, making sure to use code formatting.
7. :blue_square: I have tried an incognito window, disabled extensions, cleared the mobile app cache, logged out and back in, tried different browsers, etc., as applicable.
(an item can be marked as "complete" by reacting with the appropriate number)
If this ticket can be closed you can use the /close command, and re-open it later if needed.

What do you mean by stuck? It's normal for the number to climb very rapidly and then appear stagnant, because as jobs finish they add more jobs. If your CPU is busy and the number is high, give it 1-3 days to work through everything.
Ok, so the error in the log is not relevant? The CPU load is high:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
595773 root 20 0 14.9g 167068 17532 S 99.7 4.3 2710:19 immich
I restarted the service and the jobs are now reducing again. Hopefully it will continue working.