docker logs immich-machine-learning error
Hello,
I’m running Immich on a Debian machine.
Everything has been working perfectly for months, but recently the immich-machine-learning container started failing repeatedly.
I checked the container logs using the following command:
docker logs immich-machine-learning
And I’m getting this repeated error:
[FATAL tini (7)] exec ./start.sh failed: No such file or directory
This happens immediately when the container tries to start. I haven’t made any configuration changes recently.
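For reference, this tini error usually means the file the container tries to exec is missing inside the image (often a corrupted or partially pulled image layer). A hedged way to check what the container is actually trying to run (standard Docker CLI; the image name is an assumption based on the default Immich setup):

```shell
# Which image was the failing container created from?
docker inspect --format '{{.Config.Image}}' immich-machine-learning

# What entrypoint/command does that image declare?
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' \
    ghcr.io/immich-app/immich-machine-learning:release
```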
Could you please advise on how to fix this issue?
Let me know if I should check logs somewhere else 🙏
Thank you for your help.
:wave: Hey @HelloSadness,
Thanks for reaching out to us. Please carefully read this message and follow the recommended actions. This will help us be more effective in our support effort and leave more time for building Immich :immich:.
References
- Container Logs: docker compose logs (docs)
- Container Status: docker ps -a (docs)
- Reverse Proxy: https://immich.app/docs/administration/reverse-proxy
- Code Formatting: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline#h_01GY0DAKGXDEHE263BCAYEGFJA
Checklist
I have...
1. :blue_square: verified I'm on the latest release (note that mobile app releases may take some time).
2. :blue_square: read the applicable release notes.
3. :blue_square: reviewed the FAQs for known issues.
4. :blue_square: reviewed GitHub for known issues.
5. :blue_square: tried accessing Immich via local IP (without a custom reverse proxy).
6. :blue_square: uploaded the relevant information (see below).
7. :blue_square: tried an incognito window, disabled extensions, cleared mobile app cache, logged out and back in, different browsers, etc. as applicable
(an item can be marked as "complete" by reacting with the appropriate number)
Information
In order to be able to effectively help you, we need you to provide clear information to show what the problem is. The exact details needed vary per case, but here is a list of things to consider:
- Your docker-compose.yml and .env files.
- Logs from all the containers and their status (see above).
- All the troubleshooting steps you've tried so far.
- Any recent changes you've made to Immich or your system.
- Details about your system (both software/OS and hardware).
- Details about your storage (filesystems, type of disks, output of commands like fdisk -l and df -h).
- The version of the Immich server, mobile app, and other relevant pieces.
- Any other information that you think might be relevant.
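If it helps, most of the above can be collected with a few standard commands (a sketch; the output file name is just an example):

```shell
# Status of every container, including stopped/restarting ones
docker ps -a

# Logs from all services in the compose project, saved for pasting
docker compose logs > immich-logs.txt

# Disk layout and free space
sudo fdisk -l
df -h
```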
Please paste files and logs with proper code formatting, and especially avoid blurry screenshots.
Without the right information we can't work out what the problem is. Help us help you ;)
If this ticket can be closed you can use the /close command, and re-open it later if needed.
Try to down your stack, then docker image prune -a, then bring it all back up.
Also post your compose.
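In case it's useful, the down/prune/up sequence suggested above would look roughly like this (a sketch, assuming a standard docker compose setup run from the stack's directory):

```shell
# Stop and remove the stack's containers (named volumes are left intact)
docker compose down

# Remove all images not used by at least one existing container;
# with -a this includes tagged but unused images
docker image prune -a

# Recreate the stack; any images removed above are pulled again
docker compose up -d
```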
well, the prune cleaned 34 GB but I'm still having the issue 😄
Immich is started from Cosmos Cloud, I don't really know where to find the compose of the stack, but I can get the compose of each item of the stack
I have 0 idea what cosmos cloud is, so not 100% sure on that
immich, immich-database and immich-redis are up and running, it's just machine-learning that keeps failing and restarting
Please show us the process of down, prune, up with all console output
stack running

stack stopped

prune

Well, then that just freed up space, since it's not what's wrong
I assume it's Cosmos related, it must be sequestering the image files somewhere or something like that
when I did the prune the first time, it cleared 34 GB
Probably all old images. Did Immich need to be re-downloaded when you did the up?
nope, it did not redownload anything
So it didn’t work
hmmmm
damn
weird that only machine learning container is screwed
maybe I should stop the docker-compose that starts Cosmos,
proceed with the prune, and then bring it up again
Ah, Cosmos Cloud
This is a Cosmos problem
let me find the solution
https://discord.com/channels/979116623879368755/1356375687908294963/1356998399567859976 @HelloSadness
thanks, I'll try that in a few minutes 🤞
great, that did the job!
thank you all for your time
♥️
should I close the ticket or do you handle it?
This thread has been closed. To re-open, use the button below.