Errors with Microservice Container
I am having various errors with the microservices container. It seems to have problems generating thumbnails, extracting EXIF data, and extracting the geolocation. This causes the container to restart over and over. Here is a copy of some of the errors found in the logs:
https://pastebin.com/yyumk9um
[Nest] 1 - 04/03/2023, 2:55:01 PM LOG [VideoTranscodeProcessor...
Can you try adding a new volume to the microservices container:
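Something like this (a sketch only - the container-side path here is an assumption about where the geocoding data gets written, and the service name should match your compose file):
```yaml
  immich-microservices:
    # ...existing settings...
    volumes:
      # host path : container path (geocoding dump location assumed)
      - /some/path:/usr/src/app/.reverse-geocoding-dump
```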
Where /some/path is accessible on the host machine?
I just add it as a volume in my docker compose, right?
Yeah, for microservices
Not the most familiar with docker compose, but is this correct?
Yup, looks good.
Do you have two /home/home on purpose?
Yeah I just named the user that lol
lol niiiiiiiiice
But when I run docker compose, I get errors from the server now as well
Are you using portainer?
Or using docker compose commands?
I'd start with verifying the database starts up and the microservices container starts up. I would expect there to be some files inside of that geocoding folder, which you can try deleting and then restarting microservices again.
Just docker-compose
Bringing everything down and back up again seemed to fix it. Now I'm just waiting to see what errors pop up
After queuing up jobs again, there don't seem to be any issues with the geocoding
But now I am running into Exif/StorageTemplate errors
[Nest] 1 - 04/03/2023, 3:26:07 PM LOG [MediaService] Start Gen...
Good to hear
Man you seem to have stuff in a pretty weird state.
It is possible that lots of these are just from old jobs that are getting processed again or something like that.
I'd do one of two things:
1. If you don't have a lot of media, it might be worth creating a fresh install and re-importing everything. Or,
2. Wait for all the jobs to finish, then run the exif/thumbnail/transcode jobs with the "missing" option and verify if everything looks good or if you are missing anything
I just did a fresh install, so it would be a bit of a pain to do that once again, especially since I want to verify that all the previous photos were imported correctly again
How much media did you import?
Something along the lines of 150gb?
I'm not sure the jobs will finish, since it doesn't seem to skip any files it has an issue with. It just seems to try them again
I may be wrong though, the active count stays the same but waiting goes up
For which job specifically?
For all of them
even extract exif?
Yes, they're all stuck in the single digits
Can you send a screenshot?

There is the active side and the waiting side. The active count should stay pretty consistent, and the waiting count should go down.
The waiting side is going up actually
And you are just seeing more and more errors in microservice logs?
Yes
Not sure if this is relevant, but I used Immich previously before live photos were implemented, so some photos might not have been initially linked together as live.
I updated a few months back, but this is my first time reimporting all my photos in under a fresh reinstall.
That shouldn't be a problem. So you just did a fresh install and re-imported everything?
Did you verify everything imported successfully, or did you start to get the restart errors during the import process?
Yes, I reimported everything successfully
The only issue is that it appears folders are still being processed?
I can see them in the new folder, but they are not viewable in the Immich application
In addition, the photo/video count as well as the size seem to be slowly updating
When you import, the files get saved to the database and then a few jobs need to run before they're "completely" imported.
What system are you running Immich on?
Specifically, nothing shows up in the UI until a thumbnail has been successfully generated. You could always run the generate missing job to see if that fixes anything.
Ubuntu
The problem is I can't seem to run any jobs though, they are all stalling.
Can you send another screenshot of the jobs screen?


I mean are you running Immich on a Pi?

Ah no I'm running it on an old desktop
I see
That is a lot of assets to process.
I would recommend stopping everything.
I just stopped and then restarted everything so it says 0/0
OK cool.
I think it might be worthwhile to focus on non-machine learning stuff. That uses a lot of CPU. It might be better to just see if generating all the thumbnails and extracting the exif still results in errors or not.
If you set the env MACHINE_LEARNING_URL=false, it should not run any of the ML jobs.
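For example, in docker-compose.yml (just a sketch - you could equally set this in your .env file, and the service name may differ in your setup):
```yaml
  immich-microservices:
    environment:
      # per the note above, this should stop the ML jobs from running
      - MACHINE_LEARNING_URL=false
```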
So, I'd try stopping it again, adding that env, then starting it up. Everything should still be at 0 jobs; then try running the thumbnail job.
If the microservices container is running it should pick up and process the jobs and you should see the counts go down. It's possible the machine was CPU pegged and the ML container was taking all the resources so other stuff wasn't being processed.
Did you not run into issues while importing the assets, or how did that go? Did the import just finish?
Should I clear thumbnails/encoded videos before restarting?
What do you mean by that? Delete the actual files?
Yes
No, that should not be necessary. Also, those paths will still exist in the database so manually deleting them isn't a good idea.
Have you deleted them before? That could explain some of the issues you're seeing.
I didn't run into any issues importing. It finished a while ago, but it just seems to take Immich a while to have them show up in the GUI
Yeah, that makes sense. They won't show up until they have a thumbnail created and that queue might have been backed-up especially if you had ML stuff running at the same time.
In my previous installation, the file structure was something like year, original, thumbnail, encoded-video
So when I reimported them back, I was told to just import year and original
Yeah, we just changed that in the latest release.
Yeah, that should be sufficient.
You should have something like this now
Yeah, that is the new format
No, I'm still getting similar errors
So you started it without ML and then ran the thumbnail job for missing? Can you share the new logs again?
I just ran the exif data job
[Nest] 1 - 04/03/2023, 4:02:18 PM LOG [NestFactory] Starting N...
Hmm, interesting - the UQ_16294b83fa8c0149719a1f631ef error is related to live photos
Since I uploaded them to the server before live photos were implemented, I'm not sure if there are duplicate photos on my server
That could be possible.
I am not sure if the previous format for live photos is treated differently than the new format for live photos
But I would have thought this error would have come up more often if that was the case. But then again, I may be overestimating how many people have an iPhone and use live photos
We implemented a way to auto-link them on the server, so you can upload the still and the motion parts separately and they'll get linked regardless of the order you upload them.
I think you might have multiple still portions of a live photo uploaded, i.e. two assets that both have the same exif.MediaGroupUUID.
Could that have come from an interrupted upload? Not sure if photos are written as a stream, or are only saved once the full file is uploaded
The full file is uploaded and we generate a sha1 hash of the full file and save that to the database. If the upload is interrupted we delete the file and nothing is saved to the database.
The situation is:
- Two complete, successful uploads of two different files
- Both files have the same exif data for MediaGroupUUID
You could try to identify the actual files to see if they are indeed duplicates or if something else is going on.
It might be possible that there is a valid scenario for multiple assets having the same MediaGroupUUID and we don't handle that scenario (very well).
Hmm, so I guess my best bet is to just double check that there are no duplicate files.
So I can better understand it: the library folder contains the files that have been processed, while the upload directory contains files that have been uploaded and still need to be processed?
I'm not sure where I should be checking for duplicates
Do you know how to run database queries?
Yes, but not sure how to access it while in a docker container
You will need to first get into the database container with docker exec -it immich_postgres bash, then log in to the database with psql -d immich -U postgres (assuming you are using the default setup in the .env file).
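For example (the container name assumes the default compose setup):
```sh
# open a shell in the Postgres container, then connect to the immich database
docker exec -it immich_postgres bash
psql -d immich -U postgres
```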
Ok, I managed to access the database. What query should I run?
You can try this one:
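A sketch of the kind of query that should work (the livePhotoCID column on the exif table is an assumption about how the MediaGroupUUID is stored - check \d exif if the names don't match):
```sql
-- a normal live photo is two rows (still + motion) sharing a CID,
-- so anything above two is suspicious
SELECT "livePhotoCID", COUNT(*) AS asset_count
FROM "exif"
WHERE "livePhotoCID" IS NOT NULL
GROUP BY "livePhotoCID"
HAVING COUNT(*) > 2;
```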
It gives me back some results like this

I suppose that means I have some duplicates?
How many results are there?
Just those ones?
Can you pick one of the CIDs and run this query to find the paths:
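Something like this (same caveat about the assumed table/column names):
```sql
-- replace the 354EF... value below with one of the CIDs from the previous query
SELECT a."id", a."ownerId", a."originalPath", a."livePhotoVideoId"
FROM "assets" a
JOIN "exif" e ON e."assetId" = a."id"
WHERE e."livePhotoCID" = '354EF...';
```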
There are a good bit more than that, I'm quite rusty so I don't have an exact count
I don't get any results back
Well, you'd have to put your id in instead of the 354EF... one
Whoops
Here is the result

I also ran fdupes on library and uploads, but it strangely didn't find any duplicates
Right, so that one .mov is the motion part, and the two .heic are the stills
Actually I spoke too soon
There seem to be some in the uploads folder, but I want to double check
Are there supposed to be multiple stills for each live photo?
Or is that the issue
That's the question. The key constraint is happening because there is a movie with multiple stills.
Like, the error is because there are multiple stills. There shouldn't be as far as I know, but maybe there's a valid reason for them like edits or something else. If they're duplicates then the key constraint is kind of expected.
Is there a solution to this? Should I go ahead and delete any duplicates? Would that mess up the database?
I would not delete anything manually from the filesystem.
Can you confirm those are duplicates?
They both got uploaded, so that implies the files are slightly different. Not sure which one you would want to keep.
Ok I see
The photos are the same photos, but they were uploaded under two different users
It looks like the previous implementation kept thumbnail generation within each user
So there was no error if there were duplicates between users
But now, it looks like thumbnails are generated in a global sense, so if there are duplicate photos between users it causes an error
At least that's what I assume
Thumbs are still kept separate (per user), the folder order is just flipped
thumbs/<uuid>/files
To be clear, these duplicates don't really impact the system at all, it's just that re-running exif is failing for the motion asset when it gets to the linking line.
But it prevents thumbnail generation and such as well right?
It looks like the duplicates are just duplicated between two users
Oh interesting. Hold on.
That could be a miss on our side.
When we link a live photo we should only look in the calling user's list of assets, which we aren't.
Hmm, I'm not sure but could this issue be talking about a similar case?
[BUG] Not able to sync all images from iOS to Immich server · Issue...
It seems to be referencing two stills linking to one motion
Although some of my cases there seems to be 2 stills attached to 2 motions
I think the issue in that one was related to live photos, but unrelated to this specific issue. It was more specific to the upload process itself. It was the situation where the motion part was a duplicate, but the still didn't exist.
In this case, do I just need to wait for a bug fix?
Yes I believe so
Yeah, I'll probably have it done tonight or tomorrow.
Should be in the next release btw - https://github.com/immich-app/immich/pull/2162, although you might need to manually unlink any assets that have been linked "cross-user". I did not do that automatically in the PR.
Thanks for the quick fix, I'll look forward to it in the next release!
Is there a way to manually unlink the assets? Are you talking about how they are linked in the database?
The records in the asset table have a live photo video id column that links the two assets together.
Yeah, I think you'd have to unlink them in the database and then re-run exif extraction on all assets to re-link them again.
Or, some queries to find only the ones that are linked across users and manually delete and re-upload them.
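Something along these lines might work for finding and unlinking the cross-user pairs (a sketch with assumed column names on the assets table; take a database backup before running the UPDATE):
```sql
-- find stills whose linked motion asset belongs to a different user
SELECT still."id" AS still_id, still."ownerId" AS still_owner,
       motion."id" AS motion_id, motion."ownerId" AS motion_owner
FROM "assets" still
JOIN "assets" motion ON motion."id" = still."livePhotoVideoId"
WHERE still."ownerId" <> motion."ownerId";

-- unlink the cross-user pairs (re-running exif extraction can re-link them later)
UPDATE "assets" still
SET "livePhotoVideoId" = NULL
FROM "assets" motion
WHERE motion."id" = still."livePhotoVideoId"
  AND still."ownerId" <> motion."ownerId";
```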
It's a bit of a weird situation, so not sure what's the best way to fix it. Probably depends on the data to some extent.
Yeah tbh, I may just run with a complete fresh install
That works too
The new release has a feature to pause queues, which will be nice