Bulk Upload Large folder error
Trying to set up Immich as a replacement for QNAP QuMagie, the default photo app for my NAS.
I finally got the app set up in Docker and got the CLI tool installed, but when I ran the upload command I got this error.
I have over 50K images I wanted to move over from my old backup folder. Could be it's just too large, idk.
TIA for all the help.
Indexing local assets...
/share/CACHEDEV1_DATA/.qpkg/NodeJS18/lib/node_modules/immich/bin/index.js:164
files.push(...yield crawler.crawl(newPath).withPromise());
^
RangeError: Maximum call stack size exceeded
at /share/CACHEDEV1_DATA/.qpkg/NodeJS18/lib/node_modules/immich/bin/index.js:164:23
at Generator.next (<anonymous>)
at fulfilled (/share/CACHEDEV1_DATA/.qpkg/NodeJS18/lib/node_modules/immich/bin/index.js:29:58)
Node.js v18.12.1
You don't have any links that could cause an infinite loop, do you?
These are the folders mapped to the container.
- /share/CACHEDEV1_DATA/Container/Docker Data/Immich:/config
- /share/CACHEDEV3_DATA/NAS Pics/Immich/Photos:/photos
- /share/CACHEDEV3_DATA/NAS Pics/Immich/Uploads:/uploads
This is the folder with my photos that I linked in the command.
/share/CACHEDEV3_DATA/NAS\ Pics/Main
What container have you mapped these folders to? The immich server?
The first 3 are bound to the server folders.
The CLI should really be run outside of the immich server container
it is
Can you share your docker-compose file and a screenshot?
What directory / path are you passing to the cli command then?
/share/CACHEDEV3_DATA/NAS\ Pics/Main


What is the RAM on your QNAP?
16 GB
Hmm
If you try to upload a folder with a small number of assets, does it work?
What about editing the index.js file and adding a console.log so we can see what path is causing the error?
The error is at line 164 of index.js.
You could add a console.log of the path just above it, so on line 163.
Then when you run it again, hopefully that gives us a clue as to where it's going wrong.
ok one min.
Ok, added it and ran it:
Indexing local assets...
/share/CACHEDEV3_DATA/NAS Pics/Main
/share/CACHEDEV1_DATA/.qpkg/NodeJS18/lib/node_modules/immich/bin/index.js:165
files.push(...yield crawler.crawl(newPath).withPromise());
^
RangeError: Maximum call stack size exceeded
at /share/CACHEDEV1_DATA/.qpkg/NodeJS18/lib/node_modules/immich/bin/index.js:165:23
at Generator.next (<anonymous>)
at fulfilled (/share/CACHEDEV1_DATA/.qpkg/NodeJS18/lib/node_modules/immich/bin/index.js:29:58)
Node.js v18.12.1
Super weird. I assume there is nothing unusual about this directory or its contents?
just photos
You know, I think this might be due to the large volume of photos + using async.
I removed the recursive flag; let's see if it works then. I'm pretty sure it has no subfolders, but I had added it just to make sure.
Are all 50k assets in that one folder?
No,
How many are there?
Less, I'm trying to check. I have 50K in the parent folder NAS Pics, split into 2 folders: Main, which I'm trying to upload, and Mobile, where I let my phone back up to.
And now we're cooking.

Turns out I have 3 folders under Main with a total of 1,600 pictures, so recursive was causing an issue for some reason.
And I guess I do have over 50k pictures in the Main folder alone.
We might have some issue with the CLI
because I did try it with 100k assets and it works fine
Yeah, it's weird because it's even working fine here with the large set of assets.
I was able to reproduce it locally though.
Let's roll back to the previous version
This might be a bug there too, let me check
Would these 2 folders cause recursive issues? Are they automatically ignored?

Can you install the new version of the immich CLI?
npm i -g immich
we just pushed out a potential fix
I don't think so.
Also, the session stops when the SSH connection is lost?
It closed after about 1,100 pictures. Considering I'm using a laptop, the connection won't stay on for long.
Ok, updated. I should use the recursive flag, I assume.
please try it
Hmm, still indexing local assets. Didn't take this long last time.
Well, last time you didn't go recursively to those other directories
True, but they are one level deep and have 1k files.
This time it is and there is presumably a lot of files to scan through, most of which will be ignored.
You can run the tree command to look at all the files; presumably there are some that might not show up in the UI.
When I did the benchmarking, scanning 100k+ assets didn't take longer than 10-20 seconds.
Still indexing, something must be wrong.
Running it without recursive took 5 seconds. Ran it again twice with recursive and it took 20 seconds, but it worked. It found over 210k more files than I know about, so I won't use it, but at least it was fast.

Yeah, that's weird. It might be picking up files from the trash
So I cleared the trash just to make sure, and it was still the same amount. I then looked at the volume over SFTP and saw the hidden thumb folders. The CLI tool is pulling in all the files from the hidden folders in the entire tree. Is there any way to add an option to ignore hidden folders and files?
No, there is not unfortunately
Can you explicitly exclude certain folders? QNAP starts all these folders with the @ sign.
No, there is not for the CLI at the moment.
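In the meantime, a shell-side way to gauge the impact is to count the files with and without the @-prefixed folders. A sketch, assuming the library path from earlier in the thread and that the QNAP system folders (e.g. @Recycle) all start with @:

```shell
DIR="/share/CACHEDEV3_DATA/NAS Pics/Main"   # your library path

# Total files the recursive crawl would see
find "$DIR" -type f | wc -l

# Files left after pruning the hidden @-prefixed folders
find "$DIR" -path '*/@*' -prune -o -type f -print | wc -l
```

If the two numbers differ by roughly 210k, that confirms the extra files are all coming from those hidden system folders.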
ok, at least it's working on individual folders for now.
I was wondering, would it import faster if it wasn't running all the jobs at the same time?
If yes, how can I disable the jobs?
I just realized the CLI is a separate program. I should be able to pause Immich, right?
I don't think there is a way to disable the jobs that run automatically on upload. You could disable machine learning if you wanted to do that part after the upload was complete.
How do I do that?
If you want to pause the jobs while uploading, just turn off the microservices container
Yeah, that would work too. When you turn it back on, it'll process everything that's been queued.
How?
Usually, in a Docker deployment, you'd just run docker stop <container name>.
I was using the all-in-one, so I needed to figure out a command to stop it. I did, and it definitely saved time.
I am now running all the jobs and was wondering why there is no way in the UI to pause them. I don't want to lose the queue; I just want to be able to pause each job where it stands.
You could maybe open a feature request for this. There is support for pausing and clearing the queue in the server API endpoints, but no buttons for it in the web UI.