Filesystem packer slows down after 30k files
After the filesystem packer has hashed all 255k files, the DB operations start to slow down the entire application.
The DB writer gets through about 30k files before the TAR writer catches up and is throttled to the DB writer's speed; from there it takes hours, maybe days, to finish...
Any way I can speed this up?
https://github.com/OptoCloud/OptoPacker
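To make the question concrete, here is a minimal sketch (in Python, not the actual OptoPacker code) of the batching pattern I'm asking about. The table name, column layout, batch size, and queue size are made up for illustration; the idea is one transaction per batch of rows instead of one commit per file, with a bounded queue so the hasher/TAR side isn't blocked on every single insert:

```python
# Minimal sketch (illustrative, not OptoPacker's real schema or code):
# batch DB inserts inside one transaction instead of committing per file.
import sqlite3
import queue
import threading

BATCH_SIZE = 1000  # illustrative; tune for the workload

def db_writer(q: "queue.Queue[tuple | None]", db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("PRAGMA journal_mode=WAL")    # writer-friendly journaling
    conn.execute("PRAGMA synchronous=NORMAL")  # fewer fsyncs per commit
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (path TEXT, size INTEGER, hash BLOB)"
    )
    batch = []
    while True:
        item = q.get()
        if item is None:                       # sentinel: flush and stop
            break
        batch.append(item)
        if len(batch) >= BATCH_SIZE:
            with conn:                         # one transaction per batch
                conn.executemany("INSERT INTO files VALUES (?, ?, ?)", batch)
            batch.clear()
    if batch:
        with conn:
            conn.executemany("INSERT INTO files VALUES (?, ?, ?)", batch)
    conn.close()

# Producer side: a bounded queue decouples the hasher/TAR writer from
# the DB thread instead of letting every row block the pipeline.
q: "queue.Queue[tuple | None]" = queue.Queue(maxsize=10_000)
t = threading.Thread(target=db_writer, args=(q, "index.db"), daemon=True)
t.start()
# ... for each file: q.put((path, size, hash_bytes)) ...
# ... when done: q.put(None); t.join() ...
```

My understanding is that per-row commits force a disk sync per file, which would explain the wall I'm hitting around 30k files; is batching like this the right direction, or is the bottleneck somewhere else?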