Fooocus loads the SDXL model too slowly
The time it takes to load the model has gone from 8 seconds to 30 seconds. Does this have anything to do with my using a Network Volume?
34 Replies
Unknown User•15mo ago
Message Not Public
The 8 seconds was on other GPU cloud platforms, such as AutoDL.
Network volumes are very slow, don't use them if speed matters
I have 100 GB of models to load. Where can I find enough storage space without using a Network Volume?
Bake them into your Docker image or copy them from an S3 bucket.
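For the bake-into-image route, a minimal Dockerfile sketch (the base image, paths, and bucket name here are illustrative assumptions, not a RunPod-specific recipe):

```dockerfile
# Illustrative sketch only: base image and paths are assumptions.
FROM python:3.10-slim   # replace with your existing Fooocus base image

# Copy checkpoints into the image at build time so they sit on the
# container's local disk instead of a network volume.
COPY models/checkpoints/ /app/Fooocus/models/checkpoints/

# Alternatively, pull from S3 at build time (requires the aws CLI and
# credentials available during the build):
# RUN aws s3 cp s3://<your-bucket>/checkpoints/ /app/Fooocus/models/checkpoints/ --recursive
```

The trade-off: a 100 GB image is slow to push and pull, but once the pod is up, every model load reads from local disk.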
Which directory is the Container Disk mounted at?
You don't want to save to container storage.
You want to save to the volume, but note that if you stop the pod you can end up in 0-GPU mode.
I want to copy the model to the Container Disk and then load the model when I start a new machine.
You can do that, but why? Regular persistent storage is fine too; it's usually only network volumes that are slow.
Container storage is mounted at / and includes all the OS files too.
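A quick way to see which filesystem a directory actually lives on is to cross-reference it with the Linux mount table. This is a generic sketch, not RunPod-specific; the `/workspace` volume path mentioned in the comment is an assumption about the pod template.

```python
import shutil

def describe_mounts(paths):
    """Report the backing device and free space for each path, to tell
    whether a directory sits on the container disk or a separate
    (possibly network) volume."""
    devices = {}
    try:
        with open("/proc/mounts") as f:  # Linux mount table
            for line in f:
                dev, mnt = line.split()[:2]
                devices[mnt] = dev
    except FileNotFoundError:
        pass  # not on Linux; device names unavailable

    report = []
    for path in paths:
        total, used, free = shutil.disk_usage(path)
        # The longest mount point prefixing the path is its filesystem.
        mount = max((m for m in devices if path.startswith(m)),
                    key=len, default="?")
        report.append({"path": path,
                       "device": devices.get(mount, "?"),
                       "free_gb": free / 1e9})
    return report

if __name__ == "__main__":
    # "/" is container storage; "/workspace" is a common volume mount
    # on RunPod pods (an assumption -- adjust to your template).
    for entry in describe_mounts(["/"]):
        print(entry)
```

If the model directory reports the same device as `/`, it is on the container disk; a different device (often an NFS or overlay path) points at a mounted volume.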
I provide image-generation services through an API. Each image may need a different model, so model-loading speed is very important to me.
Why do you want to use pods then? Surely serverless is better for that?
I haven't delved into serverless yet, but I use a custom version of the Fooocus code
Can a customized Fooocus be run on serverless?
Anything is possible but sounds like a lot of work.
I couldn't find a Fooocus serverless template. Is there one now?
No
How do I report a Pod that is performing very poorly?
Unknown User•15mo ago
Message Not Public
Loading the SDXL model in Fooocus from the Container Disk still takes 30 seconds. Why? Normally it should take 5-10 seconds.
Unknown User•15mo ago
Message Not Public
I have already gotten rid of the Network Volume and moved all the models to the Container Disk.
Unknown User•15mo ago
Message Not Public
Yes, so it should have nothing to do with the Network Volume. I have tried 3 different Pods and model loading is still slow.
I don't know how to get normal model-loading speed.
On an RTX 4090.
Use Community Cloud instead of Secure Cloud. Secure Cloud seems to use the same disks as network storage even if you don't attach a network volume to your pod.
However, Community Cloud can't use a Network Volume for copying resources over. How can I spin up new services efficiently?
Unknown User•15mo ago
Message Not Public
Would switching to community cloud solve the slow model loading issue?
Unknown User•15mo ago
Message Not Public
Hope this doesn't waste my money and time😂
Unknown User•15mo ago
Message Not Public
Yes, confirmed: Community Cloud loads models very quickly.
Why can't Secure Cloud, where Network Volumes are in use, provide a Container Disk with normal performance?
Unknown User•15mo ago
Message Not Public
One host runs multiple pods, so if several pods share the same host machine and one of them is doing intensive I/O, it might affect speeds. Though that's just my speculation.
Sounds very plausible
Unknown User•15mo ago
Message Not Public