Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Can anyone recommend guides on the efficiency and optimization considerations when using ComfyUI workflows, especially at scale? Thanks in advance!
Hey everyone, I have a few questions and will start with this one: is there a way to speed up generation of Flux Dev on SD Forge? It feels very slow with my RTX 3090, sometimes 4-5 minutes before it even starts generating. That's with flux1-dev-bnb-nf4.safetensors as well as the normal Dev version, as you can see in the image.
I finished a training of myself with 36 images. I trained for 150 epochs and got 10 LoRAs. I used "ohwx man" and was wondering if I have to use ohwx every time in the prompt to trigger the LoRA? It also seems it didn't train enough or something, since I only start seeing a bit of likeness around epoch 90.
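For reference, here is a minimal sketch of how a trained LoRA and its trigger token are typically used at inference with the diffusers library. The base model repo, LoRA file name, and prompt are assumptions for illustration, not the exact setup above; the point is that the trigger token ("ohwx man") has to appear in the prompt, since loading the LoRA alone usually isn't enough to pull in the trained identity.

```python
# Minimal sketch, assuming FLUX.1-dev as the base model and a hypothetical
# LoRA file name; adjust to whatever your trainer actually saved.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",          # assumed base model
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()               # helps fit on a 24 GB card

# Hypothetical checkpoint saved at epoch 90 of the run described above
pipe.load_lora_weights("ohwx_man-000090.safetensors")

image = pipe(
    prompt="photo of ohwx man, studio lighting, sharp focus",  # trigger token included
    num_inference_steps=28,
).images[0]
image.save("ohwx_test.png")
```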
"I am getting 26 secs on 30 steps, 19 secs on 20 steps, and 14 secs on Hyper 8 steps. That's including the unfortunate Lora handling by Swarm, as I'm using Hyper as a Lora"
I tried Forge, but it did weird things with loading and unloading the model, consuming a lot of system memory (not VRAM). Maybe you have the same issue and your system is swapping to disk?
Yeah, it does look like it. Maybe I'll install Swarm then to test. Thanks for the feedback :). Looking at the output, it seems like an issue with the memory loading and unloading. Was there an installer from the Dr., or just the main page?
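If you want to confirm the swapping-to-disk suspicion, a quick check is to watch system RAM and swap usage while a generation runs. A minimal sketch using the psutil package (psutil is an assumption here, not something Forge or Swarm ships with):

```python
# Rough sketch: sample RAM and swap usage while a generation is running,
# to see whether model loading/unloading is pushing the system into swap.
import time
import psutil

for _ in range(30):               # sample for ~30 seconds
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print(
        f"RAM used: {vm.used / 1e9:5.1f} GB / {vm.total / 1e9:5.1f} GB   "
        f"swap used: {sw.used / 1e9:5.1f} GB"
    )
    time.sleep(1.0)
```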
Actually, I tried to merge these features with Flux so that it would create the images and then convert them to vectors with the specified features (color, stroke, style, etc.), but I couldn't. I'm not that good at coding, I think.
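One way to wire that two-stage idea together is to generate the raster image first and then run a tracing library over it. A rough sketch, assuming the diffusers FluxPipeline for generation and the vtracer package (pip install vtracer) for tracing; the model repo, file names, prompt, and option values are placeholders:

```python
# Sketch of the two-stage idea: generate a raster image with FLUX, then trace it
# into an SVG. Model repo id, file names, prompt, and options are placeholders.
import torch
from diffusers import FluxPipeline
import vtracer  # pip install vtracer

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="flat vector style logo of a fox head, bold outlines, limited palette",
    num_inference_steps=28,
).images[0]
image.save("fox_raster.png")

# Trace the raster image into an SVG; vtracer exposes further options
# (speckle filtering, path precision, etc.) if more control over stroke and style is needed.
vtracer.convert_image_to_svg_py("fox_raster.png", "fox_vector.svg", colormode="color")
```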
@Dr. Furkan Gözükara I noticed all your generations have very clear backgrounds. How is that possible? Mine always come out with a shallow depth of field and a soft, blurry background.
It's the LoRA training that makes that impact. As you train more epochs, that annoying "shallow depth of field with a soft, often blurry background" effect goes away. But more training also causes overfitting, so you need a very good config, plus you have to decide which checkpoint to use to balance both.
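A practical way to pick that balanced checkpoint is to render the same prompt and seed with every saved epoch and compare the results side by side. A sketch under the assumption of a FLUX base model and hypothetical LoRA file names:

```python
# Sketch: render the same prompt and seed with each saved LoRA epoch so the
# sweet spot between likeness and overfitting can be picked by eye.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

prompt = "photo of ohwx man standing on a busy street, sharp background, deep depth of field"

for epoch in (90, 105, 120, 135, 150):        # hypothetical saved epochs
    pipe.load_lora_weights(f"ohwx_man-{epoch:06d}.safetensors")
    generator = torch.Generator("cpu").manual_seed(42)   # fixed seed for a fair comparison
    image = pipe(prompt, num_inference_steps=28, generator=generator).images[0]
    image.save(f"compare_epoch_{epoch}.png")
    pipe.unload_lora_weights()                # drop this LoRA before loading the next one
```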