Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I created a model with FLUX dev and it is working fine. It took only 10 minutes and is giving me good outputs. I also tried the way you explained in the tutorials with massive computation, so what is the difference? I am doing it with FLUX and a dataset of 10 images and getting the same image quality; the results from your model and mine are the same.
I want variety in expression and consistency in the person's body, but I guess, as you stated, that can be achieved through the dataset if we include those in it.
So I want to improve my person workflow and the generations to get everything with prompts instead of a big dataset.
Here's my dumb question of the day, lol. Would your SUPIR app work on Massed Compute if I installed it from the zip like normal? I have about 70 images I need to upscale and I don't want to wait 12 hours.
I really like Massed Compute for prototyping. I like to train locally, but sometimes I get weird issues with the dataset, and working them out at lightning speed is so nice.
@Dr. Furkan Gözükara Congratulations on a job well done. I tried the exact same generation on V31 and V32 after a fresh install. It works; two important points. Positive: the TeaCache implementation is much cleaner, with better quality and fewer artifacts at, of course, the same setting of 0.15. Negative: during the process on V27-31, with the 32GB preset on a 5090, it was often around ~28-29GB of VRAM in use. Now it is 18.5GB of used VRAM with the 32GB preset; I even tried the 48GB preset, with the same result.
The same generation went from 310s (V31) to 418s (V32), roughly 35% slower.
Yes, of course the new TeaCache is better. My intention was not to compare the two TeaCache versions, but obviously from V31 to V32 there is that new TeaCache, which seems great and certainly has better VRAM usage, yet there is a drop in performance on 24/32GB GPUs.
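For anyone wondering what the 0.15 setting mentioned above controls, here is a minimal, hypothetical sketch of the general idea behind a TeaCache-style threshold: the relative change of the model input is accumulated across diffusion steps, and a step reuses the cached output only while that accumulated change stays below the threshold. This is not the actual SwarmUI/ComfyUI implementation; the class and method names are purely illustrative.

```python
import torch

class TeaCacheSketch:
    """Illustrative threshold-based step cache (assumption, not the real code)."""

    def __init__(self, rel_l1_threshold: float = 0.15):
        self.threshold = rel_l1_threshold  # the "0.15" knob discussed above
        self.accumulated = 0.0             # accumulated relative change
        self.prev_input = None             # input seen at the last full model call
        self.cached_output = None          # output reused on skipped steps

    def should_skip(self, model_input: torch.Tensor) -> bool:
        """Decide whether this step can reuse the cached output."""
        if self.prev_input is None or self.cached_output is None:
            self.prev_input = model_input
            return False                   # nothing cached yet: run the model
        rel_change = ((model_input - self.prev_input).abs().mean()
                      / (self.prev_input.abs().mean() + 1e-8)).item()
        self.accumulated += rel_change
        self.prev_input = model_input
        if self.accumulated < self.threshold:
            return True                    # drift still small: reuse cached output
        self.accumulated = 0.0             # drift too large: recompute this step
        return False

    def store(self, output: torch.Tensor) -> None:
        self.cached_output = output        # remember the latest full output
```

In this sketch, raising the threshold skips more steps (faster, more risk of artifacts), while lowering it recomputes more often (slower, cleaner output).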
The first test was not good at all, but I made a mistake by inserting a 9:16 photo and generating a 16:9 video. I'm making another one, but it will take some time.
It is odd. The first version of TeaCache started slow and ended very fast. The new update starts fast and ends very slow. I think the new update is a bit slower in general. I'll keep testing tomorrow.