Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
yes, of course the new teacache is better. My intention was not to compare the two teacaches, but obviously from v31 to v32 there is that new teacache, which seems great with clearly better VRAM usage, but there is a drop in performance on 24/32GB GPUs
The first test was not good at all, but I made a mistake by feeding in a 9:16 photo and generating a 16:9 video. I'm making another video, but it will take some time
It is strange. The first version of teacache started slow and ended very fast. The new update starts fast and ends very slow. I think the new update is a bit slower overall. I'll keep testing tomorrow
From v32 to v33: Aspect ratio works correctly now, the GPU uses 26.9GB of VRAM instead of 18.5GB with the 32GB preset, and for the exact same generation with TeaCache 0.15 for 14B: 310s (v31), 418s (v32), 402s (v33)
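For reference, the relative slowdown implied by those timings can be worked out with a quick script (version labels and timings taken directly from the comment above; nothing else is assumed):

```python
# Reported generation times for the same settings (TeaCache 0.15, 14B model)
times = {"v31": 310, "v32": 418, "v33": 402}  # seconds

baseline = times["v31"]
for version, t in times.items():
    # Percentage change relative to the v31 baseline
    slowdown = (t - baseline) / baseline * 100
    print(f"{version}: {t}s ({slowdown:+.1f}% vs v31)")
```

So v32 is about 34.8% slower than v31, and v33 recovers a little, at about 29.7% slower.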
@Dr. Furkan Gözükara For fine-tuning FLUX with Kohya, do you recommend using the models from OwlMaster/FLUX_LoRA_Train (https://huggingface.co/OwlMaster/FLUX_LoRA_Train/tree/main), or the models from your unified downloader (ae from black-forest-labs/FLUX.1-schnell, t5xxl from OwlMaster/SD3New, clip_l from OwlMaster/zer0int-CLIP-SAE-ViT-L-14)?