Hello everyone. I am Dr. Furkan Gözükara, PhD in Computer Engineering. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
It is rare. The first version of TeaCache started slow and ended very fast. The new update starts fast and ends very slow. I think the new update is a bit slower overall. I'll keep testing tomorrow.
From v32 to v33: aspect ratio works correctly now, the GPU uses 26.9 GB of VRAM instead of 18.5 GB with the 32 GB preset, and for the exact same generation with TeaCache 0.15 for 14B: 310s (v31), 418s (v32), 402s (v33)
@Dr. Furkan Gözükara For fine-tuning FLUX with Kohya, do you recommend using the models from OwlMaster/FLUX_LoRA_Train (https://huggingface.co/OwlMaster/FLUX_LoRA_Train/tree/main), or the models from your unified downloader (ae from black-forest-labs/FLUX.1-schnell, t5xxl from OwlMaster/SD3New, clip_l from OwlMaster/zer0int-CLIP-SAE-ViT-L-14)?