Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
If your dataset has grainy images, then that's the cause. If not, then your learning rate is too high. If your images are grainy, try reducing the LoRA rank and retraining.
The installation of pytorch + xformers is successful. When I check in the .venv, xformers is indeed installed. However, during training the CrossAttention setting simply gets ignored, whether I set it to xformers or to SDPA. No mention of CrossAttention appears in the training logs, and I can tell from the VRAM usage that it is not being enabled. I have all the requirements on my PC (Python 3.10, CUDA 12.8, etc.).
Do you have any advice? Are the CrossAttention settings working in your kohya_ss install with your 5090, xformers in particular? Thank you in advance.
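A quick way to sanity-check the environment, independent of the kohya_ss GUI, is to confirm that xformers is importable inside the training venv and that PyTorch's built-in SDPA path works at all. This is a minimal sketch (not kohya-specific code; the tensor shapes are arbitrary):

```python
import importlib.util

import torch
import torch.nn.functional as F

# Is the xformers package importable from this venv at all?
# (If this prints False, kohya cannot use it no matter what the GUI says.)
has_xformers = importlib.util.find_spec("xformers") is not None
print("xformers importable:", has_xformers)

# SDPA is built into PyTorch >= 2.0 and needs no extra package.
# Shape convention here: (batch, heads, sequence, head_dim).
q = torch.randn(1, 4, 8, 16)
out = F.scaled_dot_product_attention(q, q, q)
print("SDPA output shape:", tuple(out.shape))
```

If this runs but training still shows no change, the problem is more likely in how the trainer wires the setting through than in the installation itself.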
Currently, when I start the kohya_ss GUI on an RTX 5090, I get the following error and cannot train a LoRA. Maybe I should wait for an update, but is there any way to train a LoRA using an RTX 5090? UserWarning: NV...
Thank you for your reply. Yes, I have Python 3.10.11, and I am using the latest config "48GB_GPU_28200MB_6.3_second_it_Tier_1". Speed is around 3.85 s/it. Perhaps it is indeed working, but I find it strange that there is no mention of this in the cmd window. I remember the use of xformers being shown here on older installs (when using a 4090). Do you see this in your kohya cmd window when training?
Furthermore, I am getting the exact same 3.85 s/it speed and 27.7 GB of VRAM used when I set CrossAttention to "none" in the Kohya GUI as when I set it to "xformers". So that suggests no cross attention optimization is actually being applied.
My first guess would be some compression artifacts trained in with the dataset. Another potential issue: you are trying to generate at a much larger resolution than your training set.
It would be cool if someone found prompting techniques to improve quality (the obvious ones like "high res image" or "high quality image" have almost no effect on the generation).
Okay, but if I set CrossAttention to "none" I get the same speed and VRAM. That, plus the missing mention of it in the cmd window, makes me very suspicious about whether this is actually working.
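If you want evidence beyond the logs, you can benchmark attention implementations directly outside the trainer. A minimal sketch comparing a naive softmax attention against PyTorch's fused `scaled_dot_product_attention` (the tensor sizes are made up for illustration; on a GPU the gap between backends is far larger than on CPU):

```python
import time

import torch
import torch.nn.functional as F

def naive_attention(q, k, v):
    # Explicit softmax(QK^T / sqrt(d)) @ V; materializes the full attention matrix.
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale
    return attn.softmax(dim=-1) @ v

def bench(fn, q, k, v, iters=10):
    # Average wall-clock time per call over a few iterations.
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(q, k, v)
    return (time.perf_counter() - t0) / iters

# (batch, heads, sequence, head_dim) — arbitrary illustrative sizes.
q = k = v = torch.randn(1, 8, 256, 64)
t_naive = bench(naive_attention, q, k, v)
t_sdpa = bench(F.scaled_dot_product_attention, q, k, v)
print(f"naive: {t_naive * 1e3:.2f} ms, SDPA: {t_sdpa * 1e3:.2f} ms")
```

If swapping the CrossAttention setting in the GUI never moves your s/it or VRAM numbers while a standalone benchmark like this clearly does differ, that supports the suspicion that the setting isn't being passed through.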
Hi all - I followed the Flux LoRA training guide/tutorial. Do my speeds seem correct for a 3090? It's roughly 9 s/it using the level 2 workflow.
I keep seeing references to a "massive speed increase" with torch 2.5, but the 1-click installer, I believe, installs 2.7, so I imagine that's already included.
FluxGym is substantially faster, around 2 s/it, if memory serves. But as I am just learning about all this, it is a bit difficult for me to compare apples to apples (settings to settings). FluxGym, while easy to use, doesn't have the friendliest interface when digging into additional settings.
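One thing worth double-checking when comparing trainers is s/it versus it/s, since they are easy to mix up and invert the comparison. A tiny helper for turning a seconds-per-iteration reading into an overall time estimate (the 3000-step count is just an illustrative assumption, not from either tool's defaults):

```python
def eta_minutes(sec_per_it: float, steps: int) -> float:
    """Estimate wall-clock training time in minutes from a s/it reading."""
    return sec_per_it * steps / 60

# Hypothetical comparison using the speeds mentioned above over 3000 steps:
print(eta_minutes(9.0, 3000))  # 450.0 minutes at 9 s/it
print(eta_minutes(2.0, 3000))  # 100.0 minutes at 2 s/it
```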