Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
My trained SDXL LoRA and SDXL fine-tune both produce many artifacts, like extra arms and disembodied limbs. With the LoRA I have to use it at a low weight, around 0.4; higher weights reduce image quality. The LoRA was trained on Cyberreal XL and Big Love. I used OneTrainer for the fine-tune, and the body was not learned properly. I wonder if using regularisation images caused my models to learn the body so poorly. I used Civitai for the LoRA training. Do you guys have any tips?
Can someone help me diagnose this issue? Images from my recently fine-tuned model are coming out with strange patterns, particularly noticeable on skin but also across the image overall. Is it due to overtraining, or could it be from particularly grainy images in the dataset?
If your dataset has grainy images, then that's the cause. If not, then your learning rate is too high. If your outputs are grainy, try reducing the LoRA rank and retraining.
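If you want to try the lower rank / lower learning rate suggestion, here is a sketch of what the relevant kohya sd-scripts flags look like. The paths and numbers are placeholders, not a tested recipe; the flag names (`--network_dim` for the LoRA rank, `--network_alpha`, `--learning_rate`) are from kohya's sd-scripts.

```shell
# Hypothetical example: retrain an SDXL LoRA at a lower rank and learning rate
# with kohya sd-scripts. --network_dim is the LoRA rank; --network_alpha is
# commonly set to rank/2 or equal to the rank.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path /path/to/base_model.safetensors \
  --network_module networks.lora \
  --network_dim 16 --network_alpha 8 \
  --learning_rate 5e-5 \
  --xformers
```

Halving the rank roughly halves the LoRA's capacity to memorize noise like film grain, at the cost of some detail; combining that with a lower learning rate is the usual first move before pruning the dataset.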
The installation of PyTorch + xformers is successful. When I check in the .venv, xformers is indeed installed. However, during training the CrossAttention setting simply gets ignored, whether I set it to xformers or to SDPA. No mention of CrossAttention appears in the training logs, and I can tell from the VRAM usage that it is not being enabled. I have all the requirements on my PC (Python 3.10, CUDA 12.8, etc.).
Do you have any advice? Are the CrossAttention settings working in your kohya_ss install with your 5090, xformers in particular? Thank you in advance.
Currently, when I start the kohya_ss GUI on an RTX 5090, I get the following error and cannot train a LoRA. Maybe I should wait for an update, but is there any way to train a LoRA using an RTX 5090? UserWarning: NV...
Thank you for your reply. Yes, I have Python 3.10.11, and I am using the latest config "48GB_GPU_28200MB_6.3_second_it_Tier_1". Speed is around 3.85 s/it. Perhaps it is indeed working, but I find it strange that there is no mention of this in the cmd window. I remember the use of xformers being shown there on older installs (when using a 4090). Do you see this in your kohya cmd window when training?
Furthermore, I get the exact same 3.85 s/it speed and 27.7 GB of VRAM used when I set CrossAttention to "none" in the Kohya GUI as when I set it to "xformers". That suggests no memory-efficient cross attention is happening at all.
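One quick sanity check is whether xformers and PyTorch's SDPA are even importable from the training venv. This is a diagnostic sketch I put together, not part of kohya itself; run it with the venv's own python interpreter:

```python
import importlib.util

def check_attention_backends():
    """Return which attention backends appear available in this environment."""
    available = {}
    # xformers: is the package installed at all in this venv?
    available["xformers"] = importlib.util.find_spec("xformers") is not None
    # SDPA: shipped since PyTorch 2.0 as torch.nn.functional.scaled_dot_product_attention
    try:
        import torch
        available["sdpa"] = hasattr(torch.nn.functional,
                                    "scaled_dot_product_attention")
    except ImportError:
        available["sdpa"] = False
    return available

if __name__ == "__main__":
    for name, ok in check_attention_backends().items():
        print(f"{name}: {'available' if ok else 'NOT available'}")
```

If both report available but speed and VRAM are identical across settings, the setting is probably not reaching the training script; the next thing I would inspect is the actual command line or config file the GUI generates and passes to sd-scripts, to see whether the attention flag is present there.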