Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I'm currently training a test LoRA on the Wan 2.2 low noise model, specifically a LoRA for a person or character. From what I understand, it's best to train it on the low noise model. It's a fast training run: only 30 images, in diffusion-pipe under WSL. It should take about 1.5 hours.
Like, if I wanted to train the same dataset with the same settings but different base models (for a LoRA), it would be nice if it could run them one after another overnight. Right now I run one, wait around 1-2 hours, and then start a new one.
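The overnight queue described above can be approximated with a simple shell loop while waiting for built-in support. This is a minimal sketch: the config file paths are placeholders for your own per-base-model TOML files, and the launch command assumes a diffusion-pipe-style `deepspeed ... train.py --config` invocation; adjust it to however you normally start a run.

```shell
#!/bin/bash
# Hypothetical queue script: run several trainings back to back overnight.
# Each config uses the same dataset/settings but a different base model.
# These paths are placeholders -- replace them with your real config files.
configs=(
  "configs/wan22_low_noise.toml"
  "configs/flux_dev.toml"
)

for cfg in "${configs[@]}"; do
  echo "Starting training with $cfg"
  # Assumed diffusion-pipe-style launch command; swap in your actual one.
  # The || branch keeps the queue going even if one run fails.
  deepspeed --num_gpus=1 train.py --deepspeed --config "$cfg" \
    || echo "Run with $cfg failed; continuing with the next one"
done
```

Start it before going to bed and each run begins as soon as the previous one finishes (or fails), instead of leaving the GPU idle between manual restarts.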
Awesome. By the way, pro tip: I got a "flash_attn2" error running it on my RTX 5090. However, when I use the command line like this, loading cl (the MSVC compiler environment) before activating the venv, it works just fine:
C:
cd "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build"
vcvarsall.bat x86_amd64
cl
cd C:\musubi-tuner
cd venv
cd scripts
activate
cd ..
cd ..
What is the best finetune checkpoint for a consistent character that avoids the plastic FLUX look? Is Krea, Wan, or Qwen better than FLUX dev? I was optimistic about Krea; was it not as good after all?