I'm looking for the latest info and best practices for training WAN 2.2 LoRAs, especially for character likeness on a 24GB card.

From what I've gathered so far, it seems the best dataset approach is to train the high-noise model on low-resolution videos (to learn motion) and the low-noise model on high-resolution images (to learn the fine details).
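To make that split concrete, here's roughly how I'd lay out the dataset configs (Musubi-Tuner-style TOML; the exact keys like `video_directory`, `target_frames`, and `frame_extraction` are my reading of the docs, so treat this as a sketch, and in practice you'd probably split it into one file per training run):

```toml
# Hypothetical dataset layout — one [[datasets]] block per data type.
[general]
caption_extension = ".txt"

# Low-res videos -> intended for the high-noise run (motion)
[[datasets]]
video_directory = "data/videos_lowres"
resolution = [256, 256]
target_frames = [1, 25, 45]
frame_extraction = "head"

# High-res images -> intended for the low-noise run (fine detail)
[[datasets]]
image_directory = "data/images_highres"
resolution = [960, 960]
batch_size = 1
```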

I also saw in the new Musubi-Tuner docs that you can apparently train both the high-noise and low-noise LoRAs at the same time now, passing --dit (for the low-noise model) and --dit_high_noise (for the high-noise model) in a single command. On 24GB of VRAM, the key seems to be adding the --lazy_loading option so that only the currently active model is kept on the GPU.
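For discussion, here's the sort of command I'm imagining. Only --dit, --dit_high_noise, and --lazy_loading come from the docs I quoted; the script name, paths, and every other flag are my assumptions from the Musubi-Tuner README, so please correct me if I've got them wrong:

```shell
# Sketch of simultaneous WAN 2.2 high/low-noise LoRA training on 24GB.
# Flags other than --dit / --dit_high_noise / --lazy_loading are my guesses.
accelerate launch --num_cpu_threads_per_process 1 wan_train_network.py \
  --task t2v-A14B \
  --dit path/to/wan2.2_low_noise_model.safetensors \
  --dit_high_noise path/to/wan2.2_high_noise_model.safetensors \
  --lazy_loading \
  --dataset_config dataset.toml \
  --network_module networks.lora_wan \
  --network_dim 16 \
  --mixed_precision bf16 \
  --max_train_epochs 16 \
  --save_every_n_epochs 1 \
  --output_dir output \
  --output_name my_character_lora
```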

Has anyone tried this simultaneous training method yet?