Will you use musubi-tuner? I just can't get good LoRAs for Wan with the same dataset that produced good LoRAs on Hunyuan. I actually think captions might be required this time for more flexibility during prompting, but I'm not sure yet. I didn't caption for Hunyuan and those LoRAs came out good, sometimes better than Flux with kohya.
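If it helps, here's a minimal sketch for adding captions to an existing uncaptioned dataset, assuming musubi-tuner follows the usual kohya convention of a sidecar .txt caption file with the same basename as each image/video. The folder path and the placeholder caption text are just examples; you'd swap in real per-file captions (hand-written or from a VLM captioner).

```python
# Minimal sketch: write kohya-style sidecar captions (same basename, .txt)
# next to each image/video in a dataset folder. DATASET_DIR and the caption
# text below are placeholders, not real paths or recommended captions.
from pathlib import Path

DATASET_DIR = Path("dataset/wan")  # hypothetical dataset location
EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp", ".mp4"}

for media in sorted(DATASET_DIR.iterdir()):
    if media.suffix.lower() not in EXTENSIONS:
        continue
    caption_file = media.with_suffix(".txt")
    if caption_file.exists():
        continue  # don't overwrite captions you already wrote by hand
    # Placeholder caption; replace with the actual caption for this file.
    caption_file.write_text("a photo of the subject\n", encoding="utf-8")
    print(f"wrote {caption_file.name}")
```

That at least makes it easy to A/B test captioned vs. uncaptioned runs on the same dataset before committing to re-captioning everything.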
