When fine-tuning FLUX, any suggestions on lora_rank? The default in the Replicate trainer is 16, and the docs say higher numbers capture more features but take longer to train. Any guidance on this?
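
For context on the trade-off being asked about: LoRA adds a pair of low-rank matrices per adapted layer, so the trainable parameter count grows linearly with the rank. The sketch below is just a back-of-the-envelope illustration of that scaling, not code from the Replicate trainer; the layer width is a placeholder, not FLUX's actual dimensions.

```python
# Minimal sketch: how lora_rank scales trainable parameters.
# LoRA approximates a weight update dW (d_out x d_in) as B @ A,
# where A is (rank x d_in) and B is (d_out x rank), so the added
# parameter count per layer is linear in rank.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters LoRA adds to one linear layer."""
    return rank * d_in + d_out * rank

# Placeholder layer width for illustration only.
d_in = d_out = 3072

for rank in (4, 16, 32, 64):
    print(f"rank={rank:>3}: {lora_params(d_in, d_out, rank):,} params/layer")
```

Doubling the rank doubles the adapter's parameters (and the capacity to memorize fine detail), which is why higher ranks train more slowly and can overfit small datasets, while lower ranks are cheaper but may miss fine-grained features.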

