**Additional question, is there any way that I can improve training speed itself?**

• I used the best Tier 1 config for 48 GB VRAM for LoRA training
  • but set `cache_latents = false` and `cache_latents_to_disk = false` (see the sketch after this list)
• With an RTX 6000 Ada, 20 epochs took 24 minutes
• With an A100 SXM, 20 epochs took 16 minutes
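
Since the timings above were measured with latent caching disabled, turning those two flags back on is the most obvious speed-up to try: the dataset is then encoded through the VAE once up front instead of being re-encoded during training. A minimal sketch, assuming the Tier 1 config is a kohya-style `key = value` (TOML) file; only the two cache flags come from the question above, everything else is illustrative:

```toml
# Illustrative snippet only; the surrounding config file and exact trainer are assumptions.
# Only cache_latents / cache_latents_to_disk are taken from the question above.

cache_latents = true          # encode images to latents once and reuse them, instead of re-encoding every epoch
cache_latents_to_disk = true  # also persist the latents so later runs can skip VAE encoding entirely

# Caveat: cached latents generally cannot be combined with on-the-fly image augmentations
# (e.g. random crop / color aug); keep both flags false if those augmentations are required.
```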