Hello everyone! Today, I’ll be guiding you step by step through the process of training a LoRA on the latest state-of-the-art text-to-image generative AI model, FLUX.
Over the past week, I’ve been deeply immersed in research, working tirelessly to identify the most effective training workflows and configurations. So far, I’ve completed 64 full training sessions, and more are underway.
I’ve developed a range of unique training configurations that cater to GPUs with as little as 8GB of VRAM, all the way up to 48GB. These configurations are optimized for VRAM usage and ranked by training quality. Remarkably, all of them deliver outstanding results—the primary difference lies in the training speed.
So yes, even if you’re using an 8GB RTX 4060, you can train an impressive FLUX LoRA at a respectable speed.
For this tutorial, I’ll be using the Kohya SS GUI, a user-friendly interface built on the acclaimed Kohya SS training scripts. With this GUI, you’ll be able to install, set up, and start training with just a few mouse clicks.
While this tutorial will demonstrate how to use the Kohya SS GUI on a local Windows machine, the process is identical for cloud-based services.
I encourage you to watch this tutorial to learn how to effectively use the Kohya SS GUI for training. We’ll cover everything from the basics, so even if you’re a complete beginner, you’ll be able to fully train and utilize an amazing FLUX LoRA model.
The tutorial is organized into chapters and includes manually written English captions, so be sure to check out the chapters and enable captions if you need them.
In addition to training, I’ll also show you how to use the generated LoRAs within SwarmUI, and how to perform grid generation to identify the best training checkpoint.
Finally, at the end of the video, I’ll demonstrate how you can train Stable Diffusion 1.5 and SDXL models using the latest Kohya GUI interface.