Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Should I continue training the first LoRA on a new dataset with more steps, or should I train from scratch using the images generated by the trained LoRA as a dataset?
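For the first option, kohya's sd-scripts (the backend behind Kohya GUI) has a `--network_weights` argument that loads an existing LoRA before training continues. A minimal sketch of that route, assuming an SDXL LoRA; every path, folder name, and step count below is hypothetical:

```python
import subprocess

# Continue training an existing LoRA on a new dataset.
# All paths and values are hypothetical placeholders.
subprocess.run([
    "accelerate", "launch", "sdxl_train_network.py",
    "--pretrained_model_name_or_path", "sd_xl_base_1.0.safetensors",
    "--network_module", "networks.lora",
    "--network_weights", "first_lora.safetensors",  # resume from the prior LoRA
    "--train_data_dir", "dataset_v2",               # the new dataset
    "--resolution", "1024,1024",
    "--output_dir", "output",
    "--max_train_steps", "2000",
], check=True)
```

Training a model on its own generated outputs can compound its existing artifacts, so continued training on real images is usually the safer first experiment.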
I'm trying to get my head around the structure of the Patreon as well as the tutorials, and I'm having a bit of a hard time seeing where to start - I'd appreciate a pointer to a 101 or an introductory tutorial.
My use case WAS => Stable Diffusion 1.5 and some public CivitAI models + my own photography dataset with poses, light and bokehs (150-200 images) = New checkpoint + DreamBooth fine-tuning on a new person (50-100 images) = New checkpoint that let me create portraits of people in my own recognizable photography style (Rembrandt light, bokehs, ethereal feeling). I used the https://github.com/TheLastBen/fast-stable-diffusion notebook on Colab for both trainings (SD15/CIVIT + MyStyle-XYZ.jpg x 200 images) + (Person-XYZ x 100 images). Sample results are attached - they look exactly like real people, to the extent that even the parents can't tell whether an image was made with AI or not. With some experimenting I managed to get to 768 and 896 pixel resolutions here, all with Automatic1111.
What I want is to elevate this to SDXL at 1280 pixels and test out SDXL and CivitAI checkpoints. So I would have 150-200 1024-1280px images to train my style, and then 50-100 images of a person to generate images like (beautiful personXYZ as styleXYZ ...). I can't find a reliable notebook (Colab or RunPod, paid is OK) with clear instructions. I have a good development background and a PhD in CS, but in a slightly different area, so I struggle to get to the point where I have code that works so that I can experiment. For example: I found configs, but how do I set up the environment? I deployed OneTrainer, but it won't accept a .safetensors model. I managed to adjust fast-stable-diffusion to accept SDXL, but it fails to work with GDrive.
Any clues, hints or pointers I could use? Thank you so much in advance! Cheers, Alex
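On the .safetensors point: if the immediate blocker is just loading a single-file SDXL checkpoint (e.g. one downloaded from CivitAI), the diffusers library can do that directly with `from_single_file`. A minimal sketch; the checkpoint filename, prompt, and resolution below are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a single-file SDXL checkpoint and generate at a taller resolution.
# The checkpoint path and prompt are hypothetical placeholders.
pipe = StableDiffusionXLPipeline.from_single_file(
    "my_sdxl_checkpoint.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "portrait of personXYZ as styleXYZ, rembrandt lighting, shallow depth of field",
    width=1024,
    height=1280,  # SDXL is trained around ~1 megapixel, so stay near that total area
).images[0]
image.save("sample.png")
```

This covers only inference with an existing checkpoint, not the training itself, but it is a quick way to sanity-check downloaded .safetensors files outside of any particular trainer.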
If you are interested in using AI, generative AI applications, and open-source applications on your computer, then this is the most fundamental and important tutorial that you need. In this tutorial I show and explain how to properly install the appropriate Python versions, how to switch between different Python versions, how to install diffe...
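As a small illustration of why pinning the Python version matters (this is not the tutorial's exact content): many Stable Diffusion era tools are built against a specific interpreter, commonly Python 3.10.x, and a project can fail fast with a guard like the one below. The required version here is just an example:

```python
import sys

# Fail fast if the active interpreter is not the version this
# project expects. The 3.10 requirement is an example value.
REQUIRED = (3, 10)
if sys.version_info[:2] != REQUIRED:
    sys.exit(
        f"Python {REQUIRED[0]}.{REQUIRED[1]} is required, "
        f"but this is {sys.version.split()[0]}"
    )
print("Python version OK:", sys.version.split()[0])
```

Switching versions per project is then mostly a matter of creating each virtual environment from the right interpreter (for example `py -3.10 -m venv venv` with the Windows Python launcher) and activating it before installing anything.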
If you want to train FLUX with maximum possible quality, this is the tutorial you are looking for. In this comprehensive tutorial, you will learn how to install Kohya GUI and use it to fully Fine-Tune / DreamBooth the FLUX model. After that, you will learn how to use SwarmUI to compare the generated checkpoints / models and find the very best one to generate the most amazing image...
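SwarmUI has its own grid tool for this, but the underlying comparison idea (same prompt, same seed, one image per checkpoint) can also be sketched with diffusers. This assumes the fine-tuned checkpoints were saved in diffusers format; the prompt, trigger word, and directory names below are hypothetical:

```python
import torch
from diffusers import FluxPipeline

# Compare fine-tuned checkpoints: fixed prompt and seed, one image per model.
# Directory names and the trigger word are hypothetical placeholders, and the
# checkpoints are assumed to be saved in diffusers format.
prompt = "ohwx man, portrait photo"  # hypothetical trigger word
checkpoints = ["output/checkpoint-1000", "output/checkpoint-2000"]

for path in checkpoints:
    pipe = FluxPipeline.from_pretrained(path, torch_dtype=torch.bfloat16).to("cuda")
    generator = torch.Generator("cuda").manual_seed(42)  # same seed for every model
    image = pipe(prompt, generator=generator).images[0]
    image.save(path.replace("/", "_") + ".png")
    del pipe
    torch.cuda.empty_cache()
```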
Thank you so much Furkan! I have my best prompts and community models that worked well for SDXL. Which is the best and latest SDXL (not base model) tutorial?
In the end, it turned out to be 180 images. I'm using 100 epochs, and right now it's at 50% of the training process at a speed of 6.6 s per step. We'll see how the final result turns out.
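For anyone estimating wall-clock time from numbers like these, a quick back-of-the-envelope calculation helps; this assumes batch size 1 and no gradient accumulation (neither is stated in the message), so total steps = images x epochs:

```python
# Rough training-time estimate from the numbers above.
# Assumes batch size 1 and no gradient accumulation,
# so total steps = images * epochs.
images = 180
epochs = 100
sec_per_step = 6.6

total_steps = images * epochs                      # 18,000 steps
total_hours = total_steps * sec_per_step / 3600    # ~33.0 hours
remaining_hours = total_hours * 0.5                # run is at 50%: ~16.5 hours

print(f"total: {total_hours:.1f} h, remaining: {remaining_hours:.1f} h")
```

A larger batch size or gradient accumulation would divide the step count (and usually raise the per-step time), so treat this as a rough estimate only.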
What would be the best way to train a model for interior design? Let me explain. I need to apply a specific product to a given room. For example, I want a Renaissance-style room with this product on the wall and this product on the floor. I understand that in this case, it makes sense to train on a highly structured dataset with all the individual products and, at the same time, the same products applied to different rooms. I assume that captions are important in this scenario. What would be your approach to training a model for this application?
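A common convention for structuring such a dataset (used by kohya-style trainers) is one same-named .txt caption per image, with a unique trigger token per product plus a description of the room context, so the model can separate product identity from placement. The folder name, filenames, trigger words, and captions below are all hypothetical:

```python
from pathlib import Path

# Hypothetical kohya-style dataset layout: the "10_" prefix is the
# per-image repeat-count convention, and each image gets a same-named
# .txt caption pairing a product trigger token with the room context.
dataset = Path("dataset/10_products")
dataset.mkdir(parents=True, exist_ok=True)

captions = {
    "oakfloor_studio.jpg":  "oakfl00r flooring, close-up product shot, plain background",
    "oakfloor_room1.jpg":   "oakfl00r flooring in a renaissance style living room, wide shot",
    "damask_wall_roll.jpg": "dam4sk wallpaper, product roll, white background",
    "damask_room2.jpg":     "dam4sk wallpaper on the walls of a renaissance style bedroom",
}

for image_name, caption in captions.items():
    caption_file = dataset / Path(image_name).with_suffix(".txt").name
    caption_file.write_text(caption, encoding="utf-8")
```

Pairing isolated product shots with in-context shots under the same trigger token is what should let prompts like "dam4sk wallpaper in a renaissance style bedroom" recombine products and rooms at inference time.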
Hey @Furkan Gözükara SECourses, I'm enjoying the 5090 videos you've done and had a question: are you planning on doing a video comparing the 3090 vs the 5090 for fine-tuning or LoRA training? Sorry if this was asked elsewhere; Discord didn't show whether it had been.