Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
In this tutorial, I am going to show you how to install OneTrainer from scratch on your computer and train Stable Diffusion SDXL (Full Fine-Tuning, 10.3 GB VRAM) and SD 1.5 (Full Fine-Tuning, 7 GB VRAM) based models on your own machine, and also how to do the same training on a very cheap cloud machine from MassedCompute if you don't have such compute...
I trained a FLUX LoRA on Civitai, and for testing I went with a real person. In my generations the skin gets plastic and shiny, not like my dataset. I am using the Heun sampler with the beta scheduler at 30 steps. Curious about what might cause this.
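One way to narrow down the cause is to keep the seed fixed and vary one setting at a time (guidance, steps, sampler) and compare the skin texture across runs. A minimal sketch using the diffusers FluxPipeline with its default scheduler rather than the exact Heun/beta combo; the LoRA path, trigger word, and prompt are placeholders, not values from this thread:

```python
# Fixed-seed A/B test: only guidance_scale changes between runs,
# so differences in skin texture can be attributed to that one knob.
# LoRA path and prompt are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("my_person_lora.safetensors")  # placeholder local LoRA file
pipe.enable_model_cpu_offload()  # helps fit on consumer VRAM

prompt = "photo of ohwx person, natural skin texture"  # placeholder trigger word

for guidance in (2.5, 3.5, 5.0):  # same seed, one setting varied at a time
    image = pipe(
        prompt,
        num_inference_steps=30,
        guidance_scale=guidance,
        generator=torch.Generator("cpu").manual_seed(42),
    ).images[0]
    image.save(f"test_guidance_{guidance}.png")
```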
FusionX: The BEST AI Video Model? + FLUX Hyper-Realistic Upscaling (One-Click Setup!). Struggling to create high-quality AI videos and hyper-realistic images? This tutorial is your ultimate solution! I'm introducing the incredible new Wan 2.1 FusionX model and a game-changing 2x latent upscaler for the FLUX model, all made incredibly simple with...
13:48 Generating a High-Quality Image with The Official FLUX Preset
14:50 Using Automatic Face Segmentation & Inpainting with FLUX
16:05 The Ultimate Upgrade: Applying The FLUX 2x Latent Upscaler Preset
16:32 Final Result: Comparing Standard vs. 2x Upscaled Image Quality
16:50 Outro & Sneak Peek of The New Ultimate Video Processing App
Should I check and verify the weights etc., since in the first place mixing my LoRA with that model alters my model's details, or will that be handled automatically?
Have any examples to share? If not, that's fine. Just curious how "plastic" it is. Plastic-type skin is pretty inherent to FLUX regardless. I have read there are some LoRAs that can help; I am testing way too many different things at the moment, so I can't suggest one.
And from everything I have read, training on FLUX.1 Dev is by far the best.
What other models are you training when you say it loses its likeness?
Dr. Furkan, I saw you released an update for SDXL fine-tuning today, 20 June. Are you planning any update for FLUX fine-tuning, or is your last FLUX release still the best yet?
Dr., using your tutorial and training a FLUX fine-tune on Massed Compute, about choosing the GPU: is there a better GPU to choose? By "better" I only mean faster training and the cost/benefit ratio; I know the prices increase. One thing I don't understand: if I choose an H100 x8, that is, a machine with 8 H100 boards, is it eight times faster than a single H100, or doesn't it work that way? Can you tell me what the best board option on Massed Compute is, ignoring price, for a 100-image fine-tune of a person?
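A rough way to reason about the 8x question: with data-parallel training, total throughput grows with GPU count times a parallel-efficiency factor, so an 8x machine is usually faster but not a full 8x. A back-of-the-envelope sketch, where the step count, seconds per iteration, and efficiency numbers are placeholders, not measured values from this thread:

```python
# Back-of-the-envelope estimate of multi-GPU training speedup.
# All numbers below are illustrative placeholders, not benchmarks.

def estimated_hours(total_steps: int, sec_per_it_single: float,
                    num_gpus: int, parallel_efficiency: float) -> float:
    """Idealized data-parallel estimate: each extra GPU adds throughput,
    scaled down by an efficiency factor for communication overhead."""
    effective_speedup = 1 + (num_gpus - 1) * parallel_efficiency
    total_seconds = total_steps * sec_per_it_single / effective_speedup
    return total_seconds / 3600

steps = 100 * 150   # e.g. 100 images x 150 repeats (placeholder)
sec_it = 7.3        # hypothetical seconds/iteration on one H100

print(f"1x H100: {estimated_hours(steps, sec_it, 1, 1.0):.1f} h")
print(f"8x H100: {estimated_hours(steps, sec_it, 8, 0.9):.1f} h  (not a full 8x)")
```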
In your post about DreamBooth fine-tuning, you said: "...single RTX 4090 is almost same speed as RTX A6000 for Fine Tuning." Does that mean that with an RTX 4090 I can train locally at about the same speed as an RTX A6000, which has 48 GB of VRAM, double the VRAM of my 4090?
So, if I choose the config file 16GB_GPU_15150MB_11.6_second_it_Tier_1 to train a FLUX fine-tune, my 4090 is better than an RTX A6000? Is that right? I imagine I can't choose 24GB_GPU_22900MB_7.3_second_it_Tier_1 because it's too close to using the full amount of memory, right?
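For reference, a quick headroom check based only on the numbers in those config names (24576 MB is the nominal size of a 24 GB card; the usable amount is lower once the desktop, browser, and CUDA context take their share):

```python
# Rough VRAM headroom check for the two config names mentioned above.
# Figures are nominal; actual free VRAM is lower on a card driving a display.
card_mb = 24 * 1024  # RTX 4090: 24 GB = 24576 MB nominal

for config, needed_mb in [("16GB_GPU_15150MB_11.6_second_it_Tier_1", 15150),
                          ("24GB_GPU_22900MB_7.3_second_it_Tier_1", 22900)]:
    headroom = card_mb - needed_mb
    print(f"{config}: {headroom} MB headroom on a 24 GB card")
```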
Hi! Does anyone know if it's possible to upload a fine-tuned model to Replicate to use it via API? And if someone knows, could you give me a mini step-by-step tutorial on how to do it?
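Not a full tutorial, but as I understand it the usual route is to package the model with Replicate's Cog tool, push it to your Replicate account, and then call it through the replicate Python client. A sketch of the calling side; the model identifier, version hash, and input fields are placeholders, not a real model:

```python
# Sketch of calling a model on Replicate after it has been pushed with Cog
# (roughly: `cog login`, then `cog push r8.im/<username>/<model-name>`).
# The model identifier and inputs below are placeholders.
import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

output = replicate.run(
    "your-username/your-finetuned-flux:VERSION_HASH",  # placeholder identifier
    input={
        "prompt": "portrait photo of ohwx person, natural skin texture",
        "num_inference_steps": 30,
    },
)
print(output)  # typically a URL (or list of URLs) to the generated image(s)
```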
If I were to train a fine-tune on, say, Norwegian women, to better capture their looks, and say I have 5 models with maybe 10 photos each that I use for this, would this be an example of where we use captions? @Furkan Gözükara SECourses
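If captions are used, many trainers (kohya-style scripts, and OneTrainer as far as I know) read them from a .txt file saved next to each image. A sketch of how per-subject caption files might be generated; the folder names, trigger tokens, and caption text are all made-up examples:

```python
# Hypothetical example of writing per-image caption sidecar files
# for a multi-subject dataset (5 models x ~10 photos each).
# Folder names, trigger tokens, and caption wording are all made up.
from pathlib import Path

dataset = Path("dataset/norwegian_women")
triggers = {"model_01": "anna1", "model_02": "ingrid2"}  # one unique token per person

for folder, token in triggers.items():
    for image in (dataset / folder).glob("*.jpg"):
        caption = f"photo of {token} woman, natural lighting, candid pose"
        image.with_suffix(".txt").write_text(caption)  # image.jpg -> image.txt
```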