Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Master Stable Diffusion XL Training on Kaggle for Free! Welcome to this comprehensive tutorial where I'll be guiding you through the exciting world of setting up and training Stable Diffusion XL (SDXL) with Kohya on a free Kaggle account. This video is your one-stop resource for learning everything from initiating a Kaggle session with dual ...
Has anyone played with stable-fast yet? I'm getting decent results with SDXL (crap with 1.5) and it's 2x as fast as a1111/sd.next (sd.next claims it's in their latest build, but I only get the speedup when I generate using Python scripts).
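For anyone who wants to reproduce the comparison, here's a minimal sketch of the kind of Python script I mean: it times plain diffusers SDXL generation and then the stable-fast compiled pipeline. The model ID, step count, and especially the stable-fast import path and config fields are assumptions taken from the project's README at the time, so verify them against the repo before relying on this.

```python
# Minimal sketch: time SDXL generation with plain diffusers, then with stable-fast.
# Assumptions: a CUDA GPU, diffusers and stable-fast installed; the sfast import path
# follows the project's README and may differ between stable-fast releases.
import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

prompt = "a photo of an astronaut riding a horse"

def timed_run(p, label):
    # Warm-up run so compilation/caching doesn't skew the measurement.
    p(prompt, num_inference_steps=30)
    torch.cuda.synchronize()
    start = time.time()
    p(prompt, num_inference_steps=30)
    torch.cuda.synchronize()
    print(f"{label}: {time.time() - start:.2f}s")

timed_run(pipe, "plain diffusers")

# stable-fast compile step: names per its README at the time, verify against the repo.
from sfast.compilers.diffusion_pipeline_compiler import compile, CompilationConfig
config = CompilationConfig.Default()
config.enable_xformers = True   # optional, only if xformers is installed
config.enable_triton = True     # optional, only if triton is installed
config.enable_cuda_graph = True
pipe = compile(pipe, config)

timed_run(pipe, "stable-fast compiled")
```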
Hi everyone, I have a question about my XL ControlNet models. For some reason, even after updating them from the Patreon file on my RunPod instance, they're still not working.
Yes, I'm running a 4090 at the moment. I'm just wondering which version of Dreambooth I should use from the ones listed at https://github.com/d8ahazard/sd_dreambooth_extension/tags, because the version I got from the "Extensions" tab doesn't seem to be working properly, or I have something set very wrong and nothing is fixing it.

As I mentioned before, when I first watched your original tutorial on training LoRA through Dreambooth, my Dreambooth looked the same as yours and the results were decent enough. But then my SD install broke while I was trying to get TensorRT working the first time, so I had to reinstall it from scratch, and since then my Dreambooth looks very different from the one in the tutorial. I'm not sure if I'm missing a setting, have something configured wrong, or what the issue is.

I've tried training the same face with as many as 60 and as few as 25 images, across quite a few different settings ranges (I've literally tried to train this face 10 times now, lol), and on a given checkpoint the result looks barely any different from just putting the person's name in the prompt with no LoRA loaded.
Depending on how hot you're getting, and what GPU we're talking about, it should be fine. I cranked my 4090 at one point while rendering batches of 4x4, and it was actually getting warm enough to thermally throttle the clocks slightly, but that was with roughly a +150 core offset, the voltage and power sliders maxed, on an air-cooled card in a well-ventilated case; the GPU hotspot in HWInfo was showing 87.6°C, and it was kinda warm in here even though it's winter. If you're not overclocking and you have a well-ventilated case, I'd say don't worry about it.

You can also try undervolting: rendering in SD stresses VRAM more than the core, so undervolting/downclocking your core a bit saves a lot of heat by lowering your power draw. I've been running my 4090 at 2600 MHz core with 900 mV, and the difference in render time between that and stock is less than 3 seconds on a 16-image batch, but it runs literally 10°C cooler on the core and draws about 120 W less power while rendering.

(Quoted the wrong post, but that was in reference to you asking about liquid cooling your GPU.)
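If you want numbers rather than eyeballing HWInfo, here's a small monitoring sketch using the NVIDIA management library bindings. The nvidia-ml-py / pynvml package is my assumption, not something mentioned in this thread; it just logs core temperature and power draw while you render so you can see what an undervolt actually saves.

```python
# Sketch: log GPU temperature and power draw while a render is running.
# Assumes the nvidia-ml-py package (imported as pynvml) is installed.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # milliwatts -> watts
        print(f"core temp: {temp} C, power draw: {power_w:.0f} W")
        time.sleep(2)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```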
For the time being I'll train a new image batch for the LoRA. Last time there were too many repeating images, which made the LoRA inherit certain aspects of the dress/color and produce outputs with repetitive features from the training images.
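As a quick sanity check before the next run, something like the sketch below can flag exact-duplicate files in the training folder so one outfit or color doesn't dominate the set. The folder path and extensions are placeholders, and this only catches byte-identical copies; near-duplicates would need a perceptual hash instead.

```python
# Sketch: find exact duplicate images in a training folder by hashing file contents.
# The folder path is a placeholder for wherever the dataset actually lives.
import hashlib
from collections import defaultdict
from pathlib import Path

dataset_dir = Path("train_images")  # placeholder path
seen = defaultdict(list)

for img_path in sorted(dataset_dir.glob("*")):
    if img_path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    digest = hashlib.sha256(img_path.read_bytes()).hexdigest()
    seen[digest].append(img_path.name)

for digest, names in seen.items():
    if len(names) > 1:
        print("duplicates:", names)
```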
When caching takes over 30 minutes it times out. I wonder if this is Kohya related. The message is from the Kaggle notebook while caching latents: checking cache validity... 100%|████████████████████████████...
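If the timeout really is tied to that long caching pass, one thing worth trying is kohya's option to cache latents to disk, so a restarted session can reuse the cache instead of recomputing it for 30+ minutes. Below is a sketch of how that might be launched from a notebook cell; the script location, model, and dataset paths are placeholders, and only the two caching flags come from kohya's sd-scripts documentation.

```python
# Sketch: launch kohya sd-scripts LoRA training from a notebook cell with latents
# cached to disk, so a re-run can reuse the cache instead of recomputing it.
# All paths below are placeholders for whatever the Kaggle notebook actually uses.
import subprocess

cmd = [
    "python", "sd-scripts/train_network.py",  # placeholder path to kohya's training script
    "--pretrained_model_name_or_path", "/kaggle/working/model.safetensors",  # placeholder
    "--train_data_dir", "/kaggle/working/train_data",                        # placeholder
    "--output_dir", "/kaggle/working/output",                                # placeholder
    "--cache_latents",          # cache VAE latents instead of re-encoding every step
    "--cache_latents_to_disk",  # persist the cache so an interrupted run can reuse it
]
subprocess.run(cmd, check=True)
```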
My LoRA training just started; it should take roughly 4.5 hours. @Dr. Furkan Gözükara The old guide seems to be limited by RAM availability, but I think it can be slightly updated since Kaggle has increased the RAM. I didn't add the --lowram argument.
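Before deciding whether --lowram is still needed, it's easy to check how much RAM the session actually gives you. A small sketch using psutil is below; the assumption that psutil is preinstalled on the Kaggle image is mine.

```python
# Sketch: check total and available system RAM in the Kaggle session
# to decide whether kohya's --lowram flag is still necessary.
import psutil

mem = psutil.virtual_memory()
print(f"total RAM:     {mem.total / 1024**3:.1f} GiB")
print(f"available RAM: {mem.available / 1024**3:.1f} GiB")
```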