Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Getting this error trying to reinstall and can't fix it: "Error no kernel image is available for execution on the device at line 167 in file D:\ai\tool\bitsandbytes\csrc\ops.cu"
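A minimal diagnostic sketch for the error above: "no kernel image is available for execution on the device" from bitsandbytes usually means the precompiled CUDA kernels don't target the GPU's compute capability, so a first step is to check what architecture the card reports. This assumes PyTorch may or may not be installed and degrades gracefully either way.

```python
# Hedged diagnostic: report the GPU's compute capability (sm_XY), since a
# "no kernel image" error typically means the binary wasn't built for it.
capability = None
try:
    import torch  # optional dependency; handled gracefully if missing
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        capability = f"sm_{major}{minor}"
        print(f"{torch.cuda.get_device_name(0)} reports {capability}")
    else:
        print("CUDA not available: check the driver or the PyTorch CUDA build")
except ImportError:
    print("PyTorch is not installed in this environment")
```

If the reported `sm_XY` is older or newer than what the installed bitsandbytes build supports, reinstalling a build compiled for that architecture is the usual fix.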
Shit... My A1111 is borked (again). It just suddenly won't start. I have managed to start it without --xformers, but that has always worked before. Maybe I need to update to 0.0.19 (I was running 0.0.17, which worked better than 0.0.18), but now for some unknown reason it's not working at all
My installation had somehow stopped working. It does from time to time. Dunno why, but I'm installing a shitload of other AI tools and updates all the time, so it probably has something to do with that. I just wish all tools used venv so the installations were separate from each other
How to install Python, have multiple Python installations, and set a system-wide default Python version. How to create a venv for any Python installation, change the default Python path, and install SD Web UI properly. Discord: https://bit.ly/SECoursesDiscord.
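The venv workflow described above can be sketched with the standard-library `venv` module; the directory name `sd-webui-venv` is just an example, and whichever Python interpreter runs this script determines the venv's Python version.

```python
# Sketch: create an isolated virtual environment with the stdlib venv module.
# The venv inherits the version of the interpreter that creates it, which is
# how you pin a specific Python version per tool.
import os
import sys
import tempfile
import venv

# Example target directory (placed in the temp dir here just for the demo).
target = os.path.join(tempfile.gettempdir(), "sd-webui-venv")
venv.create(target, with_pip=False, clear=True)  # with_pip=True also bootstraps pip

print(f"Created venv at {target} using Python "
      f"{sys.version_info.major}.{sys.version_info.minor}")
```

On the command line the equivalent is `python -m venv sd-webui-venv`, run with whichever Python installation you want the environment to use.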
If I have been of assistance to you and you would like to show your support for my work, please...
Problem is... my day is only 24 hours. I'd like at least double that, to be able to both keep up and actually use the tools for something other than just testing stuff
I am using Dreambooth following your guide (which is awesome, by the way!), but I get the following error when trying to train a model: 'Please check your dataset directories.' I have my images in the folder C:\johntrain, and this is what I entered into the Dataset Directory field. Do you know what I am doing wrong?
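A hedged sanity check for the "Please check your dataset directories." error above: verify that the path exists and actually contains image files directly inside it. The path `C:\johntrain` comes from the comment; the extension list is an assumption about what the trainer accepts.

```python
# Sketch: count image files sitting directly in a dataset directory.
# A result of 0 would explain a "check your dataset directories" error.
import os

def count_images(path, exts=(".png", ".jpg", ".jpeg", ".webp")):
    """Return how many files with image extensions sit directly in `path`."""
    if not os.path.isdir(path):
        return 0  # path missing or not a directory
    return sum(1 for name in os.listdir(path) if name.lower().endswith(exts))

print(count_images(r"C:\johntrain"))
```

Common gotchas this catches: a typo in the path, images nested one folder deeper than the trainer expects, or files saved in an unexpected format.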
Is there a necessary correlation between the base model used for training a Lora and the generated class dataset? Do the class images have to be generated by the base model used? Are we attempting to pass the model's weights (bias?) to the trained Lora model? Trying to understand the underlying logic @Furkan Gözükara SECourses
So one should try to use images generated by the base model used for the Lora, in order not to mess up the model's own fine-tuning. I'm going to test this out
Another question: how exactly does batch size affect the learning rate? I know one should divide total steps by batch size to get the number of optimizer steps. So for a training of (12 instance images x 10 epochs x 80 repeats) = 9600 total steps, divided by batch size 8 = 1200 steps... I use batch 8 to speed up the process, but does this degrade the learning rate or loss? And I guess 1200 steps is on the low side; I should rather be between 1500-3000 steps after the batch divide, correct?
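Working through the arithmetic in the question above (all numbers come from the comment; the note about gradient noise is a general rule of thumb, not a guarantee for any particular trainer):

```python
# Step arithmetic from the comment: image passes vs. optimizer updates.
instance_images = 12
epochs = 10
repeats = 80
batch_size = 8

total_steps = instance_images * epochs * repeats  # individual image passes
optimizer_steps = total_steps // batch_size       # actual weight updates
print(total_steps, optimizer_steps)               # 9600 passes, 1200 updates

# A larger batch averages the gradient over more samples, so each update is
# less noisy but there are fewer of them; guides often compensate by raising
# epochs/repeats (or the learning rate) rather than treating 1200 and 9600
# as equivalent amounts of training.
```

So batch size does not change the learning rate itself; it changes how many updates you take and how noisy each one is, which is why the effective step count matters when comparing runs.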
Hey guys. Thank you Furkan for your contributions! I've a question. I'm running Dreambooth trainings with the A1111 SD extension. I have an Nvidia GeForce RTX 3060 12GB. From what I've seen in your videos, the card should be able to reach 5-7 it/s, right? I don't know if anyone here has the same card and does trainings... I'm stuck at 2.5 it/s and can't go any faster; I've tried a lot of things. Does anyone here know what I should test or check to see whether there's an issue, or is that the expected speed for this card?