Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Hey Dr. Furkan, I saw your video and you had txt docs beside all of your sample images. Is that necessary? If not, what would happen if we skip the hassle of creating a txt doc filled with tags for each image and just put them all into DreamBooth training? How does that work, and does DreamBooth recognize the subject and style the image contains by itself? Wouldn't it need a description of what it's seeing? I'm trying to train for a style and I have roughly 100 images ready to be trained, so I'd like clarification on this one.
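(If you do decide to create a per-image tag file, a minimal sketch of how one might generate them is below. The folder name, trigger word, and extra tags are placeholders for illustration, not anything taken from the video.)

```python
# Minimal sketch: write one .txt caption file next to every image in a folder.
# "train_images", the trigger word, and the style tags are hypothetical placeholders.
from pathlib import Path

IMAGE_DIR = Path("train_images")               # folder holding the ~100 style images
TRIGGER = "myart style"                        # token(s) the training should learn
EXTRA_TAGS = ["illustration", "vivid colors"]  # optional extra tags for every image

for img in sorted(IMAGE_DIR.glob("*")):
    if img.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    caption = ", ".join([TRIGGER, *EXTRA_TAGS])
    # Trainers that support captions usually look for an identically named
    # .txt file sitting next to each image.
    img.with_suffix(".txt").write_text(caption, encoding="utf-8")
    print(f"wrote caption for {img.name}")
```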
Discord: https://discord.gg/HbqgGaZVmr. This is the video where you will learn how to use Google Colab for Stable Diffusion. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on https://www.patreon.com/SECourses
Our Discord: https://discord.gg/HbqgGaZVmr. The newest update of the DreamBooth extension for Automatic1111 brought a huge improvement in quality and success rate. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on https://www.patreon.com/SECourses
I explain, from scratch to a very advanced level, how to use the #Automatic1111 Web UI and the D8ahazard #DreamBooth extension to teach new subjects, e.g. your face, to a model. Moreover, I show how to inject your taught face into a completely new model, e.g. Protogen x3.4, to produce awesome-quality images without wasting too much time on findin...
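(The "inject your taught face into another model" step is presumably done with Automatic1111's Checkpoint Merger; the idea underneath is an add-difference merge, roughly sketched below. The checkpoint file names are placeholders, and real checkpoints also contain non-tensor entries that need care.)

```python
# Rough sketch of an "add difference" merge:
#   merged = target + (dreambooth_trained - original_base)
# File names are placeholders; A1111's Checkpoint Merger performs this for you.
import torch

base = torch.load("v1-5-pruned.ckpt", map_location="cpu")["state_dict"]
trained = torch.load("my_dreambooth.ckpt", map_location="cpu")["state_dict"]
target = torch.load("protogen_x3.4.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, value in target.items():
    if (
        key in base and key in trained
        and torch.is_tensor(value)
        and value.shape == base[key].shape == trained[key].shape
    ):
        # add whatever DreamBooth changed relative to the original base model
        merged[key] = value + (trained[key].float() - base[key].float()).to(value.dtype)
    else:
        merged[key] = value  # keep anything we cannot diff as-is

torch.save({"state_dict": merged}, "protogen_with_my_face.ckpt")
```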
OpenPose + DreamBooth sounds amazing. Would it be possible to train DreamBooth using OpenPose and still use ControlNet? I hope there will be a way to combine already existing DreamBooth models with OpenPose.
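(ControlNet is applied at inference time on top of whatever checkpoint you load, so an existing DreamBooth-trained model and OpenPose conditioning can already be combined. A minimal diffusers-based sketch follows; the local model path and pose image are placeholders, and this is not the Automatic1111 workflow shown in the video.)

```python
# Sketch: run an OpenPose ControlNet on top of a DreamBooth-trained SD 1.5 model.
# "./my_dreambooth_model" and "pose.png" are placeholders.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "./my_dreambooth_model",   # your DreamBooth-trained checkpoint in diffusers format
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = load_image("pose.png")  # a pre-extracted OpenPose skeleton image
image = pipe("photo of sks person dancing", image=pose, num_inference_steps=30).images[0]
image.save("out.png")
```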
Hey, does anyone know how to solve the error [RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)] when trying to use ControlNet?
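(A 1024-vs-768 mismatch like this usually points to mixing model generations: 1024 is the text-embedding width of SD 2.x checkpoints, while the original ControlNet models expect the 768-wide embeddings of SD 1.x. One way to check which generation a checkpoint is, sketched with a hypothetical file name:)

```python
# Sketch: check whether a checkpoint is SD 1.x (768-wide text context) or
# SD 2.x (1024-wide); the ControlNet model must match the base model's generation.
# "model.ckpt" is a placeholder file name.
import torch

sd = torch.load("model.ckpt", map_location="cpu")["state_dict"]
key = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"
context_dim = sd[key].shape[1]
print("SD 2.x base (needs SD 2.x ControlNets)" if context_dim == 1024
      else "SD 1.x base (needs SD 1.x ControlNets)")
```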
Our Discord: https://discord.gg/HbqgGaZVmr. In this video, you will learn how to use the amazing new Stable Diffusion technology #ControlNet in the Automatic1111 Web UI. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on https://www.patreon.com/SECourses
I like this video because you move really quickly from start to finish through each step, showing little tricks that are very good for a non-computer-scientist like myself to learn.
I got this error, "Exception training model: 'expected Tensor as element 0 in argument 0, but got NoneType'", while trying to train with LoRA settings, using SD15NewVAEpruned.ckpt as the base model. Does anyone know what this error is about?
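(That message is what torch raises when a list handed to torch.stack or torch.cat contains a None, so during training it usually means one batch element, such as a latent or a text embedding, came back empty, e.g. an image that failed to load or a caption that was never read. A tiny, purely illustrative reproduction:)

```python
# Tiny reproduction of the error message, purely illustrative.
import torch

good = torch.randn(4, 64)
missing = None  # stands in for a latent or embedding that was never produced

try:
    torch.stack([good, missing])
except TypeError as e:
    print(e)  # "expected Tensor as element 1 in argument 0, but got NoneType"
```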
Stephen Wolfram explores the broader picture of what's going on inside ChatGPT and why it produces meaningful text. He discusses models, training neural nets, embeddings, tokens, transformers, and language syntax.
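(As a tiny illustration of the tokens-and-embeddings part of that discussion, here is a toy sketch; the vocabulary and dimensions are made up and are not from the talk itself.)

```python
# Toy illustration of tokenization + embedding lookup, the first steps a
# language model performs. Vocabulary and embedding size are made up.
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

tokens = torch.tensor([vocab[w] for w in "the cat sat on the mat".split()])
vectors = embedding(tokens)   # one 8-dim vector per token
print(tokens.tolist())        # [0, 1, 2, 3, 0, 4]
print(vectors.shape)          # torch.Size([6, 8])
```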