Hello everyone. I am Dr. Furkan Gözükara, a PhD in Computer Engineering. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Creating consistent characters in AI art, especially with tools like Stable Diffusion SDXL, requires a blend of creativity, precision, and the right techniques.
Speaking of keywords: tonight I made a LoRA that I first trained with a trigger keyword that existed in the anime models but had little or no presence in the photo models. It performed relatively well on the photo models, but it confused the anime models. I then created a custom keyword from several words, and that performed better on both model types. And one more thing about repetitions: I was training poses, so I didn't use regularization images. At first I doubled the number of repeats, but that gave me an overly strong model. After going back to the "100 / number of images" formula and using 4 repeats instead of 8 (I have 25 images), I got a more flexible model, though it was still a bit strong. I trained with Prodigy and 4/4 weights at 0.3 d_coef; I can't go below those values anymore.
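The "100 / number of images" repeat heuristic mentioned above can be sketched as a tiny helper. The function name and the rounding behavior are my own assumptions; kohya-style trainers typically take this value as the dataset folder's repeat count:

```python
def repeats_per_image(num_images: int, target: int = 100) -> int:
    """Rule-of-thumb repeat count: roughly `target / num_images`,
    never less than 1. The target of 100 is the heuristic quoted
    in the comment, not an official constant."""
    if num_images <= 0:
        raise ValueError("num_images must be positive")
    return max(1, round(target / num_images))

print(repeats_per_image(25))  # 4 repeats for a 25-image pose dataset
```

With 25 images this yields the 4 repeats used in the comment; doubling to 8 repeats is what produced the overly strong model.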
One more thing about flexibility: my own LoRAs combine very well, so I can mix people with poses and styles, and they perform well across several models. Many LoRAs that others have made are so heavily trained that they simply can't be combined, and I'm thinking of generating images with them and retraining my own versions from those outputs. On top of that, some of them are so poor that it takes tricks to get a good result. I admit I've never been a fan of weighting, because when you have to use a lot of weight, the model doesn't understand what I want. I know anime models love it, and I really find it unnecessary, but sometimes you have to use it for those badly trained LoRAs.
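Combining several LoRAs at inference time amounts to summing their scaled low-rank updates onto the same base weight. A minimal numpy sketch of that math, with the per-LoRA blend strength playing the role of the "weighting" discussed above (shapes, ranks, and the folding of the alpha/rank scale into the blend weight are my own assumptions):

```python
import numpy as np

def merge_loras(base, loras, weights):
    """Apply several LoRA deltas to one base weight matrix.

    base    : (out, in) base weight
    loras   : list of (B, A) pairs, B is (out, r), A is (r, in)
    weights : per-LoRA blend strengths (alpha/rank scale assumed folded in)
    """
    merged = base.copy()
    for (B, A), w in zip(loras, weights):
        merged += w * (B @ A)  # add the scaled low-rank update
    return merged

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                                      # base layer
pose_lora = (rng.normal(size=(8, 2)), rng.normal(size=(2, 8)))   # rank-2
style_lora = (rng.normal(size=(8, 2)), rng.normal(size=(2, 8)))  # rank-2
W_combined = merge_loras(W, [pose_lora, style_lora], [0.7, 0.5])
print(W_combined.shape)  # (8, 8)
```

An overtrained LoRA corresponds to a very large update relative to the base weight: once two such deltas are summed, they dominate and fight each other, which is why heavily trained LoRAs combine poorly.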
Welcome to the Pillars of AI Application Libraries: How to Install Tutorial! Are you ready to embark on a journey of installing AI libraries and applications with ease? In this video, we'll guide you through the process of installing Python, Git, Visual Studio C++ Compile tools, and FFmpeg on your Windows 10 machine.
File "N:\AI\Stable_Diffusion\ClassImages\SECourses\Deepface\venv\lib\site-packages\tensorflow\python\tf2.py", line 21, in <module>
    from tensorflow.python.platform import _pywrap_tf2
ImportError: DLL load failed while importing _pywrap_tf2: A dynamic link library (DLL) initialization routine failed.
Hi there, I am wondering whether there is any diffusion model that lets me create a dataset of faces with hats or helmets from an existing dataset of faces only. Specifically, I have an image of a face and I want a Stable Diffusion model to accurately add a hat to that face. I tried Stable Diffusion inpainting, but it didn't preserve the details of the face. I also tried SDXL inpainting, but it doesn't seem to work either.
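One common way to keep the face intact is to restrict the inpainting mask to the hat region only: inpainting pipelines regenerate the white area of the mask, so keeping the face in the black area largely preserves it (a dedicated inpainting checkpoint preserves unmasked pixels best). A minimal sketch of building such a mask with numpy; the 35% top-of-image band is an arbitrary assumption, and in practice you would derive the region from a face-landmark detector:

```python
import numpy as np

def hat_region_mask(width, height, hat_fraction=0.35):
    """Binary inpainting mask as a uint8 array: 255 = regenerate
    (hat area), 0 = preserve (face). hat_fraction is an assumed
    share of the image height for the hat band."""
    mask = np.zeros((height, width), dtype=np.uint8)
    band = int(height * hat_fraction)
    mask[:band, :] = 255  # only this top band gets repainted
    return mask

mask = hat_region_mask(512, 512)
# Convert with PIL (Image.fromarray(mask, "L")) and pass it as the
# mask_image of an inpainting pipeline together with the face image
# and a prompt like "a person wearing a red hat".
print(mask.shape)  # (512, 512)
```

If even the unmasked area drifts, compositing the original face pixels back over the result (outside the mask) is a cheap post-processing fix.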