Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
A familiar trick: first you create a "character generator" Lora, which is not flexible, then you use its generated images to build a tagged dataset and train a flexible Lora.
This might be a dumb question: if you use DreamBooth instead of a Lora for the training, does the resulting safetensors file still go in the Lora folder like a Lora, or in the checkpoint folder? Or can you use either? And if so, which is best?
@Dr. Furkan Gözükara thanks for the quick response. Yes, the file was acquired from Patreon; my Patreon email is different from my Discord email: aljohns
(venv) N:\AI\Stable_Diffusion\ClassImages\SECourses\Deepface>python
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
Creating consistent characters in AI art, especially with tools like Stable Diffusion SDXL, requires a blend of creativity, precision, and the right techniques.
Speaking of keywords: tonight I made a Lora model that I first trained with a keyword that existed in the anime models but not (or was unlikely to exist) in the photo models. It performed relatively well on the photo models, but it confounded the anime models. I then created a custom keyword from several words, and it performed better on both model types. One more thing about repetition: I was training poses, so I didn't use reg images. At first I doubled the number of repetitions, but the model came out too strong. After going back to the "100 / number of images" formula and using 4 reps instead of 8 (I have 25 images), I got a more flexible model. It was still a bit strong. I trained with Prodigy at 4/4 weights with 0.3 d_coef; I can't go below those values anymore.
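The "100 / number of images" rule of thumb above can be sketched in a few lines. This is a hedged illustration only: the function name and the target of 100 steps per image pass are taken from the comment, not from any trainer's official API.

```python
def repeats_per_image(num_images: int, target: int = 100) -> int:
    """Heuristic from the comment: repeats ~= target / image count.

    With 25 images and the default target of 100, this yields 4 repeats,
    matching the commenter's setting. Clamped to at least 1 repeat.
    """
    if num_images <= 0:
        raise ValueError("num_images must be positive")
    return max(1, round(target / num_images))

print(repeats_per_image(25))  # 4, as in the comment (25 images -> 4 reps)
```

The intuition behind the heuristic is simply to keep the total number of image presentations per epoch roughly constant regardless of dataset size, which helps avoid the over-trained ("too strong") Lora the commenter describes.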
One more thing about flexibility: I can combine my own Loras very well, so I can combine people with poses and styles, and they perform well across several models. Many of the Loras that others have made are so heavily trained that they simply can't be combined, and I'm thinking of generating images with them and re-creating them myself. Plus, some of them are so lousy that it takes tricks to get a good result. I admit I've never been a fan of weighting, because when you have to use a lot of weight, the model doesn't understand what I want. I know anime models love it, but I really find it unnecessary; sometimes, though, you have to use it for those crappy Loras.
Welcome to the Pillars of AI Application Libraries: How to Install Tutorial! Are you ready to embark on a journey of installing AI libraries and applications with ease? In this video, we'll guide you through the process of installing Python, Git, Visual Studio C++ Compile tools, and FFmpeg on your Windows 10 machine. We'll also show you ho...
File "N:\AI\Stable_Diffusion\ClassImages\SECourses\Deepface\venv\lib\site-packages\tensorflow\python\tf2.py", line 21, in <module>
    from tensorflow.python.platform import _pywrap_tf2
ImportError: DLL load failed while importing _pywrap_tf2: A dynamic link library (DLL) initialization routine failed.
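For context on the traceback above: on Windows this "DLL load failed" error when importing TensorFlow is most often caused by a missing Microsoft Visual C++ redistributable or a CPU without AVX support (pip TensorFlow wheels are built with AVX enabled). A small triage helper like the sketch below can map the error text to a likely remedy; the function and its messages are illustrative, not part of TensorFlow itself.

```python
def suggest_tf_import_fix(error_message: str) -> str:
    """Map a TensorFlow import error message to a likely remedy (heuristic)."""
    msg = error_message.lower()
    if "dll load failed" in msg:
        # Common Windows causes: missing MSVC runtime, or a CPU lacking AVX.
        return ("Install the Microsoft Visual C++ 2015-2022 redistributable "
                "and verify your CPU supports AVX.")
    if "no module named" in msg:
        return "Reinstall TensorFlow inside the active virtual environment."
    return "Unknown cause; inspect the full traceback."

print(suggest_tf_import_fix(
    "ImportError: DLL load failed while importing _pywrap_tf2"))
```

Running this against the error text from the traceback points at the redistributable/AVX check, which is the usual first step before reinstalling the package.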