Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
Everything is buggy and truculent. I am going to use the aText snippet text expander with my own template instead. It seems like the booru-style generators are all meant for CLIP and those weird WD14 tags.
I have a very specific use case: I am training LoHa models for characters, which require specific tags like close-up, portrait, macro lens shot, background description, jewellery, makeup, etc.
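For reference, here is a minimal sketch of the kind of caption template I end up expanding, written as a small Python script that writes one caption .txt per image. The trigger token ohwxwoman, the tag list, and the img folder are all placeholders for my own setup, not anything the tools require.

# write_captions.py - writes a caption .txt next to each image, combining a
# fixed trigger token with hand-picked tags (token, tags, and folder are placeholders)
from pathlib import Path

TRIGGER = "ohwxwoman"  # instance/trigger token, placeholder
TAGS = "close-up portrait, macro lens shot, detailed makeup, gold jewellery, plain grey background"

img_dir = Path("img")  # folder holding the training images
for img in sorted(img_dir.glob("*.png")):
    img.with_suffix(".txt").write_text(f"{TRIGGER}, {TAGS}", encoding="utf-8")
    print(f"wrote caption for {img.name}")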
When I install the Dreambooth extension it says: model_cfg: DreamboothConfig = Body(description="The config to save"), NameError: name 'DreamboothConfig' is not defined
RuntimeError: Failed to import transformers.models.clip.feature_extraction_clip because of the following error (look up to see its traceback): cannot import name 'is_torch_dtype' from 'transformers.utils' (C:\Users\Lukag\Desktop\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\__init__.py)
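Both errors above usually come down to mismatched packages inside the webui venv rather than a bug in the extension code itself: the DreamboothConfig NameError shows up when the extension's own imports fail earlier, and the missing is_torch_dtype symbol means the installed transformers version is not the one the caller expects. A quick sanity check (a sketch, assuming the usual package names) is to print which versions the venv actually loads and compare them against the extension's requirements file:

# check_env.py - run with the webui venv's python to see which package
# versions are actually installed (compare against the extension's requirements.txt)
from importlib import metadata

for pkg in ("torch", "transformers", "diffusers", "accelerate", "xformers"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed in this venv")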
Discord: https://bit.ly/SECoursesDiscord. Torch 2 / PyTorch 2 is now supported along with the new DreamBooth Automatic1111 Web UI extension. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron at https://www.patreon.com/SECourses
Hi everyone, in your experience, is a larger training set better than a smaller one in DreamBooth? Do I get better results with 10 pictures of myself or with 100, or do I just get overtraining?
Hi, I'm trying to train a LoRA model in Dreambooth on Automatic1111. I got it to kind of work a few times, but now I can't seem to get good results. It seems like using a class dataset along with my images makes the result worse. If I share my parameters and data info, could someone give me some tips on what to change?
Does anyone know how to get new LoRA models to show up in Auto1111 when you click refresh? I actually have to quit the terminal and relaunch from scratch. Refreshing the Lora tab and reloading the webui both fail to pick up the new model. Any thoughts?
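Not a fix for the refresh button itself, but a quick sanity check is to list what is actually sitting in the folder the UI reads LoRAs from, so you at least know the new file landed in the right place. models/Lora under the webui root is the default location; adjust the path if your install keeps them elsewhere.

# list_loras.py - print the LoRA weight files sitting in the webui's default folder
from pathlib import Path

lora_dir = Path("models/Lora")  # default A1111 location; adjust for custom setups
if not lora_dir.is_dir():
    raise SystemExit(f"{lora_dir} not found - run this from the webui root")
for f in sorted(lora_dir.iterdir()):
    if f.suffix in (".safetensors", ".pt", ".ckpt"):
        print(f.name)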
I just got some pretty good results with this data and these settings:
Data:
- 42 training images of myself in different situations/lighting
- caption .txt files that go along with the images
- RealisticVisionV20 as the source checkpoint
Saving:
- Use Lora
- use Lora extended
- 100 steps/epochs
- batch size 2
- use gradient checkpointing
- 0.00001 unet learning rate
- constant_with_warmup LR scheduler
- other settings from all the vids: 8bit AdamW, fp16, xformers
- Scale prior loss (checked, did not edit values)
- no sanity prompt
Concepts:
- used a directory with photos of me, and .txt files with the token I assigned replacing my name
- no class data
- instance token is my name without vowels
- class token is man
- instance and class prompts are [filewords]
- sample prompt boxes are blank
- no class image generation
I'd rate the results I'm getting 7 or 8 out of 10; any tips appreciated.
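For anyone reproducing this, a rough back-of-the-envelope count of how long that run is, assuming the "100 steps/epochs" above means 100 epochs, batch size 2, and no gradient accumulation:

# rough step count for the settings above (assumes 100 = epochs, batch 2,
# no gradient accumulation)
images = 42
batch_size = 2
epochs = 100

steps_per_epoch = -(-images // batch_size)  # ceiling division -> 21
total_steps = steps_per_epoch * epochs      # -> 2100 optimization steps
print(steps_per_epoch, total_steps)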