Hello everyone. I am Dr. Furkan Gözükara, PhD in Computer Engineering. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
So that was it. I have new xformers now, and my training in the Automatic1111 DreamBooth extension is working, but only the sample images in the DreamBooth folder show it. When I go to txt2img, it is not recognizing my LoRA input at all. Is there a known issue with Stable Diffusion and LoRA? There's always something (a Kohya-trained LoRA seems to be working, so I may just have to default to Kohya instead of the DreamBooth extension).
Thanks. I tried a fresh install as well as a different commit, but I can't get my it/s back to where it used to be. I have no clue how this happened. I haven't done any system updates or software installs recently; this literally happened right after doing the git pull.
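Since the slowdown appeared right after a `git pull`, one recovery option is stepping HEAD back to the commit you were on before the pull using git's reflog. Below is a self-contained sketch in a throwaway repo (the pull is simulated by a second commit); in a real webui folder you would just run `git checkout "HEAD@{1}"` after confirming the reflog entry.

```python
# Self-contained sketch: `git checkout HEAD@{1}` returns the working tree
# to the commit HEAD pointed at before the last move (e.g. before a pull).
# Here the "pull" is simulated by a second empty commit in a scratch repo.
import subprocess
import tempfile

repo = tempfile.mkdtemp()  # throwaway repo standing in for the webui folder

def git(*args):
    """Run a git command inside the scratch repo and return its stdout."""
    return subprocess.run(
        ("git", "-C", repo, "-c", "user.name=demo",
         "-c", "user.email=demo@example.com") + args,
        check=True, capture_output=True, text=True).stdout.strip()

git("init", "-q")
git("commit", "-q", "--allow-empty", "-m", "state before pull")
before = git("rev-parse", "HEAD")                 # the commit you want back
git("commit", "-q", "--allow-empty", "-m", "state after pull")  # the "pull"
git("checkout", "-q", "HEAD@{1}")                 # step back one reflog entry
print(git("rev-parse", "HEAD") == before)         # True
```

`git reflog` in the real folder will show the hash each `HEAD@{n}` entry refers to, so you can also check out a specific hash directly instead of the reflog shorthand.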
Does anyone know how to generate a photo with extremely detailed stripes and decoration like the photo above? I trained several models; they look good, but are far from that level of detail. Does anybody have a clue?
Thanks Dr. Furkan, it works and I can compare different models now. But today I discovered that the X/Y/Z plot still shows the same images when I use a different VAE or Clip Skip. The model can still make different images with different Clip Skip values (the attached images are Clip Skip 1 and 10), but the X/Y/Z plot is unable to do so. I already did a git pull; how should I fix this? (My models were trained with Kohya_ss.)
@Furkan Gözükara SECourses As a follow-up to the information I offered about 2.X ControlNets last week, I found these today and thought they might be of interest to you as well: https://huggingface.co/thibaud/controlnet-sd21/tree/main
I think I needed to adjust a few things, but it was just to get a feel for the process. I seem to be having problems with the Python script to rename the files, so I didn't upscale it.
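The failing rename script isn't shown, so the goal here is an assumption: a common task at this step is renaming numbered images to a zero-padded sequence (1.png, 2.png, 10.png → 0001.png, 0002.png, 0010.png) so tools process the frames in the right order before upscaling. A minimal sketch, using a temporary folder as a stand-in for the real image directory:

```python
# Hypothetical sketch: zero-pad purely numeric image filenames so they
# sort correctly (1.png, 10.png, 2.png -> 0001.png, 0002.png, 0010.png).
import os
import tempfile

folder = tempfile.mkdtemp()  # stand-in for your real image folder
for name in ("1.png", "2.png", "10.png"):
    open(os.path.join(folder, name), "w").close()  # create demo files

for name in os.listdir(folder):
    stem, ext = os.path.splitext(name)
    if stem.isdigit():  # only touch purely numeric names, skip everything else
        new_name = f"{int(stem):04d}{ext}"
        os.rename(os.path.join(folder, name),
                  os.path.join(folder, new_name))

print(sorted(os.listdir(folder)))  # ['0001.png', '0002.png', '0010.png']
```

Renaming by the numeric value (rather than by position in the listing) avoids collisions regardless of the order `os.listdir` returns the files in.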