Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Does anyone know how to generate a photo with extremely detailed stripes and decorations like the photo above? I trained several models; they look good, but they are far from this level of detail. Does anybody have a clue?
Thank you Dr. Furkan, it works and I can compare different models now. But today I discovered that the XYZ plot still shows the same images when I use a different VAE or Clip Skip. The model can still produce different images with different Clip Skip values (the attached images are Clip Skip 1 and 10), but the XYZ plot is unable to do so. I already did a git pull; how should I fix this? (My models were trained with Kohya_ss.)
@Furkan Gözükara SECourses As a follow-up to the information I offered about 2.x ControlNets last week, I found these today and thought they might be of interest to you as well: https://huggingface.co/thibaud/controlnet-sd21/tree/main
I think I needed to adjust a few things, but it was just to get a feel for the process. I seem to be having problems with the Python script to rename the files, so I didn't upscale it.
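In case it helps with the renaming step, here is a minimal sketch of a batch-rename script; it is not the script from the video, and the folder path, allowed extensions, and 4-digit zero padding are all assumptions to adjust to your own setup.

```python
from pathlib import Path

# Minimal sketch: rename every image in a folder to a sequential,
# zero-padded name (0001.png, 0002.png, ...). SOURCE_DIR and the
# padding width are hypothetical; change them to match your files.
SOURCE_DIR = Path("raw_frames")          # hypothetical input folder
EXTENSIONS = {".png", ".jpg", ".jpeg"}   # image types to rename


def rename_sequentially(folder: Path) -> None:
    images = sorted(p for p in folder.iterdir() if p.suffix.lower() in EXTENSIONS)
    for index, image in enumerate(images, start=1):
        new_name = folder / f"{index:04d}{image.suffix.lower()}"
        if new_name.name == image.name:
            continue  # already has the target name
        if new_name.exists():
            # avoid silently overwriting an existing file
            print(f"Skipping {image.name}: {new_name.name} already exists")
            continue
        image.rename(new_name)
        print(f"{image.name} -> {new_name.name}")


if __name__ == "__main__":
    rename_sequentially(SOURCE_DIR)
```

Running it once on a copy of the folder first is a good way to check the ordering before pointing it at the real dataset.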
@Furkan Gözükara SECourses Would you mind me asking... I might have missed it completely, missed it because of the flood of available info, or not missed it at all because there actually is no such video, but... finetuning. We have tutorials for creating checkpoints/embeddings/LoRAs from subjects/persons/styles, but where are the tutorials for creating complete finetuned checkpoints (or embeddings/LoRAs, if possible)? I mean training them on thousands of images, like some models are. Most models on CivitAI seem to be just people mixing other models together, and that feels more like a trial-and-error method that can turn out however it happens to.