Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Furkan Gözükara SECourses hello. I just watched your last video. Amazing stuff. One question: you said you trained the model based on the settings in that video, but then you said you generated the images using Realistic Vision. I don't understand how that works.. Or did you train on Realistic Vision? Because in the video you use the standard model..
Discord: https://bit.ly/SECoursesDiscord. RTX 3090 review along with RTX 3060. Stable Diffusion, OpenAI's Whisper, DaVinci Resolve and FFmpeg benchmarked. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on https://www.patreon.com/SECourses
Can you passively train with LoRA captions? Like, if you use captions of images that don't all represent the same object but share the same theme, will that theme bleed into the composition even if the LoRA was focused on a very different object?
Let's say you train a LoRA on car rims, just rims, but you mainly use pictures of cars with rims, and you make the captions very detailed about the cars so the LoRA focuses on the rims. Will the cars still bleed into a composition later if you only want to show a stack of rims?
That's a very useful video, thank you very much for sharing. I asked ChatGPT to make shorter, clickbait-style Reddit titles (imo that's fine since you always deliver what you promise) and it gave some interesting ideas:
Original "How To Find Best Stable Diffusion (any AI) Generated Images By Using DeepFace AI - Step By Step Full Tutorial - Let The AI Find Best Images For You When You Generated Images With LoRA or DreamBooth or Textual Inversion Or Any Way - Can Be Used For Professional Tasks As Well"
>> Effortlessly Discover the Most Striking Generated Images with DeepFace - Step-by-Step Guide Inside!
>> Stop Wasting Time! Let AI Find the Best Generated Images for You - Step-by-Step Guide Included!
>> Find the Best Images with SD and DeepFace - Save 100 Hours This Week - Full Tutorial Inside!
>> I Let DeepFace Find Perfect Images Among 1000 Results - Save Tons of Time
>> I'm Tired of Checking 1000 SD Images Manually, So I Let DeepFace Do It for Me
>> SD on Steroids: Check 1000 Images in Seconds with DeepFace
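The idea the tutorial title describes, ranking a batch of generated images by face similarity to a reference photo, can be sketched in a few lines of Python. This is a minimal sketch, not the video's exact script: `rank_candidates` and `score_folder` are hypothetical helper names, while `DeepFace.verify` is the real deepface API, which returns a dict containing a `distance` key (lower means more similar).

```python
from pathlib import Path


def rank_candidates(distances):
    """Sort (image_path, distance) pairs so the closest face match comes first."""
    return sorted(distances, key=lambda pair: pair[1])


def score_folder(reference_path, folder):
    """Score every PNG in `folder` against one reference photo.

    DeepFace is imported lazily so rank_candidates stays usable
    without the deepface package installed.
    """
    from deepface import DeepFace  # pip install deepface

    scores = []
    for img in sorted(Path(folder).glob("*.png")):
        result = DeepFace.verify(
            img1_path=reference_path,
            img2_path=str(img),
            enforce_detection=False,  # generated faces sometimes fail detection
        )
        scores.append((str(img), result["distance"]))
    return rank_candidates(scores)
```

With a folder of 1000 generations, `score_folder("me.jpg", "outputs/")[:20]` would surface the twenty closest matches instead of eyeballing them all.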
Thanks, my latest DB model is almost complete following your latest video instructions, it will be interesting.. I think your next video should be about EveryDream2, comparing it with DreamBooth.. I've been messing with it for a few weeks and I find it able to reproduce much better images than DreamBooth, in a much more styled manner. BUT it is not as consistent as DreamBooth; in my experience one has to produce more images to get "good" ones, while with DreamBooth you get more similarity but the price is stylization.. But that's just me, you are much more adept at these things; it would be interesting to see what you come up with..
Btw, about that info: are the images you are showing in the video upscaled? I could see from your details that it's using a model called Analog Madness, but you didn't mention any of this.. seems it might be important..
But it seems one can train much further than with DreamBooth, use many more images, and there's no need to resize everything to 512x512 etc.. I get much better stylization there anyway..
On another matter, something has happened with my X/Y/Z plot: when I use the checkpoint-name method it loads weights from the wrong model, a model I haven't even selected.. It's very strange, do you have any ideas?
Not really sure how to upscale with the Analog Madness model, that would be useful.. and again, I'm not getting anywhere near the same amazing results you are.. that's why I'm asking whether the images you are showing in the video are upscaled..