Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Let's say you train a LoRA on car rims, just rims, but you mainly use pictures of cars with rims for training, and you make the captions very detailed about the cars so the LoRA focuses on the rims. Will the cars bleed into a composition later if you only want to show a stack of rims?
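For what it's worth, a common way to reduce that kind of bleed (a sketch of the usual captioning approach, not something stated in this thread) is to caption everything you do NOT want the LoRA to learn, so the car gets attributed to the caption text and the rims get absorbed by a rare trigger token. Hypothetical caption files, with "r1mz" as a made-up trigger word:

```
01_sedan.txt:  photo of a silver sedan on a city street, daytime, r1mz rims
02_coupe.txt:  photo of a red coupe in a parking garage, overcast light, r1mz rims
```

Including a few close-up shots of rims alone in the training set also tends to help the concept separate from the cars.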
That's a very useful video, thank you very much for sharing. I asked ChatGPT to make a shorter, click-bait-style Reddit title (imo that's fine, since you always deliver what you promise) and it gave some interesting ideas:
Original "How To Find Best Stable Diffusion (any AI) Generated Images By Using DeepFace AI - Step By Step Full Tutorial - Let The AI Find Best Images For You When You Generated Images With LoRA or DreamBooth or Textual Inversion Or Any Way - Can Be Used For Professional Tasks As Well"
>> Effortlessly Discover the Most Striking Generated Images with DeepFace - Step-by-Step Guide Inside! >> Stop Wasting Time! Let AI Find the Best Generated Images for You - Step-by-Step Guide Included! >> Find the Best Images with SD and DeepFace - Save 100 hours this week - Full Tutorial Inside! >> I let DeepFace find Perfect Images among 1000 results - Save tons of time" >> I'm tired of checking 1000 SD images manually so I let Deepfake do it for me >> SD on Steroid: Check 1000 images in seconds with Deepfake
Thanks, my latest DB model is almost complete after following your latest video instructions; it will be interesting. I think your next video should be about EveryDream2 and comparing it with DreamBooth. I've been messing with it for a few weeks and I find it can reproduce much better images than DreamBooth, and in a much more styled manner. BUT it is not as consistent as DreamBooth: in my experience you have to generate more images to get "good" ones, whereas with DreamBooth you get more similarity, but the price is stylization. That's just my impression, though; you are much more adept at these things, so it would be interesting to see what you come up with.
By the way, are the images you are showing in the video upscaled? I could see from the generation details that they use a model called Analog Madness, but you didn't mention any of this; it seems like it might be important.
But it seems one can train much further than with DreamBooth, use many more images, and there is no need to resize everything to 512x512, etc. I get much better stylization there anyway.
On another matter, something has happened with my X/Y/Z plot: when I use the checkpoint-name method, it loads weights from the wrong model, a model I haven't even selected. It's very strange; do you have any ideas?
I'm not really sure how to upscale with the Analog Madness model; that would be useful to know. And again, I'm not getting anywhere near the same amazing results you are, which is why I'm asking whether the images you show in the video are upscaled.
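In case it helps, here is a minimal sketch (my own, not from the video) of upscaling an image through img2img with whatever checkpoint is currently loaded, e.g. Analog Madness, assuming a local AUTOMATIC1111 WebUI launched with the --api flag:

```python
import base64
import requests

URL = "http://127.0.0.1:7860"  # local AUTOMATIC1111 instance started with --api

# Read the source image and encode it for the API.
with open("input.png", "rb") as f:
    source = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [source],
    "prompt": "high quality photo, detailed",  # keep close to the original prompt
    "denoising_strength": 0.3,                 # low value preserves the composition
    "width": 1024,                             # 2x a 512x512 original
    "height": 1024,
    "steps": 25,
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()

# The API returns base64-encoded images.
with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```

The low denoising strength keeps the original composition while letting the checkpoint add detail at the higher resolution.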
Does anyone have ideas on what to prompt and which model to use?
The best I got so far looked like this:
Prompt: a couple of semi trucks driving down a road, close-up shot taken from behind, traveling in france, full body profile camera shot, high quality detailed, white border, shipping containers, photograph, waste, detailed wheels, head and upper body in frame, trailer, dutch, port, no long neck, panorama shot, cone heads, stock photo, masterpiece, best quality, advertisement, high quality photo
Negative prompt: (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome))
Steps: 25, Sampler: Euler a, CFG scale: 4.5, Seed: 3134462633, Size: 512x512, Model hash: e6415c4892, Model: civitai_realisticVisionV20_v20
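If it's useful to anyone, those settings map onto diffusers fairly directly; a minimal sketch, assuming you have downloaded the Realistic Vision checkpoint as a local .safetensors file (the filename below is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Placeholder path to the Realistic Vision v2.0 checkpoint from the post.
pipe = StableDiffusionPipeline.from_single_file("realisticVisionV20_v20.safetensors")
# "Euler a" in the WebUI corresponds to the Euler ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# Note: diffusers does not parse A1111-style (token:weight) syntax by default,
# so the weighted negative tokens are approximated here without weights.
image = pipe(
    prompt="a couple of semi trucks driving down a road, close-up shot taken "
           "from behind, detailed wheels, stock photo, masterpiece, best quality",
    negative_prompt="worst quality, low quality, normal quality, lowres, monochrome",
    num_inference_steps=25,
    guidance_scale=4.5,
    width=512,
    height=512,
    generator=torch.Generator("cuda").manual_seed(3134462633),
).images[0]
image.save("trucks.png")
```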
When using the prompt "RAW photo of woman" to generate class images, I mostly get variations of nearly the same face - EVERY TIME! She looks like this, in different variations; it's a little bit creepy:
If you want a different face, prompting for facial features - "square jaw", "high cheekbones" - doesn't work very well. What DOES work very well is prompting for celebrities - the model knows what they look like. And you can remix celebrities, so [ angelina jolie | scarlett johansson ] will give you a mix of their faces. If you prompt for a specific celebrity for only part of the generation, like [ angelina jolie :: 0.8 ], you will usually get a softer version of that celebrity's face, as it mixes it down at the end of the generation.
You can even put specific celebrities in the negative prompt and it will push the face away from what they look like - so if you want a woman, put some male celebrity in the negative prompt and it will change the face for you.
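Combining those tricks, a hypothetical prompt pair (my own example, not from the comment above) might look like this in AUTOMATIC1111:

```
Prompt: RAW photo of a woman, [angelina jolie|scarlett johansson], detailed face
Negative prompt: brad pitt, (worst quality:2), lowres
```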
Good day guys, I'm new here and was directed here from the YouTube video on using DeepFace to find the best images. I ran into an issue while installing DeepFace and was wondering if anyone can help guide me to resolve it? I don't have programming experience, so this is all a bit confusing for me.
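For anyone else stuck on the install, the library itself is pip-installable; below is a minimal sketch (my own, not the exact script from the video) of ranking generated images by face similarity to a reference photo with DeepFace. The paths are placeholders:

```python
# pip install deepface
import os
from deepface import DeepFace

REFERENCE = "reference_face.png"   # a good photo of the target face (placeholder)
GENERATED_DIR = "generated"        # folder of Stable Diffusion outputs (placeholder)

scores = []
for name in os.listdir(GENERATED_DIR):
    path = os.path.join(GENERATED_DIR, name)
    try:
        # verify() returns a dict with a "distance" key; lower means more similar.
        result = DeepFace.verify(img1_path=REFERENCE, img2_path=path)
        scores.append((result["distance"], name))
    except ValueError:
        # DeepFace raises if it cannot detect a face in the image; skip those.
        continue

# Print the most similar images first.
for distance, name in sorted(scores):
    print(f"{distance:.4f}  {name}")
```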
Hi, I need help setting up SD on Google Colab without using a GPU, the way Easy Diffusion can run on a local PC without one. I know it's going to take a lot of time to render, but I'm OK with that. I'm also OK with using only a command-line interface, so no GUI is needed. Any links or help?
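Not a Colab-specific answer, but for a purely command-line, no-GPU run, a minimal diffusers sketch works the same on a Colab CPU runtime or a local machine (expect several minutes per image on CPU):

```python
# pip install diffusers transformers accelerate
from diffusers import StableDiffusionPipeline

# Load SD 1.5 and keep everything on the CPU; no GUI involved.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cpu")

image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=25).images[0]
image.save("out.png")
```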
My LoRA looks great when I use it on the model it was trained on, but when I start using other models, I don't see my person at all. What's the best way to use my subject's face across other checkpoints?
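One thing worth experimenting with (a sketch assuming you can use diffusers; in the WebUI the rough equivalent is raising the weight in <lora:name:1.0>) is loading the LoRA onto the other base model explicitly and dialing its influence up or down. The paths and the "ohwx" trigger token below are placeholders:

```python
from diffusers import StableDiffusionPipeline

# Placeholder paths: a different base checkpoint plus your trained LoRA.
pipe = StableDiffusionPipeline.from_single_file("other_checkpoint.safetensors")
pipe.load_lora_weights(".", weight_name="my_face_lora.safetensors")
pipe = pipe.to("cuda")

# cross_attention_kwargs lets you scale the LoRA influence per call;
# try values above 1.0 if the likeness is too weak on this checkpoint.
image = pipe(
    "photo of ohwx person, portrait",
    cross_attention_kwargs={"scale": 1.0},
).images[0]
image.save("portrait.png")
```

Likeness transfers best between checkpoints that share the LoRA's base architecture (e.g. all SD 1.5 derivatives); checkpoints merged far away from the training base may never reproduce the face well.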