Yes, that part I understand. When generating a single image with one model, it takes the first image we generated as a reference and inpaints the face, right? But in grid generation I didn't have a first image; I started generating images directly. My doubt is: what reference would it take for the inpainting if we don't have a first image generated?
Oh okay. I thought the "1" in 1, 0.7, 0.5 meant "take the first image as a reference for inpainting the face" in <segment:yolo-face_yolov9c.pt-1,0.7,0.5//cid=11>. That is why I was confused and wanted to know what would happen for the first generation if we didn't have a first image yet.
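For anyone else reading: my current understanding (an assumption based on how SwarmUI's segment syntax is documented, not something confirmed in the video) is that the tag breaks down roughly like this, with no reference image involved at all:

    <segment:yolo-face_yolov9c.pt-1,0.7,0.5>
        yolo-face_yolov9c.pt  -> YOLO detection model used to find faces in the current image
        -1                    -> match index: apply the inpaint only to the first detected face
        0.7                   -> creativity (denoise strength of the inpainting pass)
        0.5                   -> segmentation threshold

So the "1" would select which detected face in the image being generated gets fixed, rather than pointing at a previously generated image.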
Can we use IP-Adapter with a reference image/face so the generated images are consistent? Similar to segment, do we have anything for IP-Adapter like this: <ipadapter:test_image.jpg:0.5>?
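To be clear, that <ipadapter:...> tag is just my guess at a syntax; I don't know whether SwarmUI exposes anything like it. For context, the same idea done directly in the diffusers library looks roughly like the sketch below. The model names, the 0.5 scale, and the file paths are illustrative assumptions, not settings from the video:

    # Minimal IP-Adapter sketch in diffusers (not SwarmUI syntax).
    import torch
    from diffusers import StableDiffusionPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Load IP-Adapter weights and set how strongly the reference steers generation.
    pipe.load_ip_adapter(
        "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
    )
    pipe.set_ip_adapter_scale(0.5)  # analogous to the 0.5 in the hypothetical tag above

    face_ref = load_image("test_image.jpg")  # the reference face image
    image = pipe(
        prompt="photo of a person in a park",
        ip_adapter_image=face_ref,
        num_inference_steps=30,
    ).images[0]
    image.save("consistent_face.png")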
I noticed you are using Adafactor; what is your take on Prodigy? The learning rate will adapt automatically, right? It might consume more VRAM, but what do you think the result quality would be with Prodigy?
I might be wrong, but because the learning rate adapts, I think we might reduce the risk of overfitting. Your advice would be helpful.
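In case it helps the discussion, the usual way Prodigy is wired up (a sketch based on the prodigyopt package; the tiny model, loop, and weight decay below are placeholders, not the training setup from the video) is to leave lr at 1.0 and let the optimizer estimate the step size itself:

    # Sketch of training with the Prodigy optimizer (pip install prodigyopt).
    import torch
    from prodigyopt import Prodigy

    net = torch.nn.Linear(128, 64)  # stand-in for the real model

    # Prodigy adapts the effective learning rate, so lr stays at 1.0;
    # it keeps extra per-parameter state, hence the higher VRAM use.
    optimizer = Prodigy(net.parameters(), lr=1.0, weight_decay=0.01)

    for step in range(100):
        x = torch.randn(16, 128)
        loss = net(x).pow(2).mean()  # dummy loss for the sketch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The adaptive step size mainly removes the need to hand-tune the learning rate; it doesn't by itself guarantee no overfitting, so checking sample outputs during training is still worthwhile.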