Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I'm considering fine-tuning Stable Diffusion with 80k clothing images covering 5k unique products. I plan to assign a unique token to each apparel item. Is this approach suitable for training? I'm using the EveryDream2 trainer.
Yeah, planning to use multiple GPUs. I have unique IDs for each product, so I'm planning to use random letter combinations that are not in the CLIP vocabulary as the tokens. I tried this with 500 products and got decent results. My concern is whether the model will remember all the tokens when I scale it to 5000.
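For what it's worth, here is a rough sketch of how the token assignment could be scripted. The 4-letter consonant tokens, the caption template, and the one-folder-per-product layout are my own assumptions, not EveryDream2's required format; the trainer can read sidecar .txt captions next to each image if I recall correctly, which is what this writes out.

```python
import random
from pathlib import Path

# Assumed layout: dataset/<product_id>/*.jpg  (one folder per product)
DATASET_DIR = Path("dataset")
CAPTION_TEMPLATE = "a photo of {token} clothing"  # assumed caption wording


def make_rare_token(length: int = 4) -> str:
    """Build a pseudo-word unlikely to be a real CLIP vocabulary entry,
    e.g. 'xqzv' -- consonant-heavy so the tokenizer splits it into rare pieces."""
    consonants = "bcdfghjklmnpqrstvwxz"
    return "".join(random.choice(consonants) for _ in range(length))


def assign_tokens(product_dirs):
    """Give every product a unique token, retrying on collisions."""
    tokens, used = {}, set()
    for product_dir in product_dirs:
        token = make_rare_token()
        while token in used:
            token = make_rare_token()
        used.add(token)
        tokens[product_dir.name] = token
    return tokens


def write_captions(tokens):
    """Write a sidecar .txt caption per image so the trainer can pick it up."""
    for product_id, token in tokens.items():
        for image_path in (DATASET_DIR / product_id).glob("*.jpg"):
            image_path.with_suffix(".txt").write_text(
                CAPTION_TEMPLATE.format(token=token)
            )


if __name__ == "__main__":
    random.seed(42)  # reproducible token assignment
    product_dirs = [p for p in DATASET_DIR.iterdir() if p.is_dir()]
    tokens = assign_tokens(product_dirs)
    write_captions(tokens)
    print(f"Assigned {len(tokens)} unique tokens")
```

With 4-letter tokens over 21 consonants there are roughly 194k combinations, so 5000 unique products fit with few collision retries.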
I have a ComfyUI workflow question. Does anybody have a good method of inpainting large images so that the sampler only works on the masked area?
For example, I arrive at a large-ish image (3000x3000) by upscaling and need to fix parts of it. If I use the Mask Editor and push it through the KSampler (VAE Encode > Set Latent Noise Mask > KSampler), it seems it still works on the entire image rather than just the masked area. In A1111, I recall it working on just the masked area.
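In case a node-graph answer doesn't turn up: what A1111's "Inpaint only masked" mode does is crop a padded region around the mask, run the sampler on just that crop at the model's working resolution, then paste the result back with a feathered seam; community crop-and-stitch inpaint nodes do the same thing inside ComfyUI, if I remember right. Below is a rough Python/PIL sketch of that geometry only. `inpaint_region`, the file names, and the padding/work-size defaults are placeholders for your own sampler call, not any ComfyUI API.

```python
from PIL import Image, ImageFilter


def inpaint_region(crop: Image.Image, crop_mask: Image.Image) -> Image.Image:
    """Placeholder for the actual inpainting step (KSampler / diffusers pipeline).
    Must return an image the same size as the crop it was given."""
    return crop  # swap in a real sampler call here


def mask_bbox(mask: Image.Image, padding: int, full_size):
    """Bounding box of the white mask area, grown by `padding`, clamped to bounds."""
    bbox = mask.getbbox()
    if bbox is None:
        raise ValueError("mask is empty")
    left, top, right, bottom = bbox
    w, h = full_size
    return (max(0, left - padding), max(0, top - padding),
            min(w, right + padding), min(h, bottom + padding))


def inpaint_only_masked(image: Image.Image, mask: Image.Image,
                        padding: int = 64, work_size: int = 1024,
                        blur: int = 8) -> Image.Image:
    """Crop around the mask, inpaint only that region, stitch it back."""
    mask_l = mask.convert("L")
    box = mask_bbox(mask_l, padding, image.size)
    crop = image.crop(box)
    crop_mask = mask_l.crop(box)

    # Work at a resolution the model likes, then scale back to the crop size.
    orig_size = crop.size
    scale = work_size / max(orig_size)
    work = (round(orig_size[0] * scale), round(orig_size[1] * scale))
    result = inpaint_region(crop.resize(work, Image.LANCZOS),
                            crop_mask.resize(work, Image.LANCZOS))
    result = result.resize(orig_size, Image.LANCZOS)

    # Feather the mask so the repainted patch blends into untouched pixels.
    feathered = crop_mask.filter(ImageFilter.GaussianBlur(blur))
    patched = Image.composite(result, crop, feathered)
    out = image.copy()
    out.paste(patched, box)
    return out


if __name__ == "__main__":
    img = Image.open("upscaled_3000.png").convert("RGB")  # assumed file names
    msk = Image.open("mask_3000.png")                      # white = repaint
    inpaint_only_masked(img, msk).save("fixed.png")
```

The point of the crop is that the sampler never sees the full 3000x3000 canvas, so denoising cost and VRAM stay bounded by the masked region, and the untouched pixels are guaranteed to come straight from the original image.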