Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
For those who were interested in this DreamBooth training, here are some more tests: some with reg data, some without; some with a training prompt, some with nothing at all, just the trigger code. It doesn't seem to make much of a difference.
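For reference, here is a minimal sketch of how those two dataset variants might be laid out on disk, assuming the kohya-style "repeats_trigger class" folder convention; the folder names and the `ohwx` token are placeholders of mine, not anything from the original tests:

```python
from pathlib import Path

# Variant 1: DreamBooth with regularization images (hypothetical layout).
with_reg = Path("db_with_reg")
(with_reg / "img" / "1_ohwx man").mkdir(parents=True, exist_ok=True)  # subject photos
(with_reg / "reg" / "1_man").mkdir(parents=True, exist_ok=True)       # generic class photos

# Variant 2: no reg data and no caption files, so only the folder's
# trigger token and class word are learned.
no_reg = Path("db_no_reg")
(no_reg / "img" / "1_ohwx man").mkdir(parents=True, exist_ok=True)
```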
Some VIPs are already trained in to some degree. In my experience, no reg and no prompt gives me more than good enough results. And I switched to the competitor product (fgym) because it is so easy to use.
I have a few questions. I'm now trying the DB method for FLUX training: no captions, 30 images, 1 repeat, 150-200 epochs, about 5,000-6,000 steps. Is that too much?

Also, since I'm a male but have long hair, every sample image during training shows me as a woman. In my case, is it better to provide captions saying I'm a man? JoyCaption also thinks I'm female.

I also noticed that in any of my trained/extracted LoRAs, I can never add any other person to the picture, because they all end up looking like me, even though I'm using a numeric trigger. And if I import another celebrity LoRA, it just blends the two LoRAs together. Is there no way to make an AI image of myself and my favorite celeb, e.g. "photo of k3v1n with Billie Eilish" (another LoRA)?
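On the step count, here is a quick sanity check of the arithmetic, assuming kohya-style accounting where steps per epoch = (images × repeats) / batch size; the doubling when reg images are interleaved is my assumption about that implementation:

```python
def total_steps(num_images: int, repeats: int, epochs: int,
                batch_size: int = 1, with_reg: bool = False) -> int:
    """Estimate optimizer steps for a kohya-style DreamBooth run."""
    steps_per_epoch = (num_images * repeats) // batch_size
    if with_reg:
        steps_per_epoch *= 2  # assumption: each train step is paired with a reg step
    return steps_per_epoch * epochs

print(total_steps(30, 1, 150))  # 4500
print(total_steps(30, 1, 200))  # 6000
```

So 30 images at 1 repeat for 150-200 epochs at batch size 1 lands right in the roughly 4,500-6,000 step range described.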
So inpainting is the only way? How does it work in base FLUX compared to a LoRA? Because in base FLUX you can specify two embedded base celebrities, e.g. John Wick and Taylor Swift performing in a concert together.
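For anyone wondering what the inpainting route looks like in practice, here is a minimal sketch using diffusers' FluxInpaintPipeline: generate or pick a base image that already contains the celebrity, mask the region where the second person should go, then inpaint with your person LoRA loaded. The file names, the LoRA path, and the k3v1n trigger are hypothetical, and exact arguments may vary across diffusers versions:

```python
import torch
from diffusers import FluxInpaintPipeline
from PIL import Image

pipe = FluxInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("k3v1n_lora.safetensors")  # your person LoRA (hypothetical file)

base = Image.open("billie_and_placeholder.png")   # image already containing the celebrity
mask = Image.open("mask_over_second_person.png")  # white where the LoRA subject should go

result = pipe(
    prompt="photo of k3v1n standing next to Billie Eilish",
    image=base,
    mask_image=mask,
    strength=0.85,  # how strongly the masked region gets repainted
).images[0]
result.save("k3v1n_with_billie.png")
```

Because only the masked region is repainted, the celebrity stays untouched and the two identities never have to share one denoising pass, which is what causes the blending when two LoRAs are loaded at once.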
Thank you, I understand. So for now, there is no way for me to train me and my friends and create pictures together, even with unique words in a checkpoint? Could I train them all at once into a new checkpoint, or would it bleed? E.g. 1_k3v man, 1_fr1endone, 1_fr1endtwo (i.e. each friend is a concept)?
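If you do try the one-checkpoint route, the layout described would look roughly like this under the kohya folder convention, one concept folder per person. This is only a sketch of the naming scheme with a hypothetical dataset root; since all three concepts share the "man" class, some bleeding between faces is still likely:

```python
from pathlib import Path

root = Path("train_data")  # hypothetical dataset root
for concept in ["1_k3v man", "1_fr1endone man", "1_fr1endtwo man"]:
    (root / concept).mkdir(parents=True, exist_ok=True)
# Each person's photos go only in their own folder; per-image captions
# that repeat the matching trigger word may help reduce concept bleed.
```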
Thank you for all of your help and community contributions. I'm at a university in Canada, and we're looking at upgrading our old Tesla T4 servers in our engineering department so we can do more AI research; we're a bit behind.