Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
@Furkan Gözükara SECourses do you think that, to get two LoRA people in one picture, it's best to do face inpainting or regional prompting in Swarm? Which will achieve the best results?
Do you guys ever get this error while training? In the session I'm running now it seems to happen almost every time; I just have to click "Generate" a few times and then it works anyway. (4x RTX 6000 on MassedCompute, using Init Image.)
@Furkan Gözükara SECourses have you tried the recently released FLUX Pro model's fine-tuning API? I'm wondering what your take is on that vs. the DreamBooth method in terms of quality.
@Furkan Gözükara SECourses EDIT, SOLVED: in the folder on Massed Compute there was a hidden "cache" folder that I had to delete; then I could upload again. ORIGINAL MESSAGE: I am trying to transfer checkpoints from Massed Compute to Hugging Face using the Jupyter notebook. It works very well and fast, but I have one problem: I had to delete some checkpoints from Hugging Face, and when I then tried to re-transfer them from Massed Compute to Hugging Face, Jupyter always says "Recovering from metadata files: 100%" and then just skips them. What do I need to do? Is there an overwrite command, or do I need to delete the metadata files — and where are they?
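For anyone hitting the same skip behavior: the "Recovering from metadata files" message appears to come from huggingface_hub's resumable large-folder upload, which stores per-file upload state in a hidden cache directory inside the folder being uploaded; on a re-run it treats previously uploaded files as done even after you deleted them from the Hub. A minimal sketch of the fix, assuming the state lives in a ".cache" subfolder and that "/workspace/checkpoints" and "user/flux-checkpoints" (both hypothetical names) are your local folder and target repo:

```python
import os
import shutil

from huggingface_hub import HfApi

local_dir = "/workspace/checkpoints"  # hypothetical path to the checkpoint folder
repo_id = "user/flux-checkpoints"     # hypothetical target repo on Hugging Face

# Remove the hidden resume-state cache inside the uploaded folder so files
# are uploaded fresh instead of being skipped as "already uploaded".
cache_dir = os.path.join(local_dir, ".cache")
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)

api = HfApi()  # assumes you are already logged in (e.g. via huggingface-cli login)
api.upload_large_folder(
    repo_id=repo_id,
    repo_type="model",
    folder_path=local_dir,
)
```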
@Furkan Gözükara SECourses hey, I installed InvokeAI locally on my Linux machine with your config. How do I import these CLIP models and the VAE? I don't want to download them from the starter models, because I use a fine-tuned FLUX model that does not have the VAE and text encoders baked in.
@Furkan Gözükara SECourses I did a training of the FLUX model and get very nice-looking pictures when generating the grid to compare the different checkpoints. I noticed, though, that in most of the pictures the face is kind of serious-looking, or has only a minimal smile. In my training set of 256 images the majority are more serious-looking, so does this directly determine that the generated images will mostly be serious-looking too? Is the model then still capable of generating more smiling or joyful expressions, and does it need to be explicitly prompted, e.g. "man smiling widely"?
May I ask if the VisoMaster deepfake app is better than the usual Rope Live that we have had so far, which was good enough?
Checking it out quickly, I think VisoMaster is pretty much the same as Rope Live. They use the same models; VisoMaster just made a better UI. Is that the case?