Hello everyone. I am Dr. Furkan Gözükara, PhD in Computer Engineering. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion.
I've trained one FLUX LoRA captioned only with "ohwx man" and another with captions generated by JoyCaption. Just an anecdote from a small sample size, but I wanted to share: the subject wears glasses in every one of the 20 training images, yet with the JoyCaption LoRA, fewer output images show him wearing glasses compared to the LoRA captioned with just "ohwx man". Probably because JoyCaption explicitly captions the man as wearing glasses, so the glasses get associated with that phrase instead of being baked into the subject token? If I add "wearing glasses" to my prompt, the glasses appear as expected and match the style from the training images. Interested to hear thoughts and whether anyone else has experienced the same.
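For anyone who wants to reproduce the comparison, here is a minimal sketch of the two captioning strategies, assuming a kohya-style dataset where each image gets a same-named .txt caption file; the folder path, file extension, and example caption text are placeholders, not the actual dataset used above.

```python
from pathlib import Path

DATASET_DIR = Path("train/20_ohwx man")  # hypothetical path, adjust to your setup

# Strategy A: the same trigger-only caption ("ohwx man") for every image.
# Strategy B: detailed captions (e.g. JoyCaption output) that explicitly mention the glasses.
USE_TRIGGER_ONLY = True

for img in sorted(DATASET_DIR.glob("*.jpg")):
    caption_file = img.with_suffix(".txt")
    if USE_TRIGGER_ONLY:
        caption = "ohwx man"
    else:
        # Anything explicitly described in the caption (like the glasses) tends to
        # need to be prompted again at inference time, per the observation above.
        caption = "ohwx man wearing glasses, ..."  # stand-in for a real JoyCaption output
    caption_file.write_text(caption, encoding="utf-8")
```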
Also, I trained separate LoRAs for two different people, then prompted something like "photo of ohwx man and phwx woman smiling" with both LoRAs applied and the strength of both turned down to 0.8. There was some bleeding of facial features, but it still produced images really close to the likenesses of both subjects.
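If you are stacking the two LoRAs in diffusers rather than in a UI, the 0.8 / 0.8 combination can be sketched roughly like this. The load_lora_weights / set_adapters pattern is the standard diffusers multi-adapter API, but the LoRA directory, file names, and generation settings here are assumptions for illustration, not the exact setup used above.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Hypothetical LoRA files; swap in your own trained weights.
pipe.load_lora_weights("loras", weight_name="ohwx_man_flux_lora.safetensors", adapter_name="man")
pipe.load_lora_weights("loras", weight_name="phwx_woman_flux_lora.safetensors", adapter_name="woman")

# Run both adapters together at reduced strength to limit feature bleeding.
pipe.set_adapters(["man", "woman"], adapter_weights=[0.8, 0.8])

image = pipe(
    "photo of ohwx man and phwx woman smiling",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("couple.png")
```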
Also, training the TE (text encoder) could make a difference. Training the entire "man" class, which is effectively what is happening right now, is not the same as training a unique person: as it stands, training a subject this way overwrites the weights for the whole class.
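As a rough illustration of that choice (not the exact training setup discussed here), a PEFT-style sketch of attaching LoRA adapters to the FLUX denoiser only versus the denoiser plus the CLIP-L text encoder might look like the following; the repo (which is gated and needs HF access), ranks, and target module names are assumptions.

```python
import torch
from peft import LoraConfig
from transformers import CLIPTextModel
from diffusers import FluxTransformer2DModel

MODEL = "black-forest-labs/FLUX.1-dev"  # gated repo; assumes you have access

# The denoiser (FLUX transformer) always gets LoRA adapters on its attention projections.
transformer = FluxTransformer2DModel.from_pretrained(
    MODEL, subfolder="transformer", torch_dtype=torch.bfloat16
)
transformer.add_adapter(
    LoraConfig(r=16, lora_alpha=16, target_modules=["to_q", "to_k", "to_v", "to_out.0"])
)

# Whether to also adapt the CLIP-L text encoder is the choice being discussed above.
TRAIN_TEXT_ENCODER = True
if TRAIN_TEXT_ENCODER:
    text_encoder = CLIPTextModel.from_pretrained(
        MODEL, subfolder="text_encoder", torch_dtype=torch.bfloat16
    )
    text_encoder.add_adapter(
        LoraConfig(r=16, lora_alpha=16, target_modules=["q_proj", "k_proj", "v_proj", "out_proj"])
    )
# A real training loop would then mark only the LoRA parameters as trainable.
```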
That has to do with the training directory needing to be the parent directory above the folder that contains the images, not the image folder itself, if that makes sense.
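For example, assuming a kohya_ss-style trainer that expects "<repeats>_<class>" subfolders inside the training directory, the layout would be something like this (paths are made up):

```python
from pathlib import Path

train_root = Path("training_data")          # pass THIS folder to the trainer
image_folder = train_root / "20_ohwx man"   # images (and optional .txt captions) go in here
image_folder.mkdir(parents=True, exist_ok=True)

print(f"Point the trainer at: {train_root.resolve()}")
print(f"Put the training images in: {image_folder.resolve()}")
```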