I tried different dataset styles, always with perfect image quality but varied clothing and backgrounds.
I found out that it is not the best idea to include varied facial expressions: with a prompt like "surprised",
if the model draws on an already-surprised expression from the dataset, the result is SUPER exaggerated and very weird, even with a moderating word like "slightly".
I suppose this wouldn't be an issue if every picture in the dataset were captioned, but I haven't found a good tutorial about dataset preparation/captioning and which method to use.
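For what it's worth, the per-image captioning that most trainers (e.g. kohya_ss-style LoRA scripts) expect is just a sidecar `.txt` file next to each image, with the same basename. Here's a minimal sketch of generating those files; the folder name, image names, and captions are made-up examples:

```python
import os

# Hypothetical dataset folder (kohya-style "repeats_name" naming is common,
# but the exact name is just an example here).
DATASET_DIR = "dataset/10_mysubject"

# Example captions: tagging the expression and background per image is what
# lets the trainer separate those traits from the subject itself.
captions = {
    "photo_001.png": "mysubject, neutral expression, studio background",
    "photo_002.png": "mysubject, surprised expression, outdoor background",
}

os.makedirs(DATASET_DIR, exist_ok=True)
for image_name, caption in captions.items():
    base, _ = os.path.splitext(image_name)
    txt_path = os.path.join(DATASET_DIR, base + ".txt")
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(caption)

print(sorted(os.listdir(DATASET_DIR)))
```

In principle, explicitly captioning the expression in each file is exactly what should prevent the exaggerated "surprised" problem: the trainer learns the expression as a separate tag instead of baking it into the subject.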
Also, I found that not being able to modify the face is pretty sad: adding a cyborg eye, face paint, makeup, blood, dirt, whatever. I tried a lot of prompts, even in the ADetailer face parameters.
I also found out that if your dataset always has a sharply focused subject and a blurry background, then whatever prompt you use, Stable Diffusion will reproduce the same look: a sharp, focused person with a blurry background (I may be doing something wrong). I feel like the "from single text file" method is pretty good for something quick, but if you want to achieve a god level of training, you need to do the super annoying long version of dataset preparation.