all of these outputs were 1024x1024 so I don't think so? I'll try some higher resolutions now

This LoRA was trained on two layers: single_transformer_blocks.7.proj_out and single_transformer_blocks.20.proj_out.
It is possible to train on the single_transformer_blocks.7.proj_out layer only, if you don't want the dataset's style transfer; depending on the dim, the file size can be below 4.5 MB (bf16). The model is powerful enough that single-layer training still works well, so let's do that.
A Flux LoRA with ~99% face likeness and great flexibility can be as small as 580 kB (single layer, dim 16).
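The ~580 kB figure checks out with a back-of-envelope calculation. This is a sketch under the assumption that proj_out in a Flux single transformer block is a Linear layer mapping the concatenated attention + MLP width (3072 + 12288 = 15360) back to the 3072 hidden size, as in the diffusers implementation; the exact shapes may differ in other codebases.

```python
# Rough size estimate for a single-layer Flux LoRA (dim 16, bf16).
# Assumed shapes: proj_out is Linear(3072 + 12288, 3072) in a single block.
in_features = 3072 + 12288   # attention output + MLP hidden, concatenated
out_features = 3072
rank = 16                    # LoRA dim

# LoRA adds two low-rank matrices per target layer:
# A with shape (rank, in_features) and B with shape (out_features, rank).
params = rank * in_features + out_features * rank
size_bytes = params * 2      # bf16 = 2 bytes per parameter

print(params)                # 294912 parameters
print(size_bytes / 1024)     # 576.0 KiB, consistent with ~580 kB on disk
```

The small remainder of the on-disk size comes from safetensors headers and key names.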