which dataset where? which captions?
cool, just dont have time right now, but will check tomorrow
no worries
hehe i downloaded it so far, interesting nsfw collection you got there 😄
what kind of results do you get with the lora?
have you tried lora strength of 1.5 , 2, 3?
the result isnt even the model herself...its not even the same person.. i used the sdxl training presets by the Dr but i dont know whats wrong
🤣 thanks... been having issues with this for a while
during training or in the workflow
you only need maybe 5 photos of the model i think
front view, side view, rear view, closeup face, closeup face + torso
and maybe two sets of those, one dressed, one nude (since its nsfw you train)
so 10 photos
caption them like this
m0d3l, front view, red dress, garden, table, blue sky
m0d3l, front view, closeup face, yellow bedroom, smiling
thats how i usually do it
you did pretty good already on your captions
but you need to make captions for all images
but you only need 10 as i said
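The captioning scheme above (trigger word first, then comma-separated tags, one `.txt` per image) can be scripted so all 10 images get consistent captions. A minimal sketch; `m0d3l` and the file names are just placeholders from the examples above:

```python
from pathlib import Path

TRIGGER = "m0d3l"  # example trigger token from above; pick your own rare token

def write_caption(image_path: Path, tags: list[str]) -> Path:
    """Write a caption file next to the image, trigger word first,
    then the view/clothing/scene tags, comma-separated."""
    caption = ", ".join([TRIGGER] + tags)
    txt_path = image_path.with_suffix(".txt")
    txt_path.write_text(caption, encoding="utf-8")
    return txt_path

# usage (hypothetical file name):
# write_caption(Path("01.png"), ["front view", "red dress", "garden", "blue sky"])
```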
1500-2500 steps for SDXL probably
whats your DIM and Network alpha.
64, 32 or so maybe good
also use samples every 100 steps
at 1000+ it should start to resemble your model
at 1500-2000 it should be decent enough to be used
well.. im using One trainer .. presets by the dr..and what base model should i use..
sdxl?
been using these presets so far..
JuggernautXL
learning rate is very low
try 0.0001
also maybe set custom vae
to sdxl vae
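Pulling the numbers above together (dim 64, alpha 32, LR 0.0001, ~2000 steps, samples every 100), here's a rough sketch of how they'd map onto a kohya-ss sd-scripts run. The flag names are from memory of sd-scripts' `sdxl_train_network.py` and the paths are placeholders, so verify against your installed version:

```python
# Sketch only: maps the values discussed above onto kohya-style CLI flags.
def build_kohya_args(model_path: str, data_dir: str, out_dir: str) -> list[str]:
    """Assemble an argument list for an SDXL LoRA training run."""
    return [
        "accelerate", "launch", "sdxl_train_network.py",
        "--pretrained_model_name_or_path", model_path,  # e.g. a JuggernautXL checkpoint
        "--train_data_dir", data_dir,
        "--output_dir", out_dir,
        "--network_module", "networks.lora",
        "--network_dim", "64",            # DIM suggested above
        "--network_alpha", "32",          # alpha suggested above
        "--learning_rate", "1e-4",        # the 0.0001 suggested above
        "--max_train_steps", "2000",      # within the 1500-2500 range
        "--sample_every_n_steps", "100",  # preview samples every 100 steps
        "--mixed_precision", "fp16",
    ]
```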
🤣 im new to One Trainer so i had to learn through the DR .. for the last week
ohhh okay
you could also try Kohya_ss for SDXL training
are the results ultra realistic
its older and less optimized but it worked for me
yea
i probably will try
https://civitai.com/models/133005/juggernaut-xl
its the one i use for SDXL
but you could also try Pony
it gives mixed results
but its very good for NSFW
lemme check it out
apparently this one gave me good results on pony
https://civitai.com/models/643732/animatedponyreal
its even nsfw focused 😄
if its your first time training a lora then maybe start even with sd 1.5
it can be extremely fast to train
and it picks up things very quick
so you will know your dataset/ config works
the way i wrote the caption tips and image picking for you should work well 😄
well... i was using flux but the realism wasnt that great... and im launching my AI model and want it to be ultra realistic.... so i made the dataset with an sdxl checkpoint and some loras and wanted to train the entire lora
so yeah.. this is the first time training on sdxl
why not train wan?
doc will probably release a setup for wan soon
🤣 still learning about it... i thought wan is for videos only
it can do images at 1920x1080
:0
but video is 1280x720
if it might be the most realistic (if it will be) then whats the issue?
😄
im delving into wan training
i trained a lot of sd 1.5 , a few sdxl + pony
ohh nice... i gotta take a look ...have been struggling to understand the docs stuff but this is great
but i think wan is everything you wan-t it to be
𤣠good one
xD
hows your progress with wan training
if you train wan, start with wan 2.1 14b t2v
doing other things right now
https://civitai.com/models/1889006/jonxls-directwan-v6-wan-21-vace-stand-in-22-advanced-v6-workflow-t2i-t2v-i2v-flf2v-trim-extend-upscale-interpolate-overlay-workflow
working on workflows with many features
which includes wan generation 😄
so far i think wan 2.1 training may be easier and better results than wan 2.2
i managed to make cartoon lora for wan 2.2
after im done with the workflow stuff ill try to make some more loras for wan 2.1
https://civitai.com/models/1862320/wan22-t2v-lora-cartoon-style
https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/199793b3-29ff-48d4-acbb-e5237408a9f7/transcode=true,original=true,quality=90/94051197.webm
wow this is good stuff... so wan training is better than the sdxl right
i havent delved deep into it, but i'd almost wager 😄
thats awesome
so i went through everything
and yeahh.. im in a learning curve especially with manual training...*
everyone had to go thru it
just like driving a car
i didnt know how to drive a car and everything in 1 day
which learning rate will you use?
how many steps?
🤣 im still noting everything down... i use massed compute so everything i do costs.. from what you said .. the learning rate is really low.. so i have to do everything carefully before the next training...
plus since i was using the Drs presets... i will have to redo everything afresh
if its no trouble.. could you share your presets for OT
i dont use onetrainer
i used Kohya_ss for SD 1.5, SDXL and Pony
and Musubi-tuner for Wan training
im even watching the Kohya training right now
why not train SD 1.5 first then?
i think you can train sd 1.5 lora in 10 minutes on a fast GPU
what's your local GPU?
you can even train SD 1.5 on 8 GB vram i believe
will it run with an epicrealism checkpoint.. its sdxl
i dont have a local GPU..
sd 1.5 is the predecessor of SDXL
but it has realisticvision5
https://civitai.com/models/4201/realistic-vision-v60-b1
sd 1.5 training is fast, and captures things more easily than other models
tho your input images should be 512x512 or 768x768 max
so i have to downscale the images for my dataset
yes
but you can do that in batch with irfanview
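If you'd rather script the batch downscale than use IrfanView, a rough Pillow sketch (assumes Pillow is installed; the 768 cap matches the SD 1.5 limit mentioned above):

```python
from pathlib import Path
from PIL import Image

def downscale_dataset(src_dir: str, dst_dir: str, max_side: int = 768) -> int:
    """Downscale every image so its longest side is at most max_side,
    preserving aspect ratio. Returns how many images were written."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for p in sorted(Path(src_dir).glob("*")):
        if p.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        with Image.open(p) as im:
            # thumbnail() only ever shrinks, never upscales
            im.thumbnail((max_side, max_side), Image.LANCZOS)
            im.save(out / p.name)
        count += 1
    return count

# usage: downscale_dataset("dataset_raw", "dataset_768")
```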
lemme do it now
how is your training going?
😄