Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I changed it to 48 but it still runs at the same speed. Do you see anything wrong with the toml? I really appreciate your help.

ae = "X:/AI/Kohya/ae.safetensors"
blocks_to_swap = 0
bucket_no_upscale = true
bucket_reso_steps = 64
cache_latents = true
cache_latents_to_disk = true
cache_text_encoder_outputs = true
cache_text_encoder_outputs_to_disk = true
caption_dropout_every_n_epochs = 0
caption_dropout_rate = 0
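For reference, a minimal sketch (assuming the "48" refers to blocks_to_swap in this Kohya-style TOML; that is an assumption, since the pasted config still shows 0) of how the changed line would look once saved into the config that the script actually reads:

# hypothetical excerpt, not the full config
blocks_to_swap = 48   # 0 disables block swapping; a nonzero value is meant to offload that many blocks to CPU memory to lower VRAM use, typically at some speed cost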
LoRA training with 15 images and 200 epochs on an RTX A6000 takes around 7 hours. Should we change the epoch value to 15 or 10? Are captions mandatory?
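As a rough sketch (assuming the same Kohya-style TOML as the config above; max_train_epochs and caption_extension are the usual field names and are taken here as assumptions), lowering the epoch count and pointing at caption files would look like this:

# hypothetical excerpt, not the full config
max_train_epochs = 10        # with 15 images, 10 epochs means far fewer steps than 200 epochs, so training time drops roughly in proportion
caption_extension = ".txt"   # only relevant if per-image caption .txt files are provided alongside the images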