Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
train_network.py: error: argument --max_data_loader_n_workers: expected one argument
train_network.py: error: argument --max_data_loader_n_workers: expected one argument
Traceback (most recent call last):
  File "/opt/conda/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
    args.func(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 977, in launch_command
    multi_gpu_launcher(args)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/commands/launch.py", line 646, in multi_gpu_launcher
    distrib_run.run(args)
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
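The "expected one argument" message comes from argparse: the --max_data_loader_n_workers flag reached train_network.py with no value after it (often a blank GUI field or a broken line continuation in the launch command). A minimal sketch that reproduces and fixes the error, assuming the flag is declared as an ordinary value-taking argparse option like in kohya's script:

```python
import argparse

parser = argparse.ArgumentParser()
# Assumption: mirrors how train_network.py declares this flag (one required value)
parser.add_argument("--max_data_loader_n_workers", type=int)

# Passing the flag with no value reproduces:
# "error: argument --max_data_loader_n_workers: expected one argument"
try:
    parser.parse_args(["--max_data_loader_n_workers"])
except SystemExit:
    print("argparse rejected the flag because no value followed it")

# Fix: always supply a value, e.g. --max_data_loader_n_workers 2
args = parser.parse_args(["--max_data_loader_n_workers", "2"])
print(args.max_data_loader_n_workers)
```

The error printing twice is expected with accelerate's multi-GPU launcher, since each worker process fails the same way.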
I'm trying to put my face on this image, but it's not going well. I tried a regular LoRA with a positive prompt, Roop, and the auto detailer, but it never gets the face right.
Hi friends, I have this problem: very slow image generation with SDXL 1.0 on an RTX 3060 Ti GPU and 16 GB RAM. If I generate at 1024x1024 with 20 steps, the generation time is 10 minutes; at 768x1024, for example, it takes 20 minutes or more. I have seen many people get generation times of only a couple of minutes on the same GPU. I am using AUTOMATIC1111 v1.6.0 with the start parameter --opt-sdp-attention. I don't get any errors about lack of VRAM, just very slow generation. Maybe I should do a clean SD installation?
I enabled sdxl.vae and got a reduction in image generation time, though I also got an error. Anyway, it's not a 10-minute wait anymore :) When generating at 40 steps, the generation time doubles.
I also just noticed the change in processing speed. I never paid attention before, but I was getting 1.6 sec/iteration on the 3060 and now I'm getting 0.31 sec/iteration (about 3.21 iterations/sec). That's 4-5x faster, with better quality.
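For anyone confused by the two units: sec/iteration and iterations/sec are reciprocals, so the speedup quoted above can be checked in a couple of lines (the numbers are the ones from the comment):

```python
old = 1.6   # sec/iteration on the 3060 before the fix
new = 0.31  # sec/iteration after

# Reciprocal gives iterations/sec: roughly the ~3.21 it/s quoted above
print(1 / new)

# Ratio of the two times gives the speedup factor: about 5x,
# consistent with the "4-5x faster" observation
print(old / new)
```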
I think I was able to fix the slow generation speed on the 3060 Ti. I now generate a 768x1024 image in 30-40 seconds. The problem is with NVIDIA drivers from version 531 and up. I also switched from --opt-sdp-attention to the --xformers parameter. https://github.com/vladmandic/automatic/discussions/1285
It seems that NVIDIA changed memory management in the latest driver versions, specifically 532 and 535. The new behavior is that once GPU VRAM is exhausted, it will actually use shared memory, thus c...
Yes, I used the --medvram-sdxl flag and it really helped my 3060 Ti. The only downside is the long wait for the first image to be generated. xformers also helped me, because --opt-sdp-attention gives extremely low generation speed for SDXL.
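Putting the thread's fixes together: in AUTOMATIC1111 the launch flags go into COMMANDLINE_ARGS in webui-user.sh (webui-user.bat on Windows). The combination below is only a sketch of what worked in the comments above on a 3060 Ti, not an official recommendation:

```shell
# webui-user.sh excerpt -- flags that helped on the 3060 Ti per this thread
# --xformers:       memory-efficient attention; replaced --opt-sdp-attention,
#                   which was reported extremely slow for SDXL here
# --medvram-sdxl:   reduces VRAM use for SDXL at the cost of a slow first image
export COMMANDLINE_ARGS="--xformers --medvram-sdxl"
echo "$COMMANDLINE_ARGS"
```

Rolling the NVIDIA driver back below 531 (or disabling the shared-memory fallback) was the other part of the fix discussed above.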