Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Fixed tools/convert_diffusers20_original_sd.py so it works again. Thanks to Disty0! PR #1016. The issues in multi-GPU training are fixed. Thanks to Isotr0py! PRs #989 and #1000
The only thing that broke, it seems, was BLIP captioning, but as I recall, I was using some kind of rolled-back version of something. I think it was transformers. Honestly, most of these captioning tools are lousy, and if you are using a smaller dataset, manual captioning is best.
Can I do multi-GPU training in Kohya if the graphics cards are different models with different amounts of VRAM (for example, an RTX 3080 and an RTX 3090)? Sorry to ask before testing; it's just that I need a new power supply if I want to use both cards.
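Not an authoritative answer, but Kohya's scripts launch through Hugging Face Accelerate, and PyTorch DDP does generally run on mismatched cards; the catch is that batch size and speed end up bounded by the smaller/slower GPU, since both processes run the same per-device batch. A sketch of the relevant `accelerate config` answers for a two-GPU single machine (the exact keys are from memory, so verify against the default_config.yaml that `accelerate config` actually generates for you):

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 1
machine_rank: 0
num_processes: 2      # one process per GPU
gpu_ids: all          # or "0,1" to pick the 3080 and 3090 explicitly
mixed_precision: fp16
```

You would size the batch so it fits in the 3080's 10GB, since the same batch size is used on both cards.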
Does anyone know the syntax for the Regional Prompter well? I'm getting better with prompts, but if I have two character LoRAs, what's the best way to set up the prompt?
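Not a definitive answer, and the keywords here are from memory, but with the hako-mikan Regional Prompter extension regions are separated with BREAK and a shared/common prompt is marked with ADDCOMM. A hypothetical two-character layout with a "1,1" divide ratio (the LoRA names are made up) might look like:

```
2girls, masterpiece ADDCOMM
red hair, school uniform, <lora:characterA:0.8> BREAK
blonde hair, black dress, <lora:characterB:0.8>
```

As I recall, people often report the two LoRAs bleeding into each other in Attention mode and switch to Latent mode for multi-LoRA prompts, so I'd treat all of this as something to verify against the extension's README.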
I'm getting [virtualMemoryBuffer.cpp::nvinfer1::StdVirtualMemoryBufferImpl::resizePhysical::140] Error Code 2: OutofMemory when attempting to generate the TensorRT Default Engine
I have CUDA 12.3 installed, but it is not on my PATH; only CUDA_PATH_V12_3 points to that version. I installed 12.3 because it was the only way to check the samples with Visual Studio 2022.
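One quick way to see which CUDA toolkit your shell will actually resolve is to dump every CUDA-related environment variable alongside the PATH entries that mention CUDA. A small sketch, pure standard-library Python and nothing Kohya-specific (the variable names like CUDA_PATH_V12_3 are just the usual Windows CUDA installer conventions):

```python
import os

def cuda_env_report(environ):
    """Collect CUDA-related environment variables and the PATH entries
    that mention CUDA, so you can see which toolkit wins at runtime."""
    cuda_vars = {k: v for k, v in environ.items()
                 if k.upper().startswith("CUDA")}
    path_entries = [p for p in environ.get("PATH", "").split(os.pathsep)
                    if "cuda" in p.lower()]
    return cuda_vars, path_entries

if __name__ == "__main__":
    cuda_vars, path_entries = cuda_env_report(os.environ)
    for name, value in sorted(cuda_vars.items()):
        print(f"{name} = {value}")
    print("PATH entries mentioning CUDA:")
    for entry in path_entries:
        print(" ", entry)
```

If a CUDA_PATH_V* variable exists but no CUDA bin directory appears in PATH, tools that locate CUDA via PATH may pick up nothing (or a different version) even though the toolkit is installed.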
Updating torch, torchvision, and xformers threw another error saying torch could not access the GPU. It said I can set a flag so it skips the check, but I don't want to do that.
Your install program reinstalled cu118, so I'm assuming it must be that version, which means I'm running the correct version of CUDA; even so, I don't think TensorRT is working at the moment.
So it is saying UNSUPPORTED_STATE: Skipping tactic... insufficient memory. Is there a launch setting such as --medvram I need to use? I haven't tried that yet, but that's next.
It is telling me what the problem is, but I'm not seeing it for some reason. Reading the generated error more carefully made it clear, but I'm not sure what to do about it. I set the launcher to --lowvram even though I have 12GB of VRAM, but that still may not be enough. You recommended we choose a machine with at least 24GB of VRAM on RunPod if we are using it.
I was following your tutorial for Kohya, but if I can't even get TensorRT to load, I highly doubt I can run that. The tutorial mentions it can run on 12GB of VRAM, but I'll probably struggle to get it to work.
I couldn't fix it. Whatever is wrong, it runs out of memory before it can finish building the engine. At this point I don't think installing it does much good.
Hi, I am a Patreon subscriber and have been learning how to train SDXL LoRAs using the video posted on YouTube. I have trained 4 LoRAs today. Two of them are fantastic and two are not.

The resulting images are high quality, but I noticed that with two of the LoRAs, the character likeness is only good if the prompt contains nothing beyond the instance and class prompt. If I add other words to the prompt, such as descriptions of clothing, hair, glasses, etc., the likeness drops considerably.

I've been using 30 or so images of the subject set to 20 repeats in Kohya, with 25 regularization images (using the images from the Patreon post) per training-image repeat, so in the case of my most recent training I used 700 regularization images, since I am repeating them just once. For epochs I have been setting a number that works out to between 6000 and 9000 total training steps.

I can provide sample images, but I have a feeling I'm missing something easy. Any ideas would be appreciated. I'm happy to post my preset .json for Kohya if it would help. Thanks!
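For what it's worth, here is a quick sanity check on the step arithmetic in the setup above. The doubling for regularization images reflects my understanding of how Kohya's sd-scripts counts steps when a reg folder is present, so treat that part as an assumption to verify:

```python
def total_steps(num_images, repeats, epochs, batch_size=1, use_reg=False):
    """Rough training-step estimate: images * repeats per epoch,
    doubled when regularization images are used (assumption about
    how Kohya pairs reg steps with train steps), divided by batch size."""
    per_epoch = num_images * repeats
    if use_reg:
        per_epoch *= 2  # assumed: each train step is paired with a reg step
    return per_epoch * epochs // batch_size

# 30 images x 20 repeats at batch size 1 is 600 steps per epoch without
# regularization, so 10 epochs lands at 6000 steps and 15 at 9000.
print(total_steps(30, 20, 10))              # 6000
print(total_steps(30, 20, 15))              # 9000
print(total_steps(30, 20, 5, use_reg=True)) # 6000
```

So with this dataset the 6000-9000 step target is roughly 10-15 epochs without the reg doubling, or about 5-7 epochs with it.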