Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Our Discord: https://discord.gg/HbqgGaZVmr. This is the video where you will learn how to use Google Colab for Stable Diffusion. If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on https://www.patreon.com/SECourses
Playlist of Stable Diffusion Tutorials, #Automatic1111...
Grand Master tutorial for Textual Inversion / Text Embeddings.
Playlist of Stable Diffusion Tutorials, Automatic1111 and Google Colab Guide...
Also, with LoRA my face does not look nearly as good as in the images you generate. I will now try regular DreamBooth without LoRA and then try the X/Y/Z checkpoint picking.
Hi, I'm having issues setting up xFormers with the A5000 GPU in SD DreamBooth.
Whenever I click Generate Class Images I get:
Exception generating concepts: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(1, 2, 1, 40) (torch.float32)
    key : shape=(1, 2, 1, 40) (torch.float32)
    value : shape=(1, 2, 1, 40) (torch.float32)
    attn_bias :
    p : 0.0
flshattF is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    triton is not available
    requires A100 GPU
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
smallkF is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 40
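For reference, this error usually means the installed xFormers wheel was not built against the CUDA/PyTorch combination in the environment, so none of the memory-efficient attention kernels can be dispatched (the fp32 inputs additionally rule out the fp16/bf16-only flash kernels). Below is a minimal diagnostic sketch, not the tutorial's official fix: it assumes a CUDA-enabled PyTorch install and should be run inside the same virtual environment that DreamBooth uses. The exact xFormers wheel to install depends on your torch and CUDA versions, so treat the reinstall hint in the comments as a starting point.

    # Minimal smoke test: can the installed xFormers build actually run
    # memory-efficient attention on this GPU? (Assumes CUDA-enabled PyTorch.)
    import torch
    import xformers
    import xformers.ops as xops

    print("torch:", torch.__version__, "CUDA:", torch.version.cuda,
          "cuda available:", torch.cuda.is_available())
    print("xformers:", xformers.__version__)

    # The error above shows fp32 inputs; the flash/triton kernels only accept
    # fp16/bf16, so test with half precision on the GPU, using the same
    # (batch, seq, heads, dim_per_head) shape reported in the traceback.
    q = torch.randn(1, 2, 1, 40, dtype=torch.float16, device="cuda")
    out = xops.memory_efficient_attention(q, q, q)  # query, key, value
    print("memory_efficient_attention OK, output shape:", tuple(out.shape))

    # If this raises the same "No operator found" error, the usual fix is to
    # reinstall an xFormers wheel compiled against your torch/CUDA versions,
    # e.g. `pip install -U xformers` in the same environment.

If the smoke test fails the same way, running python -m xformers.info from that environment lists which kernels the installed build enables; a build that reports no CUDA kernels needs to be replaced with one matching the CUDA version of your torch install.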