Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
That's why I want to reduce them. I already managed to send all the checkpoints to Hugging Face. 10x23GB, so when I decide which one I like... I'll download it to my local PC and change the size. I haven't watched the video about reducing size yet, so I hope it's possible on a local PC, or whether it must be done on Massed Compute.
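For reference, here is a minimal sketch of one way the size reduction can be done locally, assuming "reducing size" means casting the full-precision weights down to FP8 (this approach and the filenames are my assumptions, not necessarily the video's exact method; it needs torch >= 2.1 and a recent safetensors):

```python
# Hedged sketch: shrink a ~23GB BF16/FP16 FLUX checkpoint by casting
# floating-point weights to FP8. Filenames here are hypothetical.
import torch
from safetensors.torch import load_file, save_file

state = load_file("flux_finetune.safetensors")  # loads tensors on CPU
shrunk = {
    k: v.to(torch.float8_e4m3fn)
    if v.dtype in (torch.float16, torch.bfloat16)
    else v
    for k, v in state.items()
}
save_file(shrunk, "flux_finetune_fp8.safetensors")
```

Loading a 23GB checkpoint this way needs roughly that much free system RAM, so a local PC with enough memory should handle it without Massed Compute.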
OK, I think I need to cancel generating the GRID of Stylized since it still shows almost 2H left and I'm very sleepy, lol. If I copy the GRID folder from Massed Compute, can I continue generating this GRID tomorrow?
@Furkan Gözükara SECourses Could you check the v9 configs for 8GB again? With Torch 2.5 I get out-of-memory errors; with 2.4 it works but very slowly. It was working on an earlier version of your 8GB configs.
File "E:\StabilityMatrix-RAID\Kohya_Flux\kohya_ss\sd-scripts\library\flux_models.py", line 830, in _forward
    attn = attention(q, k, v, pe=pe, attn_mask=attn_mask)
File "E:\StabilityMatrix-RAID\Kohya_Flux\kohya_ss\sd-scripts\library\flux_models.py", line 449, in attention
    x = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.90 GiB. GPU 0 has a total capacity of 8.00 GiB of which 0 bytes is free. Of the allocated memory 8.05 GiB is allocated by PyTorch, and 2.07 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
steps: 0%|
I think it has to do with Torch 2.5? Does it work on Windows? Shared VRAM should be enabled; the system has 64GB of RAM, and Task Manager shows 40GB total GPU memory (8+32), with shared GPU memory at 32GB.
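One thing worth trying before downgrading Torch: the OOM message itself suggests the expandable-segments allocator option. A minimal sketch of setting it from Python before any CUDA work (equivalent to running `set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` in the same cmd window that launches Kohya); whether it actually rescues an 8GB card on Torch 2.5 is not guaranteed:

```python
# Sketch: enable the allocator option suggested by the OOM message.
# It must be set before the first CUDA allocation, so do it before
# importing torch / starting the training code.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch
assert torch.cuda.is_available()  # allocator now uses expandable segments
```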
With the SDPA setting..
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with
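For what it's worth, the failing call in both tracebacks is plain scaled_dot_product_attention. A small, self-contained sketch to check how much VRAM SDPA alone takes on the 8GB card; the shapes below are my guesses at FLUX-like attention sizes, not values from the configs:

```python
# Sketch: isolate the SDPA call from the traceback on an 8GB GPU.
# Shapes are hypothetical (batch, heads, tokens, head_dim).
import torch
import torch.nn.functional as F

q = torch.randn(1, 24, 4608, 128, device="cuda", dtype=torch.bfloat16)
k = torch.randn_like(q)
v = torch.randn_like(q)

with torch.no_grad():
    out = F.scaled_dot_product_attention(q, k, v)  # the call that OOMs in flux_models.py
print(out.shape, torch.cuda.max_memory_allocated() / 2**20, "MiB")
```

If even this small standalone test fails, the problem is environment-level (e.g., other processes holding VRAM) rather than the training config itself.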