Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I'm having essentially the same issue with LDSR as I was having with ControlNet. I did not see any outgoing request being blocked by my firewall, so I'm not sure how to fix it. This is a fresh Windows 10 install with a fresh Python 3.10.6 and a fresh Automatic1111 dev build, on an Aorus Master Z590 with a 4090, an 11700 @ 4.9GHz, 128GB 3200MHz RAM, and a 2TB Gen 4 NVMe drive.
Is Multi-frame Video rendering not working for other people? It seems I'm just not able to upload guide frames: I can select them, but I get no confirmation that the upload succeeded. I'm trying to follow this tutorial: https://www.youtube.com/watch?v=kmT-z2lqEPQ&ab_channel=SECourses. Reverting to a previous commit also doesn't work, because I'm using the fastBen Automatic1111 Google Colab and reverting breaks other components of the code.
Never mind, I'm able to upload; I just run into this error: PIL.UnidentifiedImageError: cannot identify image file '/tmp/1000bz1rz3p6.png'. Does this error make the tutorial impossible to follow? It seems like a crucial step in the video-creation pipeline is broken by this bug.
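For anyone hitting the same PIL.UnidentifiedImageError: that exception means Pillow couldn't recognize the file's header, usually because the upload produced an empty or truncated file. A quick way to check your guide frames locally before uploading is a small validation helper (a minimal sketch; the file path is a hypothetical example, not the one from the error above):

```python
from PIL import Image, UnidentifiedImageError

def is_valid_image(path: str) -> bool:
    """Return True if Pillow can parse the file's header, False otherwise."""
    try:
        with Image.open(path) as img:
            img.verify()  # checks integrity without decoding all pixel data
        return True
    except (UnidentifiedImageError, OSError):
        return False

# Example: is_valid_image("guide_frame_0001.png")
```

If this returns False for a frame, re-exporting that frame is likely the fix rather than anything in the tutorial itself.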
@Furkan Gözükara SECourses what is the best approach for Dreambooth training in terms of time and resource (RAM/VRAM) consumption on a single GPU (3060 12GB)? Are Kohya_ss and A1111 the most efficient for model training, or do libraries like ColossalAI (https://github.com/hpcaitech/ColossalAI#AIGC) and VoltaML (https://github.com/VoltaML/voltaML-fast-stable-diffusion) offer better training speed and more efficient memory utilization? I don't mind the OS, be it Windows or Linux.
I've just seen the Video to Anime tutorial and now I wonder: would it be possible/feasible to apply this to something other than a person? Specifically, I'm thinking about taking footage of me cooking (no person in frame, only hands/food/tools) and converting it to an anime style. If it's doable, would I have to redo the training for each new recipe/video, or could I train once on a few videos and then reuse that training for the rest?
It still works. Are you using the dev version of Automatic1111? Try the dev branch. I tested the video workflow on RunPod a few days ago for a Patreon subscriber and it worked very well.
After training, I convert the model to ckpt, but then I can't download it from Google Drive; I don't know what's happening, it keeps showing an error when the download is almost done. So I downloaded the raw weights instead. How do I convert them to ckpt?
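If the failing step is just the final packaging, it may help to know that a Stable Diffusion .ckpt file is a PyTorch pickle whose weights live under a "state_dict" key, so already-downloaded tensors can be repacked locally. This is only a minimal sketch of that container format: real diffusers-format weights also need their keys remapped to the original Stable Diffusion layout before A1111 can load them (the diffusers repo ships a conversion script for that), and the file names below are hypothetical examples.

```python
import torch

def pack_ckpt(state_dict: dict, out_path: str) -> None:
    """Wrap a flat tensor state_dict in the {'state_dict': ...} container
    used by original Stable Diffusion .ckpt files."""
    torch.save({"state_dict": state_dict}, out_path)

# Example: pack_ckpt(my_weights, "my_dreambooth_model.ckpt")
```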