Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I've just seen the Video to Anime tutorial and now I'm wondering: would it be possible/feasible to apply this to something other than a person? Specifically, I'm thinking of taking footage of me cooking (no person in frame, only hands/food/tools) and changing its style to anime. If it's doable, would I have to redo the training for each new recipe/video, or could I train once on a few videos and then reuse that training for more?
It still works. Are you using the dev version of Automatic1111? Try the dev branch (`git checkout dev`). I tested the video workflow on RunPod a few days ago for a Patreon subscriber, and it worked very well.
After training, I convert the model to .ckpt, but then I can't download it from Google Drive. I don't know what's happening; it keeps showing an error when the download is almost done. So instead I downloaded the raw weights. How do I convert them to .ckpt?
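Not the exact script from any particular tutorial, but a minimal sketch of one common conversion path, assuming the downloaded weights are a single safetensors state dict (the file names here are hypothetical):

```python
# Minimal sketch: wrap a downloaded safetensors state dict in the .ckpt
# layout that most Stable Diffusion UIs accept.
# "model.safetensors" and "model.ckpt" are hypothetical file names.
import torch
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")           # read tensors from the safetensors file
torch.save({"state_dict": state_dict}, "model.ckpt")  # save under the "state_dict" key as .ckpt
```

If the training instead produced a diffusers-format folder (model_index.json plus unet/vae/text_encoder subfolders), the diffusers repository ships a dedicated script, scripts/convert_diffusers_to_original_stable_diffusion.py, which takes --model_path and --checkpoint_path arguments and handles the key remapping for you.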
Nvidia has announced HUGE news: a 2x speed improvement for Stable Diffusion and more with the latest driver. Using it is a little more complicated, but the speed boost is there! Exciting things are coming in the future of AI. This video covers installing and using the new ONNX/Olive models and converter, as well as converting models and generating images.
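The video uses Nvidia's Olive-based converter; as a hedged sketch of a related path (not the Olive workflow itself), here is the generic ONNX Runtime export that Hugging Face Optimum provides, with a placeholder model ID and prompt:

```python
# Sketch: run Stable Diffusion through ONNX Runtime via Hugging Face Optimum.
# This is NOT Nvidia's Olive pipeline from the video, just the generic
# ONNX export path. Install with: pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder model ID
pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)  # export to ONNX on load
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```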
Hi everyone! I wanted to share a demo of running A1111 with a fal-serverless backend. The idea is that you run the UI locally while the GPU parts happen in the cloud, so instead of keeping a machine on all the time, you only have one running while you are generating images, which should save some money. Here is a video of it in action (sketch of the pattern below): https://www.youtube.com/watch?v=EIxF7TKk6wg
I am looking for feedback. Would you want to use this?
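For anyone curious about the shape of this split, here is a rough, hypothetical sketch of the pattern; the endpoint URL and payload fields are invented for illustration and are not fal-serverless's actual API. The UI stays local and only the generation call crosses the network.

```python
# Hypothetical sketch of the local-UI / cloud-GPU split described above.
# The endpoint URL and JSON fields are made up; they are not fal's real API.
import base64
import requests

ENDPOINT = "https://example-gpu-backend.invalid/txt2img"  # hypothetical cloud endpoint

def generate(prompt: str, steps: int = 20) -> bytes:
    """Send the GPU-heavy work to the cloud; everything else stays local."""
    resp = requests.post(ENDPOINT, json={"prompt": prompt, "steps": steps}, timeout=300)
    resp.raise_for_status()
    return base64.b64decode(resp.json()["image_base64"])  # decode the returned PNG

if __name__ == "__main__":
    png = generate("an anime-style kitchen scene")
    with open("out.png", "wb") as f:
        f.write(png)
```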