Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Does anyone know why the output of my AnimateDiff looks so bad when using ControlNet lineart together with depth? I'm trying to get a cartoon Pixar art style, and it's coming out with random flickering colors.
In Automatic1111: after disabling sd-webui-controlnet and sd-webui-animatediff, I no longer see the AnimateDiff tab at the bottom. I installed the two other extensions, sd-webui-controlnet-animatediff and sd-webui-animatediff-for-ControlNet. Why doesn't the AnimateDiff tab show up?
@Dr. Furkan Gözükara Downloading JuggernautXL to the workspace to make a LoRA on Kaggle worked. However, I also learned that JuggernautXL seems to respond better to a person LoRA trained on base SDXL than to one trained on JuggernautXL itself.
I am using Kaggle, and I notice the default model path is stabilityai/stable-diffusion-xl-base-1.0, but I don't really know where this stabilityai folder is. If I use the Hugging Face API to bring a model from Hugging Face into Kaggle, should I save it under the /kaggle/input/ folder and use that as the 'Pretrained model name or path' in the Kohya GUI?
I have my model on Hugging Face and am wondering how I can get it into Kaggle. The info I have is that I need to download it from Hugging Face into Kaggle as a dataset. Or is there a better way to do this?
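For the two Kaggle questions above, one common pattern is to download the checkpoint from the Hugging Face Hub into Kaggle's writable directory and point Kohya GUI's "Pretrained model name or path" field at the resulting file. A minimal sketch, assuming the `huggingface_hub` package is available in the notebook and using example repo and file names (`/kaggle/working` is writable, unlike the read-only `/kaggle/input`):

```python
import os

def kaggle_model_path(filename, base_dir="/kaggle/working/models"):
    """Return the local path to give Kohya GUI as the pretrained model path."""
    return os.path.join(base_dir, filename)

def download_model(repo_id, filename, base_dir="/kaggle/working/models"):
    """Fetch a single checkpoint file from the Hugging Face Hub into base_dir.

    Requires `pip install huggingface_hub` and internet access enabled
    in the Kaggle notebook settings.
    """
    from huggingface_hub import hf_hub_download
    os.makedirs(base_dir, exist_ok=True)
    return hf_hub_download(repo_id=repo_id, filename=filename,
                           local_dir=base_dir)

# Example usage on Kaggle (repo and filename are assumptions, not verified):
# path = download_model("stabilityai/stable-diffusion-xl-base-1.0",
#                       "sd_xl_base_1.0.safetensors")
# Then paste `path` into Kohya GUI's "Pretrained model name or path" field.
```

This avoids attaching the model as a Kaggle dataset, though the dataset route also works and survives notebook restarts; files under /kaggle/working are cleared between sessions.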