Hello everyone. I am Dr. Furkan Gözükara, PhD Computer Engineer. SECourses is a dedicated YouTube channel for the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
I solved the issue. Installing into a venv works, but it's not a permanent solution; having the Python path set up correctly in the system is, I think, a more permanent fix.
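Not from the video itself, just a minimal sketch of the "create a venv for any Python installation" idea, assuming a standard CPython install; the paths and script name below are examples:

# Minimal sketch: creating a venv tied to one specific Python installation.
# Run this script with the exact interpreter you want the venv to use, e.g.
#   C:\Python310\python.exe make_venv.py
# The path above and the script name are examples, not from the video.
import sys
import venv

print("Building venv with:", sys.executable)   # shows which Python launched the script
venv.EnvBuilder(with_pip=True).create("venv")  # creates ./venv bound to that interpreter

The same thing from a terminal is just C:\Python310\python.exe -m venv venv; once you activate it with venv\Scripts\activate, python and pip point at that environment regardless of the system-wide default.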
How to install Python, have multiple Python installations, and set the system-wide default Python version. How to create a venv for any Python installation, change the default Python path, and install the SD web UI properly. Discord: https://bit.ly/SECoursesDiscord.
If I have been of assistance to you and you would like to show your support for my work, please...
Hi, so I followed some of the tutorials and managed to train Stable Diffusion on myself, and got some safetensors from it to use. However, I'm wondering if I can somehow integrate this model with some Civitai anime model to create anime versions of myself using that anime model? Or is providing my own style images the only way to do so? Sorry, kinda new to this whole Stable Diffusion thing.
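One common answer (not from this thread, just a sketch) is a weighted checkpoint merge between your DreamBooth model and the anime model; AUTOMATIC1111 also has a Checkpoint Merger tab that does the same thing in the UI. The sketch below assumes both files are SD 1.5 checkpoints in .safetensors format; the file names and the 0.5 ratio are placeholders.

# Weighted merge of two Stable Diffusion checkpoints (hypothetical file names).
from safetensors.torch import load_file, save_file

alpha = 0.5  # 1.0 = only your DreamBooth model, 0.0 = only the anime model
personal = load_file("my_dreambooth_model.safetensors")
anime = load_file("anime_base_model.safetensors")

merged = {}
for key, tensor in anime.items():
    if key in personal and personal[key].shape == tensor.shape:
        # Linear interpolation of matching weights, cast back to the original dtype
        merged[key] = (alpha * personal[key].float() + (1 - alpha) * tensor.float()).to(tensor.dtype)
    else:
        merged[key] = tensor

save_file(merged, "merged_model.safetensors")

Another route is to train a LoRA of your face instead of a full checkpoint and simply load it on top of the anime model at generation time.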
From the NVIDIA documentation: "For best performance, the recommended configuration is cuDNN 8.9.0 on H100 with CUDA 12.0, and cuDNN 8.9.0 on all other GPUs with CUDA 11.8, because this is the configuration that was used for tuning heuristics." So it seems CUDA 11.8 is best for now unless you have an H100.
These support matrices provide a look into the supported versions of the OS, NVIDIA CUDA, the CUDA driver, and the hardware for the NVIDIA cuDNN 8.9.0 release.
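As a quick sanity check (not from the thread, just a sketch assuming PyTorch is installed with CUDA support), you can print which CUDA and cuDNN builds your PyTorch install actually uses:

# Print the CUDA / cuDNN versions PyTorch was built against.
import torch

print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)               # e.g. "11.8"
print("cuDNN build:", torch.backends.cudnn.version())  # e.g. 8902 -> cuDNN 8.9.2
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))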
I wanted a trained file to play around with and see its photorealism before deep diving into training it myself. Do you have any trained models? I can't see any links to trained models for your method.
I'm just trying to prompt-engineer and see what a model trained with your method can do. If you can't share your face, that's fine; perhaps you have done another face?
Hi, does anyone have a workflow suggestion for replacing someone's face in a video? I couldn't get whole videos to look good, so now I crop the video to a 512x512 square with the face mostly centered and then put it back into the full, uncropped video.
This is my process:
- set the full uncropped video to 24 fps in After Effects
- face-track the video in AE and crop it to 512x512, face as big as possible
- extract all frames to PNGs
- remove every other frame (fewer images to process; will interpolate later)
- batch-replace the face with an extension in img2img to give me a jump start
- img2img that batch with the HED ControlNet, original frames in the CN
- import that batch as a PNG sequence in AE at 12 fps
- export the new 512x512 video and run it through Flowframes to interpolate/increase fps
- import the Flowframes video into AE and place it on the face in the uncropped original video
- make a simple mask to hide the edge of the blurred video
- make another layer of the video on top, 50% opacity, delayed 1 frame
Any suggestions? I'm using Automatic1111, if that matters.
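Not part of the commenter's After Effects workflow, but the frame extraction and every-other-frame decimation steps can also be scripted. A minimal sketch, assuming ffmpeg is installed and on PATH; the file names are placeholders:

# Extract all frames of the 512x512 crop, then keep every other frame (12 fps effective).
import subprocess
from pathlib import Path

src = "face_crop_512.mp4"        # the cropped 512x512 clip (placeholder name)
frames_dir = Path("frames")
frames_dir.mkdir(exist_ok=True)

# Dump frames as PNGs at 24 fps
subprocess.run(
    ["ffmpeg", "-i", src, "-vf", "fps=24", str(frames_dir / "%05d.png")],
    check=True,
)

# Delete every second frame; Flowframes interpolates them back later
for i, frame in enumerate(sorted(frames_dir.glob("*.png"))):
    if i % 2 == 1:
        frame.unlink()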