Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a dedicated YouTube channel covering the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
Welcome to the Pillars of AI Application Libraries: How to Install Tutorial! Are you ready to embark on a journey of installing AI libraries and applications with ease? In this video, we'll guide you through the process of installing Python, Git, Visual Studio C++ Compile tools, and FFmpeg on your Windows 10 machine. We'll also show you ho...
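If you want a quick way to confirm the installs succeeded, here is a minimal sketch (my own, not from the video) that checks each tool is reachable on PATH and prints its version:

```python
# Minimal sketch: verify Python, Git, FFmpeg, and the MSVC C++ compiler
# are on PATH after installation. Assumes standard tool names; cl.exe is
# normally only on PATH inside a Developer Command Prompt.
import shutil
import subprocess

checks = [("python", "--version"),
          ("git", "--version"),
          ("ffmpeg", "-version"),
          ("cl", None)]  # cl.exe prints usage to stderr, so just locate it

for tool, version_flag in checks:
    path = shutil.which(tool)
    if path is None:
        print(f"{tool}: NOT FOUND on PATH")
    elif version_flag:
        out = subprocess.run([tool, version_flag],
                             capture_output=True, text=True)
        print(f"{tool}: {out.stdout.splitlines()[0]}")
    else:
        print(f"{tool}: found at {path}")
```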
This is my first illustration character in SDXL. It's a lot easier than training a real person. It's also a lot easier to put into context, even though all the training images had a white background.
Here's my config for the variables (normally set in the cropper_v7.py script). I am getting the full output in the different aspect-ratio folders, but the images are not actually cropped...
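For reference, this is the kind of thing I mean by those variables (names here are hypothetical, not necessarily what cropper_v7.py actually uses):

```python
# Hypothetical illustration only; variable names are mine, not
# necessarily those in cropper_v7.py.
INPUT_DIR     = "raw_images"
OUTPUT_ROOT   = "cropped"        # one subfolder per aspect ratio, e.g. 1_1, 2_3
ASPECT_RATIOS = ["1:1", "2:3", "3:2"]
FACE_MARGIN   = 0.4              # padding kept around the detected face box
FORCE_CROP    = True             # crop even if the source already fits the target AR
```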
OK, well, I can see some minor cropping; I guess it's just the model trying to preserve areas around the face, like the neck. So I suppose I still have to do some manual cropping at the Topaz upscaling stage or with Birme, unless there is a threshold slider to force the model to crop tighter.
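I haven't found such a slider in the script, but conceptually a margin factor around the detected face box would do it. A minimal sketch, assuming OpenCV's Haar face detector rather than whatever detector the script actually uses:

```python
# Sketch of a tunable face crop: margin=0.0 crops the bare face box,
# larger values keep more neck/background around it.
import cv2

def crop_face(image_path, out_path, margin=0.4):
    img = cv2.imread(image_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return False            # no face found, leave the image alone
    x, y, w, h = faces[0]       # use the first detected face
    pad_w, pad_h = int(w * margin), int(h * margin)
    ih, iw = img.shape[:2]
    # Clamp the padded box to the image bounds before slicing
    x0, y0 = max(0, x - pad_w), max(0, y - pad_h)
    x1, y1 = min(iw, x + w + pad_w), min(ih, y + h + pad_h)
    cv2.imwrite(out_path, img[y0:y1, x0:x1])
    return True
```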
https://www.patreon.com/posts/89213064 Am I correct in thinking we generally don't need to amend any settings not provided in the JSON files here? For example, I noticed that with /workspace/24GB_TextEncoder.json I get 8 epochs with 40 repeats. I don't need to change anything just because I have more training images, right? Say 50 or 100? The training step count still seems to work out according to the formula (about 35k) when I print the training command.
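For what it's worth, here is the arithmetic I'm assuming (the common epochs × repeats × images formula; I'm not certain this trainer counts steps exactly this way):

```python
# Sanity check of the step formula I'm assuming:
#   total steps = epochs * repeats * num_images / batch_size
epochs, repeats, batch_size = 8, 40, 1
for num_images in (50, 100):
    steps = epochs * repeats * num_images // batch_size
    print(f"{num_images} images -> {steps} steps")
# 50 images  -> 16000 steps
# 100 images -> 32000 steps  (so ~35k lines up with roughly 100+ images)
```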