Hello everyone. I am Dr. Furkan Gözükara, a PhD Computer Engineer. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
That's pretty standard. It has everything to do with whether the AI was trained on images that were all the same aspect ratio or on images of many different aspect ratios. If they were all the same AR, then that's what the model knows how to draw, and things get weird when it tries to draw some other aspect ratio.
You see that a lot with Stable Diffusion 1.5, which was trained on 512x512 images. Ask for an aspect ratio other than 1:1 and it effectively draws multiple 1:1 compositions and concatenates them, resulting in heads stacked on top of heads in a 9:16 image, or other strangeness.
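To make that concrete, here is a minimal sketch using the diffusers library that generates the same prompt at the native 512x512 resolution and at a tall 512x912 frame. The model id, prompt, and output paths are just example assumptions, not anything from the video; the point is only that the far-from-native resolution is where the duplicated-subject artifacts tend to show up.
```python
# Minimal sketch, assuming diffusers + torch are installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # example SD 1.5 mirror; your checkpoint may differ
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait photo of a woman, studio lighting"  # hypothetical prompt

# Native training resolution: composition usually stays coherent.
native = pipe(prompt, width=512, height=512).images[0]

# Tall ~9:16 frame far from the training resolution: SD 1.5 tends to
# repeat its 1:1 composition vertically, producing duplicated heads/limbs.
tall = pipe(prompt, width=512, height=912).images[0]

native.save("sd15_512x512.png")
tall.save("sd15_512x912.png")
```
The usual workaround is to generate near the native resolution and then upscale, or to use a model trained with aspect-ratio bucketing so non-square frames are actually in its training distribution.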
@Dr. Furkan Gözükara I trained my own FLUX LoRA with the FLUX FP8 model, and I want to generate images with my LoRA for free on Kaggle. Before I subscribe to your Patreon, I wanted to ask whether your Patreon post "Free Kaggle Account Notebook for SwarmUI with Stable Diffusion 3 and Dual T4 GPU support" can use FP8 LoRAs. I previously tried Forge, but the resulting images were blurry and pixelated.
Meta Movie Gen is our latest research breakthrough that allows you to use simple text inputs to create videos and sounds, edit existing videos or transform your personal image into a unique video.
Movie Gen sets a new standard for immersive AI content. Meta announced Meta Movie Gen just today, and it is going to change the cinema, video, and animation industries forever. This will shock workers across those industries.
@Dr. Furkan Gözükara Have you had any success with the batch 7 training? If the config is ready but there's no video yet, would it be possible to receive the config file to try?
Just a quick question on SUPIR upscaling: is there a collection of presets that match up with some of the tests you ran? I'm not seeing exactly how to configure it to match the grid tests run on the SECourses channel. Is there a go-to configuration I can start with and experiment from there?
I'm mostly interested in upscaling images generated in FLUX, typically 2x to 4x in resolution.