Hello everyone. I am Dr. Furkan Gözükara, PhD in Computer Engineering. SECourses is a YouTube channel dedicated to the following topics: Tech, AI, News, Science, Robotics, Singularity, ComfyUI, SwarmUI, ML, Artificial Intelligence, Humanoid Robots, Wan 2.2, FLUX, Krea, Qwen Image, VLMs, Stable Diffusion
The 48 GB 4090 is currently available to private consumers in China (you can simply buy one), with no restrictions (such as not being able to run it on Windows). The only downside is that, due to the fully modified turbo fan, the noise is similar to that of a helicopter...
There are so many; let me give some examples: detailer daemon, different t5xxl & clip text encoders, separate prompt and prompt style ... so many I don't even remember. Especially when I combine TeaCache with Sage Attention, I get nearly 3x render speed without losing any image quality. Here is my example:
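For anyone curious why TeaCache speeds things up, here is a minimal sketch of the general idea behind that kind of step caching: skip the expensive denoiser call when the model input has barely changed since the last full compute, and reuse the cached output. This is not the actual ComfyUI node code; the `model` callable, threshold, and names are purely illustrative.

```python
# Conceptual sketch of TeaCache-style step skipping (illustrative only,
# not the real ComfyUI/TeaCache implementation).
import numpy as np

def denoise_with_cache(model, latents, timesteps, rel_threshold=0.05):
    """model(latents, t) -> residual; skip the call when inputs barely change."""
    cached_input = None
    cached_residual = None
    accumulated_change = 0.0
    for t in timesteps:
        if cached_input is not None:
            # Relative L1 change of the input since the last full forward pass.
            change = np.abs(latents - cached_input).mean() / (np.abs(cached_input).mean() + 1e-8)
            accumulated_change += change
        if cached_residual is None or accumulated_change >= rel_threshold:
            cached_residual = model(latents, t)   # full (expensive) forward pass
            cached_input = latents.copy()
            accumulated_change = 0.0
        # Apply the (possibly cached) residual as this step's denoising update.
        latents = latents - cached_residual
    return latents
```

The speed/quality trade-off comes from the threshold: a larger value skips more steps but risks visible drift from the fully computed result.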
Flux tends to make images more polished, which people claim looks artificial. It depends on personal taste, I agree. But for prompt understanding, I'm never going back to SD 1.5 or even SDXL. Flux is far enough ahead in prompt understanding to forgive its weakness, which is styling an image.
No single config fits all cases; you always have to play around with the creativity and normalisation settings depending on the composition, the upscale factor, and how close you need the upscale to be to the original
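If you want to make that "play around" step less tedious, one option is a small sweep script that runs a grid of settings and saves the results for side-by-side comparison. This is only a hypothetical helper; `upscale()` stands in for whatever upscaler or workflow you actually use, and the parameter names are examples.

```python
# Hypothetical sweep helper: try a small grid of creativity/normalisation
# values and collect the outputs so you can compare them per image.
from itertools import product

def sweep_upscale(image_path, upscale, creativities=(0.2, 0.4, 0.6), norms=(0.5, 1.0)):
    results = {}
    for c, n in product(creativities, norms):
        # `upscale` is a placeholder callable for your own pipeline.
        results[(c, n)] = upscale(image_path, creativity=c, normalization=n)
    return results  # inspect side by side and pick the best per image
```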
There is a huge difference between a raw photo (digital or analog) and a stylized photo (with filters) of a person (a portrait). If you work with photography, your eye is trained on raw images and you notice when something is even slightly artificial. FLUX at its current stage struggles a lot with creating raw portraits. With an elaborate prompt you can get better results, but currently it is a struggle, whereas appropriately trained Stable Diffusion 1.5 or XL custom models do this with ease. It would be great if one day there is a more flexible FLUX version. That said, if you are looking to create studio-like portraits with retouched skin and filter effects, FLUX is very good. Just not for natural portraits, same as Midjourney btw.
@Furkan Gözükara SECourses @Timson I'm planning, as a next project, to train shoes, jewellery and other objects. Should I follow the same concept of diversity as I did with the logo?
You'll have to figure out the peculiarities of each class, but the starting point would be the same process. It all depends on how each of your base classes is trained into Flux
Do you see a real, non-placebo benefit in this? Do you use SwarmUI's ComfyUI workflow customisation for this? I did not follow up on this research avenue because BFL is not leveraging it for the Flux Pro API; it seems counter-intuitive that they would ignore it if it were a viable way to improve outputs at no additional cost
Hi, I'm trying to use Flux Fill in SwarmUI to replace an orange wristband on a subject in a photo that has some ISO noise. I'm using the basic generation flow and it looks perfect, except that it's too perfect and doesn't have any of the ISO noise of the original photo.
Is there a way I can get the generation to also add similar noise to the inpainted mask that I'm shrinking/growing? I'm at max creativity, since otherwise the orange wristband still shows. You can see it in this image clip
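One possible workaround (my own suggestion, not a built-in SwarmUI feature): post-process the result by blending synthetic grain back into the inpainted region using the same mask, so it matches the ISO noise of the rest of the photo. The file paths, function name, and noise strength below are just examples.

```python
# Blend Gaussian grain into the masked (inpainted) region only, so the
# patched area picks up noise similar to the rest of the photo.
import numpy as np
from PIL import Image

def add_grain_to_mask(inpainted_path, mask_path, out_path, sigma=6.0, seed=0):
    img = np.asarray(Image.open(inpainted_path).convert("RGB")).astype(np.float32)
    mask = np.asarray(Image.open(mask_path).convert("L")).astype(np.float32) / 255.0
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, sigma, img.shape).astype(np.float32)
    # Apply grain only where the mask is white (the inpainted area).
    noisy = img + grain * mask[..., None]
    Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save(out_path)

# Example usage (paths are placeholders):
# add_grain_to_mask("inpainted.png", "mask.png", "matched.png", sigma=6.0)
```

Start with a low sigma and increase it until the grain visually matches the original photo; a slight Gaussian blur of the mask edge before blending can also help hide the seam.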