In FLUX.1 LoRA training, when `--fp8_base` is specified, a FLUX.1 model file in fp8 (`float8_e4m3fn` type) can be loaded directly. Also, in `flux_minimal_inference.py`, the model can be loaded in fp8 by specifying `fp8` (`float8_e4m3fn`) for `--flux_dtype`.
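
As a rough illustration, the options might be combined as below. The checkpoint filename is a placeholder, and the remaining arguments (model paths, dataset config, network settings, and so on) are omitted; use your usual settings.

```
# Training: --fp8_base lets flux_train_network.py load an fp8 (float8_e4m3fn) FLUX.1 checkpoint directly
# (flux1-dev-fp8.safetensors is a placeholder filename)
accelerate launch flux_train_network.py \
  --pretrained_model_name_or_path flux1-dev-fp8.safetensors \
  --fp8_base \
  <other training arguments>

# Inference: load the FLUX.1 model weights as fp8 in flux_minimal_inference.py
python flux_minimal_inference.py \
  --flux_dtype fp8 \
  <other inference arguments>
```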
