Yes, I edited the toml file. Also, I am working off a 4090. It looks like it attempts to run, but I get the error:

File "D:\APPS\KohyaSD\sd-scripts-sd3\venv\lib\site-packages\torch\nn\modules\module.py", line 1362, in convert
    raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.

Traceback (most recent call last):
File "D:\PROG\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
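For what it's worth, here is my rough understanding of that PyTorch error as a minimal sketch outside of kohya (the layer here is just a placeholder, nothing from sd-scripts):

import torch
import torch.nn as nn

# Parameters created on the "meta" device are shape-only; they hold no actual data.
with torch.device("meta"):
    layer = nn.Linear(4, 4)

# layer.to("cuda")  # this is the call that fails: cannot copy out of a meta tensor
layer = layer.to_empty(device="cuda")  # allocates real (uninitialized) storage instead
# The real weights still have to be loaded afterwards, e.g. via load_state_dict().

If I'm reading it right, something is being moved off the meta device with .to() before its weights are actually loaded.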

And the TOML:

[general]
# Common settings for the dataset
flip_aug = true
color_aug = false
keep_tokens_separator = "|||"
caption_extension = ".txt"

[[datasets]]
# This is the dataset definition for training
batch_size = 2
enable_bucket = true
resolution = [1024, 1024]

[[datasets.subsets]]
image_dir = "D:/APPS/KohyaSD/TestIMG/10_C4TS cats"
num_repeats = 1

A suggestion from ChatGPT was to move to bf16 (on the two lines in the bat file). I changed that, and it looks like it attempts to load the models, but then I get another set of errors. It's like moving from one set of issues to another, and I'm afraid I'm just working in the wrong direction.
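In case it matters, this is roughly the shape of the change ChatGPT suggested, i.e. the two precision arguments in the launch command; the script name, path, and other arguments here are placeholders, not my actual bat file:

accelerate launch sd3_train.py ^
  --dataset_config "dataset.toml" ^
  --mixed_precision bf16 ^
  --save_precision bf16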