I thought you did without captions on these newer LoRA trainings.

Process Process-2:
Traceback (most recent call last):
File "E:\Joy_Caption_v12\venv\lib\site-packages\transformers\feature_extraction_utils.py", line 183, in convert_to_tensors
tensor = as_tensor(value)
File "E:\Joy_Caption_v12\venv\lib\site-packages\transformers\feature_extraction_utils.py", line 142, in as_tensor
return torch.tensor(value)
RuntimeError: Could not infer dtype of numpy.float32During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python310\lib\multiprocessing\process.py", line 314, in _bootstrap
self.run()
File "C:\Program Files\Python310\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "E:\Joy_Caption_v12\app.py", line 335, in process_gpu_batch
images = clip_processor(images=batch_images, return_tensors='pt', padding=True).pixel_values.to(device)
File "E:\Joy_Caption_v12\venv\lib\site-packages\transformers\models\siglip\processing_siglip.py", line 113, in __call__
image_features = self.image_processor(images, return_tensors=return_tensors)
File "E:\Joy_Caption_v12\venv\lib\site-packages\transformers\image_processing_utils.py", line 41, in __call__
return self.preprocess(images, **kwargs)
File "E:\Joy_Caption_v12\venv\lib\site-packages\transformers\models\siglip\image_processing_siglip.py", line 259, in preprocess
return BatchFeature(data=data, tensor_type=return_tensors)
File "E:\Joy_Caption_v12\venv\lib\site-packages\transformers\feature_extraction_utils.py", line 79, in __init__
self.convert_to_tensors(tensor_type=tensor_type)
File "E:\Joy_Caption_v12\venv\lib\site-packages\transformers\feature_extraction_utils.py", line 189, in convert_to_tensors
raise ValueError(
ValueError: Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length.
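
For what it's worth, the ValueError at the bottom is just transformers re-wrapping the earlier exception: the real failure is the RuntimeError higher up, where torch.tensor() can't convert the NumPy data the SigLIP image processor produced, so the "activate padding" hint is probably a red herring. Two usual suspects are a torch/NumPy version mismatch (for example NumPy 2.x with an older torch build) or one odd image in the batch. Below is a rough diagnostic sketch, not a fix: it assumes a transformers install with SigLIP support, and the test_images folder and the google/siglip-so400m-patch14-384 checkpoint are placeholders, not necessarily what app.py actually loads.

from pathlib import Path

import numpy as np
import torch
from PIL import Image
from transformers import SiglipImageProcessor

print("torch:", torch.__version__, "| numpy:", np.__version__)

# Exercise the same torch.tensor(...) conversion path that fails inside
# feature_extraction_utils.py. On some torch/NumPy pairings this raises
# exactly "Could not infer dtype of numpy.float32".
try:
    torch.tensor(np.float32(1.0))                 # scalar, the dtype the error names
    torch.tensor(np.zeros((2, 2), np.float32))    # array, what the processor hands over
    print("torch <-> NumPy conversion: OK")
except RuntimeError as err:
    print("torch <-> NumPy conversion is broken:", err)

# Checkpoint is an assumption -- swap in whatever SigLIP model app.py loads.
processor = SiglipImageProcessor.from_pretrained("google/siglip-so400m-patch14-384")

# Run the images through one at a time so a single bad file (truncated,
# palette/CMYK mode, zero size, ...) names itself instead of killing the batch.
image_dir = Path("test_images")  # placeholder folder with a few of your dataset images
for path in sorted(image_dir.glob("*")):
    try:
        img = Image.open(path).convert("RGB")
        pixel_values = processor(img, return_tensors="pt").pixel_values
        print(path.name, tuple(pixel_values.shape))
    except (OSError, ValueError, RuntimeError) as err:
        print("failed on", path.name, "->", err)

If the conversion check at the top fails, pinning NumPy below 2.0 (or updating torch) inside the Joy Caption venv is the usual way out; if only specific files fail, re-saving or removing them from the batch should get process_gpu_batch past this point.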