```python
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
self.model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    torch_dtype=torch.float16,
    # _attn_implementation="flash_attention_2",
).to(self.device)
print("Time taken to load model: ", time.time() - to_time)
```
When the instance starts, the model begins downloading from Hugging Face, but this takes an awfully long time. So long, in fact, that the serverless handler never seems to start, and the process begins again on another instance, in a loop. My guess is that if the worker doesn't expose the handler function within some timeout, RunPod kills it.
The thing is, the model is "only" 35 GB. Downloading it to my laptop over my home connection takes only a few minutes.
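For reference, a back-of-the-envelope check of what download times imply about bandwidth (assuming decimal gigabytes; the function and the example rates are mine, not measured on RunPod):

```python
def expected_download_seconds(size_gb: float, bandwidth_mbps: float) -> float:
    """Time to download size_gb gigabytes at bandwidth_mbps megabits per second."""
    size_megabits = size_gb * 1000 * 8  # GB -> megabits (decimal units)
    return size_megabits / bandwidth_mbps

# 35 GB at 1000 Mbit/s -> 280 s, i.e. under five minutes
# 35 GB at 100 Mbit/s  -> 2800 s, i.e. roughly 47 minutes
```

So "a few minutes" is consistent with a ~1 Gbit/s link, while a startup that outlives the worker's grace period suggests an effective rate an order of magnitude lower.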
It seems, then, that the bandwidth allocated to serverless workers is too limited? I feel like this changed in the past couple of weeks; I never had issues with this before.