RunPod3mo ago
minu

Img2txt code works locally but not after deploying

I am using a model for image-to-text. I made its handler file and tested it locally. For testing I used a JSON file that just defines the input:
{ "input": { "image_path": "/content/1700052015451vm8aj9ac.png" } }
and I test it with !python -u rp_handler.py, which works. But after I made its Dockerfile, pushed it, and called it through the serverless API (tested from Postman), I get an "image not found" error. This is my handler script:
10 Replies
minu
minu3mo ago
import runpod
from PIL import Image
from transformers import AutoModel, AutoProcessor
import torch

# Load model and processor outside the handler for efficiency
model_name = "unum-cloud/uform-gen2-qwen-500m"  # Replace with your model name
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)

# Disable gradients and extra outputs for faster inference
model.eval()
model.config.output_hidden_states = False


def process_image(input):
    """Executes the image description logic with a hardcoded prompt."""
    image_path = input['image_path']

    # Hardcoded prompt
    prompt = "Image description"  # Replace with your desired prompt

    try:
        image = Image.open(image_path)
    except FileNotFoundError:
        return {"error": f"Image not found at {image_path}"}

    with torch.inference_mode():
        inputs = processor(text=[prompt], images=[image], return_tensors="pt")
        output = model.generate(
            **inputs,
            do_sample=False,
            use_cache=True,
            max_new_tokens=256,
            eos_token_id=151645,
            pad_token_id=processor.tokenizer.pad_token_id
        )

    prompt_len = inputs["input_ids"].shape[1]
    decoded_text = processor.batch_decode(output[:, prompt_len:])[0]
    return {"description": decoded_text}


# ---------------------------------------------------------------------------- #
# RunPod Handler                                                                #
# ---------------------------------------------------------------------------- #
def handler(event):
    """This is the handler function that will be called by RunPod serverless."""
    return process_image(event['input'])


if __name__ == '__main__':
    runpod.serverless.start({'handler': handler})
Madiator2011
Madiator20113mo ago
It’s because you’re telling the API to load an image that is located on the container’s storage
minu
minu3mo ago
oh, then how can I get the image when I test the API from Postman?
Madiator2011
Madiator20113mo ago
Upload it somewhere you can get a direct link to the image
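For example, a minimal sketch of how process_image could fetch the image from a public URL passed in the request body instead of a container path. The image_url input key and the requests dependency are assumptions, not part of the original handler:

# Sketch: fetch the image over HTTP instead of reading a local path.
# "image_url" is a hypothetical input key, not in the original worker.
import io
import requests
from PIL import Image

def load_image_from_url(image_url: str) -> Image.Image:
    # Download the image bytes and open them with PIL
    response = requests.get(image_url, timeout=30)
    response.raise_for_status()
    return Image.open(io.BytesIO(response.content)).convert("RGB")

# Inside process_image, replace Image.open(image_path) with:
# image = load_image_from_url(input["image_url"])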
minu
minu3mo ago
can you please see this? It can get an image from any URL (as long as the image is publicly accessible)
minu
minu3mo ago
I am testing on Colab, so I don't think I can use images from my local computer to test it out? But it is working for publicly accessible URLs / internet images
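One way around local files with no public URL is to send the image itself in the request payload. A rough sketch, assuming a hypothetical image_base64 input key (not something the thread's handler supports): the client base64-encodes the file into the JSON body, and the handler decodes it back into a PIL image:

# Client side (e.g. a Colab cell or Postman body builder):
import base64

with open("my_local_image.png", "rb") as f:  # illustrative filename
    payload = {"input": {"image_base64": base64.b64encode(f.read()).decode("utf-8")}}

# Handler side:
import io
from PIL import Image

def decode_image(job_input):
    # Decode the base64 string from the request back into image bytes
    image_bytes = base64.b64decode(job_input["image_base64"])
    return Image.open(io.BytesIO(image_bytes)).convert("RGB")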
Madiator2011
Madiator20113mo ago
It’s because your code is not fully implemented: you are missing the input schema, validation, and more
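As an illustration of the missing pieces, a hedged sketch of an input schema plus validation, assuming the runpod SDK's rp_validator helper; the image_url field and default value are illustrative, not from the original worker:

# Sketch: validate the incoming job input before running inference.
from runpod.serverless.utils.rp_validator import validate

INPUT_SCHEMA = {
    "image_url": {"type": str, "required": True},
    "max_new_tokens": {"type": int, "required": False, "default": 256},
}

def handler(event):
    validated = validate(event["input"], INPUT_SCHEMA)
    if "errors" in validated:
        # Return validation errors instead of crashing inside inference
        return {"error": validated["errors"]}
    return process_image(validated["validated_input"])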
minu
minu3mo ago
Oh okay, so you are saying that even on Colab it should work for images on my local computer, and I should change the code accordingly
Madiator2011
Madiator20113mo ago
I’m saying that it’s not a proper serverless worker
minu
minu3mo ago
Okay, let me check further. But still, big thanks, at least you gave me a start and a pathway to correct my mistake