I am trying to send video frames to RunPod for inference. I am currently using serverless endpoints (but open to warm or 24/7 containers as well!). Basically, in OpenCV you grab frames inside the video capture loop, and I want to send each of those frames to RunPod for inference.
I am wondering if this is possible. In my test.json I have an example of the image payload (the full base64-encoded file). I tried invoking the serverless endpoint with two different image_path inputs: first a short made-up example base64 string, and second the full base64-encoded image. Both requests failed.
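In case it helps frame the question, here is a minimal sketch of the pattern I mean, assuming the handler reads a base64 string out of `event["input"]`. The key name `"image"`, the `<ENDPOINT_ID>`/`<API_KEY>` placeholders, and the choice of `/runsync` are all assumptions, not confirmed RunPod handler details:

```python
import base64

def frame_to_payload(jpeg_bytes: bytes) -> dict:
    """Wrap JPEG-encoded frame bytes as a base64 string inside the
    {"input": {...}} envelope that RunPod serverless requests use.
    NOTE: the key name "image" is an assumption -- it must match
    whatever key your handler reads from event["input"]."""
    b64 = base64.b64encode(jpeg_bytes).decode("utf-8")
    return {"input": {"image": b64}}

# Inside the OpenCV capture loop it might look like this
# (sketch only; requires cv2 and requests, and a handler that
# base64-decodes the string on the worker side):
#
#   import cv2, requests
#   cap = cv2.VideoCapture("video.mp4")
#   while True:
#       ok, frame = cap.read()
#       if not ok:
#           break
#       ok, buf = cv2.imencode(".jpg", frame)  # compress before sending
#       payload = frame_to_payload(buf.tobytes())
#       resp = requests.post(
#           "https://api.runpod.ai/v2/<ENDPOINT_ID>/runsync",  # sync variant
#           headers={"Authorization": "Bearer <API_KEY>"},
#           json=payload,
#           timeout=60,
#       )
#       result = resp.json()
```

The point being: send the base64 string itself under a key the handler expects, not a filesystem path, since the serverless worker has no access to the client's disk. Per-frame HTTP round-trips add latency, which is part of why I'm open to a warm/always-on container instead.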