"Failed to return job results. | 400, message='Bad Request', url=URL('https://api.runpod.ai/v2/gg3lo

```json
{
  "dt": "2024-02-19 02:45:23.347011",
  "endpointid": "gg3lo31p6vvlb0",
  "level": "error",
  "message": "Failed to return job results. | 400, message='Bad Request', url=URL('https://api.runpod.ai/v2/gg3lo31p6vvlb0/job-done/3plkb7uehbwit0/83aac4d7-36c5-45ce-8b43-8189a65a855f-u1?gpu=NVIDIA+L40&isStream=false')",
  "workerId": "3plkb7uehbwit0"
}
```
3 Replies
ashleyk · 4mo ago
This kind of thing usually happens when your worker throws an exception, or when you return an error and pass a dict as the error value instead of a string. The RunPod SDK used to support a dict, but the latest versions only support a string. It also tends to result in your job status showing COMPLETED without returning any output.
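For illustration, a minimal sketch of what that means in a handler, assuming the standard runpod-python serverless API (`do_work` is a placeholder for your own logic):
```python
# Minimal sketch: on failure, return the error as a plain string,
# not a dict, under the "error" key.
import runpod


def handler(job):
    try:
        return do_work(job["input"])  # do_work: placeholder for your logic
    except Exception as exc:
        # OK on current SDK versions: "error" maps to a string
        return {"error": str(exc)}
        # Problematic on current versions: {"error": {"type": ..., "detail": ...}}


runpod.serverless.start({"handler": handler})
```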
randhash · 2mo ago
> The RunPod SDK used to support a dict, but the latest versions only support a string.
Isn't the output serialized to JSON within the output handler here? https://github.com/runpod/runpod-python/blob/main/runpod/serverless/modules/rp_http.py#L45
I'm getting the same error even if I explicitly convert the output to a string.
After some more debugging, the error seems to be related to the size of the payload, but I can't find any information on a maximum size for streamed outputs in the documentation. Is there a fixed limit?
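One quick way to test the size hypothesis is to log the JSON-serialized size of the output just before the handler returns it. A rough sketch; `log_output_size` is illustrative and only approximates the serialization step linked above:
```python
# Rough check for the payload-size hypothesis: if the 400 only shows up
# past a certain serialized size, a request-body limit is the likely cause.
import json


def log_output_size(output):
    body = json.dumps(output, default=str)  # approximates the SDK's serialization
    print(f"serialized output size: {len(body.encode('utf-8'))} bytes")
    return output
```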
nerdylive · 2mo ago
I think it's the same as the limits listed here: https://docs.runpod.io/serverless/references/operations. Also, are there any other details about what you are running, such as the model, its size, or other error logs? There's also a possibility that the model exceeds the available VRAM, causing it to fail.
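If the output really is over a payload limit, one common workaround is to upload the result to object storage and return a URL instead of the raw bytes. This is a sketch, not RunPod-specific API; the bucket name and key scheme here are assumptions:
```python
# Sketch: store a large result in S3 and return a short-lived link,
# keeping the job's returned payload small.
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "my-results-bucket"  # assumed bucket name


def return_via_url(data: bytes) -> dict:
    key = f"results/{uuid.uuid4()}.bin"  # assumed key scheme
    s3.put_object(Bucket=BUCKET, Key=key, Body=data)
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=3600,  # link valid for one hour
    )
    return {"result_url": url}
```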