RunPod3mo ago
abtx

How do I indicate job status in a handler?

For example in https://docs.runpod.io/serverless/workers/handlers/handler-async
import runpod
import asyncio


async def async_generator_handler(job):
    for i in range(5):
        # Generate an asynchronous output token
        output = f"Generated async token output {i}"
        yield output

        # Simulate an asynchronous task, such as processing time for a large language model
        await asyncio.sleep(1)


# Configure and start the RunPod serverless function
runpod.serverless.start(
    {
        "handler": async_generator_handler,  # Required: Specify the async handler
        "return_aggregate_stream": True,  # Optional: Aggregate results are accessible via /run endpoint
    }
)
Does the job status automatically become "COMPLETED" after async_generator_handler returns? In general, how do you update the status of a job with the RunPod Python SDK? What I'm trying to achieve is to use a single machine at a time for training purposes. I'm not sure a one-hour-long POST request is a good idea. How should this be done?
3 Replies
ichabodcole
ichabodcole3mo ago
I'm far from an expert here, so take this with bath salts, but it sounds like you don't need to stream the results back. Use a non-generator handler response instead: call the /run endpoint, get a job id back, then periodically check the job's status by polling the /status endpoint with that id. The status will eventually become COMPLETED, and the response will include the resulting data you need. You could also use a webhook for a more push-oriented style rather than polling, if you have some other service to push a result/notification to.
abtx
abtx3mo ago
So basically, after the async request is made and async_generator_handler returns, the status becomes "COMPLETED"? Also, a side question: any idea how to log? Print and Python logging output seem to get suppressed.
ashleyk
ashleyk3mo ago
It's normal for logging to be suppressed if your logs are too verbose. Try logging less.
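One thing worth checking on the logging question: buffered stdout can also make prints appear to vanish in containerized workers. A minimal sketch that routes standard Python logging straight to stdout (the handler function and logger name here are illustrative, not part of the RunPod SDK):

```python
# Minimal sketch: send unbuffered, timestamped logs to stdout so they
# appear in the worker's log stream. The "worker" logger name and the
# handler below are illustrative placeholders.
import logging
import sys

logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("worker")


def handler(job):
    # Log the job id on arrival, then echo the input back as the result
    log.info("Received job %s", job.get("id"))
    return {"echo": job.get("input")}
```

Running the container with `PYTHONUNBUFFERED=1` (or `python -u`) also helps ensure prints are flushed immediately rather than held in a buffer.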