RunPod Serverless - Testing an Endpoint Locally with Docker and GPU
I’m creating a custom container to run FLUX with a LoRA on RunPod, using this Stable Diffusion example as a starting point. I successfully deployed my first pod on RunPod, and everything worked fine.
However, my issue arises when I make code changes and want to test my endpoints locally before redeploying. Redeploying to RunPod for every small change is quite time-consuming.
I found a guide for local testing in the RunPod documentation here. Unfortunately, it only provides a simple example that runs the handler function directly, like this:

python your_handler.py --test_input '{"input": {"prompt": "The quick brown fox jumps"}}'

This doesn’t work for my case because it bypasses the Docker setup entirely and runs the handler in my local Python environment. I want to go beyond this and test the Docker image end-to-end locally, on my GPU, with the exact dependencies and setup used when deploying to RunPod.
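For reference, the workflow I’m hoping for looks roughly like this. This is only a sketch of what I have in mind, not a verified recipe: the image tag and handler path are placeholders, and I’m assuming the runpod Python SDK’s `--rp_serve_api` / `--rp_api_host` / `--rp_api_port` flags (which start a local FastAPI test server) and that the local server mirrors a `/runsync` route:

```shell
# Build the image with the same Dockerfile used for the RunPod deployment
# (the tag "flux-worker:dev" is a placeholder).
docker build -t flux-worker:dev .

# Run the image locally with GPU access, starting the SDK's local test
# server instead of the queue worker. Requires the NVIDIA Container Toolkit
# for --gpus to work. The handler path is a placeholder.
docker run --rm --gpus all -p 8000:8000 flux-worker:dev \
  python -u handler.py --rp_serve_api --rp_api_host 0.0.0.0 --rp_api_port 8000

# Then, from the host, send a request the same way the cloud API would
# (assuming the local server exposes a /runsync route).
curl -s -X POST http://localhost:8000/runsync \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "The quick brown fox jumps"}}'
```

Something like this would let me iterate on the exact image I deploy, but I haven’t found it spelled out in the docs.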
Is there specific documentation for testing Docker images locally for Runpods, or a recommended workflow for this kind of setup?