Remote bindings seem like a much simpler answer than trying to accurately simulate everything locally, or forcing developers to choose between speed and fidelity.
The selective opt-in model stands out as especially clever.
Being able to say "I want my D1 database calls to hit the real thing, but I'll keep my KV store mocked locally" is a real quality-of-life win.
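If I understand the docs correctly, that mix is expressed per binding in the wrangler config. A rough sketch of what I mean (the `experimental_remote` flag name, and all the names and IDs here, are from memory/hypothetical and may not match the current release):

```jsonc
{
  "name": "my-worker", // hypothetical Worker name
  "d1_databases": [
    {
      "binding": "DB",
      "database_name": "example-db", // hypothetical
      "database_id": "<database-uuid>",
      "experimental_remote": true // D1 calls from local dev hit the real remote database
    }
  ],
  "kv_namespaces": [
    {
      "binding": "KV",
      "id": "<namespace-id>"
      // no remote flag: KV stays simulated locally
    }
  ]
}
```

The nice part is that the Worker code itself doesn't change at all; the config alone decides which bindings are real.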
And because local development still runs inside the actual workerd runtime, the execution environment matches real Workers behavior, which is ideal.
I'm curious about a few things:
Do you have numbers on how much latency overhead remote bindings add? If my local Worker calls a remote D1 database, is it noticeably slower than running in full `--remote` mode?
How are credentials for remote resources handled during local development? Is it the same auth flow as `wrangler dev --remote`?
When developing against remote resources, are there recommended patterns for avoiding contamination of production data, e.g. snapshotting and restoring state?
Overall, this feels like the right direction. Rather than treating local vs. remote as the central dichotomy, it seems better to flip the question and ask which components need to be real and which can be simulated for this particular session.
Since launch, have you seen good adoption of remote bindings?