Local development experience - Are there plans to improve the local dev server's parity with production? I've seen users hit edge cases where behavior differs between wrangler dev and deployed Workers.
TypeScript and type safety - Any roadmap for better type generation for bindings (KV, Durable Objects, R2, etc.)? The DX could be smoother when types are automatically inferred from wrangler.toml.
Regarding the local development experience, we are definitely committed to getting it as close as possible to production, and I actually think we are quite close. I would be interested to hear about the edge cases you've seen.
Are they known issues / are there issues in workers-sdk for those?
Regarding TypeScript and type safety.... we do have
wrangler types. It is not perfect, but it shouldn't be too bad, are you experiencing specific issues there?
In any case we do have some small items that we want to get to there, but nothing major as far as I remember 🤔
This is a pretty big edge case I've been struggling with:
https://github.com/cloudflare/workers-sdk/issues/9795
(original here: https://github.com/cloudflare/workers-sdk/issues/9929)
To get around this, I detect local dev and have a proxy send the request directly to the consumer's localhost binding instead of going over RPC, same payload / API, just a separate way to send from any separate wrangler producer (or vite) to the central consumer. The fetch handler on the consumer then re-pushes the same call to its own RPC for proper queue consumption.
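For illustration, here is a rough sketch of that workaround; the binding names, the /enqueue route, the port, and the IS_LOCAL var are all made up, and the real setup may differ:

```ts
// --- producer worker (sketch) ---
interface ProducerEnv {
  MY_QUEUE: Queue;   // queue producer binding, used in production
  IS_LOCAL?: string; // assumed var set only in local dev
}

export default {
  async fetch(request: Request, env: ProducerEnv): Promise<Response> {
    const payload = await request.json();

    if (env.IS_LOCAL) {
      // Local dev: bypass cross-session queue delivery and POST the same
      // payload straight to the consumer's own `wrangler dev` server.
      await fetch("http://localhost:8788/enqueue", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(payload),
      });
    } else {
      // Production: use the real queue producer binding.
      await env.MY_QUEUE.send(payload);
    }

    return new Response("ok");
  },
} satisfies ExportedHandler<ProducerEnv>;
```

On the consumer side, the /enqueue fetch handler just re-sends the body to the consumer's own queue binding, so its regular queue() handler still runs as it would in production.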
I should be transparent here - I don't have specific edge cases from real user reports or GitHub issues that I'm tracking. That was more of a general statement based on the pattern that local dev environments often have subtle differences from production, but I don't have concrete Workers-specific examples to point to.
That said, common categories where these differences tend to appear in similar platforms include:
Cold start behavior: timing and resource availability differences
Request/response header handling: edge cases in how headers are normalized or limits enforced
Networking and fetch behavior: especially around redirects, timeouts, or subrequest patterns
Resource limits: CPU time, memory, or subrequest counts that might be enforced differently
Module resolution: occasionally differences in how imports/requires are resolved
But again, I'm speculating based on general patterns, not citing actual Workers issues.
I'm aware that wrangler types exists and generates types from wrangler.toml
What I wanted to focus on more is this:
Does it handle all binding types comprehensively?
Does IntelliSense/autocomplete work smoothly for things like env.MY_KV.get() etc.?
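To make that concrete, a minimal sketch of the flow in question, assuming a KV namespace bound as MY_KV in wrangler.toml (normally the Env interface would come from `wrangler types` / worker-configuration.d.ts; it is spelled out here so the snippet stands alone):

```ts
// Assumed binding: [[kv_namespaces]] with binding = "MY_KV" in wrangler.toml.
// This interface stands in for the one `wrangler types` would generate.
interface Env {
  MY_KV: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // With the binding typed as KVNamespace, .get()/.put() autocomplete here.
    const value = await env.MY_KV.get("greeting");
    return new Response(value ?? "no greeting stored");
  },
} satisfies ExportedHandler<Env>;
```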
Mh.... I think we are doing a good job and that we are quite close to production behavior (we do run the JS in the actual production runtime), and we are committed to keeping it that way and improving it as much as we can. But honestly this is quite tricky and it's very difficult to guarantee perfectly 1-to-1 identical behavior... 😕
For example, we've been significantly investing in our Vite plugin lately, which, during development, does run code in the workerd runtime, but it has lots of machinery there to make it integrate with the Vite system (the environments API, which allows us to have HMR etc...). So there I think we'll always have some significant differences in comparison to production 🤔 (But there you can also build + preview, which gets the behavior that much closer to production.)
The same applies to the vitest-pool-workers package (much more so there! 😓).
Wrangler dev, in my personal opinion, is closer to production than Vite, but there are things like cold starts which we can't realistically reproduce, since they are network related and have lots of variables involved 😕
sorry.... TL;DR: we are doing our best and are committed to keep doing that and improving things, but getting 100% matching behavior is (in my opinion) nearly impossible
> Does IntelliSense/autocomplete work smoothly for things like env.MY_KV.get() etc.?
Yes! 💪
> Does it handle all binding types comprehensively?
Mh.... kind of.... it gets almost all binding types correctly, but there are a few cases where we currently don't have a perfect solution, for example when you are binding to a durable object defined in an external worker, in that case we can't do anything too clever and simply use a generic DO type... we might improve this in the future, but right now you might need to do some overriding 🤔 So basically, in my opinion, wrangler types is quite good and gets you quite far, but for some more advanced scenarios you might need to override the Env interface yourself.
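To show what that overriding can look like, here is a minimal hand-rolled sketch; the binding name, the external DO's method, and the cast are all assumptions for illustration, not the generated output:

```ts
interface Env {
  // `wrangler types` can only give a generic namespace type here, because the
  // Durable Object class lives in another worker.
  COUNTER: DurableObjectNamespace;
}

// Hand-maintained view of the external DO's RPC surface (illustrative).
interface ExternalCounter {
  increment(by: number): Promise<number>;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const id = env.COUNTER.idFromName("global");
    // Assert the RPC methods we expect ourselves; this is just a cast,
    // nothing wrangler verifies for us.
    const stub = env.COUNTER.get(id) as unknown as ExternalCounter;
    const value = await stub.increment(1);
    return new Response(String(value));
  },
} satisfies ExportedHandler<Env>;
```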
I don't want to talk about everything bad here, I like the work that you guys do, and personally I don't think it's so bad, I would have chosen a different platform long ago if it was like that, just wanted to ask since I have the opportunity
Alright, thanks. I haven't worked with it enough to know all the ins and outs, but that is impressive
no worries, thanks for giving us a shot! 🫶
please give workers a try and if you hit issues we're always here to help 😄 (or even more so on GitHub: https://github.com/cloudflare/workers-sdk 🙂)
My current frustration is definitely trying to get local service bindings working. My setup is a public-facing worker with 2 service worker bindings. No matter what I do, the gateway worker just can't seem to discover the 2 service workers (they always show up as not connected)
I've had to go down the path of starting them in a single command: wrangler dev --local -c a.jsonc -c b.jsonc, but that came with its own complications, as one of the service workers deals with D1 and it's always trying to target remote
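For reference, roughly the kind of setup being described, with the binding and worker names assumed; each worker would normally run in its own `wrangler dev` session:

```ts
// Gateway worker sketch. Its wrangler config would declare something like:
//   "services": [{ "binding": "SERVICE_A", "service": "worker-a" }]
// while worker-a runs in a separate `wrangler dev` session.
interface Env {
  SERVICE_A: Fetcher; // service binding to worker-a
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // When cross-session discovery works, this call reaches the worker-a dev
    // session directly; "not connected" means wrangler couldn't find it.
    return env.SERVICE_A.fetch(request);
  },
} satisfies ExportedHandler<Env>;
```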
fwiw, this should definitely work. I have a setup like yours and it works fine, all the workers see each other and I am not starting them all under one command with -c -c. Make sure you have the latest wrangler, this was something that was fixed (a few months ago)
I followed the official documentation to the letter and can't get this to work when starting up each worker individually. I'm using 4.47.0 for all of them. My gateway just gets errors like: Cannot access "<rpc method>" as we couldn't find a local dev session for the "default" entry point of service…..
Managed to get it working by stacking all the configs under one command; not ideal and it comes with its own complications, but at least we can dev around it