Do workers have deploy hooks?
Cannot destructure property 'Agent' of 'workerdHttp' as it is undefined.

Can't access env vars in Workers build
compatibility_flags = [ "nodejs_compat", "nodejs_compat_populate_process_env" ]
But at no point am I seeing the variables in the logs. Is there anything else I need to do?...
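For reference, a minimal sketch of what runtime access usually looks like once both flags are active (`MY_VAR` is a hypothetical variable; note that variables defined only for the Workers Builds build environment are build-time only and won't appear at runtime):

```ts
// Sketch assuming nodejs_compat + nodejs_compat_populate_process_env
// are enabled in the wrangler config. MY_VAR is a placeholder.
export default {
  async fetch(request: Request, env: { MY_VAR?: string }): Promise<Response> {
    // Vars and secrets are always available on the env binding.
    const fromBinding = env.MY_VAR;
    // With nodejs_compat_populate_process_env they are also mirrored
    // onto process.env at runtime.
    const fromProcess = process.env.MY_VAR;
    return Response.json({ fromBinding, fromProcess });
  },
};
```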

Workers cache overhead?
Delete a single deployment
How can I have multiple external custom hostnames within the same zone for my different Workers?
Workers CPU Limits
All of my requests are failing when smart placement is turned on with dev server.
Getting Authentication error [code: 10000] on deploy from wrangler
Is there a way to view a Durable Object's built-in SQL storage or an agent's schedule?
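No definitive answer captured here, but one workaround that comes up for inspecting a SQLite-backed Durable Object is to expose a read-only query endpoint from the object itself via `ctx.storage.sql`. A rough sketch, with `todos` as a hypothetical table:

```ts
import { DurableObject } from "cloudflare:workers";

export class MyDurableObject extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    // SQLite-backed Durable Objects expose a synchronous SQL API on
    // ctx.storage.sql; toArray() materializes the result cursor.
    const rows = this.ctx.storage.sql
      .exec("SELECT * FROM todos ORDER BY rowid DESC LIMIT 50")
      .toArray();
    return Response.json(rows);
  }
}
```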
Need help running queue jobs concurrently in a Worker
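A common pattern for this is to fan the batch out with `Promise.allSettled` inside the queue consumer, acking or retrying each message individually. A sketch, with `handleJob` as a hypothetical per-job handler:

```ts
export default {
  async queue(batch: MessageBatch<unknown>, env: unknown): Promise<void> {
    // Process every message in the batch concurrently rather than
    // awaiting them one at a time.
    await Promise.allSettled(
      batch.messages.map(async (msg) => {
        try {
          await handleJob(msg.body); // hypothetical per-job work
          msg.ack(); // acknowledge on success
        } catch (err) {
          console.error("job failed", err);
          msg.retry(); // redeliver on failure
        }
      }),
    );
  },
};

// Hypothetical job handler; replace with real work.
async function handleJob(body: unknown): Promise<void> {}
```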
Cloudflare blocking traffic to my Worker because of non-existent "Rate limiting rules"

Cannot open index.html with mailto: or discord: redirect. Why?
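One likely culprit: many browsers refuse to follow an HTTP redirect whose `Location` points at a non-HTTP scheme. The usual workaround is to serve a small HTML page that triggers the scheme from the document instead. A sketch (`hello@example.com` is a placeholder, and some browsers still require a user gesture for schemes like `discord:`):

```ts
// Sketch: trigger the custom scheme from HTML rather than via a
// cross-scheme Location: redirect, which is often blocked.
export default {
  async fetch(): Promise<Response> {
    const html = `<!doctype html>
<a href="mailto:hello@example.com">Open mail client</a>
<script>location.href = "mailto:hello@example.com";</script>`;
    return new Response(html, {
      headers: { "content-type": "text/html;charset=utf-8" },
    });
  },
};
```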
What's the recommended way to interact with cache from Workers?
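The replies aren't captured here, but the standard approach is the Workers Cache API: check `caches.default` for a hit before doing origin work, then write back without blocking the response. A minimal sketch:

```ts
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;

    // Serve from the edge cache when possible.
    const cached = await cache.match(request);
    if (cached) return cached;

    // Otherwise produce the response (origin fetch, computation, etc.).
    const response = await fetch(request);

    // Store a copy asynchronously; clone because a body is single-read.
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```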
MongoDB errors
I've enabled `nodejs_compat` and set the compatibility date to 2025-05-05, but when I publish the worker I keep getting errors like:
```diff...

Debug worker in a turborepo monorepo, Cursor/VSCode
Long wall times and short response times
We don't call `context.waitUntil` anywhere, and this is a fairly simple request flow through Hono (which we've now logged the living daylights out of, including in Hono itself). We have a couple of Supabase REST API calls that complete quickly, and we seem to get to the point where we're returning the response within 200-500ms, but we end up with a CPU time of maybe 10-20ms and a wall time of 30s. The worker then gets cancelled, but appears to have correctly returned a response—all very mysterious. Finally, I've tried to hit everything with the dangling-promise lint rules in @typescript-eslint and Biome, but nothing pops out.
With all that said, I'm wondering if anyone knows what might be good to look out for to understand where the wall time is being spent—how do others go about debugging these scenarios?...
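Not from the thread, but one low-tech starting point is a Hono timing middleware around `next()`, combined with handing any background work to `waitUntil` explicitly so no promise dangles past the response. A sketch (`someBackgroundTask` is hypothetical):

```ts
import { Hono } from "hono";

const app = new Hono();

// Log wall time per request. If this logs 200-500ms but the invocation
// still runs for ~30s, something outside the handler (an un-awaited
// promise, an unconsumed stream/body) is keeping the event alive.
app.use("*", async (c, next) => {
  const start = Date.now();
  await next();
  console.log(`${c.req.method} ${c.req.path} handled in ${Date.now() - start}ms`);
});

app.get("/", async (c) => {
  // Make background work explicit instead of leaving it dangling.
  c.executionCtx.waitUntil(someBackgroundTask());
  return c.text("ok");
});

// Hypothetical background task for illustration.
async function someBackgroundTask(): Promise<void> {}

export default app;
```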

Counting Workers requests: difference between charts
The Workers and Pages page tells me my workers have handled 501K requests this month. But the Billable Usage chart on the Billing page says 4M(!) requests over roughly the same time period.
What could explain the difference?
4M can't be right - it's not consistent with database metrics, analytics, etc....
Wrangler deployment error - Custom Instance
Error rolling out application project-container due to an internal error (request id: undefined): VALIDATE_INPUT
Unable to cache static pages in SSG using OpenNext