Cloudflare Developers


Welcome to the official Cloudflare Developers server. Here you can ask for help and stay updated with the latest news


How to debug Worker Error 1101?

I have deployed a Next.js app to a Cloudflare Worker using OpenNext. It works well, but the Google login callback triggers an Error 1101. It works fine locally and on other dev platforms. The log error I see is: Error 1101: Worker threw exception "The Workers runtime canceled this request because it detected that your Worker's code had hung and would never generate a response."...
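One way to localize a hang like this (a sketch, not Cloudflare's prescribed method): wrap each suspect await in the callback route, such as the token exchange with Google, in a timeout helper so the slow step gets logged before the runtime cancels the request. `withTimeout` is a hypothetical helper, not part of any Workers API.

```typescript
// Hypothetical helper: races a promise against a timer so a hang surfaces
// as a named, logged error instead of a silent Error 1101.
export async function withTimeout<T>(label: string, p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} exceeded ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

In the callback handler you would then write something like `await withTimeout("google token exchange", fetch(tokenUrl, init), 5000)` (with `tokenUrl` and `init` being whatever the route already uses) and watch the output of `wrangler tail` to see which step never finishes.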

Attach VSCode Debugger while running more than one worker with wrangler dev

I have more than one worker in my project and wish to debug them locally during development using VSCode. The Cloudflare docs recommend running multiple workers with a single command like this: yarn wrangler dev -c ./app/wrangler.jsonc -c ./api/wrangler.jsonc (docs: https://developers.cloudflare.com/workers/development-testing/multi-workers/#single-dev-command).

However, when doing this I cannot deterministically attach the debugger to any specific service. It seems wrangler decides which service loads first (presumably based on the dependency tree between them, as specified in the wrangler.jsonc services bindings?), and then only that first worker binds the inspector to port 9229. I've tried rearranging the order of the wrangler parameters to influence this, to no avail. I also tried adding inspector_port in the wrangler.jsonc, but it seems that option is not supported?

Is there any way to run multiple workers via wrangler dev and attach a debugger to all of them, choose which worker gets the 9229 debugger port, or assign a unique debugger port to each worker being run?...
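One possible workaround, sketched under the assumption that each worker is started in its own wrangler dev session with an explicit --inspector-port (a documented wrangler dev flag), is to attach VSCode to each port and group the attach configurations into a compound:

```jsonc
// .vscode/launch.json — a sketch; assumes two terminals running, e.g.:
//   yarn wrangler dev -c ./app/wrangler.jsonc --inspector-port 9229
//   yarn wrangler dev -c ./api/wrangler.jsonc --inspector-port 9230
{
  "version": "0.2.0",
  "configurations": [
    { "name": "Attach app", "type": "node", "request": "attach", "port": 9229 },
    { "name": "Attach api", "type": "node", "request": "attach", "port": 9230 }
  ],
  "compounds": [
    { "name": "Attach all workers", "configurations": ["Attach app", "Attach api"] }
  ]
}
```

Whether separate dev sessions still resolve the service bindings between the two workers depends on wrangler's dev registry, so that part is worth verifying against the docs before relying on it.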

Next.js requiring env variables in build variables

I'm deploying a new Next.js app to a Worker using OpenNext. However, I'm running into an issue where it cannot detect env variables unless I also add them to the build variables in the dashboard. Is that normal?...

How do I avoid running my Worker in the Hong Kong region?

Hi, I’m using a worker to run Gemini AI, and if the worker runs in the Hong Kong region, it gets blocked by Google. Is there a way to configure Cloudflare Workers to avoid running in the Hong Kong region?

Cloudflare Workers drop single byte ranges to Backblaze B2 S3 and return 200 instead of 206

Hi all, I'm experiencing an issue with range requests via Workers and cannot find any solutions for this online. Hopefully this community can help!

Setup:
- Backblaze B2, S3-compatible (eu-central-003)
- Serving large static MP4s (>1 GB)...
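For reference, a minimal proxy Worker for this kind of setup might look like the sketch below; the bucket URL is a placeholder, not the poster's actual origin. The two points that matter for range requests are forwarding the client's Range header to the origin unchanged and passing the origin's status (206 for partial content) straight through rather than constructing a fresh 200 response.

```typescript
// Placeholder origin; substitute the real B2 endpoint and object key.
const ORIGIN_URL = "https://s3.eu-central-003.backblazeb2.com/my-bucket/video.mp4";

// Copy only the headers the range request needs onto the origin request.
export function buildOriginRequest(clientReq: Request): Request {
  const headers = new Headers();
  const range = clientReq.headers.get("Range");
  if (range !== null) headers.set("Range", range); // e.g. "bytes=0-1023"
  return new Request(ORIGIN_URL, { method: "GET", headers });
}

const worker = {
  async fetch(request: Request): Promise<Response> {
    const originRes = await fetch(buildOriginRequest(request));
    // Preserve the origin's status (206) and headers such as Content-Range.
    return new Response(originRes.body, originRes);
  },
};

export default worker;
```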

Workers Caching

Hey, I’m using a worker to serve protected assets. The problem I’m facing is that the cache-control headers I’ve set aren’t being applied, so the responses aren’t getting cached as expected. I’m trying to figure out whether this is happening because I don’t have a custom domain configured, or if caching should work without one. Could you help me understand what I might be missing?
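Two things commonly bite here, as far as I understand the docs: edge caching (including the Cache API) requires the Worker to run on a custom domain or route rather than a workers.dev subdomain, and the Response being stored needs explicit caching headers (Cache-Control: no-store or private, or a Set-Cookie header, prevents caching). A sketch of the second point; the helper name is illustrative:

```typescript
// Illustrative helper: clone the response so its headers are mutable, then
// attach an explicit Cache-Control header before storing or returning it.
export function cacheable(res: Response, maxAgeSeconds: number): Response {
  const out = new Response(res.body, res);
  out.headers.set("Cache-Control", `public, max-age=${maxAgeSeconds}`);
  return out;
}
```

In the Worker this would be used along the lines of `await caches.default.put(request, cacheable(upstream.clone(), 3600))` before returning the response.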

EventSource.from type issue

How do I force the TypeScript language server to use EventSource.from from ./worker-configuration.d.ts? The TypeScript language server always uses EventSource from the Node types; it doesn't even resolve EventSource from the ./worker-configuration.d.ts file....

Will Cloudflare Snippets' isolates persist for a certain amount of time after booting?

I know that Cloudflare Workers may keep an isolate alive for some time after booting, allowing module-scope objects (e.g. const cache = new Map() at the top) to survive across a few requests. I'd like to know whether Cloudflare Snippets works the same way, or whether its isolates are destroyed once their requests finish. This is to determine whether I should hoist a few objects and variables to the module scope so that they would be reused across requests....
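For context, this is the Workers pattern in question, as a minimal sketch; whether it pays off in Snippets is exactly what's being asked, so either way the cache has to be treated as best-effort:

```typescript
// Module scope: lives as long as the isolate does. The runtime may evict
// the isolate at any time, so this is an optimization, never a source of truth.
const cache = new Map<string, string>();

export function memoizedUpper(s: string): string {
  const hit = cache.get(s);
  if (hit !== undefined) return hit; // served from a warm isolate
  const value = s.toUpperCase();     // stand-in for expensive work
  cache.set(s, value);
  return value;
}
```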

10000: Auth Error when connecting with Workers AI

I'm getting a constant 10000: Authentication Error when trying to connect with Workers AI locally (it doesn't happen in prod): ``` InferenceUpstreamError: 10000: Authentication error at Ai._parseError (cloudflare-internal:ai-api:181:24)...

Background processing with a websocket connection

Hi, I'm wondering if I can fire off a task that just runs an infinite loop with a websocket connection in parallel? I looked into waitUntil, but I can't seem to find any docs on how long waitUntil will allow the task to run.
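The shape being described, sketched below with a minimal hand-rolled Ctx interface standing in for the Workers ExecutionContext: waitUntil extends the event's lifetime after the response has been returned, but not indefinitely, so an unbounded websocket loop is better bounded (or moved to something with its own lifecycle, such as a Durable Object). The iteration cap and names here are illustrative.

```typescript
// Minimal stand-in for the Workers ExecutionContext.
interface Ctx {
  waitUntil(promise: Promise<unknown>): void;
}

// Bounded background loop; each iteration would handle one websocket message.
async function pump(maxIterations: number): Promise<number> {
  let handled = 0;
  for (let i = 0; i < maxIterations; i++) {
    handled++;               // stand-in for reading/handling a message
    await Promise.resolve(); // yield between iterations
  }
  return handled;
}

const worker = {
  async fetch(_req: Request, _env: unknown, ctx: Ctx): Promise<Response> {
    ctx.waitUntil(pump(1000)); // keeps running after the response is sent
    return new Response("accepted", { status: 202 });
  },
};

export default worker;
```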

Database query fails over proxy

Hey, first of all, I know this is neither the right channel nor the right server for this, but I am unable to get this fixed, and I know there are a lot of competent developers here, so I thought I might try my luck.

I am currently trying to use drizzle on Durable Objects via the drizzle sqlite-proxy adapter. I am running the SQL queries on Durable Object SQLite storage and then returning the rows to drizzle, but I keep running into issues when trying to use json columns. Drizzle will just error out with this: ```...
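One thing worth checking, stated as an assumption rather than a confirmed diagnosis: the sqlite-proxy adapter expects each row as an array of values in column order, not as an object keyed by column name, and object-shaped rows can break deserialization of json columns. A conversion helper would look like this (the name is hypothetical):

```typescript
// Hypothetical helper: convert object-shaped rows (column name -> value)
// into the array-of-values-per-row shape, given the column order.
export function toRawRows(
  columns: string[],
  rows: Record<string, unknown>[],
): unknown[][] {
  return rows.map((row) => columns.map((name) => row[name]));
}
```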

Disable edge cache in fetch

I am attempting to make an API call to an external service that also appears to be using Cloudflare. Locally it works fine and I get an uncached response, but when using Workers, it returns a response with a last-modified header of August 8. How do I disable this cache? I have tried EVERYTHING: cache: 'no-store' in RequestInit...
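Two hedged suggestions: if the cache being hit belongs to the external service's own Cloudflare zone, it is outside the Worker's control, and the cf fetch options below only influence caching in your own zone. The blunt but reliable fallback is a cache-busting query parameter. URLs and names are placeholders.

```typescript
// Placeholder endpoint, not the poster's actual API.
const API_URL = "https://api.example.com/data";

// Append a unique query parameter so any cached variant never matches.
export function bustCache(url: string): string {
  const u = new URL(url);
  u.searchParams.set("_cb", Date.now().toString(36));
  return u.toString();
}

async function fetchFresh(): Promise<Response> {
  const init = {
    headers: { "Cache-Control": "no-cache" },
    // Workers-specific fetch options; only affect your own zone's cache
    // and are ignored outside the Workers runtime.
    cf: { cacheTtl: 0, cacheEverything: false },
  } as RequestInit;
  return fetch(bustCache(API_URL), init);
}
```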

Do workers have deploy hooks?

I would like to configure an external cron to rebuild several times a day. Right now I am using a GitHub workflow to update an empty file at the root of the project to trigger a build, but it gets tiresome, since I have to rebase every time before I commit.
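One alternative, assuming GitHub Actions is already in the picture: skip the empty-commit trick and run wrangler deploy directly on a schedule via Cloudflare's wrangler-action. The cron expression and secret name below are placeholders.

```yaml
# .github/workflows/scheduled-deploy.yml — a sketch, not a drop-in file.
name: scheduled-deploy
on:
  schedule:
    - cron: "0 */8 * * *"   # three times a day
  workflow_dispatch: {}      # manual trigger for testing
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          # by default, wrangler-action runs `wrangler deploy`
```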

Cannot destructure property 'Agent' of 'workerdHttp' as it is undefined.

I'm trying to migrate from next-on-pages to opennextjs, and I'm getting this error when trying to preview the site but can't figure out what's actually causing it. Has anyone seen anything similar? Where should I be looking to debug? The stacktrace just shows a giant line of autogenerated code which isn't particularly helpful...

Can't access env vars in Workers build

I'm migrating some Astro projects that build static pages from Cloudflare Pages to Cloudflare Workers. I'm following the migration tutorial and looking at the docs, but I can't access the environment variables defined in wrangler.toml through process.env during or after the build. I'm using the compat flags: compatibility_flags = [ "nodejs_compat", "nodejs_compat_populate_process_env" ]. But at no point do I see the variables in the logs. Is there anything else I need to do?...

Workers cache overhead?

Hi, I am trying to figure out the best way to optimize response times for cached resources served from a Worker. With Worker + Cache API, I am getting response times of about 200 ms pretty consistently for an image. The same image, when served from CloudFront, takes about 50-70 ms. Both responses appear to be coming from IAD. Is the extra 150 ms a result of Worker overhead? Is there any way to improve this? (My application is an interactive medical imaging viewer, so load times are really important.)

How to have multiple external custom hostnames within the same zone name for my different Workers?

I have, generally speaking, small clients, and (in the future) many of them. Because of that, I need to be able to have external domain names pointing to my few different Workers (i.e. a client has a domain on Gandi/OVH/etc. and points it towards my app) without having to necessarily transfer the domain to Cloudflare. After some help on https://discord.com/channels/595317990191398933/812577823599755274/1405542977069121728:
* created the fallback origin (proxy.reeple.studio)
* added the custom hostname from my client (thorgal.caracal.agency)...

Workers CPU Limits

We're currently dealing with some issues around CPU time limits for Workers. We have a system set up with Workers for Platforms such that someone can upload a function and then schedule it to be invoked. It is our invoker that seems to be hitting CPU limits in some cases, and we're looking to get an answer to the question of whether the Workers for Platforms workers have their own CPU limit, or whether that compute counts towards the limit of the invoker.

All of my requests are failing when smart placement is turned on with dev server.

Hi, I'm trying out Smart Placement for a Worker on a dev deployment which serves zero to minimal traffic. I know that Smart Placement has an analysis process that may take up to 15 minutes; however, 100% of the Worker's requests failing seems odd, if I understand correctly. Is this normal for dev servers with low traffic? Will it work if I just blindly switch Smart Placement on for prod servers? (I'm afraid not.) Looking for help / insight to mitigate these request failures!...