OK, so let's say I use cron jobs to slowly sync my database, and then I want to run a script to process the data in the D1 database. What would be the best way to do that?
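One common pattern is to do the processing in a `scheduled` handler on the same cron trigger. A rough sketch, assuming a D1 binding named `DB` in wrangler.toml and a hypothetical `items` table with a `processed` flag (adjust to your actual schema):

```javascript
// Pure helper: decide which rows still need work (easy to unit-test).
function selectUnprocessed(rows) {
  return rows.filter((r) => !r.processed);
}

const worker = {
  // Runs on the cron schedule defined under [triggers] in wrangler.toml.
  async scheduled(event, env, ctx) {
    const { results } = await env.DB
      .prepare("SELECT id, payload, processed FROM items")
      .all();
    for (const row of selectUnprocessed(results)) {
      // ...do your per-row processing here...
      await env.DB
        .prepare("UPDATE items SET processed = 1 WHERE id = ?")
        .bind(row.id)
        .run();
    }
  },
};
// In a real Worker module you'd `export default worker;`
```

Keeping the row-selection logic in a plain function makes it testable outside the Workers runtime; the D1 calls (`prepare`/`bind`/`all`/`run`) are the standard D1 client API.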
I have a Worker set up to send emails using the MailChannels integration. It works great from the console, but when I call it from my localhost to try to send an email, I get a 403 error. Is this not possible? My site is Astro and is deployed on Cloudflare Pages.
Okay, yeah, it has to be running deployed in the CF env. I just went through getting MC working (using the free setup with Workers). It's sort of a PITA ;) You did the Domain Lockdown DNS thing, DKIM, SPF, etc.?
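For anyone following along, the DNS side looks roughly like this (`example.com`, the workers.dev subdomain, and the DKIM key are placeholders for your own values):

```
; Domain Lockdown — authorizes your Worker to send as example.com
_mailchannels.example.com.            TXT  "v=mc1 cfid=yourname.workers.dev"

; SPF — allows MailChannels' relay to send for the domain
example.com.                          TXT  "v=spf1 include:relay.mailchannels.net ~all"

; DKIM — public key for the selector you sign with
mailchannels._domainkey.example.com.  TXT  "v=DKIM1; p=<your public key>"
```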
Yeah, I have the Domain Lockdown DNS, DKIM, SPF, all that jazz. If I run it from the dashboard, it works great. Let me try it again with some better error handling.
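For better error handling, it helps to surface MailChannels' actual response body instead of just the bare 403. A small sketch (`describeFailure` is a hypothetical helper name; `Response` is available both in Workers and in Node 18+):

```javascript
// Turn a failed fetch Response into a readable log line.
async function describeFailure(res) {
  const body = await res.text();
  return `MailChannels responded ${res.status} ${res.statusText}: ${body}`;
}

// Usage inside the Worker (sketch):
// const res = await fetch("https://api.mailchannels.net/tx/v1/send", { ... });
// if (!res.ok) console.error(await describeFailure(res));
```

MailChannels usually puts the reason for a 403 (lockdown mismatch, missing auth, etc.) in the body, so logging it tends to answer the "why" directly.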
Is it being called in a way that would pass the referrer? CORS works off the Origin header (set by browsers when executing AJAX calls). So maybe you just need a CORS implementation?
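If that's the issue, a minimal CORS sketch for the email Worker would look like this. The origin list is an assumption (the localhost port and `example.com` are placeholders; adjust to your dev server and production domain):

```javascript
// Origins allowed to call the Worker — placeholders, adjust to your setup.
const ALLOWED_ORIGINS = ["http://localhost:4321", "https://example.com"];

// Build the CORS response headers for a given request Origin.
function corsHeaders(origin) {
  const allowed = ALLOWED_ORIGINS.includes(origin) ? origin : ALLOWED_ORIGINS[1];
  return {
    "Access-Control-Allow-Origin": allowed,
    "Access-Control-Allow-Methods": "POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}

// In the fetch handler, answer preflights and attach the headers (sketch):
// if (request.method === "OPTIONS") {
//   return new Response(null, {
//     headers: corsHeaders(request.headers.get("Origin")),
//   });
// }
```

Browsers send an `OPTIONS` preflight before cross-origin `POST`s with a JSON body, so handling that method is the part people usually forget.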
I have a question regarding the design of worker logics. I frequently encounter situations where I want to extract the logic required by a fetch handler into separate functions. However, as I proceed with the extraction steps, I find myself needing access to the request and environment objects within nested functions. I'm struggling to find a solution for designing these parent-child relationships without having to pass the request, context, and environment objects several levels deep. Is there a solution for this issue?
Ah ok, and the way to reach them in the UI is via Settings >> Triggers. But just for the record, this has changed, hasn't it? It used to be directly on the root level (same tab row as Settings?). Just checking if I'm getting dementia...
I would say that passing it down is fine. This is also the same as other platforms, e.g. Node.js. If you don't want to pass down three objects, you could create your own class to wrap request, env and context, and pass down instances of that class.
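A quick sketch of that wrapper idea, so nested helpers take one object instead of three (the class name and fields are illustrative, not any official API):

```javascript
// Bundle the three fetch-handler arguments into one context object.
class RequestContext {
  constructor(request, env, ctx) {
    this.request = request;
    this.env = env;
    this.ctx = ctx;
  }
  // Convenience accessors can live here too.
  get url() {
    return new URL(this.request.url);
  }
}

// Nested helpers now need only the single wrapper:
function handleUser(rc) {
  return rc.url.pathname.startsWith("/user");
}

// In the Worker (sketch):
// export default {
//   async fetch(request, env, ctx) {
//     const rc = new RequestContext(request, env, ctx);
//     if (handleUser(rc)) { /* ... */ }
//   },
// };
```

The nice side effect is that helpers become easier to unit-test: you can construct a `RequestContext` from plain objects without the Workers runtime.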
Good Morning, if I may ask, I have a question regarding a worker prototype I've recently built. Essentially, it functions as a bundled worker serving as a WebSocket server [yes!]. It listens for incoming parameterized requests, looks up a value in KV, and returns the value via the WebSocket connection. Due to the inherent behavior of bundled workers having a 50 ms lifetime, the WebSocket connection is automatically closed or terminated. This behavior is intentional. I utilize the WebSocket connection to transmit data to the client in a manner that is intended to make data scraping (e.g., via REST) more challenging.
Now, here are my questions: How does a worker, in general, scale concerning WebSockets? Or is this a general inquiry regarding the behavior of bundled workers concerning concurrent connections? I'd like to determine how many parallel WebSocket connections/requests I could potentially serve, both in the free plan and the smallest paid plan initially. Additionally, I don't yet grasp whether it's my responsibility as a developer to spin up the appropriate number of workers myself or if this process is handled automatically by the Cloudflare infrastructure (meaning only one worker needs to be programmed, and the rest occurs automatically).
@HardlyWorkin' let's say I go with the paid plan. It's $0.15/million requests per month. Could those 1 million WebSocket-based requests also be served within an hour? Will the infrastructure scale?
@HardlyWorkin' maybe... just scaling in theory. You never know. Realistically I'm not expecting it. I just want to assess whether the planned infrastructure is feasible.