Cloudflare Developers

Welcome to the official Cloudflare Developers server. Here you can ask for help and stay updated with the latest news

Workers API

Within a Workflow, can you run Workflow Steps in parallel? For example, the AI SDK allows parallel execution of steps: https://developers.cloudflare.com/workflows/build/workers-api/#call-workflows-from-workers And so does Mastra: https://mastra.ai/en/docs/workflows/control-flow#simultaneous-steps-with-parallel...
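
For what it's worth, with the Workers API a common pattern is to start several `step.do()` calls without awaiting them and then `await Promise.all(...)`, so each piece of work is still its own checkpointed, retried step. A minimal sketch (class, params, and step names are illustrative, not from the thread):

```ts
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

type Env = Record<string, unknown>;
type Params = { userId: string };

export class ParallelWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    // Start both steps without awaiting them individually...
    const profilePromise = step.do('fetch profile', async () => {
      return { userId: event.payload.userId, name: 'example' };
    });
    const ordersPromise = step.do('fetch orders', async () => {
      return [1, 2, 3];
    });

    // ...then await them together: the two steps run concurrently,
    // but each is still persisted and retried on its own.
    const [profile, orders] = await Promise.all([profilePromise, ordersPromise]);

    return { profile, orders };
  }
}
```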

In a workflow step, if we want to return

In a workflow step, if we want to return successfully not just from the step but also from the whole workflow, what can we do? Do you have any idea? I mean, is there any equivalent of a non-retryable error for that?
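
As far as I know there isn't a documented "success" counterpart to `NonRetryableError`; the usual workaround is to have the step return a flag and then `return` from `run()` early, since returning from `run()` completes the instance successfully. A hedged sketch (class, params, and the `alreadyProcessed` lookup are illustrative):

```ts
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';
import { NonRetryableError } from 'cloudflare:workflows';

type Env = Record<string, unknown>;
type Params = { orderId: string };

export class OrderWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    const check = await step.do('check order', async () => {
      const alreadyProcessed = false; // placeholder lookup
      return { done: alreadyProcessed };
    });

    // Returning from run() marks the whole instance as complete, so an
    // early return acts as the "successful" way of bailing out.
    if (check.done) {
      return { skipped: true };
    }

    await step.do('process order', async () => {
      if (!event.payload.orderId) {
        // For the failure case, NonRetryableError fails the instance
        // without further retries.
        throw new NonRetryableError('missing orderId');
      }
      return { processed: true };
    });

    return { skipped: false };
  }
}
```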

Has anyone encountered Workflows failing

Has anyone encountered Workflows failing with the following error: (instance.in_finite_state) Instance reached a finite state, cannot send events to it
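
That error usually means an event was sent to an instance that has already finished (complete, errored, or terminated). A defensive sketch on the sending side that checks `status()` before `sendEvent()`; the binding, event name, and payload are illustrative, and the exact status strings should be checked against the docs:

```ts
interface Env {
  MY_WORKFLOW: Workflow;
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const id = new URL(req.url).searchParams.get('id');
    if (!id) return new Response('missing id', { status: 400 });

    const instance = await env.MY_WORKFLOW.get(id);
    const { status } = await instance.status();

    // Instances that reached a finite state can no longer receive events.
    if (status === 'complete' || status === 'errored' || status === 'terminated') {
      return new Response(`instance is ${status}; not sending event`, { status: 409 });
    }

    await instance.sendEvent({ type: 'approval', payload: { approved: true } });
    return new Response('event sent');
  },
};
```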

it seems like only the invocation

It seems like only the invocation is logged in Workers logs.

Large File Conversion

Hi, I am implementing a feature that requires file conversion, and the file can be > 30 MB, which a single Worker definitely can't handle. I haven't thought everything through, but in general I think the entire process can be divided into multiple steps (e.g. read, convert, store ...). Is a Workflow a good solution for this kind of use case? Otherwise, can I have some suggestions? Thanks.
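
Workflows can fit this shape, with one caveat: step return values are persisted and size-limited, so the steps should hand each other R2 object keys rather than the file bytes themselves, and a very CPU-heavy conversion may still hit Worker limits inside a single step. A rough sketch, where the `FILES` binding and the `convert()` helper are placeholders:

```ts
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

interface Env {
  FILES: R2Bucket;
}
type Params = { sourceKey: string };

// Hypothetical conversion routine; swap in the real converter.
async function convert(input: ArrayBuffer): Promise<ArrayBuffer> {
  return input;
}

export class ConvertWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    const { sourceKey } = event.payload;

    // The step reads from R2, writes the result back to R2, and returns
    // only the new key, keeping the persisted step state small.
    const convertedKey = await step.do('convert file', async () => {
      const object = await this.env.FILES.get(sourceKey);
      if (!object) throw new Error(`missing object: ${sourceKey}`);
      const output = await convert(await object.arrayBuffer());
      const key = `${sourceKey}.converted`;
      await this.env.FILES.put(key, output);
      return key;
    });

    await step.do('record result', async () => {
      // e.g. write convertedKey to a database or notify another service.
      return { convertedKey };
    });

    return { convertedKey };
  }
}
```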

Has anyone faced the invalid instance

Has anyone faced the invalid instance ID problem? I didn't see any rule about this in the Cloudflare docs.

Hey 👋

Hey 👋 I deployed a new version of my workflow and I'm now running into `WorkflowInternalError: Attempt failed due to internal workflows error` errors at any step (even those that weren't affected by my latest deploy). Run ID: 6b8af9a0-7ba7-4f2a-ade4-31500c2b2cfb, 0ecea869-a90e-416f-a882-2a84f45d7f13, or 5ba45a09-a860-4f7c-8097-197b67dbfbf0. ...

Workflows stuck QUEUED

We are seeing a handful of workflows occasionally get stuck in QUEUED. They never get out of this state and we have to manually restart them. This hadn't been an issue until about a week ago; now we're seeing it happen sporadically when we start workflows. Any idea what might be up?

Blocked cron

Okay, it looks like I understand what's going on: when you trigger the workflow manually, it seems to initiate the Worker near your geographic location, but when triggered through cron it seems to do so from other geographic locations. I have a security rule to block all traffic to the Worker that isn't from the United States, and as of recently my cron-scheduled workflows are being instantiated outside of the United States, in Poland or Singapore, which causes them to be blocked by the security rules. Is there a way to force the location of execution of workflows, or to whitelist / exempt same-account Workers from security rules?...

Can you actually see the console.logs

Can you actually see the console.logs anywhere?

Why do workflows have such a big wall time

Why do workflows have such a big wall time? What could the reason be? Is this wall time expected? Is it related to retries? In my workflow I have one step that is expected to fail and go to errored status; maybe that's the issue there? ...

`introspectWorkflow` _should_ be done at

`introspectWorkflow` should be done at the start of the test (you can think of calling `introspectWorkflow` where you mock other APIs and so on); intercepted workflows with DOs should just work (if you want to mock/intercept the calls between the two, let me know).

Greetings, having the same issue as some

Greetings, having the same issue as some others here, where valid Workflow IDs are visible in the UI but can't be found by my Worker. Workflow name is: PipelineV2Runner-production A Workflow ID this happened to is: e215050f-edb0-4132-9484-9da1e84019ea ...

Greetings,

Greetings, I am calling it like this to start a workflow:
```
const instances = await env.MATCH_DAY_UPDATE.createBatch(
  batch.map((match) => ({ params: { matchId: match.id } })),
...
```
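
For reference, a hedged completion of that truncated call, assuming `MATCH_DAY_UPDATE` is a Workflow binding, `batch` is an array of objects with an `id` field, and instance IDs are generated when omitted:

```ts
interface Env {
  MATCH_DAY_UPDATE: Workflow;
}

async function startBatch(env: Env, batch: { id: string }[]) {
  const instances = await env.MATCH_DAY_UPDATE.createBatch(
    batch.map((match) => ({ params: { matchId: match.id } })),
  );
  // Each returned instance exposes its id and a status() method.
  return instances.map((instance) => instance.id);
}
```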

Hey, I got errors like instance.not_

Hey, I got errors like instance.not_found for my workflow instance when I query `env.WORKFLOW.get(instanceId)`, while the instance is visible on the dashboard. Is there anything wrong? I also found that occasionally the dashboard page shows my workflow instance status as unknown, but displays it normally after I refresh the page.
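
One way to make the Worker side tolerant of that is to catch the error from `get()` instead of letting it fail the request; a small sketch (binding name illustrative):

```ts
interface Env {
  MY_WORKFLOW: Workflow;
}

async function getInstanceStatus(env: Env, instanceId: string) {
  try {
    const instance = await env.MY_WORKFLOW.get(instanceId);
    return await instance.status();
  } catch (err) {
    // get() rejects (e.g. instance.not_found) when the id is unknown to
    // this binding; treat it as "no status yet" rather than crashing.
    console.warn(`instance ${instanceId} not found`, err);
    return null;
  }
}
```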

Found a bug in making workflows. Despite

Found a bug in creating workflows. Despite the docs saying you can't, you can create workflows with IDs beyond 64 characters and outside of the regex listed at https://developers.cloudflare.com/workflows/reference/limits/
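
Until that's enforced server-side, a client-side guard is easy to add. The 64-character cap is taken from the message above, and the character class below is a stand-in for the pattern on the linked limits page, so adjust it to match the docs:

```ts
// Stand-in for the documented id pattern; verify against the limits page.
const INSTANCE_ID_PATTERN = /^[a-zA-Z0-9_][a-zA-Z0-9_-]*$/;

function isValidInstanceId(id: string): boolean {
  return id.length <= 64 && INSTANCE_ID_PATTERN.test(id);
}
```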

I'm getting the following error for my

I'm getting the following error for my workflows that have a sleep step for a longer period of time. Can someone help me troubleshoot?
```
Error: Aborting engine: Grace period complete
```
...
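
For context, this is the shape of workflow involved: a step, a long `step.sleep()`, then more steps. The sketch below is illustrative (class, step names, and duration are made up); the "Aborting engine: Grace period complete" message itself comes from the engine side:

```ts
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

type Env = Record<string, unknown>;
type Params = { reminderId: string };

export class ReminderWorkflow extends WorkflowEntrypoint<Env, Params> {
  async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
    await step.do('schedule reminder', async () => ({ scheduled: true }));

    // The instance hibernates here and is resumed by the engine later;
    // step.sleep() takes a label and a human-readable duration string.
    await step.sleep('wait before reminding', '7 days');

    await step.do('send reminder', async () => ({ sent: true }));
  }
}
```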

In terms of UX, that would be great to

In terms of UX, it would be great to reassure the end user about the processing of each step, I totally agree. Do we have to use the API and not the Workers binding? Also, I'm noticing in the local env that the error is not returned from the instance status, but it is returned in the production env, which makes it really hard to test it out.

Hi, any workflows we create are giving

Hi, any workflows we create are returning an Internal Server Error, and we can't view our Workflows in the dashboard. Are there any incidents going on?

We are having an issue where one of our

We are having an issue where one of our workflows starts failing every single instance whenever a CF build runs on a completely separate (unrelated) Worker. It keeps failing until we trigger a re-build on the workflow's Worker. This is an extremely annoying issue, and it's very unclear why this would even be happening...