If I use Pipelines over HTTP instead

If I use Pipelines over HTTP instead of a binding, do I have to be in "Smart Placement"? I have a Worker using Pipelines over HTTP and Analytics Engine with a binding, and I can't switch it to the default placement; it keeps erroring out.

Thanks for the update

Thanks for the update

Pipelines -> R2 debugging

Is there something going on with Pipelines? (pipeline ID: 0fb015d78e5d4b01b62fe13460eb9f08) For more than 24 hours now, nothing has been ingested or delivered to R2, but there are no errors in my Workers and events are acknowledged...

partitioning key and number of files created

Hi! Is the partitioning key supposed to have an impact on the number of files created? I am seeing a lot of small Parquet files created with partitioning key %F/%H%M%S, despite a 30 s interval and a 512 MB max file size...

Writing to streams

is this what you're looking for?
```
{ "pipelines": [ { "pipeline": "<STREAM_ID>", ...
```
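
For anyone comparing notes, a fuller version of that binding setup might look like the sketch below. This is a hedged sketch, not official docs: the binding name EVENTS, the Env shape, and the record fields are assumptions, and the wrangler config (shown in comments) assumes the streams-based setup where the binding points at a stream ID.
```
// wrangler.jsonc (sketch): the binding points at a stream ID.
// {
//   "pipelines": [
//     { "pipeline": "<STREAM_ID>", "binding": "EVENTS" }
//   ]
// }

interface Env {
  // Minimal assumed shape of the Pipelines binding: send() takes an
  // array of records and resolves once the batch is accepted.
  EVENTS: { send(records: object[]): Promise<void> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Build one event per request and write it through the binding.
    const event = { ts: Date.now(), path: new URL(request.url).pathname };
    await env.EVENTS.send([event]);
    return new Response("ok");
  },
};
```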

Hi everyone, I'm seeing a similar issue

Hi everyone, I'm seeing a similar issue as @mau . I'm sending events via a Worker binding, but I don't see them landing in R2 (sink). The configuration is pretty close to the default one, the schema is pretty simple, and the sink is configured to use Parquet format with zstd compression. I've seen a couple of issues returned by the send() call (Unhandled error in RPC), but that doesn't explain why all (millions of) events are missing. The pipeline dashboard shows 0 B for Data In (Metrics tab), which doesn't seem right. I've...
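
In case it helps others debugging those send() failures: a small retry wrapper can at least rule out purely transient RPC errors. A minimal sketch, assuming the binding's send() simply rejects on failure (the sendWithRetry helper and backoff values are made up for illustration):
```
// Retries a Pipelines send() a few times with exponential backoff,
// so a transient "Unhandled error in RPC" doesn't drop the batch.
async function sendWithRetry(
  pipeline: { send(records: object[]): Promise<void> },
  records: object[],
  attempts = 3
): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      await pipeline.send(records);
      return; // accepted
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries, surface the error
      // Backoff: 100 ms, 200 ms, 400 ms, ...
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** i));
    }
  }
}
```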

Hi team, I am sending events but don't

Hi team, I am sending events but don't see them in the dashboard. Can you please help?

Hi, anyone know if there is a plan to

Hi, anyone know if there is a plan to support S3 for Pipelines (beta)? Currently only R2 is supported. I wanted to try it out with our Confluent Kafka, but they don't support R2, only AWS S3. From what I understand, Pipelines are similar to Firehose, but I would prefer to stay on Cloudflare...

Hi!

Hi! Hypothetical situation: let's say we have many clients sending arrays of events to a stream, and stream capacity is reached. How will one particular batch of events from a client be processed? Will the whole batch be rejected? Or will some events of the batch be accepted? I am trying to compare the stream behavior with the Kinesis PutRecords API, which responds with the list of records accepted and rejected...
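
If it turns out that send() is all-or-nothing per call (the open question above), one client-side mitigation is to split large batches into smaller chunks so a rejection only loses one chunk, which the caller can retry. A hedged sketch under that assumption; the chunk size and helper name are arbitrary:
```
// Sends records in fixed-size chunks and returns the ones whose chunk
// was rejected, roughly emulating Kinesis PutRecords' partial results.
async function sendInChunks(
  pipeline: { send(records: object[]): Promise<void> },
  records: object[],
  chunkSize = 100
): Promise<object[]> {
  const failed: object[] = [];
  for (let i = 0; i < records.length; i += chunkSize) {
    const chunk = records.slice(i, i + chunkSize);
    try {
      await pipeline.send(chunk);
    } catch {
      failed.push(...chunk); // whole chunk rejected; caller can retry these
    }
  }
  return failed;
}
```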

Hi!

Hi! What kind of stream ingest latency can we expect at scale? I played a bit with the HTTP endpoint for small events and am getting latencies in the 500-1300 ms range. Is the same latency expected at scale / when pushing events via a Workers binding once the beta is over? Currently we are pushing events to Kinesis and seeing latency < 100 ms with Smart Placement. And low latency is very important, as we want to acknowledge to the client that the events have been properly ingested (we cannot rely on waitUntil if for some reason some issues occur on the platform)...
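
For reference, the acknowledgement pattern described here, awaiting the write instead of deferring it with waitUntil, looks roughly like the sketch below. EVENTS is a hypothetical binding name and the error handling is illustrative only:
```
export default {
  async fetch(
    request: Request,
    env: { EVENTS: { send(records: object[]): Promise<void> } }
  ): Promise<Response> {
    const events = (await request.json()) as object[];
    try {
      // Await the write so the 200 acknowledges actual ingestion,
      // rather than deferring it via ctx.waitUntil().
      await env.EVENTS.send(events);
      return new Response("accepted", { status: 200 });
    } catch {
      // Tell the client to retry instead of silently dropping events.
      return new Response("ingest failed", { status: 503 });
    }
  },
};
```
With this pattern, ingest latency sits directly on the client's request path, which is exactly why the 500-1300 ms numbers above matter.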

I believe my pipeline (

I believe my pipeline (73bd2c7436274b76ab94148aa617dccb) is dropping some events. I have a Worker that writes the same event to this pipeline and to Workers Analytics. I expected analytics to be less reliable because it uses sampling, but I see all the events in the Analytics dataset, while some are missing from the sink. I also tried sending events directly using the HTTP endpoint, and that's not working either. I think end-to-end traceability during the beta period would be useful.
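
A dual-write setup like the one described, where each event goes to both the pipeline and a Workers Analytics Engine dataset so drops become visible by comparison, might look like this sketch (binding names and the event shape are assumptions; AnalyticsEngineDataset is the type from @cloudflare/workers-types):
```
interface Env {
  EVENTS: { send(records: object[]): Promise<void> }; // Pipelines binding (assumed name)
  ANALYTICS: AnalyticsEngineDataset; // Analytics Engine binding (assumed name)
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const event = { ts: Date.now(), path: new URL(request.url).pathname };
    // Analytics Engine writes are fire-and-forget (writeDataPoint returns void).
    env.ANALYTICS.writeDataPoint({
      blobs: [event.path],
      doubles: [event.ts],
      indexes: ["events"],
    });
    // The pipeline write is awaited, so any rejection surfaces here;
    // events present in the dataset but missing from the sink point at
    // drops happening after acceptance.
    await env.EVENTS.send([event]);
    return new Response("ok");
  },
};
```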

I created a pipeline using the UI. At

I created a pipeline using the UI. At the last step it reported that something went wrong, but gave no details. I clicked the "+ Create Pipeline" button again, and it said the pipeline already exists. I see the pipeline, stream, and sink in the UI, but when I try to query the table, Wrangler says: 40010: iceberg table not found "default.combined_events".
Sink: ae37d2d4e2864b0da3859362d69af79d
Pipeline: 0b90bd168ad4490f8f62fb06d520872a

**Pipeline ID** -

**Pipeline ID** - 8428c7b24c4b44609b11c8bd9319f7b1
**Event production method** - Worker Binding - tesdash-signal-processor
**Sample events** - {"signal":"FAAAAAAAD.....=","received":1759446232076,"session_id":"c985bc9d-d018-4176-ab42-6e04c84e770b"}
**Wrangler version** - 4.41.0
...

Not sure if this is a disconnect between what

Not sure if this is a disconnect between what the interface says and what the intended behaviour is, but when viewing streams, the UI says: "Specify origins that can send cross-origin requests to this stream. Leave empty to allow all origins." But if I leave that blank, I get console errors indicating CORS errors. ...

Invalid partition strings

Also I am playing with custom partitioning strings, and it seems they are sometimes rejected silently, making the pipeline unusable:
%F/%H%M%S%L -> Does not produce any file
%F/%H%M%S -> Works properly...

Invalid JSON output

Hi! I am seeing some invalid JSON output from simple JSON schemas. Schema:
```
{ "fields": [ ...
```

What are the downsides of setting the

What are the downsides of setting the batching interval to 1 second?

Hey 👋

Hey 👋
I'm unable to create streams with dashes in the name (like the example in https://developers.cloudflare.com/pipelines/streams/manage-streams/):
```
➜ wrangler pipelines streams create my-stream --schema-file ./schema.json --http-enabled ...
```

Feedback

Liking it so far. It would be nice if it were possible to configure custom domains for pipelines. Yeah, I know Workers can do this, but for one use case I am potentially throwing billions of events at it a week, and the compute cost of the Workers would be 99% of the total cost. Still, pretty neat stuff. ...

Hello 👋 Pipelines are really great and

Hello 👋 Pipelines are really great and I'm looking forward to this product! I'm encountering a small issue when creating pipelines with Wrangler or the web portal. The first time I create a pipeline, everything is fine. But if I delete it along with all the associated resources (pipeline, stream, sink, and R2 bucket) to start fresh with the same names, I keep getting errors. If I use new names, things go well again. The problem persists even if I wait several hours. Are there any known limit...