Cloudflare Developers

I created a pipeline using the UI. At

I created a pipeline using the UI. At the last step it reported that something went wrong, but gave no details. I clicked the "+ Create Pipeline" button again, and it said the pipeline already exists. I can see the pipeline, stream, and sink in the UI, but when I try to query the table, wrangler says: 40010: iceberg table not found "default.combined_events". Sink: ae37d2d4e2864b0da3859362d69af79d Pipeline: 0b90bd168ad4490f8f62fb06d520872a

**Pipeline ID** -

Pipeline ID - 8428c7b24c4b44609b11c8bd9319f7b1
Event production method - Worker Binding - tesdash-signal-processor
Sample events - {"signal":"FAAAAAAAD.....=","received":1759446232076,"session_id":"c985bc9d-d018-4176-ab42-6e04c84e770b"}
Wrangler version - 4.41.0
...
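
For context, a minimal sketch of the Worker-binding production path, assuming a pipeline/stream binding named PIPELINE that exposes the documented send() batch method (the poster's binding is tesdash-signal-processor; the event shape mirrors the sample above):

```ts
// Minimal sketch (assumption: a pipeline/stream binding named PIPELINE in
// the Worker's wrangler config; send() accepts an array of JSON records).
export interface Env {
  PIPELINE: { send(records: object[]): Promise<void> };
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const event = {
      signal: "FAAAAAAAD.....=", // base64 payload, truncated as in the report
      received: Date.now(),
      session_id: crypto.randomUUID(),
    };
    await env.PIPELINE.send([event]); // batches are arrays, even for one event
    return new Response("ok");
  },
};
```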

Not sure if this is a disconnect in what

Not sure if this is a disconnect between what the interface says and what the intended behaviour is, but when viewing streams, the UI says: "Specify origins that can send cross-origin requests to this stream. Leave empty to allow all origins." But if I leave that blank, I get console errors indicating CORS failures. ...
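
For reference, a sketch of the kind of browser-side request that triggers the preflight in question; the endpoint URL is a hypothetical placeholder, not a real Pipelines hostname:

```ts
// A cross-origin JSON POST from a web page forces an OPTIONS preflight; if
// the stream's CORS settings don't return Access-Control-Allow-Origin for
// the page's origin, the browser logs the errors described above.
const endpoint = "https://<stream-id>.ingest.example.com"; // placeholder

async function sendEvent(event: object): Promise<void> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify([event]),
  });
  if (!res.ok) throw new Error(`ingest failed: ${res.status}`);
}
```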

Invalid partition strings

Also, I am playing with custom partitioning strings, and it seems they are sometimes rejected silently, making the pipeline unusable:
%F/%H%M%S%L -> does not produce any file
%F/%H%M%S -> works properly
...
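
For context, a sketch of how those two patterns would expand, assuming strftime-style tokens (%L, a milliseconds token in some strftime dialects but not in standard C strftime, may be the part that gets rejected silently):

```ts
// Illustrative expansion only; this is not the Pipelines implementation.
function expand(pattern: string, d: Date): string {
  const p = (n: number, w = 2) => String(n).padStart(w, "0");
  return pattern
    .replace("%F", `${d.getUTCFullYear()}-${p(d.getUTCMonth() + 1)}-${p(d.getUTCDate())}`)
    .replace("%H", p(d.getUTCHours()))
    .replace("%M", p(d.getUTCMinutes()))
    .replace("%S", p(d.getUTCSeconds()))
    .replace("%L", p(d.getUTCMilliseconds(), 3));
}

const t = new Date("2025-10-03T14:25:30.123Z");
console.log(expand("%F/%H%M%S", t));   // "2025-10-03/142530"    -> works
console.log(expand("%F/%H%M%S%L", t)); // "2025-10-03/142530123" -> no file produced
```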

Invalid JSON output

Hi! I am seeing some invalid JSON output from simple JSON schemas. Schema: ```{ "fields": [...

what are the downsides of setting the

what are the downsides of setting the batching interval to 1 second?
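
One concrete trade-off worth spelling out (a back-of-the-envelope illustration, not an official figure): a short flush interval means many small output files, and per-object overhead starts to dominate both writes and downstream reads:

```ts
// Rough arithmetic: a 1-second batch interval can emit up to one object per
// second per partition, so a day of steady traffic yields ~86,400 small
// files, and query engines pay per-file list/open costs on all of them.
const flushIntervalSeconds = 1;
const filesPerDayPerPartition = (24 * 60 * 60) / flushIntervalSeconds;
console.log(filesPerDayPerPartition); // 86400
```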

Hey 👋

Hey 👋
I'm unable to create streams with dashes in the name (like the example in https://developers.cloudflare.com/pipelines/streams/manage-streams/): ``` ➜ wrangler pipelines streams create my-stream --schema-file ./schema.json --http-enabled...

Feedback

Liking it so far. It would be nice if it were possible to configure custom domains for pipelines. Yeah, I know Workers can do this, but in one use case I am potentially throwing billions of events a week at it, and the Worker compute would be 99% of the total cost. Still, pretty neat stuff. ...
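
For anyone landing here, a sketch of the workaround the poster alludes to: a thin proxy Worker on a custom domain, with a placeholder ingest URL (the cost concern above is precisely that this Worker runs once per request):

```ts
// Thin pass-through Worker on a custom domain, forwarding each request to
// the pipeline's HTTP ingest endpoint. The URL is a placeholder.
export default {
  async fetch(request: Request): Promise<Response> {
    const ingest = "https://<pipeline-id>.ingest.example.com"; // placeholder
    return fetch(ingest, {
      method: "POST",
      headers: request.headers,
      body: request.body,
    });
  },
};
```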

Hello 👋 Pipelines are really great and

Hello 👋 Pipelines are really great and I'm looking forward to this product! I'm encountering a small issue when creating pipelines with Wrangler or the web portal. The first time I create a pipeline, everything is fine. If I delete it, as well as all the associated resources (pipeline, stream, sink, and R2 bucket), to start fresh with the same names, I keep getting errors. If I use new names, things go well again. The problem persists even if I wait for several hours. Are there any known limit...

Hi!

Hi! Let’s say I need 3-4 streams (as per the current limits, which are 5MB/s) to handle spikes in load:
- Is it possible to direct the corresponding pipelines to a single sink?
- If the sink is an R2 bucket, can I ensure, using custom partitioning, that files are written in lexical order? (I had issues with legacy Pipelines where files would be created in R2 out of order.)
Use case: events ingestion using Cloudflare Pipelines -> R2 -> ClickHouse ClickPipes S3 integration with continuous ingest (requires lexical ordering of files)...
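
On the lexical-ordering point, a sketch of the property being asked for: zero-padded, time-ordered key names sort lexically in the same order they were written (the naming scheme here is illustrative, not a Pipelines API):

```ts
// Zero-padded timestamp prefixes make string order match time order, which
// is what continuous ingest based on lexical ordering relies on.
function objectKey(d: Date, seq: number): string {
  const p = (n: number, w = 2) => String(n).padStart(w, "0");
  const ts = `${d.getUTCFullYear()}${p(d.getUTCMonth() + 1)}${p(d.getUTCDate())}` +
    `/${p(d.getUTCHours())}${p(d.getUTCMinutes())}${p(d.getUTCSeconds())}`;
  return `${ts}-${String(seq).padStart(6, "0")}.json.gz`;
}

console.log(objectKey(new Date("2025-10-03T09:05:07Z"), 1)); // "20251003/090507-000001.json.gz"
```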

Hi everyone, I'm trying the new "Create

Hi everyone, I'm trying the new "Create Pipeline" wizard. I'm getting errors while the stream is being created (part of the final step). I've been using the example schema from here: https://developers.cloudflare.com/pipelines/streams/manage-streams/ It seems to fail validation on fields of type "list", as I can see the same error even when I try to create a pipeline/stream with a single field of type "list". Anyone experiencing something similar?...

hi, I'm reading gzip file produced by

Hi, I'm reading a gzip file produced by a pipeline and saved into R2, using boto3. But calling s3.get_object on that object causes a FlexibleChecksumError.
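
Likely relevant context (an assumption worth verifying): recent AWS SDK releases enable request/response integrity checksums by default, which R2 responses can trip; the usual workaround is setting both to "when required". A sketch with the JavaScript SDK; in boto3 the analogous botocore Config fields are request_checksum_calculation and response_checksum_validation:

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

// Sketch: relax the default checksum behaviour so R2 GETs don't fail
// validation. Account ID, credentials, bucket, and key are placeholders.
const s3 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: { accessKeyId: "<KEY>", secretAccessKey: "<SECRET>" },
  requestChecksumCalculation: "WHEN_REQUIRED",
  responseChecksumValidation: "WHEN_REQUIRED",
});

const obj = await s3.send(
  new GetObjectCommand({ Bucket: "<bucket>", Key: "<object>.json.gz" }),
);
```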

Hello everyone,

Hello everyone, I am trying to follow along with this tutorial: https://developers.cloudflare.com/pipelines/tutorials/query-data-with-motherduck/. I am stuck at the pipeline creation stage. When I run npx wrangler pipelines create clickstream-pipeline --r2-bucket clickstream-data --compression none --batch-max-seconds 5, I get the following error: ``` ⛅️ wrangler 4.27.0 (update available 4.28.0)...

We are using pipelines in two different

We are using pipelines in two different customer Cloudflare accounts. Both are standard default deployments with a Worker sending JSON data to the pipeline. In both cases Pipelines ends up leaving aborted multipart data files in R2, which leads to permanent data loss. Anyone have any ideas?

Hi @kagitac I am trying to POC a switch

Hi @kagitac, I am trying to POC a switch from Kinesis Firehose to CF Pipelines, initially using the HTTP endpoint but also eventually moving from Lambda as the entrypoint to a Worker... So far, I'm seeing pretty sluggish ingestion performance, ranging from 400 to 1200ms, both when using the HTTP API and a Worker binding. With the Worker binding I'm also seeing some "Internal Operation Failed" errors (which I've added retries for, but that makes the process even slower). Is this expected performance, or could I be doing something funky? ...
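
For the retry part, a sketch of jittered exponential backoff around a binding send(); the binding shape is assumed, and the error is rethrown after the last attempt:

```ts
// Retry with exponential backoff plus jitter. The { send } shape mirrors
// the assumed pipeline binding; swap in whatever error matching fits.
async function sendWithRetry(
  pipeline: { send(records: object[]): Promise<void> },
  records: object[],
  attempts = 3,
): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      await pipeline.send(records);
      return;
    } catch (err) {
      if (i === attempts - 1) throw err; // out of attempts: surface the error
      const backoffMs = 2 ** i * 100 * (0.5 + Math.random()); // ~100, 200, 400ms
      await new Promise((resolve) => setTimeout(resolve, backoffMs));
    }
  }
}
```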

Can we assign a custom URL to the

Can we assign a custom URL to the pipeline endpoint?

Hi Cole, yes sure:

Hi Cole, yes sure: bb5b7d2a19c4455faa46a808919b50aa thanks for the help!...

hi @Matt Silverlock, re: dynamic

hi @Matt Silverlock, re: dynamic partitioning – the documentation states "By default, Pipelines partition data by event date and time. This will be customizable in the future." Can you share any rough timelines for when you expect to ship this? It is pretty critical for multi-tenant workloads, so I am looking to assess whether I can give Pipelines a shot or should go down another route 🙂 thx!...

Linking Cloudflare R2 as a source - Docs...

My goal is to store events in R2 for something like PostHog to ingest. It seems to support CSV and Parquet file formats? https://posthog.com/docs/cdp/sources/r2 Would be great to collab on this!...