Loving the analytics engine 🙂 When can we expect pricing? Really want to scale it up more, to billions...

How well does filtering on a double column work compared to the timestamp column? I'm ingesting Logpush batches into Analytics Engine, but they are delayed by up to a couple of minutes, so I need to use the timestamp specified in each log entry.

Alternatively, if you need to be very accurate, maybe you can filter using both columns? You could increase the range on the timestamp one slightly, if needed.

That sounds like the best solution for now. I know for sure the real timestamp won't be larger than the datapoint's, and, unless there's an issue with Logpush, the datapoint will be written within a couple of minutes of the event.
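A minimal write-side sketch of this setup, not from the thread: each Logpush entry's own timestamp is stored in a double, which the queries later in the thread read back (there as double12, in UNIX nanoseconds). The LOGS binding name and the EdgeStartTimestamp field are assumptions, and the handler assumes uncompressed newline-delimited JSON for brevity:

    // Hypothetical Logpush ingest Worker: store each entry's own timestamp
    // in a double, so queries can filter on event time rather than write time.
    import type { AnalyticsEngineDataset } from "@cloudflare/workers-types";

    interface Env {
      LOGS: AnalyticsEngineDataset; // hypothetical binding name
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        // Logpush delivers batches of NDJSON, possibly minutes after the events.
        const lines = (await request.text()).split("\n").filter(Boolean);
        for (const line of lines) {
          const entry = JSON.parse(line);
          env.LOGS.writeDataPoint({
            indexes: ["spectrum"],
            // UNIX nanoseconds; the Nth element of doubles becomes column doubleN,
            // so this lands in double1 here (double12 in the queries below).
            doubles: [entry.EdgeStartTimestamp],
          });
        }
        return new Response("ok");
      },
    };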


[…] is there a good way to do it?

Is there a bad way to do it?

With a blob it works as expected (image #1), but a double breaks the labelling (image #2).
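One possible workaround, not suggested in the thread: Grafana derives series labels from string fields, so a value that doesn't need numeric aggregation can be written as a blob instead of a double. A sketch, with the function name and index value as placeholders:

    // Hypothetical workaround: write originPort as a blob (string) so Grafana
    // can use it as a series label; keep doubles for numeric measures.
    import type { AnalyticsEngineDataset } from "@cloudflare/workers-types";

    function recordEvent(dataset: AnalyticsEngineDataset, originPort: number): void {
      dataset.writeDataPoint({
        indexes: ["honeypot"],
        blobs: [String(originPort)], // becomes blob1, usable as a label
        doubles: [Date.now() * 1e6], // event time in UNIX nanoseconds
      });
    }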


We recommend the Altinity plugin for ClickHouse for querying Workers Analytics Engine from Grafana.
The Cloudflare API in Pages Functions may also have been affected, returning intermittent 500 errors.

Thanks for the report @thomas_sc
A fix has been implemented and we are monitoring the results.
Is HAVING on the roadmap? Grafana adds FORMAT JSON /* alerts query */; to each query in the alerts engine, and right now AE can't handle it. Thanks!

There's now a denoflare ae-proxy that can be run on a local machine and be a more compatible endpoint for the current grafana plugin situation. Install denoflare from the latest commit, or run it without any installation:

deno run -A https://raw.githubusercontent.com/skymethod/denoflare/6a5c6813715cd66016d584e98a0d3ad7d335236d/cli/cli.ts ae-proxy --port 8123 --basic myuser:mypass --account-id CFACCOUNTID --api-token CFAPITOKEN

What is the env.ANALYTICS_DATASET type? The docs don't specify this, and I'm trying to get to compile-able TS code for my corresponding wrangler.toml changes (config at the end of this thread). I assume it's AnalyticsEngineDataset but want to be sure...

What about the --remote flag for workers?

-- The original timestamp is stored in double12 as UNIX nano
SELECT
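-- ns -> seconds, floor to $interval-second buckets, then ms (Grafana expects ms)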
(intDiv(intDiv(double12, 1000000000), $interval) * $interval) * 1000 as ts,
sum(_sample_interval) AS events
FROM honeypot_spectrum_events
WHERE
-- Use the indexed timestamp to retrieve rows written between $from and $to + 15 minutes
timestamp >= toDateTime($from) AND timestamp < toDateTime($to + 900) AND
-- Use the original timestamp to retrieve events that occurred between $from and $to
double12 >= ($from * 1000000000) AND double12 < ($to * 1000000000)
GROUP BY ts ORDER BY ts

-- The original timestamp is stored in double12 as UNIX nano
SELECT
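-- ns -> seconds, floor to $interval-second buckets, then ms (Grafana expects ms)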
(intDiv(intDiv(double12, 1000000000), $interval) * $interval) * 1000 as ts,
double9 as originPort,
sum(_sample_interval) AS events
FROM honeypot_spectrum_events
WHERE
-- Use the indexed timestamp to retrieve rows written between $from and $to + 1 day
timestamp >= toDateTime($from) AND timestamp <= toDateTime($to + 86400) AND
-- Use the original timestamp to retrieve events that occurred between $from and $to
double12 >= ($from * 1000000000) AND double12 <= ($to * 1000000000)
GROUP BY ts, originPort ORDER BY ts

name = "[REDACTED]"
main = "dist/main.js"
compatibility_date = "2023-07-25"
workers_dev = false
analytics_engine_datasets = [
{ binding = "VIEW_ANALYTICS", dataset = "[REDACTED]_DEV_VIEW_ANALYTICS" },
{ binding = "COMMENT_ANALYTICS", dataset = "[REDACTED]_DEV_COMMENT_ANALYTICS" },
]
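
For the typing question above: the guess in the thread matches the type name in @cloudflare/workers-types. A minimal sketch for the config above, assuming that package is installed:

    import type { AnalyticsEngineDataset } from "@cloudflare/workers-types";

    // Matches the analytics_engine_datasets bindings in the wrangler.toml above.
    interface Env {
      VIEW_ANALYTICS: AnalyticsEngineDataset;
      COMMENT_ANALYTICS: AnalyticsEngineDataset;
    }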