So I did some _actual_ benchmarks and this really doesn't make sense (serialisation should take tens of ms). I went back through the Discord history, and every time someone has complained about D1 latency, it's either a temporary spike affecting everyone, or it's never responded to by a Cloudflare employee.
Is there any knowledge/work behind the scenes on this? I'd be perfectly happy with "it's [a network layer | unoptimised code | something else] and we know about it" just to get some confidence. This is from APAC, via either the REST API or the Workers SDK, against a warm DB (requests seconds apart) with no concurrent writes during the read, and I see the same performance characteristics through both services.
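For context, the benchmark is roughly the shape below (a minimal sketch, not my actual code: the `DB` binding name and the query are placeholders for my real schema):

```ts
// Minimal sketch of the timing loop, deployed as a Worker and hit from APAC.
// `DB` is a placeholder D1 binding name; the query stands in for the real one.
export interface Env {
  DB: D1Database;
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const t0 = Date.now();
    // Simple warm read, no concurrent writes in flight.
    const { results } = await env.DB.prepare("SELECT * FROM items LIMIT 50").all();
    const ms = Date.now() - t0;
    return Response.json({ rows: results.length, ms });
  },
} satisfies ExportedHandler<Env>;
```

(`Date.now()` only advances across I/O in Workers, but since the measured call is awaited I/O, that's enough to show the gap.)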

