How are you connecting to the MySQL DB? There isn't a connection string you can use to connect to a D1 DB externally; access has to go through a Worker or the HTTP API
If DB perf is a requirement, you might want to look into on-prem. As far as I'm aware, D1 offers easy scalability (due to CF having 200 trillion servers, real number btw)
I would note that D1 can (once replication goes live) theoretically be faster than on-prem if you still use Workers. That is, you would have to run and manage many DBs yourself to match the scale/locations of D1
We fully transitioned from MongoDB to D1 yesterday and it's plenty fast! The Python script that runs on our server updates the DB, while our Pages Functions act as the client and return the results on our API. Given that many requests every minute receive the same output, how could we cache the response for, say, one minute so we don't burn row reads?
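One option is the Workers Cache API from inside the Pages Function. A minimal sketch, assuming a GET endpoint and a binding named `env.DB` (the table and query here are made up for illustration):

```ts
// Sketch: cache a D1-backed API response for ~60s at the edge so repeated
// identical requests within that window don't re-run the query.
export const onRequestGet: PagesFunction<{ DB: D1Database }> = async ({ request, env, waitUntil }) => {
  const cache = caches.default;
  const cacheKey = new Request(request.url, request);

  // Serve from the edge cache if a fresh copy is already stored.
  const cached = await cache.match(cacheKey);
  if (cached) return cached;

  // Otherwise hit D1 once and cache the result for 60 seconds.
  const { results } = await env.DB
    .prepare("SELECT * FROM readings ORDER BY ts DESC LIMIT 100") // assumed table/query
    .all();

  const response = new Response(JSON.stringify(results), {
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": "public, max-age=60", // respected by cache.put
    },
  });

  waitUntil(cache.put(cacheKey, response.clone()));
  return response;
};
```

Note the cache is per-colo, so each location pays for one read per minute rather than one per request.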
Hey all, I'm new here. I'm noticing really slow performance using D1 (cold and hot runs) in other countries like Germany and Australia. I'm getting ~1200ms on cold starts and ~700ms on hot. Is this normal for the beta?
Most recent version, I think? Just created the D1 on Oct 1st and I'm testing with a deployment made yesterday. I think it was in us-east, but I'm not sure where to check the region.
It's created in the closest location where D1 is supported to the location you hit when you create it. So if you were in us-east, it would have been created in or around there.
It's via a Worker. I'll have to check the meta. Should I expect there to be any regional or edge caching with D1 in the beta? In latency testing from US-Michigan, Germany, NL, and Australia, I haven't seen faster latency than a single-region RDS micro instance.
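For checking the meta, something like this rough sketch works from a Worker; the exact fields available (e.g. which instance served the query) have varied across D1 versions, so treat it as illustrative:

```ts
// Sketch: log D1's query metadata (timing etc.) to see how the query ran.
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    const { results, meta } = await env.DB.prepare("SELECT 1 AS ok").all();

    // `meta` typically includes timing info such as `duration`;
    // some versions also report which instance served the query.
    console.log(JSON.stringify(meta));

    return new Response(JSON.stringify({ results, meta }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```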
Ah, got it, that makes sense for now. I'd suggest adding a note about this workaround to the D1 docs, since it wasn't clear to me and I was about to roll out an app.
Hey Matt, I'll bump this message with another "use case", or more accurately, a reason to implement it: ORMs and libraries. Lots of them rely on transactions existing, and db.batch doesn't fully replace them, since you can't split a batch up and execute business logic in between queries (see the sketch below).
Do you have sql transactions on the roadmap somewhere?
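To illustrate the gap, here's a sketch contrasting D1's real batch API with the interactive-transaction shape ORMs expect; the table, column names, and the `beginTransaction()` API below are hypothetical, not actual D1 APIs:

```ts
// db.batch() vs. an interactive transaction (illustrative only).
export async function transfer(env: { DB: D1Database }) {
  // db.batch(): every statement is prepared up front and executed together,
  // so there is no way to run application logic between them.
  await env.DB.batch([
    env.DB.prepare("UPDATE accounts SET balance = balance - ?1 WHERE id = ?2").bind(50, "a"),
    env.DB.prepare("UPDATE accounts SET balance = balance + ?1 WHERE id = ?2").bind(50, "b"),
  ]);

  // What ORMs typically want (NOT supported by D1 today; hypothetical API):
  //
  // const tx = await env.DB.beginTransaction();
  // const row = await tx.prepare("SELECT balance FROM accounts WHERE id = ?1").bind("a").first();
  // if (row.balance < 50) {          // business logic between queries
  //   await tx.rollback();
  // } else {
  //   await tx.prepare("UPDATE accounts SET balance = balance - 50 WHERE id = 'a'").run();
  //   await tx.commit();
  // }
}
```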
anyone run into issues where migrations list works when using --local but not when applying to prod? i just get empty output when running against prod (no errors). trying to migrate a new database for the first time
Oh, D1 isn’t actually on the edge just yet then? I had assumed the workers would be querying from the closest D1 instances, making for very fast reads. Isn’t that the main value prop of d1?
I’m not seeing anywhere on the website where it states that the read replicas are disabled. I was looking to launch Monday so this is terrible news for me. Anyone have info on this?
Would be great to see some work done on wrangler validation generally, especially around naming conventions. One of my team members created a table in the dashboard with a hyphen in its name.
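For anyone wondering why that's a problem, a quick sketch (my own example, not from this thread): SQLite parses an unquoted hyphen as subtraction, so every query against that table has to quote the identifier.

```ts
// Sketch: querying a hyphenated table name requires quoting it everywhere.
export async function readHyphenTable(db: D1Database) {
  // Works only because "my-table" is quoted; `SELECT * FROM my-table` is a syntax error.
  return db.prepare(`SELECT * FROM "my-table" LIMIT 10`).all();
}
```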
I think people expect D1 to behave more like Turso, where DB reads and writes are made only to a local edge instance and are asynchronously sent to the primary region later. This keeps writes and reads fast for the worker, since it doesn't have to wait on the primary region (outside of cold-start syncs).
Client applications can connect directly to a replica for read and write operations, but any writes are automatically forwarded to the primary.
I get that it’s beta, but I wasn’t expecting the main value prop to be missing at the moment. I mean… I could have kept my Postgres database and just added caching, there’s no difference right?