if you give your D1 DBs a prefix like `D1_`, you could do something like `Object.keys(env).filter(key => key.startsWith('D1_'))`
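A minimal sketch of that idea, assuming a Worker `env` object where the D1 bindings share a `D1_` name prefix (the binding names `D1_USERS` / `D1_ORDERS` here are made-up examples, not real bindings):

```javascript
// Stand-in for a Worker's env object; in a real Worker these values
// would be D1Database bindings configured in wrangler.toml.
const env = {
  D1_USERS: {},   // hypothetical D1 binding
  D1_ORDERS: {},  // hypothetical D1 binding
  MY_KV: {},      // some other binding we want to skip
};

// Pick out only the bindings whose names start with the shared prefix.
const d1Names = Object.keys(env).filter((key) => key.startsWith('D1_'));

console.log(d1Names); // ['D1_USERS', 'D1_ORDERS']
```

You could then map over `d1Names` to run the same query against each database.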
Hi! I have a question about the result object returned from `stmt.all()` and `.run()`. Is it possible that they return `{ success: false, ... }` without raising an exception?
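One defensive pattern, assuming the documented D1 result shape with a `success` flag: handle both a thrown exception and a `success: false` result the same way. `fakeStmt` below is just a mock standing in for a prepared D1 statement, not the real API:

```javascript
// Mock prepared statement returning the D1-style result shape.
const fakeStmt = {
  async all() {
    return { success: true, results: [{ id: 1 }] };
  },
};

// Treat a success:false result as a failure even if nothing was thrown.
async function safeAll(stmt) {
  const result = await stmt.all(); // may itself throw on query errors
  if (!result.success) {
    throw new Error('D1 query reported success: false');
  }
  return result.results;
}
```

With this wrapper, callers only need one `try/catch` path regardless of how the failure surfaces.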
Hi there, I'm quite interested in D1, but of course find the 100MB limit quite restrictive. Obviously that's an alpha restriction and I'm not concerned about timelines, but somewhere here it was said this is due to a technical limitation, because D1 loads the entire SQLite DB into memory.
My question is whether that (loading into memory) might change in the future? Because if not, it seems like D1 could only ever serve very small applications (or cost a fortune, paying for GBs or even TBs of RAM...).
Yes, sorry, I'm aware that you're aware. My question is whether it will change to not load the entire DB into RAM, so that large DBs could be possible and affordable. If so, I'm happy to work within the alpha and beta limits. Otherwise, there's no point in me dedicating time to this if it won't be possible/affordable to use D1 in production.
Ok. I suppose I'll just keep an eye on this then. What I'm curious about is not what the limit will evolve to be, but whether it will stay tied to RAM (expensive and size-limited) vs SSD (cheap and effectively unlimited in size). If there aren't currently any plans to consider moving out of RAM, I can't justify testing D1.
I don't think that would work for my architecture/application. And again, my issue is with the data being stored in expensive RAM: even if the architecture were split like you suggest, it would still be orders of magnitude more expensive than SSD.