Concurrency limits

Ya... the setup is 2 buckets. One is for the public stuff (and it is indeed accessed directly on a public domain and optimized appropriately with Cache Rules and Tiered Cache); that's for things intended to be public, like user avatars. The private bucket, accessed via the API, is the one he's having a problem with. Objects there go through the API because the ability to view/download them is user-permission based: the application checks the permissions of the logged-in user and then passes the object through as appropriate. He isn't doing a crazy amount of traffic or anything... he's at ~19M Class B operations this month across all his buckets. The issue is specifically with the private bucket, since every object there is passed through via the API. It's nowhere remotely close to 1,000 concurrent read operations even across all objects/users; I'd guess it peaks around 10ish.
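For anyone skimming the thread, here is a minimal sketch of the permission-gated pass-through pattern described above, written in Python with boto3 against R2's S3-compatible API rather than the OP's actual PHP/XenForo code. The bucket name, credentials, and the `user_can_view()` check are placeholders, not anything from the real app.

```python
# Sketch only: app checks the logged-in user's permission, then streams the
# object from the private R2 bucket via the S3-compatible API.
import boto3

R2_ENDPOINT = "https://<account-id>.r2.cloudflarestorage.com"  # placeholder account ID

s3 = boto3.client(
    "s3",
    endpoint_url=R2_ENDPOINT,
    aws_access_key_id="<access-key-id>",
    aws_secret_access_key="<secret-access-key>",
)

def serve_private_attachment(user, key, bucket="private-attachments"):
    """Return the object's bytes if the user may read it, else None."""
    if not user_can_view(user, key):   # hypothetical permission check
        return None
    obj = s3.get_object(Bucket=bucket, Key=key)
    return obj["Body"].read()          # a real app would stream this in chunks
```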
8 Replies
Sid · 3y ago
It’s been a while since we tried this out, but you should be able to comfortably get around 15K reads/s and about 6K writes/s on the same key. Like Kian said, we use Durable Objects as our metadata store, so there isn’t a hard rate limit we’ve set. We have found that the few people who do end up hitting these limits have alternatives, such as sharding your data across multiple buckets (each bucket gets a DO, so these limits apply per bucket), but that’s significant complexity that can often be eliminated entirely by the cache. Your numbers look far lower than that, though. Are you sure you got a 429 and not a 503?
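For context, here is a rough sketch of the bucket-sharding idea Sid mentions (Python, with hypothetical bucket names): since the per-bucket limits come from each bucket's Durable Object, a stable hash of the key can spread load across several buckets.

```python
# Sketch of sharding keys across buckets so the per-bucket (per-DO) limits
# apply to each shard independently. Bucket names are made up for illustration.
import hashlib

SHARD_BUCKETS = [f"attachments-shard-{i}" for i in range(4)]  # hypothetical names

def bucket_for_key(key: str) -> str:
    """Pick a shard bucket from a stable hash of the object key."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return SHARD_BUCKETS[digest[0] % len(SHARD_BUCKETS)]

# Usage: s3.get_object(Bucket=bucket_for_key(key), Key=key)
```

As Sid notes, though, this adds real operational complexity that a cache in front of the bucket usually makes unnecessary.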
digitalpoint (OP) · 3y ago
Was most definitely a 429. The error (and JSON) coming back from Cloudflare was: 429 Too Many Requests: {"Code":"ServiceUnavailable","Message":"Reduce your rate of simultaneous reads on the same object."} Now that I know Durable Objects are used on the backend for metadata requests, and that Durable Objects are single-threaded, it made more sense to avoid metadata requests than to try to figure out why a few concurrent metadata requests were failing. Given that, I ended up rolling out a change to our users that avoids the metadata request completely (it wasn't mission critical, rather a sanity check for the abstracted filesystem): https://xenforo.com/community/resources/app-for-cloudflare%C2%AE.8750/update/44961/
Made a change to XenForo's attachment data entity to be more efficient (normally XenForo checks whether an attachment exists before making an additional call to actually get it). This saves an API call for every attachment view because we don't need to check whether the attachment exists; we already know it does because we have a record of it in the attachment data.
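In Python/boto3 terms (not the OP's PHP implementation), the change described above amounts to dropping the HEAD-then-GET pattern and issuing the GET directly, treating a missing key as the not-found path. The endpoint, credentials, and bucket name below are placeholders.

```python
# Sketch: skip the existence check (HEAD) and just GET, halving the API calls
# and the concurrent reads on the same object for every attachment view.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account-id>.r2.cloudflarestorage.com",
    aws_access_key_id="<access-key-id>",
    aws_secret_access_key="<secret-access-key>",
)

def fetch_attachment(key, bucket="private-attachments"):
    try:
        return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    except ClientError as err:
        if err.response["Error"]["Code"] in ("NoSuchKey", "404"):
            return None  # record exists in the app, but the object is gone
        raise
```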
Sid · 3y ago
Yeah, we have some tricks to get higher DO throughput, but the 429 you're receiving is super weird if you're only doing ~10 rps. Hey, do you have a timestamp for this 429? We've been seeing this with a few other folks, so we're trying to build up a story.
digitalpoint (OP) · 3y ago
One of the reports I received from a user had this date/time from their error log:
ErrorException: Cloudflare: Client error: HEAD https://id-xxx.r2.cloudflarestorage.com/attachments/attachments/1159/1159937-3c44ba1e1c54d0114d1f646babd887c8.data resulted in a 429 Too Many Requests response / null src/XF/Error.php:77 Generated by: Marc Aug 23, 2023 at 2:51 AM {"Code":"ServiceUnavailable","Message":"Reduce your rate of simultaneous reads on the same object."}
The date/time is going to be in local time of the user viewing it (which I don't know off the top of my head). I could ask them if it would help.
Sid · 2y ago
@frederik_b would you be able to look into this if you have a few minutes? Timestamps would help, Jason!
Ohyzd · 2y ago
@Frederik, I've had this happen during a database diff backup on 2023-10-20 06:30:11.164 CEST. The backup lasted around 10 seconds and there was no other traffic to the bucket. According to the GraphQL stats, before the 429 was returned there had been 14 list-object, 7 put-object, 12 get-object, and 11 head-object requests.
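Nothing in the thread prescribes this, but the usual client-side mitigation for short bursts of 429s like the backup case above is a bounded retry with exponential backoff and jitter. A Python/boto3 sketch with made-up retry parameters (it assumes an `s3` client like the ones in the earlier sketches):

```python
# Sketch: retry a request a few times on HTTP 429 before surfacing the error.
import random
import time

from botocore.exceptions import ClientError

def with_retries(call, max_attempts=5, base_delay=0.25):
    """Run `call()` and retry on HTTP 429 responses with backoff + jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ClientError as err:
            status = err.response.get("ResponseMetadata", {}).get("HTTPStatusCode")
            if status != 429 or attempt == max_attempts - 1:
                raise
            # Exponential backoff: 0.25s, 0.5s, 1s, 2s ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Usage: body = with_retries(lambda: s3.get_object(Bucket=bucket, Key=key))["Body"].read()
```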
