How to overcome the limit of 4 simultaneous connections in Workers?
Hi guys, I'm trying to run 4+ subrequests at the same time in Workers to speed things up, but it seems like 4 or 5 is CF's hard limit. Currently I have to batch my Promises in groups of 3 to guarantee that the code doesn't crash. Is there any way to overcome this? Which config do I need to change? Thanks
21 Replies
Should be higher than that
https://developers.cloudflare.com/workers/platform/limits/#simultaneous-open-connections
"You can open up to six connections simultaneously for each invocation of your Worker. The connections opened by the following API calls all count toward this limit."
Still, if you need a ton, Workers might not be the best pick. Maybe something worth throwing into Containers once they leave beta
Hmm, if it's 6 then I guess calls into KV and R2 are also counted.
Currently Containers are a bit rough to use, because I have to manage the instances manually. Hope Containers leave beta quickly and add a load-balancing feature.
it's everything yea, the docs page has full details on the limits
- the fetch() method of the Fetch API
- get(), put(), list(), and delete() methods of Workers KV namespace objects
- put(), match(), and delete() methods of Cache objects
- list(), get(), put(), delete(), and head() methods of R2
- send() and sendBatch() methods of Queues
- opening a TCP socket using the connect() API
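For the original question, rather than fixed batches of 3 you can cap how many requests are in flight at once and keep pulling new ones as each finishes. A minimal TypeScript sketch, where the cap of 5 and the example URLs are just placeholders I picked, not anything prescribed:
```ts
// Sketch: cap in-flight subrequests instead of running fixed-size batches.
// MAX_CONCURRENT stays under the documented 6-connection-per-invocation limit,
// leaving headroom for any KV/R2/Cache calls that also count against it.
const MAX_CONCURRENT = 5;

async function fetchAllWithLimit(urls: string[]): Promise<string[]> {
  const results: string[] = new Array(urls.length);
  let next = 0;

  // Each runner pulls the next URL as soon as its previous request finishes,
  // so at most MAX_CONCURRENT connections are ever open at once.
  async function runner(): Promise<void> {
    while (next < urls.length) {
      const i = next++;
      const res = await fetch(urls[i]);
      // Reading the body to completion also releases the connection.
      results[i] = await res.text();
    }
  }

  await Promise.all(Array.from({ length: MAX_CONCURRENT }, () => runner()));
  return results;
}

export default {
  async fetch(_request: Request): Promise<Response> {
    // Hypothetical upstream endpoints, purely for illustration.
    const urls = Array.from({ length: 20 }, (_, i) => `https://example.com/item/${i}`);
    const bodies = await fetchAllWithLimit(urls);
    return new Response(`fetched ${bodies.length} items`);
  },
};
```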
Btw, should I run heavy CPU-crunching jobs like video rendering on Containers? It seems like CF only provides 1/2 vCPU at most, which doesn't help much with libavcodec; and if CF does CPU pinning, then I effectively only have one physical CPU
If the thing you're being limited by is mass KV stuff, the API has a bulk write endpoint:
https://developers.cloudflare.com/api/resources/kv/subresources/namespaces/methods/bulk_update/
not as convenient as the binding itself but
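Rough sketch of what calling that endpoint from a Worker could look like, assuming you put ACCOUNT_ID, NAMESPACE_ID, and API_TOKEN in vars/secrets (those names are mine, and the exact payload shape should be double-checked against the docs page above):
```ts
// Sketch: bulk-write key/value pairs through the REST API instead of the
// KV binding, so one subrequest covers many keys. ACCOUNT_ID, NAMESPACE_ID,
// and API_TOKEN are assumed to be configured as vars/secrets on the Worker.
interface Env {
  ACCOUNT_ID: string;
  NAMESPACE_ID: string;
  API_TOKEN: string;
}

async function bulkPut(env: Env, pairs: { key: string; value: string }[]): Promise<void> {
  const url =
    `https://api.cloudflare.com/client/v4/accounts/${env.ACCOUNT_ID}` +
    `/storage/kv/namespaces/${env.NAMESPACE_ID}/bulk`;

  const res = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${env.API_TOKEN}`,
      "Content-Type": "application/json",
    },
    // Body is a JSON array of { key, value } objects; the API caps how many
    // pairs fit in one call, so check the docs link above for the limit.
    body: JSON.stringify(pairs),
  });

  if (!res.ok) {
    throw new Error(`bulk write failed: ${res.status} ${await res.text()}`);
  }
}
```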
yeah that's my best bet for now. I'm using it to bulk put and batch delete
Kinda weird that CF supports that on the KV API but not in the KV binding itself.
It's one of the use cases they've talked about, but it's indeed rather limited currently. They've talked about adding higher specs in the future
I ran yabs on the 1/2 vcpu plan near the launch, score's not great:

oh... so it seems like CF doesn't enforce CPU pinning, so the workload misses the CPU cache a lot.
Okay, let's hope Containers mature so I can move some of my work from Fargate/RunPod to Containers
I have some more regular testing of Workers Builds single-core Geekbench, which is built on Containers. This is a full core, so divide it by half to (roughly) get what the 1/2 vCPU container tops out at

They have a few higher spec machines in circulation which score better but it's luck of the draw, eh

hopefully we'll start seeing more of those
so i guess it's still okay for tasks like image transformation.
yea, I mean they're using the same machines they normally use for the proxy, so they're naturally built for super high concurrency rather than fast single-threaded tasks
Btw, do you guys have any plans to roll out ARM64 instances at a cheaper price?
worth noting I'm not a cf employee, green names/champs are just volunteers who help out and generally know a fair bit
AWS bandwidth is killing me XD
The team hasn't talked about ARM at all for Containers, and since these machines are borrowed from their normal proxy fleet, it'd probably mean moving the proxy workload to ARM too. Would be interesting; something we'd probably see in one of the server-generation blog posts first
Let's hope they keep improving it so I can finally move my code to CF. The amount of time I've spent on CF Workers since the start (when they didn't even have Miniflare) is countless... hope they release some kind of certification like AWS does XD
They do have a certification program but it's only for partners currently, been around for a bit
oh... CF is nice, but not many folks in VN invest their time in it
thanks again! Have a nice day!