I don't believe it is, although I wish it were. In case it helps, here is a client API I wrote against it - same as you, just based on poking through wrangler
For Cloudflare Containers the included usage is 375 vCPU-minutes/month ... That means the small "dev" instance with 1/16 vCPU can run for 6,000 minutes/month, which is roughly 4 days. Is that math correct?
If I set "sleepAfter" to "0", this basically means that every call to the container will be a 2-3 second cold start, right?
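For context, my understanding is that sleepAfter is just a field on the Container class - here's a rough sketch assuming the @cloudflare/containers API, with the binding name, instance name and port as placeholders:

```ts
import { Container, getContainer } from "@cloudflare/containers";

// Rough sketch, assuming the @cloudflare/containers API; MY_CONTAINER,
// "singleton" and port 8080 are placeholders, not anything from this thread.
export class MyContainer extends Container {
  defaultPort = 8080;  // port the server inside the container listens on
  sleepAfter = "30s";  // the idle window being discussed above
}

export default {
  async fetch(request: Request, env: any): Promise<Response> {
    // Route every request to one named instance; it sleeps between calls
    // once sleepAfter has elapsed with no traffic.
    const instance = getContainer(env.MY_CONTAINER, "singleton");
    return instance.fetch(request);
  },
};
```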
My service gets a call every 20 seconds. So if I set the timeout to 30 seconds, it will be active all the time and burn through the quota, hitting the limit after about 4 days... is that correct?
Or will it still respond reasonably fast when it has been sleeping?
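For my own sanity, here's the back-of-envelope math as a snippet (assuming the 375 vCPU-minutes/month figure above, a 30-day month, and that sleeping instances don't consume the quota):

```ts
// Included usage and the smallest ("dev") instance size from the thread above.
const includedVcpuMinutes = 375;
const devInstanceVcpu = 1 / 16;

// Wall-clock minutes a dev instance can stay awake per month:
const awakeMinutes = includedVcpuMinutes / devInstanceVcpu; // 6,000 min
const awakeDays = awakeMinutes / (60 * 24);                 // ≈ 4.2 days

// Always-on scenario: a call every 20 s with sleepAfter = 30 s means the
// instance never actually sleeps, so the whole month counts as awake time.
const monthMinutes = 30 * 24 * 60;                       // 43,200 min
const monthVcpuMinutes = monthMinutes * devInstanceVcpu; // 2,700 vCPU-min >> 375

console.log({ awakeMinutes, awakeDays: awakeDays.toFixed(1), monthVcpuMinutes });
```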
An AWS Lambda has a nasty cold start of 2-3 seconds too, but after that it stays "warm" for about 10 to 15 minutes... and I only pay for the actual usage time, not the time it's idling.
Did I get the difference between an AWS Lambda and a Cloudflare Container right?
So the news about capnweb dropped, and it's powering wrangler's local<>remote services in dev mode, i.e. it can communicate with Cloudflare services remotely over the Cap'n Web protocol, so things like KV, Worker bindings, Queues etc. are no longer restricted to Workers. I guess in theory, if one digs into the source code for wrangler, we could use the same code to allow direct communication from containers. My question is: are you planning to drop a few SDKs for this (Node initially, then Java, .NET, Go, Rust)? And secondly, is this on the short-term horizon, perhaps this Birthday Week?
Cap'n Web is a new open source, JavaScript-native RPC protocol for use in browsers and web servers. It provides the expressive power of Cap'n Proto, but with no schemas and no boilerplate.
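For anyone who hasn't looked yet, the client side from the announcement looks roughly like this - the endpoint and method names are placeholders for whatever the server exposes, not an official bindings-from-containers SDK:

```ts
import { newWebSocketRpcSession } from "capnweb";

// Connect to a Cap'n Web server over WebSocket (hypothetical endpoint).
const api = newWebSocketRpcSession("wss://example.com/api");

// Calls look like ordinary async method calls; "hello" is whatever the
// server-side RpcTarget happens to expose, nothing Cloudflare-specific.
const result = await api.hello("World");
console.log(result);
```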
@kian | Containers can you raise this internally? https://www.cloudflare.com/ips/ is out of date. For example, I am currently making outbound container egress from 104.28.154.x
I'm hoping this is only used for container egress, as it's a pretty huge oversight if these are proxy IPs as well (our IaC fetches this IPv4 file to allow CF <-> origin traffic)
I understand that Containers are backed by Durables…
However, my question doesn’t matter anymore, because according to the docs the pricing for Containers also includes Durables and Workers, which seems pretty confusing. There are too many parts to understand the total cost.
Given my recent experiences with R2 and the strict "no refunds" policy, this could very well be the next thing I mess up and end up with unexpected/high bills. So I decided that I will stick strictly to Workers for everything I can do with them, and have them call an AWS Lambda for everything else.
Cloudflare Containers are coming this June. Run new types of workloads on our network with an experience that is simple, scalable, global and deeply integrated with Workers.
On AWS I used Java images in the past, and for better cold start times I’m moving to native images using Kotlin/Native.
Yes, it’s not only the pricing, but that’s most of it. The other thing (as said above) is that they can’t idle without cost. That’s the deal breaker for me, too.
I’d like to pay only for the time the Container actually does something, purely based on the time it takes. Having different prices based on instance size is fine, of course. Paying extra for Durables seems unnecessary.
Oh, that's interesting, I didn't realize Kotlin/Native was ready for server-side work. Ktor is there now, I see, but you've got no R2DBC or JDBC? I guess if you've got an HTTP client you can talk to all of AWS's REST HTTP APIs, including DynamoDB, but I imagine you have zero SDKs, so you're hand-rolling your own high-level client code for the dependencies?
Personally, I decided that unless it's something that needs to live in Cloudflare, it's honestly easier to have an actual VPS running a container vs using Cloudflare Containers, especially at this stage.
AWS is migrating their Kotlin SDK to Native, and I expect that to be usable in the near future. Right now I use the APIs via HTTP if I need to. Here is a small project I built: https://github.com/StefanOltmann/steam-login-helper - it doesn’t use AWS APIs. I still need to figure out how to use them, but that’s the next thing I’m going to do.
Why does Containers pricing look so costly, considering it also requires Durable Objects? I compared this with Cloud Run (instance-based billing):
- vCPU: GCP $0.000018/sec vs CF $0.000020/sec
- Memory: GCP $0.000002/sec vs CF $0.0000025/sec
Plus, Cloudflare Containers run on Durable Objects + Workers, which is another extra cost. So overall this looks at least 2x costlier.
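Quick sketch of how those quoted rates compare for a hypothetical always-on 1 vCPU / 1 GiB instance, taking the rates above as per-GiB-second and ignoring free tiers and any per-request charges:

```ts
// Rough monthly comparison using the per-second list prices quoted above,
// for one always-on instance with 1 vCPU and 1 GiB of memory.
const secondsPerMonth = 30 * 24 * 3600; // 2,592,000

const gcp = { vcpuPerSec: 0.000018, memGiBPerSec: 0.000002 };
const cf  = { vcpuPerSec: 0.000020, memGiBPerSec: 0.0000025 };

const monthly = (p: { vcpuPerSec: number; memGiBPerSec: number }) =>
  secondsPerMonth * (1 * p.vcpuPerSec + 1 * p.memGiBPerSec);

console.log("GCP ~$", monthly(gcp).toFixed(2)); // ≈ $51.84
console.log("CF  ~$", monthly(cf).toFixed(2));  // ≈ $58.32

// The "at least 2x" figure above comes from adding Workers + Durable Objects
// charges on top of the raw vCPU/memory rates, not from the rates themselves.
```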
I'm already using Workers, and my use case for Containers is to host a small container (never sleeping) for MongoDB Atlas, for fast requests and MongoDB's built-in connection pooling (since MongoDB requires a Durable Object for connection pooling).
I find pricing hard to compare here. A fair comparison would include all the other costs a Container has - so Workers, Durables and whatever else is included.
Maybe if/when they put up a pricing calculator that lets me calculate the costs for my scenario: a call approx. every 20 seconds that takes ~250 ms to compute, on a machine with 128 or 256 MB RAM that idles between the calls.
On AWS I can translate this into the two metrics they have: absolute call count and GB-seconds of compute time.
For Containers this is much harder right now.
Also, I don't know whether Cloud Run charges for idle time or not. That makes a difference, too.
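Here's roughly how I'd translate that scenario into those two AWS-style metrics (assuming a 30-day month; no provider prices baked in, just the two numbers I'd feed a calculator):

```ts
// My scenario from above: a call every 20 s, ~250 ms of work each, 256 MB RAM.
const callsPerMonth = (30 * 24 * 3600) / 20;  // 129,600 requests
const secondsPerCall = 0.25;
const memoryGiB = 256 / 1024;                 // 0.25 GiB (use 0.125 for 128 MB)

const activeSeconds = callsPerMonth * secondsPerCall; // 32,400 s of actual work
const gbSeconds = activeSeconds * memoryGiB;          // 8,100 GB-seconds

// A container that never sleeps would instead be billed for the whole month
// (2,592,000 s), i.e. roughly 80x the active time in this scenario.
console.log({ callsPerMonth, activeSeconds, gbSeconds });
```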
Reposting, sorry, because I didn't notice there is a dedicated containers channel.
Do Cloudflare Containers support a "fire-and-forget" use case? I'm looking at an ffmpeg encoding scenario where I call containerStub.start({entryPoint: args}) without awaiting the start method.
I tried a few things, but it seems my container gets killed (exit code 143) before it even completes.
How long is the ffmpeg encoding taking? So you want it to run longer than the request to the container, i.e. hit /submitjob, return something like { ok: true, queuedId: guid } and keep going?
Basically yeah, start the container with an assigned jobId. On my grunty desktop a 4K video can be encoded into HLS 720p, 1080p and 2160p in about 3 minutes. The video is about 8 GB; I would assume it takes a bit longer on the highest container type, but I'm unsure.
You can probably just adjust sleepAfter to something high and be done with it. Are you sure it's not the process inside the container that's causing the shutdown?
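Something like this on the container side might be all that's needed - a rough sketch only: the /submitjob route, jobId field and ffmpeg arguments are placeholders, and it assumes sleepAfter on the Worker side is set longer than the longest expected encode:

```ts
// Container-side sketch (Node inside the container image): accept the job,
// kick off ffmpeg in the background, and answer immediately. The route,
// jobId field and ffmpeg args are made up; the important part is that the
// HTTP response doesn't wait for the encode to finish.
import { createServer } from "node:http";
import { spawn } from "node:child_process";
import { randomUUID } from "node:crypto";

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/submitjob") {
    const jobId = randomUUID();

    // Fire and forget: don't await the process, just log when it exits.
    const ffmpeg = spawn("ffmpeg", [
      "-i", "/data/input.mp4",
      /* ...HLS rendition args elided... */
      "/data/out.m3u8",
    ]);
    ffmpeg.on("exit", (code) => console.log(`job ${jobId} exited with code ${code}`));

    res.writeHead(202, { "content-type": "application/json" });
    res.end(JSON.stringify({ ok: true, jobId }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080);
```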
Actually, this is what I'm saying: Cloudflare Containers are overall 2x costlier compared to AWS or Cloud Run. Idle time doesn't matter here because I calculated based on per-second billing.
Possibly more than 2x. It looks like Cloudflare charges for any container egress, so that's hitting D1/KV/Queues REST endpoints at CF too. Also, whilst you can get 1 TB free on AWS for your first container by smashing the traffic through CloudFront, in CF you have no control over placement yet; I'm getting provisioned in AU, ffs, the most expensive place on the planet, with a 500 GB allowance there. Of course this is different at scale, but for testing it's crazy, and there's no clarity...
I had these same questions. I was hoping to use Apache Ballista on Containers for distributed compute on the edge in the future, but it communicates between the Scheduler and Executor using gRPC, so would I need to rewrite these services to use Cap'n Web?
We initially set our public limits for concurrent containers pretty low and don’t (yet) have large instance types. This was because we wanted to make sure the system could handle load, and the plan has always been that we’ll move these up with time.
Tomorrow, we’ll be bumping concurrent resource limits considerably. We’ll still bump them up further in the future, but it’ll be a big step up (over 10x where we’re at today). If you need more than that, please reach out!
On instance types, we’ll also add a larger one, but it won’t be huge (8 GiB, 1 vCPU, 12 GB disk). More to come in the future, but we’ll need to be a bit more gradual here to make sure we can place well at scale.
Bad news: Nothing happening this week. Good news: There will be changes soon (think weeks, not quarters).
Changes are happening on two fronts:
- Firstly, we don’t want people having to think about Durable Object pricing at all if you’re using them for a Container. Most of it will just "go away", the exception being DO storage (if and only if you’re using it).
- Secondly, overall cost will come down. Aside from the DO pricing going away, we’ve got another change that’ll bring cost down.
I have to be a bit vague here right now, unfortunately (with pricing I don’t want to make any promises before we finalize things). To set expectations, we still won't be the cheapest place to run a container, but the net effect on price will be non-trivial.