also, it's highly unlikely. It's a few-hundred-KB file, HTML with badly formed JSON inserted. It's messy, but not large and not complex. I believe I measured the parser footprint early in development; it was a couple of MB if I recall correctly.
The only limit that’s specifically 1,000 is the number of subrequests per invocation, which afaik should just surface as a rejected promise on the subrequest rather than a resources-exceeded warning. That warning should exclusively be CPU or memory.
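For what it's worth, a sketch of what that would look like (the exact error message isn't guaranteed; this only illustrates the failure arriving as a rejection on the fetch, not as a warning):

```ts
// Hypothetical sketch: hitting the subrequest limit should reject the
// fetch promise, so it's catchable per-request.
export default {
  async fetch(request: Request): Promise<Response> {
    try {
      const upstream = await fetch("https://example.com/api");
      return new Response(await upstream.text());
    } catch (err) {
      // A "too many subrequests" style failure would land here.
      return new Response(`subrequest failed: ${err}`, { status: 502 });
    }
  },
};
```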
maybe my (or Rust's worker runtime's) error handling is leaking memory. Dunno how to test for that; needless to say, the worker works okay locally. Strange though, since this code is WIP and littered with `unwrap()`s, which turn every error into a `panic!()`
It sounds like you are trying to fetch your Worker from your browser to add to a queue, from a webpage at http://localhost:8080
The point of CORS is to stop other websites/origins from reading responses from your origin, preventing scripts from accessing resources they shouldn't have access to.
"no-cors" isn't a magical escape hatch to get around this, it's a very restrictive mode to make your request safe. Stripping most headers from the request, restricting content-type, restricting http request type, and making you unable to read the response body/status/headers: https://evertpot.com/no-cors/
The answer is to change your worker to respond with the required CORS headers, for example: Access-Control-Allow-Origin: http://localhost:8080

Your real website won't be on localhost:8080; for local testing you could use a proxy that adds the CORS headers for you. For production use, though, if you want to keep your script on a different website/origin than your website, you will need to add the required CORS headers. More info: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

This has nothing to do with Queues or even Workers themselves, just with cross-origin requests, so if you have any more questions etc., please use #coding-help or another channel that fits better.
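A minimal sketch of the Worker side (the origin value and the handler body are placeholders for local testing):

```ts
// Sketch: responding with CORS headers from a Worker, including the
// OPTIONS preflight. http://localhost:8080 is assumed for local testing;
// swap in your real origin for production.
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "http://localhost:8080",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type",
};

export default {
  async fetch(request: Request): Promise<Response> {
    // Answer the preflight before doing any real work.
    if (request.method === "OPTIONS") {
      return new Response(null, { headers: CORS_HEADERS });
    }
    // ...handle the request (e.g. enqueue something)...
    return new Response("ok", { headers: CORS_HEADERS });
  },
};
```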
Queues is best at async work. If you're relying on returning a live response to a client, it may not make sense, and you'll also need to balance your upstream API's rate limits (in this case, OpenAI's) against your own user demand.
doing something like fetch -> upgrade WS -> push WS to DO & enqueue work item w/ DO id -> ... queue ... -> eventual return via DO WS -> close WS works decently imo. You can reuse the WS/DO as needed and it's decently efficient. It'll be a bit more expensive depending on how you implement it (DOs with open WebSockets are never evicted, so you pay for the duration), but it's possible to do, and you can even implement timeouts and that sort of thing.
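Roughly, a sketch of that flow. All the names here (JOB_DO, JOBS, the message shape, the JobSocket class) are made up for illustration, not a canonical pattern:

```ts
// Sketch of the fetch -> WS -> DO -> Queue -> DO -> WS round trip.

interface Env {
  JOB_DO: DurableObjectNamespace;
  JOBS: Queue;
}

export default {
  // 1. Client connects; park the WebSocket in a DO and enqueue the work.
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.headers.get("Upgrade") !== "websocket") {
      return new Response("expected websocket", { status: 426 });
    }
    const id = env.JOB_DO.newUniqueId();
    const stub = env.JOB_DO.get(id);
    // Tag the work item with the DO id so the consumer can route back.
    await env.JOBS.send({ doId: id.toString(), payload: "work" });
    // Hand the upgrade to the DO, which holds the socket open.
    return stub.fetch(request);
  },

  // 2. Queue consumer does the slow work, then calls back into the DO.
  async queue(batch: MessageBatch<{ doId: string; payload: string }>, env: Env) {
    for (const msg of batch.messages) {
      const result = `done: ${msg.body.payload}`; // the expensive part
      const stub = env.JOB_DO.get(env.JOB_DO.idFromString(msg.body.doId));
      await stub.fetch("https://do/result", { method: "POST", body: result });
      msg.ack();
    }
  },
};

// 3. The DO holds the WebSocket and relays the eventual result.
export class JobSocket {
  private ws?: WebSocket;

  async fetch(request: Request): Promise<Response> {
    if (request.headers.get("Upgrade") === "websocket") {
      const pair = new WebSocketPair();
      pair[1].accept();
      this.ws = pair[1];
      return new Response(null, { status: 101, webSocket: pair[0] });
    }
    // Result posted back by the queue consumer.
    this.ws?.send(await request.text());
    this.ws?.close(1000, "done");
    return new Response("delivered");
  }
}
```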
From reading the changelog, it looks like non-ack’d messages will only be retried when the consumer crashes (throws an unhandled error). I’ll add some retry()s to my consumer so it try/catches everything
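Something like this, presumably (the message shape and process() helper are placeholders):

```ts
// Sketch of a consumer that try/catches each message and retries
// explicitly on failure, instead of crashing the whole batch.
export default {
  async queue(batch: MessageBatch<unknown>) {
    for (const msg of batch.messages) {
      try {
        await process(msg.body); // your actual work goes here
        msg.ack();
      } catch (err) {
        // Failed messages get retried individually rather than
        // relying on an unhandled throw to retry the batch.
        msg.retry();
      }
    }
  },
};

async function process(body: unknown) {
  // placeholder for the real handler
}
```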
What do you mean by task? A message? As for processing, do you mean how long it takes the Queue broker to ingest the message? If so, that happens as soon as you send the message, plus whatever latency there is between the producer worker and the queue broker.
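i.e. from the producer's perspective it's just the send() resolving (MY_QUEUE is an assumed binding name):

```ts
// Sketch: the broker has ingested the message once send() resolves —
// only producer -> broker latency applies.
interface Env {
  MY_QUEUE: Queue;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    await env.MY_QUEUE.send({ url: request.url, ts: Date.now() });
    return new Response("enqueued");
  },
};
```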
Sorry, haven't touched Queues as much, so not 100% up on the terms. I meant, once a producer creates a task, how long for it to be piped into a consumer?
That depends on a lot of variables! How large is your backlog? A big backlog will mean a longer consumer delay. In the case where your Queue has low throughput and you have configured a long max_wait_time, it might take up to the length of that delay.
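For reference, those knobs live in the consumer config in wrangler.toml. Field names here are from my recollection of the docs (max_batch_timeout being, afaik, the wrangler name for the max wait time), so double-check them:

```toml
# Sketch of a Queues consumer config in wrangler.toml.
[[queues.consumers]]
queue = "my-queue"     # placeholder queue name
max_batch_size = 10    # deliver as soon as 10 messages are buffered...
max_batch_timeout = 5  # ...or after 5 seconds, whichever comes first
max_retries = 3
```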