When using CF Workers + JS, I was able to stream large files out of a worker without hitting CPU limits via a ReadableStream. I'm now trying to do something similar via CF Workers + Rust and am having no such luck. As a shot in the dark, is there anything obvious that I'm doing wrong here?
use console_error_panic_hook;
use futures_util::TryStreamExt;
use worker::{event, Context, Env, HttpRequest, Response, Result};

const S3_URL: &str = "https://overturemaps-us-west-2.s3.us-west-2.amazonaws.com/release/2025-05-21.0/theme=buildings/type=building/part-00006-0df994ca-3323-4d7c-a374-68c653f78289-c000.zstd.parquet";

#[event(fetch)]
async fn fetch(_req: HttpRequest, _env: Env, _ctx: Context) -> Result<Response> {
    console_error_panic_hook::set_once();

    let client = reqwest::Client::new();
    let response = client
        .get(S3_URL)
        .send()
        .await
        .map_err(|e| worker::Error::RustError(e.to_string()))?;

    let mut headers = worker::Headers::new();
    headers.append("Content-Type", "application/octet-stream")?;
    headers.append("Transfer-Encoding", "chunked")?;

    let stream = response
        .bytes_stream()
        .map_err(|e| worker::Error::RustError(e.to_string()));

    Response::from_stream(stream).map(|resp| resp.with_headers(headers))
}
Seeing ✘ [ERROR] Error: Worker exceeded CPU time limit.
alukach (OP), 3 months ago:
For example, this JS equivalent avoids the CPU timeout:
const s3Url =
  'https://overturemaps-us-west-2.s3.us-west-2.amazonaws.com/release/2025-05-21.0/theme=buildings/type=building/part-00006-0df994ca-3323-4d7c-a374-68c653f78289-c000.zstd.parquet';

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const response = await fetch(s3Url);

    if (!response.ok) {
      throw new Error(`Failed to fetch from S3: ${response.statusText}`);
    }

    // Create headers for the response
    const headers = new Headers();
    headers.set('Content-Type', 'application/octet-stream');
    headers.set('Transfer-Encoding', 'chunked');

    return new Response(response.body, { headers });
  },
} satisfies ExportedHandler<Env>;
Okay, ChatGPT states that, given:
let stream = response
    .bytes_stream()
    .map_err(|e| worker::Error::RustError(e.to_string()));

Response::from_stream(stream)
    .map(|resp| resp.with_headers(headers))
what actually happens under the hood is:
1. reqwest on WASM uses web_sys::ReadableStream to fetch the body,
2. bytes_stream() converts that into a Rust Stream<Item = Bytes> in WASM,
3. Response::from_stream immediately converts it back into a ReadableStream for the Worker runtime.
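To make that concrete, here's the same fragment annotated with those step numbers (just my reading of the explanation above, not new code):

let stream = response
    .bytes_stream()  // steps 1 + 2: reqwest's WASM backend reads the body's
                     // web_sys::ReadableStream and re-exposes it as a Rust
                     // Stream<Item = Result<Bytes, reqwest::Error>>
    .map_err(|e| worker::Error::RustError(e.to_string()));

// step 3: from_stream wraps the Rust stream in a brand-new JS ReadableStream,
// so every chunk is polled back through WASM on its way out, which is
// presumably where the CPU time goes.
Response::from_stream(stream).map(|resp| resp.with_headers(headers))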
It correctly points out that the following doesn't hit the CPU limits:
use worker::{event, Fetch, Url, Response, Result};

// Same URL constant as in the snippet above.
const S3_URL: &str = "https://overturemaps-us-west-2.s3.us-west-2.amazonaws.com/release/2025-05-21.0/theme=buildings/type=building/part-00006-0df994ca-3323-4d7c-a374-68c653f78289-c000.zstd.parquet";

#[event(fetch)]
pub async fn fetch(_req: worker::Request, _env: worker::Env, _ctx: worker::Context) -> Result<Response> {
    let url = Url::parse(S3_URL)?;
    // This Fetch::Url call hands you back a worker::Response
    // whose body is already a native ReadableStream.
    let resp = Fetch::Url(url).send().await?;
    Ok(resp)
}
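If the Content-Type override from the original attempt still matters, it can presumably be layered onto the Fetch-based response with the same with_headers call used earlier; something like this untested sketch, reusing the constant and header names from above:

use worker::{event, Fetch, Headers, Response, Result, Url};

const S3_URL: &str = "https://overturemaps-us-west-2.s3.us-west-2.amazonaws.com/release/2025-05-21.0/theme=buildings/type=building/part-00006-0df994ca-3323-4d7c-a374-68c653f78289-c000.zstd.parquet";

#[event(fetch)]
pub async fn fetch(_req: worker::Request, _env: worker::Env, _ctx: worker::Context) -> Result<Response> {
    let url = Url::parse(S3_URL)?;
    // The body stays the native ReadableStream handed back by Fetch;
    // only the response headers are replaced.
    let mut headers = Headers::new();
    headers.append("Content-Type", "application/octet-stream")?;
    let resp = Fetch::Url(url).send().await?;
    Ok(resp.with_headers(headers))
}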
I'm actually using object_store, which uses reqwest under the hood. Is there a way to make reqwest streams work well with the worker response? (I'm also going to look into whether I can swap out reqwest for Fetch in object_store.)