deep-jade (OP) · 2w ago

Server Routes are quite slow on Railway with bun preset.

Simple test API with 2 serial DB calls taking a total of 2 ms, but the HTTP logs show over 200 ms total duration. Has anyone faced anything like this? This is my API route:
import { env } from "@/env/server";
import { db } from "@/lib/db";
import { createFileRoute } from "@tanstack/react-router";
import { json } from "@tanstack/react-start";
import { SQL } from "bun";
import { sql } from "drizzle-orm";

const directClient = new SQL(env.DATABASE_URL);

export const Route = createFileRoute("/api/test")({
  server: {
    handlers: {
      GET: async () => {
        const start = performance.now();
        const result = await db.execute(sql`SELECT 1 as id`);
        console.log(
          "Total time Drizzle ORM + Bun SQL:",
          (performance.now() - start).toFixed(2),
          "ms",
        );
        const start2 = performance.now();
        const directResult = await directClient`SELECT 1 as id`;
        console.log(
          "Total time direct Bun SQL:",
          (performance.now() - start2).toFixed(2),
          "ms",
        );
        return json({ result, directResult });
      },
    },
  },
});
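One way to make that gap visible per request is to report the handler's own duration back to the client. Below is a minimal sketch, not the original route: the "/api/test-timed" path is made up for illustration, and it assumes the handler may return a plain Response (the `json()` helper used above builds one anyway). The Server-Timing value can then be compared against the total duration Railway logs for the same request.

// Sketch: expose the handler's own duration via a Server-Timing header,
// so application time can be read from curl -v or browser devtools and
// compared against Railway's logged total duration.
import { db } from "@/lib/db";
import { createFileRoute } from "@tanstack/react-router";
import { sql } from "drizzle-orm";

export const Route = createFileRoute("/api/test-timed")({
  server: {
    handlers: {
      GET: async () => {
        const start = performance.now();
        const result = await db.execute(sql`SELECT 1 as id`);
        const appMs = performance.now() - start;

        // Anything beyond appMs in the observed response time was added
        // outside the handler (proxy, edge routing, TLS, network).
        return Response.json(
          { result },
          { headers: { "Server-Timing": `app;dur=${appMs.toFixed(1)}` } },
        );
      },
    },
  },
});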
11 Replies
deep-jade (OP) · 2w ago
@Manuel Schiller Am I doing anything incorrectly? Is there something I could try to make this better?
stormy-gold · 2w ago
How does it behave locally?
deep-jade (OP) · 2w ago
Total time Drizzle ORM + Bun SQL: 260.94 ms. The DB is in California and I'm in India; the API response was just 278 ms, so there's very minimal API overhead locally.
stormy-gold · 2w ago
Try deploying with Node instead and see if it is any different? No idea how Railway works.
deep-jade (OP) · 2w ago
Will do and let you know
deep-jade (OP) · 2w ago
It's the same with Node as well.
deep-jade (OP) · 2w ago
Their own Bun function offering is returning the same kind of response times, so it's better to hear back from them, I guess. Probably not something to do with TanStack. https://x.com/SivaramPg/status/1988882997630611743
Sivaram (@SivaramPg) on X: "@thisismahmoud What's the total duration supposed to be in HTTP logs? The durations feel wrong for this simple @bunjavascript function. Am I doing something wrong here?"
ratty-blush · 2w ago
Yeah, I'm also seeing this, but I have a feeling it's a Railway routing/edge proxy issue rather than Start/Bun. When I use curl from my San Francisco DigitalOcean server to the us-west Railway deployment: timing the execution of the server route, I see 2-5 ms, as it's just fetching a row from the DB over the private network. Looking at the deployment HTTP logs, I'm seeing a total duration of 25-30 ms with an upstream duration of 5-10 ms, but curl reports 70-100 ms, so there's an additional 40-80 ms or so being introduced somewhere.

It gets even worse when I put Cloudflare in front: total duration and upstream duration skyrocket to 130 ms, while curl reports 550 ms for a cache miss. Then it gets worse again when I request from my local machine. I'm in New Zealand and the server is in us-west; curl reports over 1 second, and the total/upstream durations report anywhere from 180 ms to 550 ms.

The main problem here is that Railway has limited edge regions. I get routed to their Singapore region, which then requests to us-west for a total round trip of 300-400 ms, whereas if I ping my DigitalOcean server I get 160 ms, because there's a direct connection between New Zealand and California.

I made a thread on the Railway support section yesterday, but I suspect there isn't much they can do: https://station.railway.com/questions/high-http-response-times-w-poor-routing-0062c88d
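For anyone without a second server to curl from, roughly the same comparison can be made with a few serial fetches from any machine running Bun. A rough sketch, where the URL is a placeholder for your own deployment:

// Rough latency sampler (placeholder URL): a handful of serial requests,
// timing each full round trip, so a one-off cold start can be told apart
// from a consistent per-request overhead added by the edge/proxy.
const url = "https://your-app.up.railway.app/api/test"; // placeholder

const samples: number[] = [];
for (let i = 0; i < 10; i++) {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so the whole response is timed
  samples.push(performance.now() - start);
}

samples.sort((a, b) => a - b);
const median = samples[Math.floor(samples.length / 2)];
console.log(
  `min ${samples[0].toFixed(1)} ms, median ${median.toFixed(1)} ms, max ${samples[samples.length - 1].toFixed(1)} ms`,
);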
deep-jade (OP) · 2w ago
@Kairu I saw the same thing. The closest region to me is Singapore, and when I migrate all my infra there, it's single-digit latency for the APIs. But when I move it back to California, it's tremendously high.
ratty-blush · 2w ago
Haven't had a reply from Railway yet, but I've switched to using a Bun production server like here: https://tanstack.com/start/latest/docs/framework/react/guide/hosting#production-server-with-bun

I noticed JS assets weren't being compressed when using the Railway domain, so now my main bundle has dropped from 1000 kB to 300 kB, which is a solid improvement. But there's no improvement to the 2-second root documents and no improvement to the random latency spikes, which still makes no sense.
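For reference, here is a rough sketch of that kind of Bun production server with gzip for static assets. The build output paths and the shape of the imported request handler are assumptions, not the actual TanStack Start build layout, so check the hosting guide linked above for the real entry points:

// Sketch of a Bun production server that gzips client assets itself,
// since the Railway domain was observed above to serve JS uncompressed.
import { join } from "node:path";

// Hypothetical import: whatever fetch-style handler
// ((req: Request) => Promise<Response>) your Start build exposes.
import { handler } from "./dist/server/handler";

const CLIENT_DIR = "./dist/client"; // placeholder client build output

Bun.serve({
  port: Number(process.env.PORT ?? 3000),
  async fetch(req) {
    const url = new URL(req.url);
    const file = Bun.file(join(CLIENT_DIR, url.pathname));

    // Serve hashed build assets directly, gzipping when the client accepts it.
    if (url.pathname.startsWith("/assets/") && (await file.exists())) {
      const headers = {
        "content-type": file.type,
        "cache-control": "public, max-age=31536000, immutable",
      };
      const acceptsGzip = (req.headers.get("accept-encoding") ?? "").includes("gzip");
      if (acceptsGzip) {
        const gzipped = Bun.gzipSync(new Uint8Array(await file.arrayBuffer()));
        return new Response(gzipped, {
          headers: { ...headers, "content-encoding": "gzip" },
        });
      }
      return new Response(file, { headers });
    }

    // Everything else (documents, server routes) goes to the app handler.
    return handler(req);
  },
});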
deep-jade (OP) · 2w ago
I guess it's best to wait and see when they get to this. Most of the time, traffic from the US is what matters, but having such a massive jump in response times for the rest of the world is not really acceptable. For now I might just stick with Vercel Nitro deploys for anything meaningful; for testing I would just place it close to me in Singapore.
