Unable to call Streaming Response from FastAPI in production

My StreamingResponse with FastAPI using Hypercorn works in development but not in production on Railway. The deploy logs show Prisma debug output but stop midway through the function with no error. On the frontend it errors with a 504 because it just times out. Is there anything unique I should be aware of with streaming responses on Railway?
Project ID: 272293fe-814d-4a92-9d85-82c242f56daa
My API route I am calling is attached
Solution:
I figured it out. Disconnecting from the Prisma Query Engine would just freeze the server. I switched from Hypercorn to Uvicorn and now it works!
Percy (4w ago)
Project ID: 272293fe-814d-4a92-9d85-82c242f56daa
Brody (4w ago)
this is just SSE right?
Simon  📐🛠
Yes, it's via an API call from a Next.js server
Brody (4w ago)
no issues with SSE on railway - https://utilities.up.railway.app/sse
are you sending SSEs to a client's browser or? need a little more context here
Simon  📐🛠
Yes, sorry, I am sending it to a client's browser. They make an API call from the Next.js backend to Railway for this 'gen_query'.
Brody (4w ago)
where does fastapi come into play with next and a client's browser
Simon  📐🛠
A call from next/api is sent to FastAPI via:

  const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ messages: [{ role: 'user', interest_id: lotInterestAccess.interest.id }] })
  })
the whole route.ts is as follows:
  import { NextResponse, NextRequest } from 'next/server'
  import { OpenAIStream, StreamingTextResponse } from 'ai'

  export const maxDuration = 300;
  export const dynamic = 'force-dynamic'; // always run dynamically

  // POST /api/
  export async function POST(req: NextRequest) {

    const { lotInterestAccess } = await req.json();

    try {
      // const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:5000' : 'https://ideally-api.up.railway.app'}/ideal/zoneinfo?lotInterestId=${lotInterestAccess.interest.id}&zoneType=${lotInterestAccess.interest.lot.zoneType}&zoneDescription=${lotInterestAccess.interest.lot.zoneDescription}`)
      const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
        method: 'POST',
        headers: {
          'Accept': 'application/json',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ messages: [{ role: 'user', interest_id: lotInterestAccess.interest.id }] })
      })

      return new StreamingTextResponse(fetchResponse.body!);
    } catch (error) {
      console.error(error);
      return NextResponse.json({ error: 'Request failed' }, { status: 500 });
    }
  }
Brody (4w ago)
for testing, cut out the nextjs app and call the public domain of the fastapi service
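One way to do what Brody suggests is to call the FastAPI service's public domain directly and read the streamed body chunk by chunk. A stdlib-only sketch; the URL and payload mirror the route above, and the `interest_id` value is hypothetical:

```python
# Sketch: POST directly to the FastAPI service and consume the stream,
# bypassing the Next.js layer entirely.
import json
import urllib.request

def iter_chunks(fp, chunk_size=1024):
    """Yield raw chunks from a file-like object until it is exhausted."""
    while chunk := fp.read(chunk_size):
        yield chunk

def stream_post(url, payload, chunk_size=1024):
    """POST JSON and yield the response body as it streams in."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        yield from iter_chunks(resp, chunk_size)

# Example usage (hypothetical interest_id):
# for chunk in stream_post(
#     "https://ideally.up.railway.app/api/parcel/genquery",
#     {"messages": [{"role": "user", "interest_id": "abc123"}]},
# ):
#     print(chunk.decode(), end="", flush=True)
```

If this call also stalls, the problem is in the FastAPI service itself rather than the Next.js proxy layer.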
Simon  📐🛠
Okay, will do. I have tested several different ways to make API calls, but it seems once it hits one error or warning it stalls and I can't call it again... I thought it was maybe a Hypercorn thing.
Brody (4w ago)
this is no doubt a code or config issue, it's just a question of where
Simon  📐🛠
What is the best way of logging on Railway during API calls?
Brody (4w ago)
json structured logs would be best
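A minimal sketch of what JSON-structured logging could look like in Python using only the stdlib (the logger name and fields are hypothetical); each record is emitted as one JSON object per line:

```python
# Sketch: JSON-structured logging with Python's stdlib logging module.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # Serialize each record as a single-line JSON object.
        return json.dumps({
            "level": record.levelname.lower(),
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)  # stdout, not stderr
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("api")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("stream started")
```

Writing to stdout instead of stderr also keeps informational lines from being flagged as errors the way bare print-to-stderr output is.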
Simon  📐🛠
okay, I'll try it out. thanks! How come debugging in Deploy Logs is highlighted red with a level: "error" and really no other information besides this? I get that this means it's printing to stderr
Brody (4w ago)
are you doing json logging?
Simon  📐🛠
a lot of it is print(). Should I use 'structlog', or is there a preference on Railway?