Browsers periodically send protocol-level pings, so I'd be surprised if this was the reason for premature disconnection.
Can you `wrangler tail` the DO? Do you see any errors in your dashboard?

Huh, and you're saying the client receives 1006?

That's right.
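For context, close code 1006 is the "abnormal closure" code from RFC 6455: the socket died without a close frame, so the browser synthesizes 1006 locally and no server-supplied reason is available. A small client-side sketch (the handler name is mine, not from the thread):

```typescript
// Maps a WebSocket CloseEvent's code/reason to a log message.
// Code 1006 (RFC 6455 abnormal closure) never carries a reason, because it
// is generated locally when the connection drops without a close frame.
function describeClose(code: number, reason: string): string {
  if (code === 1006) return "abnormal closure (no close frame received)";
  return `closed with code ${code}${reason ? `: ${reason}` : ""}`;
}

console.log(describeClose(1006, ""));     // abnormal closure (no close frame received)
console.log(describeClose(1000, "done")); // closed with code 1000: done
```

On a real socket this would run from `ws.addEventListener("close", (e) => console.log(describeClose(e.code, e.reason)))`.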
I'm on Chrome connected to a DO I have, which I think is just the hibernation example we provide in our docs.

Maybe I can try to deploy that, then iteratively adjust to add the rest of the pieces in my current workflow to debug further.
> Can you `wrangler tail` the DO? Do you see any errors in your dashboard?

I can, but I notice logs stop after some time, and I see more continuous logging coming from the log tab in the web UI, so I haven't been trusting it much.
> if not it queues it for further work

With an `alarm()`? Or something else? I wonder if the DO itself is just crashing for some reason. You can DM me your account ID and I can try to take a look (but will be a bit busy with other stuff today).
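The "queue it and process later" pattern being asked about can be sketched with a plain in-memory mock (illustrative only; `DeferredQueue` is a hypothetical name, not from the original code). In a Durable Object the queue would live in storage and `drain()` would be the body of the `alarm()` handler, scheduled via `ctx.storage.setAlarm()`:

```typescript
// Hypothetical sketch of deferred work: enqueue now, process later.
class DeferredQueue<T> {
  private items: T[] = [];

  enqueue(item: T): void {
    this.items.push(item);
    // In a DO you would schedule processing here, e.g.
    // await ctx.storage.setAlarm(Date.now() + 1000);
  }

  // In a DO this would run inside the alarm() handler.
  drain(process: (item: T) => void): number {
    const batch = this.items.splice(0);
    for (const item of batch) process(item);
    return batch.length;
  }
}

const q = new DeferredQueue<string>();
q.enqueue("job-1");
q.enqueue("job-2");
const processed: string[] = [];
console.log(q.drain((m) => processed.push(m))); // 2
console.log(processed.join(","));               // job-1,job-2
```

If the DO crashes mid-drain, an alarm-based version retries automatically because Cloudflare re-runs `alarm()` on failure, which is one reason to prefer it over fire-and-forget work.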
`npx wrangler tail` gives me:

```
Unknown Event - Ok @ 8/10/2024, 11:10:03 AM
Tail is currently in sampling mode due to the high volume of messages. To prevent messages from being dropped consider adding filters.
Unknown Event - Ok @ 8/10/2024, 11:10:04 AM
```

I use `request_queue_binding.send` to queue the message, then the consumer does work, looks up the websocket in the state (the consumer is the same DO object, I guess), and sends the resulting message. It works fine until the 1006 error.

`wrangler whoami` matches the `account_id` in my `wrangler.toml` as well.

```js
const websocket = this.state.getWebSockets("my-tag")
if (websocket.length === 0) {
  console.error("No websocket")
  return new Response("No websocket available", { status: 400 })
}
```
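The tag lookup above can be exercised end-to-end with a plain mock, since `state.getWebSockets` isn't available outside the Workers runtime (`TaggedSockets` and `FakeSocket` are hypothetical stand-ins, not Workers APIs):

```typescript
// Minimal stand-in for the tag -> sockets lookup in the snippet above.
interface FakeSocket {
  sent: string[];
  send(msg: string): void;
}

class TaggedSockets {
  private byTag = new Map<string, FakeSocket[]>();

  add(tag: string, ws: FakeSocket): void {
    const list = this.byTag.get(tag) ?? [];
    list.push(ws);
    this.byTag.set(tag, list);
  }

  // Mirrors state.getWebSockets(tag): returns [] when nothing matches,
  // which is why the length === 0 guard above returns a 400.
  getWebSockets(tag: string): FakeSocket[] {
    return this.byTag.get(tag) ?? [];
  }
}

const state = new TaggedSockets();
console.log(state.getWebSockets("my-tag").length); // 0 -> would hit the 400 path

const ws: FakeSocket = { sent: [], send(m) { this.sent.push(m); } };
state.add("my-tag", ws);
for (const socket of state.getWebSockets("my-tag")) socket.send("result");
console.log(ws.sent.length); // 1
```

One thing this mock makes easy to see: if the DO was evicted and the client reconnected to a fresh instance without re-tagging, the lookup returns an empty array even though the client believes it is connected, which matches the "works fine until the 1006 error" symptom.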