On JF Tracks and Reconnecting (in React)

So I noticed a few things about the react-sdk and JF tracks in general. Note that my React code works identically to the videoroom demo.

If you're connected to a Jellyfish room and then abruptly refresh the browser, a new set of media device IDs is created, which triggers a new set of addTrack calls. I'm not sure whether I'm doing something wrong or this is intended, but since new IDs are created, new tracks are added to the peer on refresh without the old ones being removed, because the clean-up code never fires on a refresh. And even when I disconnect gracefully, the removeTrack call fails as described below.

I'm not able to reliably call removeTrack() despite giving it valid local IDs. During development I have consistently hit these bang operators in removeTrack and replaceTrack, even though the tracks exist both locally and on the server: https://github.com/jellyfish-dev/membrane-webrtc-js/blob/7fbb3123fc1870b974061a9309d4f2512721b253/src/webRTCEndpoint.ts#L1244

The behavior I'm expecting is that when a peer disconnects, either the old tracks are removed or they are somehow reused when the peer reconnects, but neither happens.
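For reference, the publish/cleanup effect I'm using looks roughly like this. It's only a sketch: `WebRTCEndpointLike` is a stand-in type for the membrane-webrtc-js endpoint, and the addTrack/removeTrack signatures are approximated from how I call them, not copied from the SDK.

```ts
import { useEffect } from "react";

// Stand-in type for the membrane-webrtc-js endpoint; the real addTrack/removeTrack
// signatures may differ slightly, this only mirrors how I call them.
type WebRTCEndpointLike = {
  addTrack: (track: MediaStreamTrack, stream: MediaStream, metadata?: unknown) => string;
  removeTrack: (trackId: string) => void;
};

// Publishes the first video track of `stream` and removes it again on cleanup.
export function usePublishTrack(
  webrtc: WebRTCEndpointLike | null,
  stream: MediaStream | null
) {
  useEffect(() => {
    if (!webrtc || !stream) return;
    const track = stream.getVideoTracks()[0];
    if (!track) return;

    // Register the track with the peer and keep the returned id for removal.
    const trackId = webrtc.addTrack(track, stream);

    return () => {
      // This cleanup only runs on unmount or when the deps change. A hard browser
      // refresh tears the page down without ever reaching it, so the old track
      // stays attached to the peer on the server.
      webrtc.removeTrack(trackId);
    };
  }, [webrtc, stream]);
}
```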
Jdyn•69d ago
Overall, my question is: how can I ensure that when a peer reconnects with the same microphone and camera, the Jellyfish state reuses the same tracks, or at least removes the old ones? Currently it just appends the new tracks to the peer's track list, which grows indefinitely. Am I doing something wrong that causes the local device track IDs to change completely across refreshes, hot reloads, reconnects, etc.?
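For what it's worth, the part I understand so far is that MediaStreamTrack.id is freshly generated by the browser on every getUserMedia call, while the deviceId from getSettings() / enumerateDevices stays stable for the origin once permission is granted. What I've been experimenting with is tagging the track with that deviceId as metadata so sessions can at least be correlated; the metadata argument here is an assumption modeled on how the videoroom demo passes track metadata, not a guaranteed SDK signature:

```ts
// Sketch: correlate tracks across sessions by the stable deviceId rather than the
// per-instance track.id. The third (metadata) argument to addTrack is an assumption
// modeled on the videoroom demo.
type EndpointLike = {
  addTrack: (track: MediaStreamTrack, stream: MediaStream, metadata?: unknown) => string;
};

async function publishCamera(webrtc: EndpointLike): Promise<string> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];

  // track.id is a new UUID every time; getSettings().deviceId identifies the
  // physical camera and survives refreshes, hot reloads, and reconnects.
  const deviceId = track.getSettings().deviceId;

  return webrtc.addTrack(track, stream, { type: "camera", deviceId });
}
```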
kamilstasiak•69d ago
Hi 👋, I've just reproduced your error with these steps:
1. Create a room
2. Create a peer
3. Connect
4. Add a track
5. Disconnect

Now the Jellyfish server indicates that this peer is disconnected but still has one track. This is a bug and we will fix it soon. I'll let you know when we finish.
kamilstasiak•69d ago
What's more, we're currently working on the connection mechanism. We plan to fix some errors and add the ability to reconnect.
Jdyn•68d ago
Great, I appreciate the quick response, and I'm looking forward to the improvements to connecting / reconnecting. Since you're here: one thing I'm struggling with is fitting all of the connection, disconnection, and track management into React's lifecycle. With how often React re-renders inside the useMembraneMediaStreaming hook and other places that involve the media device hooks, effects sometimes run twice, or many times for no consistent reason, causing things like addTrack to happen more than once when they should have happened only once. I've been trying different ways to make the hook more deterministic, but I think part of the problem is how often the reference to the browser device encapsulated in useCameraResult seems to change, along with its values. Have you experienced this as well?
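To illustrate, the pattern looks roughly like this (simplified, with placeholder names; the real hooks return more than this):

```ts
import { useEffect } from "react";

// Simplified illustration of the instability: if the device hook returns a fresh object
// on every render, any effect depending on that object re-runs even when nothing
// meaningful has changed, so addTrack fires again.
type CameraResult = { stream: MediaStream | null; deviceId: string | null };

export function useStreamingEffect(
  camera: CameraResult,
  publish: (stream: MediaStream) => void
) {
  // Depending on a stable primitive (the deviceId) instead of the whole `camera`
  // object keeps the effect from re-firing on every identity change.
  useEffect(() => {
    if (!camera.stream || !camera.deviceId) return;
    publish(camera.stream);
  }, [camera.deviceId]); // eslint-disable-line react-hooks/exhaustive-deps
}
```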
Jdyn•68d ago
I noticed this person made some attempts at improving the consistency of connectivity and state with a separate useMembraneCameraStreaming hook, which seemed to yield some good results, but overall the issues still persisted for me: https://github.com/jellyfish-dev/jellyfish_videoroom/blob/9b21ef76b39aa88b86fbce3d9054370e3823ebf7/assets/src/pages/room/hooks/useMembraneMediaStreaming.tsx#L137 Are these the kinds of things you're aiming to fix with the upcoming connection mechanism?
shuntrho•68d ago
Hi! We have fixed the issue of tracks remaining after a peer disconnects. We have released a new JF version, 0.4.2, with the bugfix: https://github.com/jellyfish-dev/jellyfish/releases/tag/v0.4.2 So now, if a peer disconnects, you have to add the tracks again when it reconnects.
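On the client that means something along these lines after you reconnect; the `EndpointLike` type is a placeholder for your endpoint instance, not the exact SDK API, and the point is only that every local track has to go through addTrack again:

```ts
// Placeholder endpoint type -- not the exact SDK signature, just the shape used here.
type EndpointLike = {
  addTrack: (track: MediaStreamTrack, stream: MediaStream, metadata?: unknown) => string;
};

// After a reconnect the server-side peer starts with an empty track list, so every
// local track has to be registered again; the old track ids from the previous session
// are gone and the new addTrack calls produce fresh ones.
export function republishLocalTracks(webrtc: EndpointLike, streams: MediaStream[]): string[] {
  const newTrackIds: string[] = [];
  for (const stream of streams) {
    for (const track of stream.getTracks()) {
      newTrackIds.push(webrtc.addTrack(track, stream, { kind: track.kind }));
    }
  }
  return newTrackIds;
}
```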
kamilstasiak•67d ago
Right now, our device manager (for toggling camera, microphone, and screen share) doesn't emit any events. As a developer, you need to listen to state changes to detect important events like "camera track is available" or "camera stopped". React's Strict Mode invokes useEffect twice, which complicates matters even further. We plan to emit events on every important state change. I want to propose a fix that includes those events within the following week, so if you can wait a few days, I would suggest doing so. If you can't wait, I have a few ideas (a small sketch of the last one is below):
- You could disable useMembraneMediaStreaming and set autoStreaming: true, preview: false in useSetupMedia. It should automatically add media tracks when the user connects to the Jellyfish.
- If you only have problems in development mode, you could disable Strict Mode for a while and enable it again when we release an update.
- You could use useRef to store some additional data that helps you identify whether a particular track is in the right state.
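For the last idea, a rough sketch of what I mean (plain React; the endpoint type is a placeholder, only the calls used below are modeled):

```ts
import { useRef } from "react";

// Placeholder for the endpoint; only the calls used below are modeled.
type EndpointLike = {
  addTrack: (track: MediaStreamTrack, stream: MediaStream, metadata?: unknown) => string;
  removeTrack: (trackId: string) => void;
};

// Keeps a deviceId -> published track id map in a ref so that a re-run of an effect
// (e.g. under Strict Mode) can tell whether a given device is already published and
// skip the duplicate addTrack.
export function useTrackRegistry(webrtc: EndpointLike) {
  const published = useRef<Map<string, string>>(new Map());

  const publishOnce = (deviceId: string, track: MediaStreamTrack, stream: MediaStream) => {
    const existing = published.current.get(deviceId);
    if (existing) return existing;
    const trackId = webrtc.addTrack(track, stream, { deviceId });
    published.current.set(deviceId, trackId);
    return trackId;
  };

  const unpublish = (deviceId: string) => {
    const trackId = published.current.get(deviceId);
    if (!trackId) return;
    webrtc.removeTrack(trackId);
    published.current.delete(deviceId);
  };

  return { publishOnce, unpublish };
}
```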
Jdyn•5d ago
Nice, thank you. Ah yeah, moving toward an event-based approach instead of a reactive one built on useEffects should make it really smooth. I can wait, no problem. I appreciate your help, and I'll also attempt the change with useSetupMedia. Thank you.

Hey @kamilstasiak, I've been following your experimental changes on the ts-sdk and react-sdk. I was wondering what your thoughts on them are so far. Are the changes working as you envisioned and solving some of the complications with connecting / reconnecting / race conditions / device state? Do you plan on merging them soon, or is there still more to be done?
kamilstasiak•2d ago
Hi, yes, it solves the majority of the problems and simplifies the code significantly. I've removed a lot of code in jellyfish-videoroom. I believe I'll finish releasing it today.