Filter with `push` flow_control
Hello, I have a filter that transcribes audio as it receives it by sending it to a transcription service (via a websocket). I also have a VAD filter (applied before the audio data arrives at the Membrane pipeline).
I'm seeing that the audio data only gets sent once the buffer is full (when there is enough voice audio).
I was trying to change the `flow_control` to `:push` for the transcription filter to handle this. (Is that the right solution?)...
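A minimal sketch of what that pad change looks like, assuming Membrane Core 1.x; the module, socket helper, and pad setup are placeholders, and whether this removes the batching depends on where the audio is actually being buffered upstream:

```elixir
defmodule MyApp.Transcriber do
  @moduledoc "Sketch: a filter whose pads use push flow control."
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any, flow_control: :push
  def_output_pad :output, accepted_format: _any, flow_control: :push

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Forward each chunk to the transcription websocket as soon as it arrives
    # (MyApp.TranscriptionSocket is a hypothetical client module).
    :ok = MyApp.TranscriptionSocket.send_audio(buffer.payload)
    {[buffer: {:output, buffer}], state}
  end
end
```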
LL-HLS broadcasting
Hello everyone!
I am trying to make LL-HLS broadcasting work.
I used the demo from webrtc_to_hls and set partial_segment_duration to 500 ms,...
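For comparison, here is roughly how a partial segment duration can be passed as a pad option on the HLS sink bin, assuming a membrane_http_adaptive_stream_plugin version with LL-HLS support; the child names, track name, and exact option placement are assumptions and may differ between versions:

```elixir
# Hypothetical link into Membrane.HTTPAdaptiveStream.SinkBin (child :hls_sink).
spec =
  get_child(:video_parser)
  |> via_in(Pad.ref(:input, :video),
    options: [
      encoding: :H264,
      track_name: "video",
      segment_duration: Membrane.Time.seconds(6),
      partial_segment_duration: Membrane.Time.milliseconds(500)
    ]
  )
  |> get_child(:hls_sink)
```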
Pipeline children started twice
Hello,
I'm seeing children in a Membrane pipeline get started twice:
I think this might be an issue with how I'm starting the pipeline (every time a websocket connection is created), but I can't figure out exactly why this is happening....
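One way to rule out double starts from the websocket side is to register the pipeline per connection, so a repeated start attempt reuses the existing process. This is only a sketch, assuming Membrane Core 1.x (where `start_link` returns `{:ok, supervisor, pipeline}`); the registry, module names, and notification shape are hypothetical:

```elixir
defmodule MyApp.PipelineStarter do
  # MyApp.PipelineRegistry must be started in the application supervision tree:
  #   {Registry, keys: :unique, name: MyApp.PipelineRegistry}
  def ensure_started(conn_id) do
    case Registry.lookup(MyApp.PipelineRegistry, conn_id) do
      [{pid, _value}] ->
        # A pipeline for this connection already exists: reuse it.
        {:ok, pid}

      [] ->
        name = {:via, Registry, {MyApp.PipelineRegistry, conn_id}}

        {:ok, _supervisor, pipeline} =
          Membrane.Pipeline.start_link(MyApp.Pipeline, %{conn_id: conn_id}, name: name)

        {:ok, pipeline}
    end
  end
end
```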
Writing a `Bin` queuing content from multiple remote files
@skillet wrote in https://discord.com/channels/464786597288738816/1007192081107791902/1224491418626560121
Hello all. New to the framework (and elixir) and still a little fuzzy on how to implement my idea. Basically I want to stitch together a bunch of wav and/or mp3 files and stream them indefinitely. Like a queue where I can keep adding files and the pipeline should grab them as needed FIFO style.
The files will be downloaded via HTTP. So what I'm currently envisioning is a Bin
that uses a Hackney source element to grab the file and push it on down. Then, when it's done it will get replaced with a new Hackney source pointing to the next file.
...
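A rough sketch of how such a bin could look, assuming membrane_hackney_plugin and membrane_funnel_plugin; all child names, the `:enqueue` notification, and the Funnel's `end_of_stream` option are assumptions. In practice each source would also need a parser/decoder behind it, since raw WAV/MP3 bytes from different files can't simply be concatenated:

```elixir
defmodule MyApp.FileQueueBin do
  use Membrane.Bin

  def_output_pad :output, accepted_format: _any

  @impl true
  def handle_init(_ctx, _opts) do
    # The funnel joins consecutive sources into one continuous stream; it must
    # not forward end_of_stream after each file finishes.
    spec = child(:funnel, %Membrane.Funnel{end_of_stream: :never}) |> bin_output(:output)
    {[spec: spec], %{queue: :queue.new(), next_id: 0, current: nil}}
  end

  @impl true
  def handle_parent_notification({:enqueue, url}, _ctx, %{current: nil} = state),
    do: start_source(url, state)

  def handle_parent_notification({:enqueue, url}, _ctx, state),
    do: {[], %{state | queue: :queue.in(url, state.queue)}}

  @impl true
  def handle_element_end_of_stream(:funnel, _pad, _ctx, %{current: finished} = state)
      when finished != nil do
    # The current file has fully flowed into the funnel: drop its source and,
    # if anything is queued, start the next one.
    case :queue.out(state.queue) do
      {{:value, url}, queue} ->
        {actions, state} = start_source(url, %{state | queue: queue})
        {[remove_children: finished] ++ actions, state}

      {:empty, _queue} ->
        {[remove_children: finished], %{state | current: nil}}
    end
  end

  def handle_element_end_of_stream(_child, _pad, _ctx, state), do: {[], state}

  defp start_source(url, state) do
    name = {:source, state.next_id}
    spec = child(name, %Membrane.Hackney.Source{location: url}) |> get_child(:funnel)
    {[spec: spec], %{state | next_id: state.next_id + 1, current: name}}
  end
end
```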
Split audio file into 20mb chunks
I'm trying to figure out how to take the file at this URL and send it to OpenAI in chunks of 20 MB: https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/chrt.fm/track/3F7F74/traffic.megaphone.fm/SCIM6504498504.mp3?updated=1710126905
Any help would be amazing!!...
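The splitting itself doesn't need Membrane. A rough sketch, assuming the Req HTTP client (`url` is the link above, and the upload call is left out); note that cutting an MP3 at arbitrary byte offsets splits frames, so splitting by duration or on silence is usually friendlier to transcription APIs:

```elixir
# Download the file and break it into ~20 MB binary chunks.
chunk_bytes = 20 * 1024 * 1024

%Req.Response{body: audio} = Req.get!(url)

chunks =
  audio
  |> Stream.unfold(fn
    <<>> ->
      nil

    bin when byte_size(bin) > chunk_bytes ->
      <<chunk::binary-size(chunk_bytes), rest::binary>> = bin
      {chunk, rest}

    bin ->
      {bin, <<>>}
  end)
  |> Enum.to_list()
```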
bundlex nifs and libasan
Has anyone built NIFs with libasan support?
Even if I put compiler_flags ["-fno-omit-frame-pointer -fsanitize=address"], it doesn't detect the leaks I intentionally left in the NIF code.
I run Elixir with ERL_EXEC="cerl" and set the -asan option for the VM, and Erlang is running with the address_sanitizer flag....
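For reference, a sketch of what the bundlex.exs entry might look like; the project and native names are placeholders. Two things worth double-checking: passing the flags as separate strings, and also passing -fsanitize=address at link time:

```elixir
defmodule MyApp.BundlexProject do
  use Bundlex.Project

  def project do
    [natives: natives()]
  end

  defp natives do
    [
      my_nif: [
        interface: :nif,
        sources: ["my_nif.c"],
        compiler_flags: ["-fno-omit-frame-pointer", "-fsanitize=address"],
        # ASan usually needs to be linked in as well, not only compiled in.
        linker_flags: ["-fsanitize=address"]
      ]
    ]
  end
end
```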
unifex seg fault on handle_destroy_state
Hi, I'm implementing a G.722.1 decoder/encoder plugin and have an issue with handle_destroy_state.
I've taken the FreeSWITCH libg722_1 implementation (https://github.com/traviscross/freeswitch/tree/master/libs/libg722_1/src).
I have the following state:
```c...
Developing an advanced Jellyfish use case
Hey I've been using jellyfish to develop a platform for essentially one-on-one calls between two people and it works really well.
I'd like to now bring in something more advanced. I essentially want to:
1. take the two audio streams of the two peers from jellyfish and convert everything they're saying into text using something like Bumblebee Whisper....
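For the Whisper part, the standard Bumblebee serving setup looks roughly like this (the model choice and audio input are placeholders; in practice you'd feed it audio pulled from the Jellyfish tracks rather than a file):

```elixir
{:ok, model_info} = Bumblebee.load_model({:hf, "openai/whisper-tiny"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "openai/whisper-tiny"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "openai/whisper-tiny"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "openai/whisper-tiny"})

serving =
  Bumblebee.Audio.speech_to_text_whisper(model_info, featurizer, tokenizer, generation_config)

# Transcribe one peer's audio (file path is a placeholder).
Nx.Serving.run(serving, {:file, "peer_a.wav"})
```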
toilet capacity of outbound_rtx_controller
Hi, I'm getting the following error on SessionBin:
```
[error] <0.1282.0>/:sip_rtp/{:outbound_rtx_controller, 1929338881} Toilet overflow.
...
On JF Tracks and Reconnecting (in React)
So I noticed a few things about the react-sdk and JF tracks in general. Note I have React code that works identically to the videoroom demo.
If you're connected to a Jellyfish room and then abruptly refresh the browser, a new set of media device ids is created, which causes a new set of addTrack calls. I'm not sure if I am doing something wrong or this is intended, but
since new ids are created, new tracks are added to the peer on refresh without the old ones being removed, since any clean-up code never fires on refresh. And even when I disconnect gracefully, the removeTrack call fails as described below.
...
h264 encoder problems
Hi guys, I'm using the H264 encoder plugin for video encoding and sending it via RTP to the client.
Sometimes video playback on the client speeds up or slows down.
How can I debug such cases, and what could be the reason for such lagging video? Network issues?
The input for the encoder comes from a WebRTC source...
Pipeline for muxing 2 msr files (audio and video) into a single flv file
I have the following pipeline, which takes 2 msr files (recorded to disk using the RecordingEntrypoint from rtc_engine), and I need to create a single video+audio file from them (trying FLV at the moment, but I'm not tied to a specific format, I just want something that popular tools can read and manipulate).
My problem is that the resulting FLV file only plays audio. Here's the pipeline:
```
spec = [...
MP3 output is audible, but test not pass
Hi everyone. I made small changes in `membrane_mp3_lame_plugin` to support other input configs (the original repo only supports 44100/32/1).
(patch branch: https://github.com/yujonglee/membrane_mp3_lame_plugin/commits/patch/)
After the change, I ran the tests, but they do not pass.
But when I play the generated output file, it is audible and sounds the same as ref.mp3....
Grab keyframe image data from h264 stream?
We are looking at some h264 coming from an RTSP stream. Membrane is doing a fine job with the HLS demo turning it into HLS. But we want to grab images for processing and such. I didn't find anything conclusive on how to do this. I remember being able to do something with keyframes and images but can't find it.
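One possible approach is a small pass-through filter placed after the H264 parser that reports keyframes to its parent. This is only a sketch: the metadata key the parser uses to mark keyframes is an assumption to verify against your parser version, and the payload is still encoded H264, so to get actual images you'd still decode it (e.g. with the FFmpeg-based H264 decoder) and convert the raw frames:

```elixir
defmodule MyApp.KeyframeTap do
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any
  def_output_pad :output, accepted_format: _any

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Assumed metadata shape: buffer.metadata.h264.key_frame? set by the parser.
    notify =
      if get_in(buffer.metadata, [:h264, :key_frame?]),
        do: [notify_parent: {:keyframe, buffer.pts, buffer.payload}],
        else: []

    {notify ++ [buffer: {:output, buffer}], state}
  end
end
```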
Unity client?
Thanks a lot to the Membrane team. The number of examples and availability of code has been incredibly helpful.
I was wondering if anyone has built a C# client. Specifically to work with Unity and WebRTC: https://docs.unity3d.com/Packages/com.unity.webrtc@3.0/manual/index.html
I'm building an audio experience for VR and the Oculus Quest 2 ... and although I know with Jellyfish there's an Android client (and React Native client, which works great) and TypeScript client as well ... just wondering if anyone has been down the Unity path....
WebRTC stream not working
I've hit an issue trying to get an MPEGTS stream displaying in the browser via WebRTC.
All seems to work: I can add the WebRTC endpoint for the stream to the RTC Engine, the web client successfully connects, negotiates tracks, handles the offer and SDP answer etc., and I can add the transceiver for the stream to the RTCPeerConnection and get the MediaStream. When I add the MediaStream to the video element's srcObject, I get a blank screen with a spinner and see a continuous stream of
Didn't receive keyframe for variant:
messages on the server side:
```
[debug] Responding to STUN request [UDP, session q02, anonymous, client 100.120.41.48:50473]...
Error when compiling free4chat
This is probably pretty basic, but I'm at square one. I get an error when compiling free4chat, a user-contributed Elixir app that depends on Membrane.
> mix ecto.reset
```...
Screen Share
Is there any reason why screen share is not working when jellyfish_videoroom is running locally?
I start a call, then connect another user in another tab, and press screen share. I can see it in my tab, but the other tab shows only 2 videos, without the screen share...
Testing Membrane Element
I'm trying to set up a simple test of a Membrane element, but I'm stumped a bit on how to assert that the element is sending the correct output. For context, this is an element that accepts an audio stream, sends it to a speech-to-text API and then forwards along a text buffer.
The test fails, as there's no buffer in the mailbox, though I'm positive that's what my element emits when the stream is complete. I've tried longer timeouts (10s), but that doesn't alter the test.
I could use some advice....
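For what it's worth, a rough shape of such a test using Membrane.Testing, assuming the element is called MyApp.Transcriber and emits a text buffer once the stream completes (names, payloads, and timeouts are placeholders):

```elixir
defmodule MyApp.TranscriberTest do
  use ExUnit.Case

  import Membrane.ChildrenSpec
  import Membrane.Testing.Assertions

  alias Membrane.Testing

  test "emits a transcription buffer after the audio stream ends" do
    # Placeholder audio payloads fed through the element under test.
    audio_chunks = [<<0::size(16_000)>>, <<0::size(16_000)>>]

    pipeline =
      Testing.Pipeline.start_link_supervised!(
        spec:
          child(:source, %Testing.Source{output: audio_chunks})
          |> child(:transcriber, MyApp.Transcriber)
          |> child(:sink, Testing.Sink)
      )

    # The external API may be slow, so use a generous timeout.
    assert_sink_buffer(pipeline, :sink, %Membrane.Buffer{payload: text}, 30_000)
    assert is_binary(text)
    assert_end_of_stream(pipeline, :sink, :input, 30_000)
  end
end
```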
Clustering and scale out behaviour
Context: I have a Membrane application running in an Elixir cluster of 1. It receives an RTSP stream (UDP packets) from a client and does the whole streaming thing - awesome!
If/when the cluster expands, there will be multiple nodes receiving UDP packets (the packets are load-balanced between nodes). Does Membrane have any handling to route the packet to the correct node? 🤔...