Software Mansion

Clear the buffer sent to output pad

Hi there, I'm working on integrating the OpenAI Realtime API and I'm trying to implement VAD. Currently, when the user speaks, the audio is sent to OpenAI and the response is collected through response.audio.delta and sent to the output through a buffer. When I receive the input_audio_buffer.speech_started event, I would like to flush the part of the buffer that hasn't been played yet and restart the process (send data to OpenAI and receive the stream)....
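One way to sketch the flushing idea is a small manual-flow filter that queues response audio and drops whatever has not been played yet when the pipeline notifies it of a new speech turn. The module name, the `:flush` notification, and the queueing scheme below are all assumptions for illustration, not an existing Membrane API:

```elixir
defmodule MyApp.FlushableBuffer do
  # Sketch only: queues audio chunks bound for the output and drops the
  # not-yet-played remainder when told to flush.
  use Membrane.Filter

  def_input_pad :input, accepted_format: _any, flow_control: :manual, demand_unit: :buffers
  def_output_pad :output, accepted_format: _any, flow_control: :manual

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{queue: :queue.new()}}

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Enqueue and let the output pad pull at playback pace.
    {[redemand: :output], %{state | queue: :queue.in(buffer, state.queue)}}
  end

  @impl true
  def handle_demand(:output, _size, :buffers, _ctx, state) do
    case :queue.out(state.queue) do
      {{:value, buffer}, rest} ->
        {[buffer: {:output, buffer}, redemand: :output], %{state | queue: rest}}

      {:empty, _queue} ->
        {[demand: {:input, 1}], state}
    end
  end

  # The pipeline would send this when it sees `input_audio_buffer.speech_started`.
  @impl true
  def handle_parent_notification(:flush, _ctx, state) do
    {[], %{state | queue: :queue.new()}}
  end
end
```

The pipeline would forward the OpenAI event as `notify_child: {:flushable_buffer, :flush}`, so only the unplayed tail is discarded while upstream keeps streaming.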

Create thumbnail from mp4 video

Hello! I am a bit new to the Membrane framework; I read about it and it looks amazing, so I have been trying it out. Now I want to use it in a personal project where I need to extract a thumbnail from a video. I investigated a lot but couldn't find a reference on how to do this. The process looked like: parsing the MP4 file, demuxing it, routing the H264 stream into a chain, and then decoding it. ...
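The parse → demux → decode chain described above could look roughly like this. This is a sketch assuming membrane_file_plugin, membrane_mp4_plugin, membrane_h26x_plugin and membrane_h264_ffmpeg_plugin, with a hypothetical `Thumbnail.FirstFrameSink` that would keep the first raw frame and write it out as an image:

```elixir
defmodule Thumbnail.Pipeline do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, path) do
    spec =
      child(:source, %Membrane.File.Source{location: path})
      |> child(:demuxer, Membrane.MP4.Demuxer.ISOM)

    {[spec: spec], %{}}
  end

  # The demuxer reports its tracks at runtime; link the (assumed single)
  # video track through the H264 parser and decoder.
  @impl true
  def handle_child_notification({:new_tracks, [{track_id, _format} | _]}, :demuxer, _ctx, state) do
    spec =
      get_child(:demuxer)
      |> via_out(Pad.ref(:output, track_id))
      |> child(:parser, Membrane.H264.Parser)
      |> child(:decoder, Membrane.H264.FFmpeg.Decoder)
      |> child(:sink, Thumbnail.FirstFrameSink)

    {[spec: spec], state}
  end
end
```

The sink could terminate the pipeline after its first `handle_buffer`, since only one decoded frame is needed for a thumbnail.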

Echo when implementing the Membrane OpenAI example in Phoenix LiveView

Hi there! I tried to implement the OpenAI realtime example available here, integrating it into a LiveView example app. For integrating Membrane into LiveView I followed this example: https://github.com/membraneframework/membrane_demo/tree/master/webrtc_live_view In my implementation, the audio is captured and sent to OpenAI, because I can receive the response. The problem is that together with the response I also receive all the audio sent as input from mediaCapture (i.e. echo)....

How to add frames to slow fps source?

I have a very slow source (MJPEG, about ~1 FPS) and I want to add an overlay to it and store it as a 30 FPS video. The overlay displays the current time (updated every 100 ms). Here is my pipeline: ```elixir...

membrane_h26x_plugin handle_stream_format is called only 1 in 10 times

Hello, we are using membrane_rtmp_plugin and membrane_h26x_plugin to stream RTMP from Android devices using the RootEncoder lib. The issue is that the pipeline gets stuck in 9 out of 10 cases and works properly only in 1. Details are in this issue: https://github.com/membraneframework/membrane_core/issues/986 We also created a PR with tests reproducing the bug: https://github.com/membraneframework/membrane_h26x_plugin/pull/69...

Advice needed: How to synchronize video and audio segment durations in adaptive streaming

I still have a problem understanding how to synchronize the segment (and partial segment) durations when using the membrane_http_adaptive_stream library. Per the HLS guidelines, for a smooth playback experience on Apple devices each playlist segment needs to be of the same duration. What I'm struggling to understand is how to achieve this behaviour. My previous confusion was resolved by sending keyframes at the right interval (GOP size). But now I have a different problem: the video track follows the provided segment_duration and partial_segment_duration parameters perfectly, but my audio track doesn't!...

HTTP adaptive stream playlist is changing target duration on the fly

Greetings everyone, I hope you are doing well! I've encountered very strange behaviour. I have the following configuration for my video track: ``` [...

Membrane.AAC.Parser - Not enough information provided to parse the stream.

Hello, I am receiving audio via my Reolink camera and I am trying to send it to an RTMP server. Here is the info from my camera according to ffmpeg: ```...

Audio Unit or VST processing as realtime Membrane filter?

Hi! I would be very interested in applying one or two quick VST or Audio Unit effects (compression or reverb, typically) to a given Membrane audio pipeline, for realtime use. Has anyone already implemented that?...

Recommended way to time-sync WAV playback (DMX lighting)?

Hello! I am reading a WAV/MP3 track with a Membrane pipeline, with good success so far. I am starting to annotate the WAV so that I know when the main beats occur; I now want to generate DMX (lighting) events from that. Sync latency is essential here....

Docker image with membrane_webrtc_plugin

Hi! I have some issues with running a Phoenix app in a Debian-based Docker image. It builds fine, but crashes when I run it: ``` =CRASH REPORT==== 23-Apr-2025::21:25:12.868807 === crasher: initial call: supervisor:kernel/1...

Creating a Phoenix channel source

Hi! I need to process an audio stream (webm/opus) coming in through a channel (WebSocket) in Phoenix, and I'd like to use Membrane to do it. Is the way to do it to write my own Membrane source that is essentially a "beam message source", i.e. one that reads messages sent to it from another process and passes them along to the next stage in the pipeline, and then have the channel process send its messages to that source element? Similarly, if I want to send messages back to the channel at the end of the pipeline, would I be making a "beam message sink"...
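The "beam message source" idea above can be sketched as a push-mode source that turns Erlang messages into buffers. The module name and the `{:media, binary}` message shape are made up for illustration:

```elixir
defmodule MyApp.ChannelSource do
  # Sketch: a push-mode source fed by plain `send/2` from a Phoenix channel process.
  use Membrane.Source

  def_output_pad :output, accepted_format: %Membrane.RemoteStream{}, flow_control: :push

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{}}

  @impl true
  def handle_playing(_ctx, state) do
    # Announce an opaque byte stream; downstream parsers give it real structure.
    {[stream_format: {:output, %Membrane.RemoteStream{type: :bytestream}}], state}
  end

  # The channel process does `send(source_pid, {:media, binary})`.
  @impl true
  def handle_info({:media, data}, _ctx, state) do
    {[buffer: {:output, %Membrane.Buffer{payload: data}}], state}
  end
end
```

A "beam message sink" would mirror this: a `Membrane.Sink` whose `handle_buffer` forwards each payload with `send/2` to the channel process passed in as an option.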

Testing a Membrane Bin used in a WebRTC Engine Endpoint

I'm trying to test a Membrane bin that I use in a WebRTC engine endpoint, though I'm having trouble getting my test set up properly. I was trying to approach it this way: 1. Start my Membrane.Testing.Pipeline with a simple spec, just the conversation bin 2. Send a {:new_tracks} event to the bin, simulating what the Membrane.WebRTC.Engine does when a new bin is added as an endpoint...
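The two steps above could be sketched with Membrane.Testing.Pipeline roughly like this; `MyApp.ConversationBin` is a placeholder for the bin under test, and the exact `{:new_tracks, ...}` shape depends on the engine version:

```elixir
import Membrane.ChildrenSpec

# 1. Start a testing pipeline whose spec contains only the bin under test.
pipeline =
  Membrane.Testing.Pipeline.start_link_supervised!(
    spec: [child(:conversation_bin, MyApp.ConversationBin)]
  )

# 2. Simulate the engine's message by asking the pipeline to notify the child.
tracks = []  # fill in simulated engine tracks here

Membrane.Testing.Pipeline.execute_actions(pipeline,
  notify_child: {:conversation_bin, {:new_tracks, tracks}}
)
```

From there, assertions like `assert_pipeline_notified/3` from Membrane.Testing.Assertions can check how the bin reacts.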

WebRTC - Add Track to running PeerConnection

I have started out with the Nexus WebRTC example app (https://github.com/elixir-webrtc/apps/tree/master/nexus). Instead of automatically starting the stream, I now run createPeerConnection() and joinChannel() without starting any local streams. I have added a button that, when pressed, should start the local webcam stream and broadcast it to the other peers. On button press I execute:...

High WebRTC CPU consumption

Hey! We are running some benchmarks with an ExWebRTC-based pipeline: on a 4-CPU droplet (with dedicated cores), 10 incoming streams consume 100% of the CPU. Is that expected behaviour? Our pipeline essentially consumes H264 video and transcodes the AAC to Opus, nothing else. The second question is: how do we trace the Membrane elements' CPU consumption? I tried https://hexdocs.pm/membrane_core/Membrane.Pipeline.html#module-visualizing-the-supervision-tree, but I don't see the pipeline at all in :observer. The LiveDashboard, on the other hand, lists the processes and lets me sort by "Number of Reductions", but there every element shows up as Membrane.Core.Element, which makes it practically impossible to distinguish which process causes the most CPU consumption....

Continuous RTMP stream without constant output or changing output

Hi there, I am wondering if it's possible to use Membrane to do the following: - We have cameras that are streaming RTMP continuously to the server....

Testing a filter with flow control :auto?

I'm trying to test a Membrane filter whose flow control is set to :auto. I'm using Membrane.Testing.Source and passing in a custom generator function, though it appears to be called only once. Am I setting this testing pipeline up incorrectly?
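For reference, a sketch of the generator form of Membrane.Testing.Source's `output` option, i.e. `{initial_state, fun}`. The fun is invoked with the current state and the demanded size, and must keep returning buffer actions until it finishes with `end_of_stream`; the cutoff of 100 and the one-byte payloads below are arbitrary:

```elixir
# Produces numbered one-byte buffers until ~100 have been emitted.
generator = fn count, size ->
  if count >= 100 do
    {[end_of_stream: :output], count}
  else
    buffers =
      for i <- count..(count + size - 1), do: %Membrane.Buffer{payload: <<i>>}

    {[buffer: {:output, buffers}], count + size}
  end
end

source = %Membrane.Testing.Source{output: {0, generator}}
```

With :auto flow control downstream, the source is driven by the pipeline's internal demand, so if the generator returns buffers without ever reaching `end_of_stream`, or returns no actions, the pipeline can look stalled after the first invocation.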

RTSP push approach with Membrane.RTSP.Server

We are trying to add RTSP to a media server using Membrane. The main thing is that we need to implement a push approach: an Elixir TCP server starts listening for incoming RTSP connections from cameras, and then pushes the incoming RTSP video stream to clients. We used Membrane.RTSP.Server with a handler which handles the ANNOUNCE, DESCRIBE and RECORD steps accordingly. On the RECORD step, we pass socket control to the pipeline pid: ```elixir Enum.each(tracks, fn {_, track} -> options = [...

Get video from RTSP and stream it over RTMP

Hi guys, I'm building a realtime video stream over RTMP, but my camera's video source is an RTSP link. I used Membrane.RTSP.Source but got stuck and ran into many errors. Please help me, thank you.
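A rough shape for RTSP-in/RTMP-out follows. Membrane.RTSP.Source's option names and track notifications vary between plugin versions, so treat this as pseudocode-level Elixir; `rtsp_url` and `rtmp_url` are placeholders, and an H264 video track is assumed:

```elixir
defmodule CamRelay do
  use Membrane.Pipeline

  @impl true
  def handle_init(_ctx, opts) do
    # The RTSP source announces its tracks at runtime, similarly to demuxers.
    spec = child(:source, %Membrane.RTSP.Source{stream_uri: opts[:rtsp_url]})
    {[spec: spec], %{opts: opts}}
  end

  @impl true
  def handle_child_notification({:new_track, ref, _track}, :source, _ctx, state) do
    # Parse the H264 track and feed the RTMP sink's :video pad.
    spec =
      get_child(:source)
      |> via_out(Pad.ref(:output, ref))
      |> child(:parser, Membrane.H264.Parser)
      |> via_in(:video)
      |> child(:sink, %Membrane.RTMP.Sink{rtmp_url: state.opts[:rtmp_url]})

    {[spec: spec], state}
  end
end
```

Checking the hexdocs of the exact membrane_rtsp_plugin and membrane_rtmp_plugin versions in use is essential here, since pad names and required parser options (e.g. the H264 output stream structure the RTMP sink expects) differ across releases.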

How to split a raw audio buffer with 2 channels into two different buffers

Hey team! I'm trying to process an FLV stream with the AAC codec for audio, and the audio has 2 channels that I would like to treat separately. Is there a way to split my pipeline in order to handle the 2 channels differently? Here is an overview of the pipeline: ``` child(:source, %Membrane.RTMP.Source{ socket: socket...