Recommended way to time-sync WAV playback (DMX lighting)?

Hello! I am reading a WAV/MP3 track with a Membrane pipeline, with good success so far. I am starting to annotate the WAV so that I know when the main beats occur; I now want to generate DMX (lighting) events from those annotations. Sync latency is essential here. My first stab is a Membrane filter that does not modify the buffer, but counts what has been played:
# very WIP
defmodule Ticker do
  use Membrane.Filter

  alias Membrane.RawAudio

  def_input_pad :input, accepted_format: RawAudio, flow_control: :auto
  def_output_pad :output, accepted_format: RawAudio, flow_control: :auto

  @impl true
  def handle_init(_ctx, _options) do
    {[], %{byte_count: 0}}
  end

  @impl true
  def handle_buffer(:input, buffer, ctx, state) do
    stream_format = ctx.pads.input.stream_format
    sample_rate = stream_format.sample_rate
    # size in bytes of a single sample (one channel)
    sample_size = RawAudio.sample_size(stream_format)

    # Membrane.Payload.size/1 returns bytes, so accumulate a byte count
    state = %{state | byte_count: state.byte_count + Membrane.Payload.size(buffer.payload)}

    # bytes / bytes-per-second = seconds; use the actual channel count
    # instead of hardcoding stereo
    elapsed_time = state.byte_count / (sample_rate * sample_size * stream_format.channels)
    IO.inspect(elapsed_time, label: "elapsed seconds")

    {[buffer: {:output, buffer}], state}
  end
end
(this is very WIP) Is it the best approach though? I have no idea whether the moment the callback is invoked matches the actual playback time more or less closely (it could drift a lot over time, depending on how buffering works). I have also seen the notion of Timers; is that a better way to do this? Thank you! -- Thibaut
4 Replies
Nick · 5mo ago
The playout of the audio side is tied to the presentation timestamp (often abbreviated pts) on the buffer. In theory, if you generate lighting events with the same pts as the audio buffers, they should play at the same time. The tricky bit is that I'm not really sure about the DMX format or whether it even has a concept of pts, but ultimately what you need to do is synchronize the lighting events with the pts of the audio buffers.
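To make the idea concrete, a tiny illustration (the DmxEvent struct is made up for this example, not part of any library): each lighting cue carries the same pts as the audio it should coincide with, where pts values are Membrane.Time units (nanoseconds from the start of the stream).
# Illustration only: a made-up DmxEvent struct, just to show the idea of
# tagging each lighting cue with the same pts as the audio it belongs to.
defmodule DmxEvent do
  @enforce_keys [:pts, :channels]
  defstruct [:pts, :channels]
end

# e.g. set DMX channel 1 to full brightness 2.5 s into the track
event = %DmxEvent{pts: Membrane.Time.milliseconds(2500), channels: %{1 => 255}}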
thbar (OP) · 5mo ago
Thanks @Nick! I saw PTS mentioned here and there, but I could not grasp it correctly yet; I only see PTS mentioned for the PortAudio source, not the sink (https://github.com/search?q=repo%3Amembraneframework%2Fmembrane_portaudio_plugin%20pts&type=code), so I am unsure how to work with it. But I found another concept, the "clock", here: https://github.com/membraneframework/membrane_portaudio_plugin/blob/d82ed2b66be2e16504056d09296d21fe5703ef5a/lib/membrane_portaudio_plugin/sink.ex#L20-L23
This clock measures time by counting a number of samples consumed by a PortAudio device and allows synchronization with it.
If I could find how to subscribe to this exact clock, I assume the consumption happens more or less exactly at playback time given how PortAudio works, and I could generate the DMX events (not as a Membrane thing) from that. Do you know how I can subscribe to this clock? I dove into the docs but so far I haven't found the proper way to do it. Thank you!
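For what it's worth, here is a rough, unverified sketch of the mechanism I have in mind: Membrane elements can start a timer driven by a given clock and react in a handle_tick callback. Whether ctx.parent_clock is really the PortAudio sink's clock depends on that sink being selected as the pipeline's clock provider, so treat the details below as assumptions to check against the Membrane.Clock docs.
# Rough sketch only: uses Membrane's start_timer action and the handle_tick
# callback. Assumes the PortAudio sink provides the pipeline clock and that
# this clock is what elements see as ctx.parent_clock; verify against the
# Membrane.Clock and action docs before relying on it.
defmodule DmxTicker do
  use Membrane.Filter

  alias Membrane.{RawAudio, Time}

  def_input_pad :input, accepted_format: RawAudio, flow_control: :auto
  def_output_pad :output, accepted_format: RawAudio, flow_control: :auto

  @impl true
  def handle_init(_ctx, _opts), do: {[], %{}}

  @impl true
  def handle_playing(ctx, state) do
    # Tick every 25 ms according to the parent clock (assumption: driven by
    # the PortAudio sink when that sink is the clock provider).
    {[start_timer: {:dmx_timer, Time.milliseconds(25), ctx.parent_clock}], state}
  end

  @impl true
  def handle_tick(:dmx_timer, _ctx, state) do
    # Here you could check whether an annotated beat falls into this tick
    # and fire the corresponding DMX event (outside of Membrane).
    {[], state}
  end

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    {[buffer: {:output, buffer}], state}
  end
end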
Nick · 5mo ago
What does your pipeline look like? Take a look at these docs: https://hexdocs.pm/membrane_core/Membrane.Clock.html https://hexdocs.pm/membrane_core/Membrane.Element.Base.html#def_clock/1 I'm not super familiar with how clocks work in Membrane, so we will have to wait for someone from the Membrane core team to weigh in. Still, I think the way you have it is backwards. Take a look at Membrane.RawAudioParser, in particular the overwrite_pts option: https://hexdocs.pm/membrane_raw_audio_parser_plugin/Membrane.RawAudioParser.html#module-element-options Try creating a pipeline with these elements (double check the options, since I'm just going from memory):
child(%Membrane.File.Source{location: "myfile.wav"})
|> child(%Membrane.RawAudioParser{overwrite_pts: true})
|> child(%Membrane.Debug.Filter{handle_buffer: fn buffer -> IO.inspect(buffer.pts) end})
|> child(Membrane.Debug.Sink)
You should see the pts of each audio frame printed to the console. Have a look at the docs for the Membrane.Time module to see how to operate on Membrane time units. Then, in theory, you can write a filter: in its handle_buffer callback you have access to both the data (if you need to do some signal processing on the frame, for example) and the timestamp you need to sync the lighting event with.
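As a rough sketch of that last idea (the beats option and the :beat notification name are just placeholders, and it assumes pts is actually set on the buffers, e.g. by RawAudioParser with overwrite_pts):
# Sketch only: a filter that receives the annotated beat timestamps as an
# option (a sorted list of Membrane.Time values) and notifies its parent
# whenever a buffer's pts passes the next beat. Option and notification
# names are made up for illustration.
defmodule BeatNotifier do
  use Membrane.Filter

  alias Membrane.RawAudio

  def_options beats: [spec: [Membrane.Time.t()], default: []]

  def_input_pad :input, accepted_format: RawAudio, flow_control: :auto
  def_output_pad :output, accepted_format: RawAudio, flow_control: :auto

  @impl true
  def handle_init(_ctx, opts) do
    {[], %{beats: opts.beats}}
  end

  @impl true
  def handle_buffer(:input, buffer, _ctx, state) do
    # Beats whose timestamp is at or before this buffer's pts are due now
    # (requires buffer.pts to be set upstream).
    {due, remaining} = Enum.split_while(state.beats, &(&1 <= buffer.pts))
    notifications = Enum.map(due, &{:notify_parent, {:beat, &1}})
    {notifications ++ [buffer: {:output, buffer}], %{state | beats: remaining}}
  end
end
The parent pipeline would then receive these in handle_child_notification and could hand them off to whatever process drives the DMX output.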
thbar (OP) · 5mo ago
Many thanks - I'll dive into this! I also just found this, which provides interesting pointers: https://github.com/membraneframework/membrane_demo/blob/master/livebooks/soundwave/soundwave.livemd (I'll share my pipeline in detail).