Understanding pd's audio latency
Without knowing exactly what you've already done to achieve lower latency, I'll cover the basics.
Latency is controlled by several factors:
The kernel: while you don't need a low-latency kernel, getting one or using Ubuntu Studio can help. There have been some incompatibilities with Nvidia graphics drivers in the past, so YMMV.
Kernel configuration: If you haven't read this page, read it - all of it - and do the tweaks that pertain to you. https://help.ubuntu.com/community/UbuntuStudioPreparation
Kernel sound drivers: you are either going to use ALSA (through JACK) or FFADO for a FireWire audio device. PulseAudio is not an option here. PulseAudio is not the devil, it's just not made for low-latency work; refer to the link above for how to deal with it.
JACK configuration: once you've tweaked everything above, you can lower your frames/period and periods/buffer settings as low as they will go without xruns (see the sketch after this list for the math).
Your soundcard: not all soundcards can get down to 2 ms latency in JACK, and considering you are using just the built-in audio, you may have to consider an external USB audio device. Something like these would work:
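Coming back to the frames/period and periods/buffer settings above, here is a minimal sketch of the arithmetic (assuming a 44.1 kHz sample rate; adjust for yours):

```python
# Nominal JACK buffering latency = frames/period * periods/buffer / sample rate.
# From the command line these map to, e.g.: jackd -d alsa -r 44100 -p 64 -n 2
sample_rate = 44100        # Hz (assumed)
frames_per_period = 64     # jackd -p
periods_per_buffer = 2     # jackd -n

latency_ms = 1000 * frames_per_period * periods_per_buffer / sample_rate
print(f"{latency_ms:.2f} ms")  # ~2.90 ms; halve -p or -n to go lower, until xruns
```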
Understanding pd's audio latency
hey hey!
probably my problems with pd's audio latency are easy to explain and eliminate, but so far i haven't managed to solve them myself.
basically i am trying to create a looping module for some simple live overdubbing. i already expected that i would have to factor in the latency for precise timing, but the preset 50ms doesn't give correct results. so i measured the latency with this latency.pd patch, using jack to create the feedback loop, and it shows me a latency of about 24.6ms. is this latency constant? and what does the 50ms preset mean?
and by the way, what are the first steps to optimize (minimize) the latency?
i am using ubuntu studio, pd-extended and right now the built-in sound card of my IBM ThinkPad T41.
thank you!
Latency worse through ASIO than through MMIO
Hello brilliant creators of music,
In an attempt to reduce my sound synthesis latency, I picked up a new sound card with ASIO drivers. When I got the card I reconfigured PD to use the ASIO drivers and set my latency down to ~10 ms, and the audio sounded fine. However, when I tested the latency, I found that I get an extra 100 ms of delay on top of PD's delay when I use the ASIO drivers. Once I take this mystery latency into account, I get better latency with the MMIO drivers.
I noticed that PD uses pulseaudio for routing when it uses the ASIO drivers. I know that pulseaudio is supposed to be low latency, but could this be where my magic 100ms delay is coming from? If so, do any of you know how to fix it?
thanks!
TouchOSC and pd-extended problems
Guys, sorry for being a total noob.
I too am trying to use TouchOSC with DJ software on Win7, but when I load a custom patch created by touchosc2pd.jar, it gives the same error:
error: [dumpOSC]: OSCx is deprecated!
Consider switching to mrpeach's [unpackOSC] and [udpreceive]
OSCroute object version 1.05 by Matt Wright. pd: jdl Win32 raf.
OSCroute Copyright © 1999 Regents of the Univ. of California. All Rights Reserved.
error: [OSCroute]: OSCx is deprecated!
Consider switching to mrpeach's [routeOSC]
I added mrpeach to startup, but PD failed to load it.
Then I added unpackOSC and udpreceive, and then mrpeach. PD loaded mrpeach with no problems, but when I ran my patch, it produced the same error.
I am totally lost. Every step of the way, TouchOSC gives me serious difficulties. By the way, the patch I am trying to generate is not a custom one, but merely the patch for mix2ipad, which is one of the two default layouts of TouchOSC for iPad.
Edit*** I attached my patch file.
MAJOR EDIT***
Ok, so I looked at the previous attachments and corrected my pd file. I added [udpreceive 8000], [unpackOSC], [pipelist], [declare -lib mrpeach] and [import mrpeach].
I still cannot get MIDI signals; PD won't respond to my iPad.
What do i do?
FINAL EDIT***
Lol! I got it working: I removed the declare command and opened the DJ software to see my TouchOSC inputs generate MIDI responses. Thanks to all of you!!!
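For anyone else stuck at the "PD won't respond" stage: before debugging the patch, it's worth confirming that OSC packets from the device are reaching the computer at all. A minimal sketch in Python (assuming the third-party python-osc package, and port 8000 as in the patch above):

```python
# Print every OSC message arriving on UDP port 8000, independently of Pd.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

dispatcher = Dispatcher()
# Catch-all handler: log the OSC address and its arguments.
dispatcher.set_default_handler(lambda address, *args: print(address, args))

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()  # Ctrl-C to stop
```

If nothing prints while you move a control in TouchOSC, the problem is the network setup (host IP and outgoing port in the TouchOSC settings), not the Pd patch.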
MIDI latency in PD
Hi to all, this is my first post in this forum.
I'm trying to use PD patches in my setup, and now I'm struggling with the MIDI latency that PD introduces.
First of all, I'm a drummer and I use a drum pad (DrumKat) to trigger samples on my laptop, so latency is a first-class problem for me.
I have two setups: one with Ubuntu 10.04 (kernel 2.6.33-29-realtime) + jack, and the other with OSX Leopard + JackOSX. The first one is the more thoroughly tested.
I want to know the latency that PD introduces between [notein] and [noteout].
To do this, I simply connected my drum pad to PD's MIDI in, PD's MIDI out back to PD's MIDI in, and opened my patch (attached to this post).
If the patch is right, there is a link between jack's latency and PD's MIDI latency.
If I set jack's frames/period to 256, I get 11.6 ms of audio latency and 5.8 ms of PD MIDI latency.
If I set jack's frames/period to 128, I get 5.8 ms of audio latency and 2.9 ms of PD MIDI latency.
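Assuming a 44.1 kHz sample rate (the post doesn't state it), those figures are exactly one and two JACK periods; a quick check:

```python
# One JACK period in milliseconds; the reported audio latency is two periods.
SAMPLE_RATE = 44100  # Hz, assumed

for frames in (256, 128):
    period_ms = 1000 * frames / SAMPLE_RATE
    print(f"{frames} frames: one period = {period_ms:.1f} ms, two = {2 * period_ms:.1f} ms")
# 256 frames: one period = 5.8 ms, two = 11.6 ms
# 128 frames: one period = 2.9 ms, two = 5.8 ms
```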
Is this correct? Is there a way to reduce MIDI latency without changing the audio latency of jack?
Thank you, and sorry for my English.
Nicola
TouchOSC latency
It took me hours to get TouchOSC working with Pure Data on a wireless local network (no router). Now that it finally works, I find myself in trouble with latency. I can send data in real time with perfect sync from PD to TouchOSC, but it goes crazy with delay every time I try to adjust a slider or tap a button on the TouchOSC interface.
The weird thing is that it seems to work fine at first but then, as I keep playing with the interface, the lag becomes more and more annoying.
I don't know what to do to solve this. Any advice will be greatly appreciated!
TouchOSC is running on an iPhone 3GS + iOS 4.0; Pure Data is on a MacBook Pro + Mac OS X 10.5.8.
Midi to TouchOSC
Hi, I tried to send MIDI to TouchOSC too. I tested with a fader (range 0 to 127) whose outlet is connected to [/ 127] (to convert to TouchOSC's fader range), then to [send /1/fader1 $1] ($1 carries the fader value converted to TouchOSC's range), then to the inlet of [sendOSC]. A [connect <ip> <port>] message is also connected to the inlet of [sendOSC]. When I ran Pd, I first clicked the connect box to connect to TouchOSC, then moved the fader with the mouse, but nothing changed in TouchOSC. Can you help me?
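One way to rule out the patch itself (this also applies to the Cubase question below): send the same OSC message to the device from outside Pd and see whether the fader moves. A minimal sketch with Python's third-party python-osc package; the IP and port are placeholders for whatever your TouchOSC network settings show:

```python
# Move TouchOSC's /1/fader1 to its midpoint, bypassing Pd entirely.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)  # hypothetical device IP / incoming port
client.send_message("/1/fader1", 0.5)           # TouchOSC faders expect 0.0-1.0
```

If this moves the fader but the patch does not, the problem is in the patch or its connect step; if it doesn't, check the incoming port and host IP configured in TouchOSC.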
Send to TouchOSC using sendOSC
Hello!
I really googled and searched for days before posting (I even asked on gearslutz first, hoping that someone else might have tried this before me: http://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/502944-touchosc-puredata.html), but alas, I failed to come up with a solution, which I suspect may be easy.
Sending data FROM TouchOSC works fine, and I receive the MIDI messages in my sequencer (Cubase 5) just fine, but I can't figure out a way of sending MIDI FROM my sequencer TO TouchOSC.
I have attached a screenshot which shows where I am currently at, and I feel I must be really close. I would very much appreciate a nudge in the right direction.
Thanks a lot in advance!
Export patch as rtas?
@Maelstorm said:
If you're on OSX, jack can be used as an insert plug-in so you can avoid the separate tracks, but you still get the latency.
which you can, depending on your host, eliminate by setting the track delay for the track with the plugin. So if the buffer is 512 samples / 11.82 ms, then set the delay to that and it should be spot on.
I've had the whole JACK graph latency explained to me numerous times by Stephane Letz and it still doesn't go in..... Here's what he told me:
> > It's the Pd > JACK > Ableton latency. (Ableton has otoh 3 different
> > ways of manually setting latency compensation - I'm just not very
> > clear on where to start with regard to input from JACK)
>
> There is NO latency introduced in a Pd > JACK > Ableton kind of
> chain; the JACK server activates each client in turn (that is, Pd *then*
> Ableton in this case) within the *same* audio cycle.
>
> Remember: the JACK server is able to "sort" (in some way) the graph
> of all connected clients so as to activate each client's audio callback at the
> right place during the audio cycle. For example:
>
> 1) in a sequential kind of graph like IN ==> A ==> B ==> C ==> OUT,
> the JACK server will activate A (which would typically consume the new audio
> buffers available from the machine's audio IN drivers), then activate B (which
> would typically consume the "output" just produced by A), then activate
> C, then produce the machine's OUT audio buffers.
>
> 2) in a graph with a parallel sub-graph, like IN ==> A ==> B ==> C
> ==> OUT and IN ==> D ==> B ==> C ==> OUT (that is, both A and D are
> connected to the input and send to B), the JACK server is able to
> activate A and D at the same time (since they both depend only on IN),
> and a multi-core machine will typically run A and D at the same time
> on 2 different cores. Then, when A *and* D are finished, B can be
> activated... and so on.
>
> The input/output latency of a usual CoreAudio application is:
> driver input latency + driver input latency offset + 2 ×
> application buffer-size + driver output latency + driver output
> latency offset.
>
this next part is the important bit, I think...
> For a graph of JACK clients: driver input latency + driver input
> latency offset + 2 × JACK server buffer-size + (one extra buffer-
> size) + driver output latency + driver output latency offset.
>
> (Note: there is an additional one-extra-buffer-size latency on OSX,
> since the JACK server runs in the so-called "asynchronous" mode.
> [There is also a "synchronous" mode without this extra buffer-size,
> but it is less reliable on OSX, so we chose to use the
> "asynchronous" mode by default.])
>
> Stephane
>
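To make that formula concrete, here is a rough worked example; every number below is made up purely for illustration:

```python
# JACK graph latency per Stephane's formula (OSX "asynchronous" mode):
# in + in_offset + 2*buffer + 1 extra buffer + out + out_offset.
sample_rate = 44100                    # Hz (illustrative)
buffer_size = 512                      # JACK server buffer size, samples (illustrative)
driver_in, driver_out = 64, 64         # driver latencies, samples (made up)
in_offset, out_offset = 0, 0           # driver latency offsets (made up)

total = driver_in + in_offset + 3 * buffer_size + driver_out + out_offset
print(f"{total} samples = {1000 * total / sample_rate:.1f} ms")  # 1664 samples = 37.7 ms
```

which is the kind of figure you would then dial in as track delay in the host, as suggested above.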
Calculation Frame Size --> Latency issues
Hi everybody,
I use PD as part of my PhD project. Since audio/haptic asynchrony is an issue, I have to keep latency down. My test subjects have to press buttons and virtually instantly hear the sound they triggered. The sounds are stored in arrays and played back using [tabplay~].
I have just checked the delay between the press of a button and the resulting sound. The buttons are connected to my computer via a custom microcontroller board which is quite common at my university and sends the trigger command to a PD [comport] object. The baud rate is set to 230400 bps and only one byte is needed as the command. The latency of the entire button-microcontroller-serial chain should be way below 1 ms.
For sound output I use a Lexicon Lambda USB interface at 96 kHz and 5 ms latency.
When I monitor the delay between the physical trigger and the output, I get latency times ranging from just under 10 ms to over 25 ms. More annoyingly, this latency is not stable but changes every time I trigger.
Since my "hardware" latency should be just over 5 ms and PD adds up to 20 ms to this I assume PD interally processes things at a certain framerate which should be arround 50 Hz or 20 ms. Is it possible to increase this framerate in any way?
The system runs on Ubuntu Studio 8.04; it would be possible to switch to another audio interface (RME Hammerfall).
Any ideas are greatly appreciated.
Alex