PD sending to two devices?
You can assign discrete inputs and outputs, but not multiple outputs across different audio devices. Sound software has to access sound hardware through its driver, and it can't handle two sets of calls to two different drivers (if someone has the proper tech jargon, please step in).
There are ways of cheating but they all stink in some way:
On windows: VAC/audio repeater combo
You can use 'Virtual Audio Cables' to create virtual devices that you can use for input and output between software. You can piggy-back that audio to different hardware devices by using the audiorepeater utility. So you can set up VAC to make 2 cables (VAC1 and VAC2). PD will use VAC1 and VAC2 as outputs. (Since neither VAC1 nor VAC2 is hooked up to real hardware, you don't hear anything yet.) Then you run 2 instances of audiorepeater. One instance routes VAC1 to 'Onboard speaker out' and the other instance routes VAC2 to 'USB audio device out'.
Downside: Latency, Latency, Latency
The sound is delayed at each step in the chain, and the audiorepeater latency can only go so low before you get scratchy static and dropouts.
On Linux: ALSA virtual device
You can make a virtual soundcard using ALSA that combines the outputs of two audio cards into one 'virtual' one, and then set JACK to output to that device (the setup of which is too complex to explain here).
Each soundcard has its own timing, and there's no way to keep them in sync, so no matter how you control the latency you'll have clicks, pops, dropouts, and tons of underruns.
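For reference, the kind of ALSA virtual device described above is usually built from the multi and route PCM plugins. Here is a rough sketch of what a ~/.asoundrc might look like; the hw:0,0 and hw:1,0 card numbers are assumptions, so check yours with `cat /proc/asound/cards` first:

```
# ~/.asoundrc sketch (card numbers are assumptions, adjust to your hardware)

# Combine two real cards into one 4-channel virtual device
pcm.multi {
    type multi
    slaves.a.pcm "hw:0,0"      # first card, e.g. onboard audio
    slaves.a.channels 2
    slaves.b.pcm "hw:1,0"      # second card, e.g. USB audio device
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

# Route a stereo stream to both cards at once
pcm.both {
    type route
    slave.pcm "multi"
    slave.channels 4
    ttable.0.0 1
    ttable.1.1 1
    ttable.0.2 1
    ttable.1.3 1
}
```

You would then point JACK (or any ALSA app) at the "both" device. This does nothing about the clock-drift problem described above; it only makes the two cards appear as one.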
On Mac OSX:
I haven't tried it, but I believe you can also make an aggregate device.
There is also a VAC/audiorepeater type program you can use called Soundflower, but like VAC it's really only for routing sound between apps, not between hardware.
Long story short, between the latency introduced by the software and the clock-timing differences between soundcards, you can't really get anything usable.
0ms delay time in Pd: is it a dream?
0ms will be impossible I suspect just due to the way that digital audio works.
Start by assuming that everything deals in 64-sample blocks of audio. This is a big assumption, as I'm not sure exactly how the audio hardware does what it does, but it should be fine for some rough calculations.
We'll also just be dealing with one channel here.
Audio hardware needs to capture those 64 samples of data from one channel. It won't send that data down the USB/FireWire/PCI bus until it has the full 64 samples, which means there is already a delay there: the first sample won't be sent until the last is captured.
(1 second / 44100 samples) * 64 samples ≈ 0.00145 seconds, or about one and a half milliseconds
Even with the best audio hardware this limitation is still there.
On top of this you need to add the time to process the audio; there will probably be some time spent scheduling everything, etc. Ultimately you're always going to have a couple of milliseconds of latency between audio coming into the computer and going back out.
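Assuming nothing but that block-filling delay, the arithmetic above can be checked with a quick sketch (the 64-sample block and 44.1 kHz rate are just the values assumed in the text):

```python
# Rough one-way latency from waiting for a full block of samples to arrive.
# Assumes the 64-sample blocks and 44.1 kHz rate used in the discussion above.

def block_latency_ms(block_size, sample_rate=44100):
    """Milliseconds needed to capture one full block at the given rate."""
    return block_size / sample_rate * 1000.0

print(round(block_latency_ms(64), 2))   # about 1.45 ms per 64-sample block
```

Doubling the block size doubles this delay, which is why small block sizes matter for low-latency work.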
That said, how much of an issue would 5ms of latency be? Generally people can't notice anything less than 20ms of latency, and I understand that Yamaha engineers aim for about 4 or 5ms of latency in all their electronic MIDI controllers/instruments, as that's the point at which it sounds instantaneous to people.
Someone with more experience here please feel free to correct me if I'm wrong though
Getting started: Pd killing my sound when I open it?
Hi. Earlier this week I got Pure Data. Ready to build, I checked my computer audio and went to Media > Test Audio and MIDI. But when I clicked on the test tones I got no sound. Stranger yet, this somehow also kills the audio for the ENTIRE computer until I reboot. I have tried opening other audio apps, like OpenMusic, and for whatever reason this kills their audio too. But YouTube, Winamp, and the trackers I've been using do not shut down the audio.
Please help. I really want to start working with this program. Thanks!
External Audio Interface mic problem
I am looking for help to solve my problem with Pure Data and my audio interface. I am using Pd as live performance software, and I need to use my external audio interface for better quality and to avoid sound delay.
The interface I am using is a Zoom R24 recorder with an audio interface and some other features. Pd can find it easily and uses it with no problem as a sound output device, but once I set up the input to use the mic, the sound cuts off and the mic shows no sign of life... I have tried different drivers and different ways of turning the mic on, but no luck...
Is there any way I could use it as both audio input and output for live performance? Maybe I need some special drivers, or special commands to turn it on?
My internal audio card works fine, and a setup with the internal audio card as input and the external interface as output also works... But once I switch, no signals! What should I do?!
I conceptualized this the other day. The main reason I wanted to make this is because I'm a little tired of complicated ableton live. I wanted to just be able to right click parameters and tell them to follow midi tracks.
The big feature in this abstract is a "Midi CC Module Window" that contains an unlimited (or potentially very large) number of Midi CC Envelope Modules. In each Midi CC Envelope Module are Midi CC Envelope Clips. These clips hold a waveform that is plotted on a tempo-divided graph. The waveform is played in a loop and synced to the tempo according to how long the loop is. Only one clip can be playing per module. If a parameter is right-clicked, you can choose "Follow Midi CC Envelope Module 1" and the parameter will then follow the envelope that is looping in "Midi CC Envelope Module 1".
Midi note clips function in the same way. Every instrument will be able to select one Midi Notes Module. If you right clicked "Instrument Module 2" in the "Instrument Module Window" and selected "Midi input from Midi Notes Module 1", then the notes coming out of "Midi Notes Module 1" would be playing through the single virtual instrument you placed in "Instrument Module 2".
If you want the sound to come out of your speakers, then navigate to the "Bus" window. Select "Instrument Module 2" with a drop-down check off menu by right-clicking "Inputs". While still in the "Bus" window look at the "Output" window and check the box that says "Audio Output". Now the sound is coming through your speakers. Check off more Instrument Modules or Audio Track Modules to get more sound coming through the same bus.
Turn the "Aux" on to put all audio through effects.
Work in "Bounce" by selecting inputs like "Input Module 3" by right clicking and checking off Input Modules. Then press record and stop. Copy and paste your clip to an Audio Track Module, the "Sampler" or a Side Chain Audio Track Module.
Work in "Master Bounce" to produce audio clips by recording whatever is coming through the system for everyone to hear.
Chop and screw your audio in the sampler with highlight and right click processing effects. Glue your sample together and put it in an Audio Track Module or a Side Chain Audio Track Module.
Use the "Threshold Setter" to perform long linear modulation. Right click any parameter and select "Adjust to Threshold". The parameter will then adjust its minimum and maximum values over the length of time described in the "Threshold Setter".
The "Execution Engine" is used to make sure all changes happen in sync with the music.
E.g., if you selected a subdivision of 2 and a length of 2, it would take four quarter beats (starting from the next quarter beat) for the change to take place. So if you're somewhere in the 'a' of beat 1 (counting "1 e + a"), you will have to wait for 2, 3, 4, and 5 to pass, and your change would happen on 6.
E.g., if you selected a subdivision of 1 and a length of 3, you would have to wait 12 beats starting on the next quarter beat.
E.g., if you selected a subdivision of 8 and a length of 3, you would have to wait one and a half quarter beats starting on the next 8th note.
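A minimal sketch of that timing rule, under the assumption (consistent with all three examples above) that a subdivision of n means a 1/n note, so one subdivision lasts 4/n quarter beats:

```python
# Hypothetical model of the "Execution Engine" delay described above.
# Assumption: subdivision n = a 1/n note, i.e. each subdivision lasts
# 4/n quarter beats, and the change waits `length` subdivisions from
# the next subdivision boundary.

def delay_in_quarter_beats(subdivision, length):
    """Quarter beats to wait, counted from the next subdivision boundary."""
    return (4.0 / subdivision) * length

print(delay_in_quarter_beats(2, 2))  # 4.0 quarter beats
print(delay_in_quarter_beats(1, 3))  # 12.0 quarter beats
print(delay_in_quarter_beats(8, 3))  # 1.5 quarter beats
```

This reproduces all three worked examples, but it is only one reading of the design; the actual engine might count boundaries differently.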
How to compile and install a Real-Time Kernel for Debian #1
original post by Sumidero, cut and pasted from here:
After some exchange of posts in this thread:
I told @katjav and @PonZo that soon I would post a log of a real-time kernel compilation and installation for Debian-based machines, so that Pd would run fast and smooth on any machine, even a small netbook. I could only find the log from the compilation for a desktop machine, but the process is the same; just adjust the choices to your equipment's features. I hope this encourages you to compile your own real-time kernel.
You can download the pdf file from here:
You can find out the chipset name of your audio card using this command:
# cat /proc/asound/cards
There are some extra tweaks needed to get real-time performance:
Install jack with its graphic front end:
# aptitude install qjackctl
Once installed, it prompts us to automatically configure this file:
We agree, and then we tweak this other file, opening it with any text editor (always as root):
# gedit /etc/security/limits.conf
Then we add these lines at the bottom of the limits.conf file to assign high and real-time priorities to the @audio group:
@audio - rtprio 99
@audio - memlock unlimited
@audio - nice -10
Don't forget to add your user to the @audio group:
# adduser <userName> audio
I hope it works for you too. Best regards to all of you.
Understanding pd's audio latency
Without knowing exactly what you've already done to achieve lower latency, I'll cover the basics.
Latency is controlled by several factors:
The kernel: while you don't need a low-latency kernel, getting one or using Ubuntu Studio can help. There have been some incompatibilities with Nvidia graphics drivers in the past, so YMMV.
Kernel configuration: If you haven't read this page, read it - all of it - and do the tweaks that pertain to you. https://help.ubuntu.com/community/UbuntuStudioPreparation
Kernel sound drivers: you are either going to use ALSA (through JACK) or FFADO for a FireWire audio device. PulseAudio is not an option here. PulseAudio is not the devil; it's just not made for low-latency work. Refer to the link above on how to deal with it.
JACK configuration: once you've tweaked everything above, you can lower your frames/period and periods/buffer settings as low as they will go without xruns.
Your soundcard: not all soundcards can get down to 2ms of latency in JACK, and considering you are using just the built-in audio, you may have to consider an external USB audio device. Something like these would work:
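As a rough guide to the frames/period and periods/buffer trade-off mentioned above, JACK's nominal buffer latency is frames times periods divided by sample rate. A small sketch (the particular settings are just example values, not recommendations):

```python
# Nominal JACK buffer latency: frames/period * periods/buffer / sample rate.
# The settings below are example values only.

def jack_latency_ms(frames_per_period, periods_per_buffer, sample_rate):
    """Nominal buffer latency in milliseconds for the given JACK settings."""
    return frames_per_period * periods_per_buffer / sample_rate * 1000.0

print(round(jack_latency_ms(64, 2, 48000), 2))   # 2.67 ms
print(round(jack_latency_ms(256, 2, 44100), 2))  # 11.61 ms
```

Real round-trip latency will be higher once converter and driver overheads are added, but this number is what qjackctl reports and is the knob you actually control.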
PD within Virtual Box, Audio Problems
Pd works better when it has low-latency access to the sound hardware... which I don't think it's going to get running in VirtualBox. That said, try checking your audio settings under Media -> Audio Settings and raise the delay (msec) up to 1000 and see if that helps the audio. JACK won't work because it can't talk directly to the audio hardware through ALSA; the Windows host has that locked up, and Ubuntu/ALSA/JACK can only send audio through the emulated hardware.
I do recall people trying to work around this by giving VirtualBox exclusive access to a USB/FireWire audio interface and then using that audio device in the Ubuntu guest running inside VirtualBox, though I'm not sure what came of it. I would check the VirtualBox forums.
GEM alternative for Tk GUIs?
Well I worked a week on that cute GEM GUI, but when connecting the audio processing I stumbled upon the big drawback. Dropouts!
I should have known, because I played around with sonsofsol's ZODIAC which must load the audio in a different Pd instance. Naively, I assumed this is useful for cases when you want to run the processes on different computers. I did not realize that it is absolutely problematic to run GEM and audio in the same Pd (while it's so easy to check that...).
OK then, we need to run two processes for one application. On Windows and Linux this is straightforward, because if you open a patch by clicking, it is automatically opened in a new Pd. That is how you can have loads of Pds open without even being aware of it. On OSX only one Pd is loaded, even if you click to open new patches from the Finder. So you need to open a second Pd from the command line. Further, you need to turn audio on and off again in the Pd with the GEM. Why? When you start Pd, dsp is apparently off, but there is still CPU load associated with audio computation. This only stops after turning dsp on and off. If you forget this, some 10% CPU load is wasted on a process doing no audio at all. And another nuisance: two Pd consoles on the screen - which one is which?
I wanted to use GEM to make some patches very user-friendly, with informative GUI elements. But in a two-Pd version, it is no longer so straightforward even to open the patches. For Pd beginners such an application may be discouraging. Therefore I think that, when we want to use GEM as an interactive user interface for widely shared patches, these issues should be solved somehow. Maybe a script should be developed to perform all the necessary actions in the correct order.
By the way, has anyone tried using [pd~] to separate the processes? I have never used [pd~] so far; maybe this is a very stupid question...
Workshop: Pd as your embeddable audio engine, NYC 4/23
"Pd as your embedded audio engine" will teach all about embedding libpd as the sound engine for your app, whether its iPhone, Android, Java, OpenFrameworks, Processing, etc. This workshop provides a broad spectrum of different ways of connecting Pd to other things. Having hardware isn't a requirement either. The workshop will cover ways of interfacing with Pd from computer to computer. Bring your laptop and devices that you want to install libpd on (Android, iPhone, etc.)
Here is an outline of topics:
* History of Pd as an engine
* Ways to interface with a Pd process
* Midi & OSC
* Python + sockets
* Parsing patches in three languages
* libpd on Android and iOS
* RjDj and ScenePlayer
Purchase a ticket here: http://www.eventbrite.com/event/1491957485
Workshop: Pd as your embedded audio engine
Taught by: Chris McCormick, http://mccormick.cx/chrism
When: Saturday, April 23, 2011 from 1:00 PM - 5:00 PM (EDT)
Where: NYCResistor, 87 3rd Ave, Brooklyn, http://nycresistor.com