Pure Data / Raspberry Pi / Realtime Audio / Permissions
THE GOAL:
I want my Raspberry Pi 2 to automatically start up the Jack server with realtime scheduling, and subsequently start Pure Data with realtime scheduling, load a patch, &c., all from a login shell, without any user intervention.
As a performance artist working primarily with psychodrama (the technology is definitely NOT the important part here), fiddling around at a terminal right before or during a performance is kind of... psychically inconvenient. I need a box that I can plug in, give the audio output to the sound guy, and be ready to go.
PREREQUISITES:
I use Raspbian with a Linux kernel compiled with realtime goodness. I have hand-compiled Jack2 and Pure Data with realtime support in order to take advantage of this. Running a process with realtime priority requires the proper PAM directives set in /etc/security/limits.conf and related places, but that is beyond the scope of this little write-up.
Also somewhat relevant: I use a M-Audio MobilePre USB soundcard (sounds pretty awful by today's standards, but it's an extremely USEFUL box and sounds good enough for the work I do). For full-duplex sound, this requires the RasPi's USB to be set to single speed. In this configuration, I can get just under 2.9ms latency with good CPU overhead for Pure Data to run a few of my 64-voice wavetable and delay line granulators. Yeah!
THE PROBLEM:
Purely by happenstance, I had given the jackd command in my startup script the option “-s”, which allows the server to ignore overruns and so on. So things seemed to be working as expected, but I noticed a lot more glitches than when I manually started Jack and Pd from the terminal without the “-s” option. Upon removing it from my startup script, everything failed! WAH.
So I started piping STDERR and STDOUT to text files so I could read what either Jack or Pd was complaining about. As it turns out, Jack was unable to start with realtime priority due to a permissions problem. (I assume this is one of the things the “-s” option allows jackd to ignore, so it starts up with non-realtime priority instead. The problem is that Pure Data can’t connect to a non-realtime Jack server when its “-rt” option is specified.)
Now, I had already been through the whole rigamarole of setting proper memory and priority limits for the “audio” group, to which the user “pi” belongs. So I thought, okay, I have to execute these commands as “pi”, while simulating a login shell, because the security limits in question are only set during login.
So I did this:
su -l pi -c "/usr/local/bin/jackd -R -dalsa -dhw:1,0 -p128 -n3 -r44100 -S >> /home/pi/jackd.log 2>&1 &"
This says “log in as user ‘pi’ and then run the jackd command with these options, piping the output to this log file, and run it in the background”. Well, I still got all the same errors about not being able to set realtime priority. WHYYYYYYYYY?
THE SOLUTION:
I hunted and hunted and hunted on a Very Popular Search Engine til I decided to try searching “security limits not loaded with su -l” and found this.
(Makes me think of that Talking Heads lyric, “Isn’t it weird / Looks too obscure to me”.)
So by uncommenting the line "# session required pam_limits.so" in /etc/pam.d/su, everything started working as expected.
CONCLUSION:
I now know a LOT MORE about PAM and how important it is to keep in mind when and in what order scripts and other little subsystems are executed; but also that sometimes the problem is EXTREMELY OBSCURE and is to be found in some seemingly far-flung config file.
I hope this helps anybody out there working with Pure Data and the RasPi. The second generation board really packs quite a punch and can run several hundred audio grains (run by vline~ and enveloped by vline~ and cos~) simultaneously without a problem. And I'm pretty sure this is just using ONE of the 4 cores!
I'm by no means an expert Linux sysadmin, so if you have any other suggestions or corrections, please let me know! I wouldn't have been able to get this far without all the generous and helpful writeups everybody else has contributed, both within the RasPi and Pure Data communities. If you have any questions about anything I glossed over here, I'll do my best to answer them.
Long soundfiles in PD
I have a similar problem:
using [tabread~] and [tabread4~] to read a mono audio sample, after 381 seconds the audio starts to distort strongly.
I have tried aif, wav, 44.1 and 48 kHz, and different settings for Array and Table (yes/no Graph on parent, yes/no save contents) with no better results.
I realized that the array draws the sound curve only partially, exactly up to 381 secs. So I suppose it is a problem of a limit on data length.
What I want to do is make a simulation of 2 analog devices (vinyl turntable or tape recorder) playing the same music (an audio file) of +- 16 minutes: I am not really interested in wow & flutter simulation, but in a non-simultaneous start and not exactly the same rotation speed: this last means a "transposition", or a slight difference in sample reading speed.
I tried to resize the array but I get a resize error.
many thanks
Problem compiling external on Windows
@joelakes said:
Hi David,
Did you resolve the problems with the &s_float in the counter example? I am getting the same problem using Dev-C++.
Thanks,
Joe
I was having the same problem. Also in the [pan~] signal class example with &s_signal.
But I finally solved it, or at least found a way around it that seems to work.
In How to Write PD Externals there's a line explaining that s_signal refers to the string "signal" in a lookup table somewhere, so I tried just replacing &s_signal with "signal" each place it was used. Of course, this didn't work.
Later I noticed that anywhere else a string appeared, it was added with gensym("string"). So this time I replaced &s_signal with gensym("signal") and it actually worked.
My ideas on what's going on are either:
A. The external is compiled before "signal" is added to the table.
B. "signal" is never added to the table, unless you do it
C. The external can't see the table, for whatever reason.
These are just guesses. I'm learning as I go, and not very familiar with C yet.
I would like to know however if this is a good practice or a hack.
Also, should I use gensym() every time s_signal appears, or does it just need to be called once? I.e. will it cause any problems to call gensym() multiple times with the same argument?
Hopefully this will help someone else out there having the same problem.
Set up the path for abstractions
hey,
I don't know, but I wonder if maybe there isn't a more general problem going on right now (though it seems simple and weird, so I don't get it).
I am also having trouble loading abstractions, even though I have done it quite a lot before. (I posted this problem a few days ago, but am posting in response to your problem in hopes that it might be a more global problem, though my abstractions, other than the ones mentioned below, do load, so I don't understand.)
specifically, was the problem with the path not being recognised?
anyway -
Hello everyone,
This is a strange problem because I have loaded libraries and things with the 'Paths' dialog under the file menu before and had no problems.
I am trying to get Chris McCormick's s-abstractions to load. The folder is in the same folder as my other libraries and it is listed in both the "paths" and "startup" areas. It wasn't before but I added it.
I am running windows 7 64 bit. Another oddity that I noticed is that in my Program Files (x86) folder, which is where the 32-bit programs live, I have pd installed and the folder is simply called pd. However in the Program Files folder (missing the x86, where the 64-bit programs live) I have a folder called "Pd-0.42.5-extended"
I wonder if that couldn't be the problem. The s-abstractions folder is included in both though...
Hopefully somebody has some idea about this...
((anyway sorry to post twice but I hoped maybe there was some common problem there))
Pix_image problem on Linux
Hello,
This problem has been discussed before on the Pd list but still has no real solution. When I create [pix_image] in a patch on Debian Linux, the object causes heavy CPU load even before it is connected to anything, some 4% per object instance. On the Pd list it was suggested by several people to send the message [thread 0( to the object. This will indeed stop the crazy CPU load, but then the object can no longer load images.. Sending [thread 1( reintroduces the CPU load, but does not reenable image loading.
edit: the loaded images are lost (it is possible to load new images with the 'open' message, however not with an initialisation argument).
This problem does not exist on Windows with the same hardware, and also not on my Macbook. On Debian, I first used the binary Gem package, but later compiled from source to include font support. Both these versions have the same problem. Can anyone confirm this issue, or report about a Gem for Linux where this problem does not occur? Or could it be a driver-related issue?
Thanks for any help or comment.
Katja
Couldn't create under *nix :(
Hi all!
I'll use this first message to introduce myself to this forum and to ask for some noob-help
I'm Zoten, from Italy, and I'm studying computer science at Udine university.
A little while ago I got started with Pd and data-flow programming for a little project, which I've just decided to enlarge.
So, my aim is now to make some interaction between a c++/qt application and PureData over LAN/web, but the problem comes muuuuch earlier.
I'm an open source supporter, so I decided to make all under free OSs.
So, here's the problem:
I've got two stations, that I (want to) use to try my programs in real situation.
Station one runs Sabayon 3.5 as OS, with Pd compiled from sources, and station two Ubuntu 8.04 (Hardy), with Pd installed by GUI, but the problem is the same: I open my old project (a simple client/server synth that I want to use, among other things, to sniff some packets) and it starts to give error messages like
sendOSC
..couldn't create
OSCroute /adsr
..couldn't create
and so on.
If I try to send something via web it comes with an error,
error: inlet: expected '' but got 'send'.
Searching the forum for similar issues, I tried installing pd-extended on the 'buntu machine, but the problem, if possible, degenerated, with more "couldn't create" issues.
On Windows XP SP2 the project seems to work, so I can easily say that it is some installation-path-lib problem.
Anyone has any idea?
Arrays being double-spaced?
Hi all,
I'm having a strange problem with arrays. I'm using [tabread] to read from an array, and every read from an odd-numbered index is returning 0. It's a bit like the array data is "double-spaced", i.e. I have to double the index to get the index I actually want.
For example, if I store {1, 2, 3, 4, 5, 6, 7}, I get back {1, 0, 2, 0, 3, 0, 4}. The array graph looks normal, and examining the array as a list looks fine as well.
On saving and reloading the patch, however, the array graph changes, showing every second value as 0 and the array contents are spread out. The list view also has a 0 at all the odd-numbered indexes. What's worse, [tabread]ing the array contents reveals that it's now quadruple-spaced, i.e. {1, 0, 0, 0, 2, 0, 0}!
I used Pd recently on Windows XP and Mac OS X and don't remember arrays behaving like this. I'm using Pd 0.40.2, compiled from source, on Gentoo GNU/Linux, kernel 2.6.19-gentoo-r5 for AMD64. I was using 0.39 when I first noticed the problem and grabbed the latest source. Could it perhaps be a 64-bit thing?
Thanks for any help you can offer,
screwtop
Loopback devices, virtual audio devices?
i'm looking for a free solution too. i don't think my original idea is going to work out because i don't have the time to implement it. maybe some of you have ideas for the problem i am trying to solve.
the problem is this:
i am helping a professor with some research. for his research he is doing a case study on 3 composers. he is asking them to record a narrative of their thoughts on the composition process as they compose. for this, the composers will be working on a mac studio workstation putting the composition together in logic. a second computer, a pc, running audacity will be used to record their narration. when the composer reaches what they consider a significant change in the piece, we are asking them to save their project to a new file (so we end up with a series of files showing the various stages of the composition). we would like a way, however, to map the timestamps of those files to the 'timeline' of their narrative.
here are a few solutions that are not exactly desirable:
a. do not stop the recording at all and make a note of what time the recording started. this means that you can calculate what time speech is taking place by adding the number of minutes and seconds (and hours) to the time at which the recording started. the problem with this is that it will yield very large files which are not very practical, especially considering that we have to transcribe these files.
b. have the composers start each segment of narration with a spoken timestamp: "it is now 9:15 on tuesday...." given the research methodology, this creates problems with the flow of a more natural narrative of the compositional process.
c. have the composers save each segment of narration as a separate time-stamped file. the problem here is that this takes more time, and could create a lot of files that would be very annoying to work with when it comes to transcribing.
d. my idea was to have, instead of just input from the microphone, 2 streams of audio input, one on the left channel and one on the right channel. on the left would be the recorded narrative. on the right would be an audio signal that encodes a time stamp. i was thinking of simply converting a number such as DDMMHHMM (day, month, hour, minute) into DTMF tones. these could then be translated back into a timestamp. an 8-tone dtmf sequence would be generated every 10 seconds or so. this way, as long as the narrative segment was longer than 10 seconds, it would contain a timestamp. the problem with this is that i have no way to mix such a signal with the input from the microphone.
any suggestions would be greatly appreciated. thanks.
2 audio files in 1 array
i want to record a soundfile, and have the option of pausing the recording, and then continuing later.
i got writesf~ going ok now, but it only has start and stop....no pause button.
the other way i can do it, is to write the audio into an array, but tabwrite~ also has no pause button.
the other option is to write the first part of the audio into 1 array, the second part of the audio into another array, the third part into another...etc.
but then the only way i can see to join these arrays is to play them back in sequence, in realtime, and tabwrite~ into a master array.
there is stuff in the documentation about how to join 2 arrays by using [tabwrite], but not how to do it by using [tabwrite~], or something similar.
any ideas????
Compiling for x86_64
The only problem I have so far (and it is a big one) is that it isn't reading the example files (bells, voice, and voice2) because it says they have malformed headers or something. I hope it's just the files and not the program itself.
Unfortunately this is very likely a problem with the program. Different processors expect different alignment of words in memory. For example, if you declare a C data structure of { int32, int8, int32 } and one processor A expects int32's to be on 16-bit boundaries while another processor B expects int32's to be on 32-bit boundaries, the C data structure will be represented in memory differently:
A: { int32, int8, pad8, int32 }
B: { int32, int8, pad8, pad8, pad8, int32 }
The padding is just empty space to align the data on the correct word boundaries.
The problem occurs when the program's assumptions about data layout in memory do not match what the compiler does: instead of reading the file format word by word and populating the data structure, the program reads a chunk of data and assumes that it matches the in-memory format.
Compare this with the problems of transferring data between big-endian and little-endian machines: you have to translate to and from the file format, and not assume that the data in the file will match the data in memory on all architectures.