How to compile and install a Real-Time Kernel for Debian #1
original post by Sumidero, cut and pasted from here:
http://puredata.hurleur.com/sujet-5605-compile-install-real-time-kernel-debian
Hello everybody:
After some exchange of posts in this thread:
http://puredata.hurleur.com/sujet-5578-old-toughbook-useful-live-events
I told @katjav and @PonZo that I would soon post a log of a real-time kernel compilation and installation for Debian-based machines, so that Pd runs fast and smooth on any machine, even a small netbook. I only found the log of a compilation for a desktop machine, but the process is the same; just adjust the choices to your hardware. I hope this encourages you to compile your own real-time kernel.
You can download the pdf file from here:
http://dl.dropbox.com/u/6086343/HowToCompileARealTimeKernelForDebian.pdf
Extra data:
You can find out the chipset name of your audio card using this command:
# cat /proc/asound/cards
There are some extra tweaks needed to develop real-time performance:
Install jack with its graphic front end:
# aptitude install qjackctl
Once installed, it prompts us to automatically configure this file:
/etc/security/limits.d/audio.conf
We agree, and then we tweak this other file, opening it with any text editor (always as root):
# gedit /etc/security/limits.conf
Then we add these lines at the bottom of the limits.conf file to assign high and real-time priorities to the @audio group:
@audio - rtprio 99
@audio - memlock unlimited
@audio - nice -10
Don't forget to add your user to the @audio group:
# adduser <userName> audio
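A quick way to check that those limits actually took effect (a minimal sketch; log out and back in first so PAM applies the new settings):

```shell
# After re-login, these should reflect the limits.conf values above
# (99 and "unlimited" if everything was picked up):
ulimit -r   # real-time priority limit
ulimit -l   # locked memory limit
# Confirm your user really landed in the audio group:
groups | grep -w audio
```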
I hope it works for you too. Best regards to all of you.
Sumidero
Synthetic drum-machine with chaos super-powers
hi!
this is my first 'big' patch, i'm a noob, so advice/critique/encouragement welcome!
it's a completely synthetic, controllable percussion machine. it has five tracks and four sequencers, which makes it polyphonic: kick, snare, hihat and two other digital percussive sounds. then there are a few randomizers and a slicer.
edit: apologies. trying to upload again.
Recording Video pix_write, pix_record, screen capture
use a video card output plugged into some sort of external recorder, then just record in realtime. i'm considering using this random VCR my roommate left here to do this, but some realtime dvd recorder would be ideal. unless you have a serious powerhouse it's kinda ridiculous to put your system through all the realtime rendering as well as recording to HD, although nowadays machines are way more powerful.
a few years ago i had a 1.5 ghz athlon with a geforce 3, and the method i ultimately settled on was recording the audio separately to .wav, dumping each frame of the gemwin to a jpg (which on that machine had to be a smaller resolution and TERRIBLE terrible quality), then using a complex script system around mencoder and ffmpeg to put the stills together into a video and add the audio. eventually it somewhat worked, and took LOTS of trial and error, but ultimately it never captured the true potential of what i was trying to record.
recording externally will let you save ALL your overhead for realtime audio/visuals, at as high a resolution and complexity as your machine will allow (and ultimately the recording resolution of the device).
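For anyone attempting the stills-plus-wav route today, the mencoder/ffmpeg scripting can be collapsed into a single modern ffmpeg call. This is a sketch, not the poster's original script: the filenames, frame rate and codec choices here are assumptions you would adjust to your own frame dumps.

```shell
# Assemble numbered frame dumps (frame0001.jpg, frame0002.jpg, ...) into a
# video and mux in the separately recorded audio in one pass.
# -shortest stops at whichever stream (video or audio) ends first.
ffmpeg -framerate 25 -i frame%04d.jpg -i audio.wav \
       -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest output.mp4
```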
just my 2 cents, i really know nothing about external dvd recorders and have a crappy machine that can't handle the 3d i need it to, so i haven't recorded in a while. but here's an example of my old method of recording (and if you notice, the sync is off by almost a whole beat; it's supposed to trigger changes only when notes are triggered. i got it sync'd but realized after the fact it was off by a beat)
another similar video, sync is way better on this one
but ultimately these had to be dumbed down to be able to record via this backwards ass method.
record externally, then upload some HD for us!
Phasor~ vs metro
I guess maelstorm is referring to [vline~] in case you're writing a drum machine in pd, to use it for envelopes. I once tried to sync pd on my machine to a Reaktor laptop over midi or osc, and we ended up syncing by ear because the timing was very unreliable. If you are doing everything on one machine, you might look at the connection between pd and reaktor?
CrazyA/Vmachines@Nk-Berlin
Crazy A/V Machines: a puredata workshop
When:
Monday to Saturday September 27th-October 2nd 12:00-16:00
Where:
NK Elsenstr. 52 2HH 2Etage 12059 Berlin
Who:
Oscar Martin and Luca Carrubba
The aim of this workshop is to learn simple programming strategies for the creation of audio/video software tools for live performances: generative or interactive pieces of software that let the artist create and control a video stream or audio processing in real time.
During the workshop we will use Pure Data, a free (as a bird and as a beer) graphical programming language particularly focused on the processing of audio and video data in real time. Pure Data is a tool for artists who want to experiment with a
different creation pattern, with the freedom to build their own instruments or interactive installations from scratch. You don't need to be a programmer to begin building your tool. In this 5-6 day workshop you will learn the basics of the language, how to manipulate or create video, and how to play with audio. At the end of the workshop all participants will play together in a jam session in the Lab space or exhibit their prototypes.
Participation is limited to 12.
workshop program and registration:
http://www.nkprojekt.de/pure-data-crazy-av-machines-with-oscar-martin-luca-carruba/
--
when Art becomes practical
we call it technology.
When Technology becomes useless
we call it Art
GEM over a network - or - GLX support?
hi pd123!
i didn't think this was possible, but after some research i'm not so sure anymore.
i got this interesting document:
OpenGL Graphics with the X Window System (pdf)
at the bottom of this page
http://en.wikipedia.org/wiki/GLX
one thing i didn't get:
do you want to send the rendered data as a video live stream or just the rendering instructions, so the image has to be rendered on the 2nd machine?
i just wonder how practical it would be to take data from the gfx card, load it into the cpu, compress it, send it through the network, load it into the other cpu, uncompress it, and finally output it on the other machine's gfx card. i experienced that with resolume ages ago, which provides this feature to send video streams over the network... and it really sucked, even with 320x240!
or are you talking about a setup where all maps exist on every machine? could you explain what exactly you want to share over the network? maybe there is a simpler solution, like an analog framegrabber.
Remember MIDI information
ok, so currently you have a headless OSX box, i'm guessing this is a Mac Mini from the sound of it?
you want to have pure data start right when you start it up? or are you going to be using some sort of remote login (do macs have SSH? OSX is basically Unix these days, so i would think they do)
basically the command line arguments are passed to pure data when you start it up
so for example, let's say you're starting it from a shortcut. the shortcut has the path to the Pd executable. if you edited the shortcut so it was
"/path/to/folder/pd.exe -midiindev 1 -midioutdev 2"
then pure data would start up, get the arguments passed to it, and then know that it's using midi device 1 for input and midi device 2 for output.
or let's say you want to start up Pd without the GUI, with midi device 1 as input, midi device 2 as output, and you want the patch awsumsounds.pd to open at startup; your command line would be
"/path/to/folder/pd.exe -nogui -midiindev 1 -midioutdev 2 -open /path/to/awsumsounds.pd"
apologies if you already get this, i wasn't sure how basic to start so i figured the beginning is best heh.
if you wanted pure data to start automatically when you turn the machine on, then you'd have to put a shortcut in whatever mac uses to run things at startup.
alternatively you could start the machine up, ssh into it from another machine and then start pure data yourself using the command line above; however you might need to play tricks to get it to carry on going after you log out.
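One common trick for that is detaching the process with nohup so it survives the end of the ssh session. A minimal sketch, where the hostname, user, pd path and patch name are all placeholders for your own setup:

```shell
# From another machine: log in and start Pd detached from the terminal,
# redirecting its output to a log file so it keeps running after logout.
ssh user@macmini 'nohup /usr/local/bin/pd -nogui -open /path/to/patch.pd \
    > /tmp/pd.log 2>&1 &'
```

Running it inside screen or tmux on the remote box is another option, and lets you reattach later to see Pd's console output.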
hope that's all helpful; i'm afraid i don't know about some of the more mac-specific stuff
Export patch as rtas?
@Maelstorm said:
If you're on OSX, jack can be used as an insert plug-in so you can avoid the separate tracks, but you still get the latency.
which you can, depending on your host, eliminate by setting the track delay for the track with the plugin. So if the buffer is 512 samples (about 11.6 ms at 44.1 kHz), set the delay to that and it should be spot on.
I've had the whole JACK graph latency explained to me numerous times by Stephane Letz and it still doesn't go in... Here's what he told me:
> > It's the Pd > JACK > Ableton latency. (Ableton has otoh 3 different
> > ways of manually setting latency compensation - I'm just not very
> > clear on where to start with regards to input from JACK)
>
> There is NO latency introduced in a Pd > JACK > Ableton kind of
> chain; the JACK server activates each client in turn (that is, Pd *then*
> Ableton in this case) in the *same* given audio cycle.
>
> Remember : the JACK server is able to "sort" (in some way) the graph
> of all connected clients to activate each client audio callback at the
> right place during the audio cycle. For example:
>
> 1) in a sequential kind of graph like IN ==> A ==> B ==> C ==> OUT,
> JACK server will activate A (which would typically consume new audio
> buffers available in machine audio IN drivers) then activate B (which
> would typically consume "output" just produced by A) , then activate
> C, then produce the machine OUT audio buffers.
>
> 2) in a graph with parallel sub-graphs like: IN ==> A ==> B ==> C
> ==> OUT and IN ==> D ==> B ==> C ==> OUT (that is, both A and D are
> connected to the input and send to B), then the JACK server is able to
> activate A and D at the same time (since they both only depend on IN)
> and a multi-core machine will typically run A and D at the same time
> on 2 different cores. Then when A *and* D are finished, B can be
> activated... and so on.
>
> The input/output latency of a usual CoreAudio application
> is: driver input latency + driver input latency offset + 2
> application buffer-size + driver output latency + driver output
> latency offset.
>
this next part is the important bit i think...
> For a graph of JACK clients: driver input latency + driver input
> latency offset + 2 JACK Server buffer-size + ( one extra buffer-
> size ) + driver output latency + driver output latency offset.
>
> (Note : there is an additional one extra buffer-size latency on OSX,
> since the JACK server is running in the so called "asynchronous" mode
> [there is also a "synchronous" mode without this one extra buffer-size
> available, but it is less reliable on OSX and we choose to use the
> "asynchronous" mode by default.])
>
> Stephane
>
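As a worked example of the JACK-graph formula above (ignoring the driver latencies and offsets, which are hardware-specific), a 512-frame buffer at 44.1 kHz in OSX's default "asynchronous" mode contributes three buffers' worth of latency: 2 server buffers plus the 1 extra asynchronous buffer.

```shell
# 2 JACK server buffers + 1 extra async buffer = 3 buffers of latency.
# Driver latencies and offsets come on top of this figure.
buffer=512; rate=44100
awk -v b="$buffer" -v r="$rate" 'BEGIN { printf "%.1f ms\n", 3 * b * 1000 / r }'
# prints: 34.8 ms
```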
More midi output device options needed
Hello,
I am working on a PD patch that converts my Behringer BCR2000 into a controller for my live set. I use Ableton Live (sequencer and sounds), an Access Virus (sounds) and an MC-505 (more sounds).
I use my pd patch to route signals to the machines and to expand the possibilities of my midi controller, e.g. combinations of keys changing control messages, stuff like that.
The problem I have now is that PD only offers two midi output options. This means I can only send my midi messages to two machines, but I have more machines that want to be controlled. How can I get more midi output device options?
Me and several synthesizers would really like to know this.
Thank you all
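For reference, Pd's command-line flags accept comma-separated device lists, so more than two outputs can be opened at startup (the device numbers below are examples; check Pd's MIDI settings dialog for the numbers on your system):

```shell
# Open three MIDI output devices at once; the list maps device numbers
# from Pd's MIDI settings to output slots 1, 2 and 3.
pd -midioutdev 1,2,3
```

Inside a patch, [noteout] and friends then reach the extra devices through the channel number: channels 1-16 address the first device, 17-32 the second, and so on.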
A controller for the korg ER-1
Hi there,
I've been trying to have my ER-1 drum machine controlled by pd for a while and always failed at properly parsing NRPN messages using existing abstractions (I'm on winXP), so I decided to try to find a way to do this by myself. The attached abstraction is the result of this learning process. It is designed to control all the parameters of the first synth part of the machine, it can easily be modified to control any other part (synth or sample) using the Er-1's midi documentation (or any midi monitoring application).
the cool part of this is that it extends the sonic capabilities of this extraordinary little piece of gear by letting you automate several parameters, when the machine's inner automation system can only remember changes for one parameter. When I'm done building one abstraction like this for all the parts, I should be able to lay down some pretty freaky rhythm sequences (!).
Please feel free to use it if you own an ER-1 and let me know what you think of this.
D.S
