Conditionally mixing signals
I have a very simple mixer in my app that mixes two signals at a time: it sums the first two, mixes that sum with a third signal, and then does the same once more for a fourth signal.
This approach worked well before with just two signals, but now, with four signals to combine, there seem to be some audio-quality issues when some of the voices are nil.
I'm no Pd expert and I'm stumped on how to do this better.
Should I toss this approach, or is there a way to preserve audio fidelity when either or both of Voice 2 and Voice 3 have no signal?
It looks like I need a moses~ to conditionally bypass a mixing stage, but I have no idea how to pull this off. If anyone can point me to an example, that would be awesome.
I can only use vanilla objects.
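One thing worth knowing: in Pd, several signal connections into the same signal inlet are summed automatically, so all four voices can feed a single [*~] stage directly, and a voice that is silent simply contributes zeros - no conditional bypass needed. The same arithmetic as a C sketch (the buffer names and the 0.25 master gain are made up for illustration):

    #include <stddef.h>

    /* Mix four voices in one pass instead of cascading pairwise
       stages. A silent (all-zero) voice adds nothing, so it cannot
       degrade the fidelity of the others. */
    static void mix4(const float *v1, const float *v2,
                     const float *v3, const float *v4,
                     float *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = 0.25f * (v1[i] + v2[i] + v3[i] + v4[i]);
    }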
Please help: Random sampling gets more complicated
@whale-av Thank you, I tried and read everything, but unfortunately this algorithm is probably more complicated than I needed... I wasn't able to adapt it. However, I made a new version that suits my project even better, and I hope it will be easier, since I am a beginner and need to manage to finish this project in only 3 days... Yet I still can't figure out some (probably basic) problems.
New project description:
I have 4 folders, that have each different number of sounds in each folder.
1st step: I want to make the main "random" pick that will "randomly" choose 1 of the folders.
2nd step: from this chosen folder it will "randomly" play all the sounds that are in it...
3rd step: the process goes back to the 1st step, but now chooses from the remaining folders.
4th step: when all the folders have been played, the process starts again.
Unfortunately I am still stuck on these obstacles, which I have been trying to solve all day:
how to make the generator use each number only once?
--- For example, instead of 1,1,2,5,4, have it generate 1,3,5,2,4 - where every number is used once and nothing repeats? (A shuffle sketch follows after this list.)
after this, how to make the generator stop and not continue "randomly" generating?
after this, how to make the algorithm go back and pick a new folder?
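A standard way to get "every number exactly once" is a shuffle: fill an array with the numbers, then walk it backwards, swapping each element with a randomly chosen element at or below it (Fisher-Yates). In vanilla Pd the same idea can be built from a table, [random] and [until]; here it is as a small C sketch, assuming 5 sounds per folder:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        int order[5] = {0, 1, 2, 3, 4};   /* one index per sound */
        int n = 5;
        srand((unsigned)time(NULL));
        /* Fisher-Yates: after this loop, every index appears exactly
           once, in random order - no repeats, and it stops by itself. */
        for (int i = n - 1; i > 0; i--) {
            int j = rand() % (i + 1);     /* random slot from 0..i */
            int tmp = order[i];
            order[i] = order[j];
            order[j] = tmp;
        }
        for (int i = 0; i < n; i++)
            printf("play sound %d\n", order[i]);
        return 0;
    }

The same shuffle run over the 4 folder numbers answers steps 1 and 3 as well: shuffle once, then simply walk through the shuffled list.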
This is one of the "folders", with only 5 sounds for now... I want to connect this "folder" to the remaining 3 after solving problems 1 and 2.
(screenshot attached: Snímek obrazovky 2016-06-15 v 20.52.05.png)
If you could recommend anything, I would be really glad!
I tried everything I could find on the internet, but with no success.
I'm a newbie to Pd, but not to programming (C++), and I'm pretty competent with Max/MSP.
I decided to give Pd a try, as I want to use it on a Raspberry Pi in a Linux environment.
anyway, my issue is:
I've written a program (in C) that sends a lot of messages via OSC to be picked up by Pd, using mrpeach's udpreceive/unpackOSC/routeOSC etc.
I'm testing it on my Mac, and Pd is seriously lagging behind processing the OSC messages, yet the CPU load is pretty low.
...to me it appears that perhaps a control rate needs to be increased?
(assuming Pd is like Max and Reaktor, which have two rates for message processing)
I need to process about 1000 OSC messages/sec, perhaps more... though I think it's lagging behind at substantially less than that.
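For reference, a stripped-down sender that produces roughly that rate might look like the sketch below (liblo assumed; the port and OSC path are placeholders that would have to match the [udpreceive]/[routeOSC] setup):

    #include <unistd.h>
    #include <lo/lo.h>

    /* Build: gcc sender.c -o sender -llo */
    int main(void)
    {
        lo_address pd = lo_address_new("localhost", "9001");
        for (int i = 0; i < 10000; i++) {
            lo_send(pd, "/test", "i", i);  /* one int per message */
            usleep(1000);                  /* ~1000 messages/sec */
        }
        return 0;
    }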
Exactly the same setup works absolutely fine in Max/MSP (7.2), so I think it's something to do with my Pd setup.
But I tried both pd-extended and pd-vanilla (as it's more up to date) and both had the same issue.
- is there some concept of a 'control rate' in Pd? Can I increase it somehow?
- is this likely to be a Mac-only issue, or does it affect all versions of Pd (e.g. Linux)?
- are these the best objects for processing OSC?
thanks for any help
p.s. any recommendations on the best Pd setup for the Mac? pd-extended seems very old; perhaps pd-vanilla + the deken (?) extension finder is the best option.
PD 0.46-6-64 bit (vanilla) + mrpeach extensions
Mac OSX 10.11.2
(also tried latest PD-extended, 32 bit, same issue)
Pure Data / Raspberry Pi / Realtime Audio / Permissions
I want my Raspberry Pi 2 to automatically start up the Jack server with realtime scheduling, and subsequently start Pure Data with realtime scheduling, load a patch &c. without any user intervention from a login shell.
As a performance artist working primarily with psychodrama (the technology is definitely NOT the important part here), fiddling around at a terminal right before or during a performance is kind of... psychically inconvenient. I need a box that I can plug in, give the audio output to the sound guy, and be ready to go.
I use Raspbian with a Linux kernel compiled with realtime goodness. I have hand-compiled Jack2 and Pure Data with realtime support in order to take advantage of this. Running a process with realtime priority requires the proper PAM directives set in /etc/security/limits.conf and related places, but that is beyond the scope of this little write-up.
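(For reference, the usual /etc/security/limits.conf entries for the "audio" group look something like the following - typical values, not gospel, so tune them to your system:)

    @audio - rtprio 95
    @audio - memlock unlimited
    @audio - nice -19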
Also somewhat relevant: I use a M-Audio MobilePre USB soundcard (sounds pretty awful by today's standards, but it's an extremely USEFUL box and sounds good enough for the work I do). For full-duplex sound, this requires the RasPi's USB to be set to single speed. In this configuration, I can get just under 2.9ms latency with good CPU overhead for Pure Data to run a few of my 64-voice wavetable and delay line granulators. Yeah!
Purely by happenstance, I had given the jackd command in my startup script the option “-s” which allows the server to ignore overruns and so on. So things seemed to be working as expected, but I noticed a lot more glitches than when I manually started up Jack and Pd from the terminal without the “-s” option. Upon removing it from my startup script, everything failed! WAH.
So I started piping STDERR and STDOUT to text files so I could read what either Jack or Pd was complaining about. As it turns out, Jack was unable to start with realtime priority due to a permissions problem. (I assume this is one of the things the “-s” option allows jackd to ignore, and thus it starts up with non-realtime priority. The problem is that Pure Data can’t connect to a non-realtime Jack server when its “-rt” option is specified.)
Now, I had already been through the whole rigamarole of setting proper memory and priority limits for the “audio” group, to which the user “pi” belongs. So I thought, okay, I have to execute these commands as “pi”, and while simulating a login shell because the security limits in question are only set during login.
So I did this:
su -l pi -c "/usr/local/bin/jackd -R -dalsa -dhw:1,0 -p128 -n3 -r44100 -S >> /home/pi/jackd.log 2>&1 &"
This says “login as user ‘pi’ and then run the jackd command with these options, piping the outputs to this log file and run it in the background”. Well, I still got all the same errors about not being able to set realtime priority. WHYYYYYYYYY?
I hunted and hunted and hunted on a Very Popular Search Engine til I decided to try searching “security limits not loaded with su -l” and found this.
(Makes me think of that Talking Heads lyric, “Isn’t it weird / Looks too obscure to me”.)
So by uncommenting the line

    session required pam_limits.so

in /etc/pam.d/su, everything started working as expected.
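With that in place, the same su -l trick works for launching Pd itself, for example (the patch path and log file here are just placeholders):

    su -l pi -c "/usr/local/bin/pd -rt -jack -nogui /home/pi/mypatch.pd >> /home/pi/pd.log 2>&1 &"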
I now know a LOT MORE about PAM and how important it is to keep in mind when and in what order scripts and other little subsystems are executed; but also that sometimes the problem is EXTREMELY OBSCURE and is to be found in some seemingly far-flung config file.
I hope this helps anybody out there working with Pure Data and the RasPi. The second generation board really packs quite a punch and can run several hundred audio grains (run by vline~ and enveloped by vline~ and cos~) simultaneously without a problem. And I'm pretty sure this is just using ONE of the 4 cores!
I'm by no means an expert Linux sysadmin, so if you have any other suggestions or corrections, please let me know! I wouldn't have been able to get this far without all the generous and helpful writeups everybody else has contributed, both within the RasPi and Pure Data communities. If you have any questions about anything I glossed over here, I'll do my best to answer them.
Midi to hz, and hz to midi formulas
I've recently taken "the tour" at Mathematics Stack Exchange. Initially, I thought it was going to be some light-hearted pics plus a few words about the site. Fortunately, it was much better than that.
"The tour" is an interesting way of inviting you to read the rules. I think these rules also act as a reminder - a reminder of the important stuff.
You can, of course, take the "tour" for yourself, but these are some of the things you can read there.
- Ask questions, get answers, no distractions.
- This site is all about getting answers. It's not a discussion forum. There's no chit-chat.
- Get answers to practical, detailed questions.
- Focus on questions about an actual problem you have faced.
- Not all questions work well in our format. Avoid questions that are primarily opinion-based, or that are likely to generate discussion rather than answers. Questions that need improvement may be closed until someone fixes them.
As @dangrondang (F.) said, hope this helps.
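For reference, since the thread title asks for them: the standard conversions, with A4 = MIDI note 69 = 440 Hz. This is the same math behind Pd's [mtof] and [ftom] objects; a minimal C sketch (build with -lm):

    #include <math.h>
    #include <stdio.h>

    double mtof(double m) { return 440.0 * pow(2.0, (m - 69.0) / 12.0); }
    double ftom(double f) { return 69.0 + 12.0 * log2(f / 440.0); }

    int main(void)
    {
        printf("%f\n", mtof(60));    /* middle C: ~261.63 Hz */
        printf("%f\n", ftom(440.0)); /* 69.000000 */
        return 0;
    }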
Compiled external saying "couldn't create" when being added to PD
I'm having trouble getting my external to work. It compiles with 5 warnings:
sineq.c:48: warning: unused variable ‘x’
sineq.c:49: warning: unused variable ‘in1’
sineq.c:50: warning: unused variable ‘in2’
sineq.c:51: warning: unused variable ‘in3’
sineq.c:52: warning: unused variable ‘in4’
It does a "make" successfully, but I get this warning message:
/usr/bin/ld: warning: cannot find entry symbol xport_dynamic; defaulting to 00000000000007f0
But when I try to add it in Pd it says "couldn't create". I've looked at the pan~ tutorial and the d_osc.c file as recommended, which did help. I tried to take pieces from the two that I thought were applicable to my situation, but I'm still having some issues.
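Two things worth checking. The ld warning suggests a misspelled export flag: a bare "-export_dynamic" is read by ld as "-e xport_dynamic" (an entry-symbol option); the usual spellings are "-Wl,--export-dynamic" or gcc's "-rdynamic". And "couldn't create" typically means Pd loaded the binary but couldn't find a setup function named after the file. For reference, a bare-bones skeleton (no DSP yet; names matching sineq.c) that Pd should be able to instantiate:

    #include "m_pd.h"

    static t_class *sineq_class;

    typedef struct _sineq {
        t_object x_obj;
    } t_sineq;

    static void *sineq_new(void)
    {
        t_sineq *x = (t_sineq *)pd_new(sineq_class);
        return (void *)x;
    }

    /* Pd looks for sineq_setup when it loads sineq.pd_linux */
    void sineq_setup(void)
    {
        sineq_class = class_new(gensym("sineq"),
            (t_newmethod)sineq_new, 0, sizeof(t_sineq),
            CLASS_DEFAULT, 0);
    }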
Here's a link to the workflow (dropbox)
Here's a link to the C code online (pastebin)
My external is a reproduction of the sine-wave equation, with 4 inputs and one output. My logic is to have 4 inlets - one each for the frequency, amplitude, phase and vertical offset - and an outlet for the generated signal. Granted, this isn't the final equation, but it will help me understand how to create the full equation once done. If you want to see the full equation I'll be using, here's a link to it below. Basically it's a 1-second periodic signal at a 44100 Hz sample rate, which the equation lets me control in frequency, amplitude, phase and vertical offset.
Another question I have is what do I use for the t (time) in my final equation - is that the t_sample type in Pd? Or do I need to create a for loop counting from 1 to 44100 for a 1-second, 44100-sample equation?
PS: I'm compiling on Ubuntu 10.04 using gcc.
PD from Max, missing Umenu
Lurked for a while, first post now.
I've been using Max/MSP pretty heavily since last December, and I've fallen in love with the visual coding paradigm. For a variety of reasons PD appeals to me more than Max (though I did get used to the bells and whistles). I've replaced all my hardware effects (loopers, stompboxes, synths, sequencers) with max patches, and I'd be comfortable enough making the same over again in PD, but I'm having trouble translating a few objects that I relied on heavily in Max.
Umenu is a big one: the ability to receive numbers and output corresponding text comes in handy for scripting lots of objects and for routing sends and receives. Its ease of populating, rewriting, and deleting entries makes many complex tasks easily manageable. It also functions as a great label when I hide things away. I suspect there's an equivalent method, but nothing leaps out. Should I be looking at some sort of coll-type object? Struct?
In the DSP realm, I'm a little worried not to see rate~ or some analog of it. I use phasor~ to control my loops, and rate~ works magic on polymetric sequences. I could probably rig up a similar system by scaling and wrapping a signal, but a straight-up rate~ would be best.
In order to slot effects in and out of different parts of the signal chain, I've been using combinations of bpatchers and polys, with scripting to switch between different stompbox abstractions. I found the graph-on-parent option, but it doesn't seem (at least not obviously) that I can use scripting on it to call up different subpatches. Please correct me if I'm wrong.
I think those three might just cover it (at least for the time being). I've found the externals ported from the Max library to be an amazing resource, and between overlapping objects, the manuals, and the answers on this forum, most of what I need to know is readily available. If those objects lack analogs in Pd, I may just bite the bullet and get into Java/C/Python... at the moment I have no experience with text-based programming, but I'm just looking for an excuse, really.
I'll be hitting the manuals cover to cover as best I can in the morning, but I'd be glad to have my patches running sooner all the same.
All help is much appreciated.
Foreground/background management in a patch with [knob] and [image]
I'm trying to make a nice GUI using knobs superimposed on an image loaded by [image]. First I noticed (a long time ago) that a knob created before the [image] will be in a layer under it and becomes invisible if placed where the image stands.
My workaround is creating the knobs after the image. But if I want a 'two-state' image, and thus refresh it, the knobs disappear forever.
Is there a way to handle background/foreground plane numbers, or something to set my knobs (and other elements) 'always on top'?
Workshop: Xth Sense - Biophysical generation and control of music
April 6, 7, 8 2011
Xth Sense – biophysical generation and control of music
2.Hinterhaus Etage 2
12059 Berlin Neukölln
The workshop offers a hands-on experience and both theoretical and practical training in the gestural control of music and bodily musical performance, deploying the brand-new biosensing technology Xth Sense.
Developed by the workshop teacher Marco Donnarumma within a research project at The University of Edinburgh, Xth Sense is a framework for the application of muscle sounds to the biophysical generation and control of music. It consists of a low cost, DIY biosensing wearable device and an Open Source based software for capture, analysis and audio processing of biological sounds of the body (Pure Data-based).
Muscle sounds are captured in real time and used both as sonic source material and as control values for sound effects, enabling the performer to control music simply with his body and kinetic energy. Forget your mice and MIDI controllers - you will not even need to look at your laptop anymore.
The Xth Sense biosensor was designed to be easily implemented by anyone; no previous experience in electronics is required.
The applications of the Xth Sense technology are manifold: from complex gestural control of samples and audio synthesis, through biophysical generation of music and sounds, to kinetic control of real time digital processing of traditional musical instruments, and more.
Firstly, participants will be introduced to the Xth Sense Technology by its author and led through the assembling of their own biosensing wearable hardware using the materials provided.
Next, they will become proficient with the Xth Sense software framework: all the features of the framework will be unleashed through practical exercises.
Theoretical background on the state of art of gestural control of music and new musical instruments will be developed by means of an audiovisual review and participatory critical analysis of relevant projects selected by the instructor.
Finally, participants will combine hardware and software to implement a solo or group performance to be presented during the closing event. At the end of the workshop, participants will be free to keep the Xth Sense biosensors they built and the related software for their own use.
~ Prospective participants
The workshop is open to anyone passionate about sound and music. Musical background and education do not matter as long as you are ready to challenge your usual perspective on musical performance. Composers, producers, sound designers, musicians, and field recordists are all welcome to join our team for an innovative and highly creative experience. No previous experience in electronics or programming is required; however, participants should be familiar with digital music creation.
Participation is limited to 10 candidates.
Preregistration is required and can be done by sending an email to firstname.lastname@example.org
Requirements and further info
Participants need to provide their own headphones, soundcards and laptops with Pd-extended already installed.
Musicians interested in augmenting their favourite musical instrument by means of body gestures are encouraged to bring their instrument along. More information about the Xth Sense and a video of a live performance can be viewed on-line at
6-7-8 April, 11.00-19.00 daily (6-hour sessions + 1-hour break)
EUR 90, including materials (EUR 15).
A collection of GLSL effects?
I get these errors... it doesn't want to link:
error: [glsl_fragment]: shader not loaded
linking: link 9.80909e-45 7.00649e-45
linking: link 9.80909e-45 1.12104e-44
[glsl_program]: linking with uncompiled shader
[glsl_program]: Link failed!
[glsl_vertex]: Vertex_shader Hardware Info
[glsl_vertex]: MAX_VERTEX_ATTRIBS: 16
[glsl_vertex]: MAX_VERTEX_UNIFORM_COMPONENTS_ARB: 4096
[glsl_vertex]: MAX_VARYING_FLOATS: 64
[glsl_vertex]: MAX_COMBINED_TEXTURE_IMAGE_UNITS: 32
[glsl_vertex]: MAX_VERTEX_TEXTURE_IMAGE_UNITS: 16
[glsl_vertex]: MAX_TEXTURE_IMAGE_UNITS: 16
[glsl_vertex]: MAX_TEXTURE_COORDS: 8