Camomile v1.0.1 - An audio plugin with Pure Data embedded
Great job Pierre! I'm very interested in spatial issues, but right now I'm working on a plugin modulator. I would like to ask whether the current version of Camomile allows externals to be included in the plugin folder? Thanks. All the best. Gus.
Here's a short demo video of the latest plugin I'm working on: Space VBAP, an implementation of Vector Base Amplitude Panning for sound spatialization. It's still a work in progress, but you can see some of the new features, such as the dynamic graphical interface and the adaptation to the input/output layouts submitted by the digital audio workstation.
I hope you'll enjoy!
fixing vbap so that it works on more than 16 speakers
So, a while ago I was discussing an issue on here in which vbap crashes when asked to configure more than 16 speakers. This issue turned out to be a known bug: https://sourceforge.net/p/pure-data/bugs/1012/
In the meantime Pierre Guillot has written a brilliant new vbap~, but this one, as the tilde indicates, is an audio-rate object while the original is not, which makes the original a lot less expensive if you are using a lot of them... which is not unlikely if you are using a lot of speakers.
LUCKILY, according to the thread in the bug report above, the original vbap has been repaired! This is great news, except I cannot find it. IOhannes m zmolnig says in that bug report ""with rev.17464" is a specific revision of the vbap external (the current one), as found in the SVN repository." But I do not see it there (maybe one of you can point me to it?), and I also wonder whether it will be compiled?

I have been having an issue in that many of these great developments for spatial audio either do not work on Windows - in particular Windows 7 - or are completely macOS-centric, which is a little bit sad, as Mac is such an expensive platform. I love the HOA library, for example, but the cream externals it depends on do not work on Windows (7), at least not in my experience, though they work brilliantly on Mac. But anyway, I am on a tangent... I was just wondering if anyone can tell me how to get this fixed vbap from the SVN, and whether it will be compiled?
3D Sound Spatialization
Hi, I am the owner of a tech company that needs 3D sound spatialization and some modifications integrated into an existing patch.
If you are interested in bidding on this work, please email me at csschupp@gmail.com. We need to have this work completed by May 16th.
dbap spatialization...but object doesn't create on windows?
I am trying to run the externals in this folder:
https://github.com/kronihias/dbap
It's a fascinating approach to spatialization, and it's very cool that Matthias Kronlachner (http://www.matthiaskronlachner.com/) has ported this library over to Pd, but I can't seem to get the actual [dbap2d] or [dbap3d] objects to create. Is anyone else interested in testing this, or does anyone know why? Have these objects by chance only been built for use on Mac?
J
FFT freeze help
Brace for wall of text:
My patch is still a little messy, and I think I'm still pretty naive about this frequency domain stuff. I'd like to get it cleaned up more (i.e. less incompetent and embarrassing) before sharing. I'm not actually doing the time stretch/freeze here since I was going for a real time effect (albeit with latency), but I think what I did includes everything from Paulstretch that differs from the previously described phase vocoder stuff.
I actually got there from a slightly different angle: I was looking at decorrelation and reverberation after reading some stuff by Gary S. Kendall and David Griesinger. Basically, you can improve the spatial impression and apparent source width of a signal if you spread it over a ~50 ms window (the integration time of the ear). You can convolve it with some sort of FIR filter that has allpass frequency response and random phase response, something like a short burst of white noise. With several of these, you can get multiple decorrelated channels from a single source; it's sort of an ideal mono-to-surround effect. There are some finer points here, too. You'd typically want low frequencies to stay more correlated since the wavelengths are longer. This also gives a very natural sounding bass boost when multiple channels are mixed.
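A sketch of that kind of decorrelation filter in Python/NumPy (the function name, the 500 Hz low-frequency cutoff, and the seed handling are my own illustrative choices, not from anything above):

```python
import numpy as np

def decorrelation_fir(n_taps=2048, sr=44100, lowcut=500.0, seed=0):
    # Unit magnitude at every bin -> allpass frequency response;
    # random phase above `lowcut` decorrelates, zero phase below
    # keeps the low frequencies correlated between channels.
    rng = np.random.default_rng(seed)
    n_bins = n_taps // 2 + 1
    phase = rng.uniform(-np.pi, np.pi, n_bins)
    freqs = np.fft.rfftfreq(n_taps, 1.0 / sr)
    phase[freqs < lowcut] = 0.0   # low frequencies stay correlated
    phase[0] = 0.0                # DC must be real
    phase[-1] = 0.0               # Nyquist must be real
    spectrum = np.exp(1j * phase)
    return np.fft.irfft(spectrum, n_taps)
```

Convolving the same mono source with several of these (different seeds) gives mutually decorrelated channels; 2048 taps at 44.1 kHz is the ~50 ms window mentioned above.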
Of course you can do this in the frequency domain if you just add some offset signal to the phase. The resulting output signal is smeared in time over the duration of the FFT frame, and enveloped by the window function. Conveniently, 50 ms corresponds to a frame size of 2048 at 44.1 kHz. The advantage of the frequency domain approach here is that the phase offset can be arbitrarily varied over time. You can get a time variant phase offset signal with a delay/wrap and some small amount of added noise: not "running phase" as in the phase vocoder but "running phase offset". It's also sensible here to scale the amount of added noise with frequency.
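The "running phase offset" update might look something like this per frame (the `depth` parameter and the linear frequency scaling are assumptions, just to illustrate the delay/wrap plus frequency-scaled noise idea):

```python
import numpy as np

def step_phase_offset(offset, rng, depth=0.1):
    # Carry over the previous frame's offset (the "delay" part), add a
    # small amount of noise that scales up with frequency, then wrap
    # the result back into [-pi, pi).
    scale = depth * np.linspace(0.0, 1.0, len(offset))
    offset = offset + rng.uniform(-np.pi, np.pi, len(offset)) * scale
    return (offset + np.pi) % (2.0 * np.pi) - np.pi
```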
Say that you add a maximum amount of noise to the running phase offset - now the delay/wrap part is irrelevant and the phase is completely randomized for each frame. This is what Paulstretch does (though it just throws out the original phase data and replaces it with noise). This completely destroys the sub-bin frequency resolution, so small FFT sizes will sound "whispery". You need a quite large FFT of 2^16 or 2^17 for adequate "brute force" frequency resolution.
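The fully randomized case is tiny to sketch (this is the idea as described, not Paulstretch's actual code):

```python
import numpy as np

def randomize_phase(frame, rng):
    # Keep the magnitude spectrum of the windowed frame, throw away the
    # original phase, and replace it with uniform noise (DC and Nyquist
    # are kept real so the output stays a valid real signal).
    mag = np.abs(np.fft.rfft(frame))
    phase = rng.uniform(-np.pi, np.pi, mag.shape)
    phase[0] = phase[-1] = 0.0
    return np.fft.irfft(mag * np.exp(1j * phase), len(frame))
```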
You can add some feedback here for a reverberation effect. You'll want to fully randomize everything here, and apply some filtering to the feedback path. The frequency resolution corresponds to the reverb's modal density, so again it's advantageous to use quite large FFTs. Nonlinearities and pitch shift can be nice here as well, for non-linear decays and other interesting effects, but this is going into a different topic entirely.
With such large FFTs you will notice a quite long Hann window shaped "attack" (again 2^16 or 2^17 represents a "sweet spot" since the time domain smearing is way too long above that). I find the Hann window is best here since it's both constant voltage and constant power for an overlap factor of 4. So the output signal level shouldn't fluctuate, regardless of how much successive frames are correlated or decorrelated (I'm not really 100% confident of my assessment here...). But the long attack isn't exactly natural sounding. I've been looking for an asymmetric window shape that has a shorter attack and more natural sounding "envelope", while maintaining the constant power/voltage constraint (with overlap factors of 8 or more). I've tried various types of flattened windows (these do have a shorter attack), but I'd prefer to use something with at least a loose resemblance to an exponential decay. But I may be going off into the Twilight Zone here...
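The constant-voltage and constant-power claims for the Hann window at overlap 4 are easy to check numerically (the helper name is mine; "voltage" here means the steady-state sum of overlapped windows, "power" the sum of their squares):

```python
import numpy as np

def cola_profile(window, overlap):
    # Steady-state overlap-add sum over one hop: row k of the reshape
    # holds the segment contributed by the frame shifted by k hops, so
    # summing over rows gives the total envelope at each sample.
    hop = len(window) // overlap
    return window.reshape(overlap, hop).sum(axis=0)

n = 2048
hann = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(n) / n))  # periodic Hann

voltage = cola_profile(hann, 4)     # flat: every sample sums to 2.0
power = cola_profile(hann**2, 4)    # flat: every sample sums to 1.5
```

Both profiles come out flat, which supports the claim: correlated frames overlap-add at constant amplitude, and uncorrelated frames power-sum at constant level.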
Anyway I have a theory that much of what people do to make a sound "larger", i.e. an ensemble of instruments in a concert hall, multitracking, chorus, reverb, etc. can be generalized as a time variant decorrelation effect. And if an idealized sort of effect can be made that's based on the way sound is actually perceived, maybe it's possible to make an algorithm that does this (or some variant) optimally.
Is there potential danger in mixing with headphones?
A thought just occurred to me. When you are using speakers, the sound from the left speaker sums acoustically with the sound from the right speaker in the room, so if the same signal is present in both channels but delayed in one, the phase difference can cause the signals to be added or subtracted.
In the "sweet spot" between the two speakers, if the two channels are out of phase, you will hear certain frequencies being amplified or attenuated.
If I am listening with headphones, I can hear the signals independently in each ear, so even if I'm listening to two sine waves that are 180 degrees out of phase, I will be able to hear both independently, and I will interpret the difference in phase as a delay that will affect my spatial imaging of the sound.
If I were listening with speakers instead, wouldn't this same situation with two sine waves that are 180 degrees out of phase cause the sound to be completely attenuated (depending on where I am seated in the room)? Does this mean that mixing with headphones can lead me to overlook some issues that I would normally hear with speakers?
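A quick numerical check of the speaker case (440 Hz and one second of audio are arbitrary choices):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 440 * t)
right = np.sin(2 * np.pi * 440 * t + np.pi)  # same signal, 180 degrees out

# At the sweet spot the two signals sum acoustically and cancel almost
# completely; on headphones each ear would still hear its own channel
# at full level.
acoustic_sum = left + right
print(np.max(np.abs(acoustic_sum)))  # ~0
```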
Sorry, I guess this isn't really a PD-specific question; it's more of a general audio production thing that will be important no matter what applications I'm using.
Ambipan~ ambimic~ object tutorials?
So far, so so..
What it is actually doing..
Mapping recorded pitch class contour as an interpolated 2D ambisonic trajectory in Pd. I've produced a contour for pitch class, frequency, midi note, and interval class. The system allows one to map the pitch class contour to an X or Y spatial plane.
Polyphonic guitar system for linear expression
Thanks for that. I will send on a link to my thesis once it's signed off. I have one more year left to complete the project so it may be a little while yet. I studied Max as part of my undergraduate degree and didn't revisit object orientated programming until recently, so I must confess I'm still relatively new to PD. So, thanks for the kind words. 
Meaningfulness in music, especially in performance systems, is really interesting to me and really an important part of the project. I've taken pitch structure as "figurative" gesture (symbolic versus physical effective or ancillary gesture), since pitch structure is meaningful to a pitch orientated instrument. Obvious, but something which I hope will be effective in the long run. This is why I have implemented pitch and interval class. I hope the system will in turn reflect pitch structure in timbral and spatial processes, attributes which are meaningful but often overlooked by traditional players. So I hope the system will encourage musicians to interrogate all available cues.
The system is definitely available to any object that produces pitch and I use Live because I like it. I'm starting to build my own DSP in PD, so that may change in time. I'd be keen to hear your thoughts on this. Thanks again for watching.
Ricky
Mapping triplets of integers to pixels
Hi!
I'm making a spectrum analyzer and I'm having some issues trying to graph the data that's coming out of it. I need to map triplets (x, y, z) to pixels on a screen, where the values x and y correspond to spatial coordinates and the value z corresponds to a shade of gray. I found a way to do it, but it involves making an abstraction that includes one object per pixel on the screen! I'm pretty sure there is an easier way to do it. Any ideas?
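One way around the one-object-per-pixel problem is to keep the whole image in a single array and index it by coordinate. A minimal sketch of the idea in Python/NumPy rather than Pd (the function name and the 0-1 gray range are my own assumptions):

```python
import numpy as np

def triplets_to_image(triplets, width, height):
    # One 2D array holds every pixel; each (x, y, z) triplet just
    # writes the gray level z (0.0 = black, 1.0 = white) at row y,
    # column x. No per-pixel objects needed.
    img = np.zeros((height, width))
    for x, y, z in triplets:
        img[y, x] = z
    return img
```

In Pd itself the analogous move would be a single [table] of width*height cells indexed by y * width + x (or a Gem pix buffer written with something like [pix_set]), rather than one GUI object per pixel.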

