What about migrating to a better forum engine?
Hi everybody,
Sorry for taking so long to respond; I don't really have time to take care of this forum.
I would like to migrate to Discourse ( http://www.discourse.org/ ), but I haven't managed to find the free time to make it happen.
If someone is brave enough, I could give them access to a zip of this forum's files plus a MySQL dump. You'd need to host the new forum.
Hit me at firstname.lastname@example.org
@Maelstorm: ...and how tricky would it be to get a new CMS?! I've got no clue about that stuff; I just googled CMS and, yeah...
I've never personally done it myself, but it's pretty tricky, and probably not worth it.
Personally I don't have an account at puredata.info (do you?!)
...since the page seemed a little messy and partially outdated to me. And my browser doesn't trust some sections (there's no valid certificate), AND I even got a popup displaying some Spanish(?) text, no idea what that was!
Yeah, I get those on occasion.
Keep in mind that puredata.info is a user-contributed community site. If something is outdated, it's because no one has bothered to update it. The idea is that if you see something outdated, you can fix it yourself or ask the author to update it. I'm sure there's plenty of outdated information on this forum, too.
...Anyhow, I think that since pd.info uses that structure successfully, it must be a good idea...
And if it's successful, I don't see why we should try and compete with it.
Granted, it's a little hacky with only "personal posts" instead of "folders", but hey, that's not a hack but a workaround... it just corresponds to the nature of Pd...
Maybe some users would like that idea but just haven't thought of it, so take this as a suggestion for a little more structure...
Yeah, I'm not saying it wouldn't be cool to have, and I'm not discouraging you or anyone from sharing your whole library on one topic in patch~ (hell, I did it once). I just don't really think using the forum as a code repo is really what this site is about.
I personally think it's just a better idea to put your stuff on another site (like GitHub or puredata.info or your own site) and link to it in your signature.
Another comparable issue:
For example, this thread: http://puredata.hurleur.com/sujet-6481-mindset came up recently in the forum's "extra~" section...
...if this goes on, we should start a new thread, shouldn't we?!
mod and I have been talking about doing some reorganization along with cleaning up the forum, so some of that stuff is being addressed.
Speaking of things being in the wrong section, if you want to continue this discussion, start a thread in the "this forum" section. We're kind of hijacking the thread at this point.
FFT freeze help
Brace for wall of text:
My patch is still a little messy, and I think I'm still pretty naive about this frequency domain stuff. I'd like to get it cleaned up more (i.e. less incompetent and embarrassing) before sharing. I'm not actually doing the time stretch/freeze here since I was going for a real time effect (albeit with latency), but I think what I did includes everything from Paulstretch that differs from the previously described phase vocoder stuff.
I actually got there from a slightly different angle: I was looking at decorrelation and reverberation after reading some stuff by Gary S. Kendall and David Griesinger. Basically, you can improve the spatial impression and apparent source width of a signal if you spread it over a ~50 ms window (the integration time of the ear). You can convolve it with some sort of FIR filter that has allpass frequency response and random phase response, something like a short burst of white noise. With several of these, you can get multiple decorrelated channels from a single source; it's sort of an ideal mono-to-surround effect. There are some finer points here, too. You'd typically want low frequencies to stay more correlated since the wavelengths are longer. This also gives a very natural sounding bass boost when multiple channels are mixed.
Of course you can do this in the frequency domain if you just add some offset signal to the phase. The resulting output signal is smeared in time over the duration of the FFT frame, and enveloped by the window function. Conveniently, 50 ms corresponds to a frame size of 2048 at 44.1 kHz. The advantage of the frequency domain approach here is that the phase offset can be arbitrarily varied over time. You can get a time variant phase offset signal with a delay/wrap and some small amount of added noise: not "running phase" as in the phase vocoder but "running phase offset". It's also sensible here to scale the amount of added noise with frequency.
Say that you add a maximum amount of noise to the running phase offset- now the delay/wrap part is irrelevant and the phase is completely randomized for each frame. This is what Paulstretch does (though it just throws out the original phase data and replaces it with noise). This completely destroys the sub-bin frequency resolution, so small FFT sizes will sound "whispery". You need a quite large FFT of 2^16 or 2^17 for adequate "brute force" frequency resolution.
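A minimal numpy sketch of that frame-level idea (my own illustration, not Paulstretch's actual code): keep each bin's magnitude and replace its phase with uniform random noise.

```python
import numpy as np

def randomize_phase(frame):
    """Keep each FFT bin's magnitude, replace its phase with uniform
    random noise (the Paulstretch trick, as I understand it)."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    mags = np.abs(spectrum)
    phases = np.random.uniform(0.0, 2.0 * np.pi, len(spectrum))
    return np.fft.irfft(mags * np.exp(1j * phases), n=len(frame))

# One 2048-sample frame (~46 ms at 44.1 kHz) of a 440 Hz sine
sr, n = 44100, 2048
frame = np.sin(2 * np.pi * 440 * np.arange(n) / sr)
out = randomize_phase(frame)
```

With a frame this small, the result sounds "whispery" as described above; the magnitudes survive, but all sub-bin frequency information is gone.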
You can add some feedback here for a reverberation effect. You'll want to fully randomize everything here, and apply some filtering to the feedback path. The frequency resolution corresponds to the reverb's modal density, so again it's advantageous to use quite large FFTs. Nonlinearities and pitch shift can be nice here as well, for non-linear decays and other interesting effects, but this is going into a different topic entirely.
With such large FFTs you will notice a quite long Hann window shaped "attack" (again 2^16 or 2^17 represents a "sweet spot" since the time domain smearing is way too long above that). I find the Hann window is best here since it's both constant voltage and constant power for an overlap factor of 4. So the output signal level shouldn't fluctuate, regardless of how much successive frames are correlated or decorrelated (I'm not really 100% confident of my assessment here...). But the long attack isn't exactly natural sounding. I've been looking for an asymmetric window shape that has a shorter attack and more natural sounding "envelope", while maintaining the constant power/voltage constraint (with overlap factors of 8 or more). I've tried various types of flattened windows (these do have a shorter attack), but I'd prefer to use something with at least a loose resemblance to an exponential decay. But I may be going off into the Twilight Zone here...
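The constant voltage/power claim can be checked numerically: with a periodic Hann window at an overlap factor of 4, overlapped copies of the window should sum to a constant, and so should overlapped copies of its square. A quick sketch (my own check, using a periodic rather than symmetric Hann so the overlap-add is exact):

```python
import numpy as np

n = 2048
hop = n // 4                                           # overlap factor 4
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)   # periodic Hann

length = n + 8 * hop
volt = np.zeros(length)    # sum of overlapped windows (constant voltage)
power = np.zeros(length)   # sum of overlapped squared windows (constant power)
for start in range(0, length - n + 1, hop):
    volt[start:start + n] += w
    power[start:start + n] += w ** 2

# Steady-state region, away from the fade-in/out at the edges:
steady_volt = volt[n:-n]    # flat at 2.0
steady_power = power[n:-n]  # flat at 1.5
```

Both sums come out flat (2.0 and 1.5 respectively), which is why the output level shouldn't fluctuate whether successive frames add coherently (voltage-wise) or incoherently (power-wise).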
Anyway I have a theory that much of what people do to make a sound "larger", i.e. an ensemble of instruments in a concert hall, multitracking, chorus, reverb, etc. can be generalized as a time variant decorrelation effect. And if an idealized sort of effect can be made that's based on the way sound is actually perceived, maybe it's possible to make an algorithm that does this (or some variant) optimally.
Someone at the TouchOSC forums told me the following:
it looks like you're missing libraries from Pure Data. "couldn't create" means either it cannot find the object or the object cannot be used (e.g. if there is a conflict; that would be the case if only [dumpOSC 8000] were red: it would mean port 8000 is used elsewhere. But the fact that OSCroute is also red means the problem is elsewhere).
try reinstalling the latest Pd-extended version.
you'd better ask on the Pure Data forums, since it's related to Pure Data only. And those forums have a nice community.
Data flow in Pd, or how audio goes through reblocked patches
Can anyone tell me how audio data goes through a Pd patch? Here is my scenario and my guess about it. The main patch has a default block size of 64. In it, I have a subpatch reblocked to 1024. The signal flow is: [adc~] on the main patch goes into the reblocked subpatch (just passing through unprocessed) and back to the main patch with block size 64 (I've attached a Pd file). Okay, so what happens to the audio stream?
My guess is: the [adc~] object sends a block of 64 samples out of its outlet (64 samples at a sample rate of 44100 Hz ≈ 1.45 ms) and into the inlet of the reblocked subpatch. Within the 1024-reblocked subpatch, Pd waits until it gets 16 blocks from the main patch (64*16 = 1024) and sends this as one 1024-sample block out of its outlet back to the main patch. In the meantime, the main patch has sent 16 empty blocks to the [dac~] until the 1024-sample block comes back from the subpatch. But how does the main patch process the big 1024-sample block? Does it only take the first 64 samples of it? How does this work?
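I can't speak for the Pd source, but here is my mental model as a toy Python simulation (the exact scheduling and latency here are my assumptions, not documented fact): the subpatch's DSP runs once per 16 parent ticks on the accumulated 1024 samples, and its 1024-sample result is handed back to the parent 64 samples per tick, so a pass-through adds one large block (1024 samples) of delay.

```python
import numpy as np
from collections import deque

PARENT, CHILD = 64, 1024
RATIO = CHILD // PARENT           # 16 parent ticks per child tick

def run(signal):
    """Toy 64 -> 1024 -> 64 reblocked pass-through (my mental model of Pd)."""
    out = []
    in_blocks = []                                    # inlet~ accumulation
    out_blocks = deque([np.zeros(PARENT)] * RATIO)    # outlet~ plays silence first
    for k in range(len(signal) // PARENT):            # one parent DSP tick each
        in_blocks.append(signal[k * PARENT:(k + 1) * PARENT])
        out.append(out_blocks.popleft())              # parent reads 64 samples
        if len(in_blocks) == RATIO:                   # child DSP runs on 1024
            frame = np.concatenate(in_blocks)         # (pass-through "processing")
            in_blocks = []
            out_blocks.extend(np.split(frame, RATIO)) # played over next 16 ticks
    return np.concatenate(out)
```

So the main patch never processes the big 1024-sample block directly; it only ever sees 64-sample chunks of it, one per tick, delayed by a full child block.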
Looking for a 'good' forum about Max/Msp similar to our 'patch' forum
I feel good on Pd. I use it every day. But the collective I'm working with uses Ableton Live and Max4Live... and sometimes I bang my head against technical issues.
Could someone recommend a 'good' Max/MSP forum, similar to our beloved 'Patch~' forum here on 'Le Hurleur'? English, French, or Italian ones would all be useful.
Thank you so much.
P.S.: tonight I'll try 'http://codelab.fr'
EDIT : (Doh !) Instead of a 'patch~' forum, I'm actually looking for a 'Technical issue' forum
FFT in Pure Data
Sure. It's really about dealing with complex numbers.
The outputs of [fft~] are the real and imaginary parts of the spectrum, i.e. the complex number in cartesian form:
z = x + jy
Cartesian math is less computationally expensive, but it doesn't necessarily give you the information you're looking for on its own. Converting to polar form can give more intuitive information. Polar form looks like this:
z = A*e^(jθ)
Here, A is the magnitude (or amplitude) and θ is the angle (or phase). Converting from polar to cartesian is pretty straightforward:
A*e^(jθ) -> A*cos(θ) + j*A*sin(θ)
Going the other way is trickier. The whole "squared, added, and square-rooted" thing is to find A. The way it works is by plotting the cartesian form as a point on the cartesian plane, so that the real part is the x-axis and the imaginary part is the y-axis. The distance between the origin and that point is A, and finding that distance is a matter of solving the Pythagorean theorem for right triangles that we all thought was useless in high school:
A^2 = x^2 + y^2
I assume it doesn't do that stuff automatically because it keeps things more general and less computationally expensive. You don't have to go through square roots and atan2 (which is needed to find the phase) by sticking to cartesian form. And [rfft~] doesn't normalize automatically because normalization is window-dependent.
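The conversions above in plain Python (just an illustration of the math, not anything Pd-specific; the function names are mine):

```python
import math

def car2pol(x, y):
    """Cartesian (real, imaginary) -> polar (magnitude, phase)."""
    a = math.sqrt(x * x + y * y)   # A^2 = x^2 + y^2
    theta = math.atan2(y, x)       # quadrant-aware arctangent for the phase
    return a, theta

def pol2car(a, theta):
    """Polar -> cartesian: A*e^(j*theta) = A*cos(theta) + j*A*sin(theta)."""
    return a * math.cos(theta), a * math.sin(theta)
```

Note atan2 rather than plain atan: it looks at the signs of both parts, so the phase lands in the correct quadrant.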
Pure Data Convention Registration Deadline today!
direct link to the registration form, since the university server is down:
they load ok for me on firefox.
when you click those links they don't load?
actually, there are a few admins now, and we do battle behind the scenes to keep the forum spam-free. but the only one with real keys to get into the forum engine is neko.
maybe if we can get more clarification here why this is happening to some people and not to others, we can figure out what's going on.
Better sounding guitar distortion ... beyond [clip~] and [tanh~]
Glad to hear you got it working! I don't have time to look at it right now, but I'll try and answer some of your questions.
I never found info about which H(z) equation the [biquad~] object relies on. Now I see that the denominator is written as a subtraction rather than an addition; just a matter of convention. Maelstorm, can I ask where you picked up this peculiar info?
Typically, a biquad filter is defined as subtracting the feedback part in the difference equation, and I don't really know why that is. But, you don't always see it defined that way, so you have to look out for it. Also, the conversion from the difference equation (y[n] = ...) to the transfer function (H(z) = ...) causes the signs of the feedback coefficients to be reversed, so subtraction in the difference equation becomes addition in the transfer function. More confusion. In fact, it caused quite a bit of frustration when I was making those filters. I'm not sure if the comments in those patches are 100% correct.
Anyway, it turns out, [biquad~] is one of the filters that doesn't follow that convention. The help file gives the difference equation in the more confusing Direct Form II. I'm guessing it's because Miller wanted to illustrate how it's implemented, since Direct Form II can be done using only two delays instead of four. But the results are exactly the same, so I don't see why that matters from the user's end. Anyway, in Direct Form I (as it's more commonly seen), [biquad~]'s difference equation is this:
y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] + a1*y[n-1] + a2*y[n-2]
Which, in the z-domain, becomes:
        b0 + b1*z^-1 + b2*z^-2
H(z) = ------------------------
        1 - a1*z^-1 - a2*z^-2
So, in order to make that line up with convention, the signs in front of a1 and a2 should be reversed.
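To make the sign convention concrete, here's a Direct Form I biquad in Python written with [biquad~]'s convention, i.e. the feedback terms are *added*, matching the difference equation above (the function name is mine, just a sketch):

```python
def biquad(x, b0, b1, b2, a1, a2):
    """Direct Form I biquad using [biquad~]'s sign convention:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] + a1*y[n-1] + a2*y[n-2]"""
    y = []
    x1 = x2 = y1 = y2 = 0.0   # the four delayed samples
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 + a1 * y1 + a2 * y2
        x2, x1 = x1, xn       # shift the feedforward delay line
        y2, y1 = y1, yn       # shift the feedback delay line
        y.append(yn)
    return y

# Impulse response of a one-pole feedback: a1 = 0.5 gives 1, 0.5, 0.25, ...
h = biquad([1.0, 0.0, 0.0, 0.0], 1, 0, 0, 0.5, 0)
```

With the textbook convention you'd write `- a1*y1 - a2*y2` instead, which is exactly the sign flip being discussed.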
The Wikipedia page on digital biquad filters mostly talks about the difference between Direct Forms I and II, and also gives the more conventional form:
Maelstorm, did you use the method described 'in (9)', which is explained in an American patent document, or just the 'classic' one with upsampling?
I just upsampled. I haven't taken a look at that patent. But my impression from the wording of the paper is that (9) uses upsampling, too.