-
manuels
@porres said:
@manuels said:
Sorry, I'm not good at explaining ...
Well, please help me understand how you are doing interpolation with such graphs, and what "basis functions" or "kernel" are supposed to mean in this context... or please give me some references to check. Your patch just gave me a new dimension to look at interpolation and I wanna get it.
Maybe this is the missing piece of the puzzle? ... Interpolation is just a special type of convolution
The term "basis functions" (that I probably used incorrectly) doesn't matter, and by kernel I was just refering to the function (whether piecewise polynomial or continuous) the input signal is convolved with.
The difference between my examples and some of the others you correctly spotted is also mentioned in the linked resource in the section "Smoothed quadratic". One advantage of a (non-interpolating) smoothing function is that no overshoot is produced. But of course, if you need actual interpolation you have to use different functions.
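To make the convolution view a bit more concrete, here is a rough numpy sketch (the function names and kernel widths are mine, not taken from the patch): each output value is the weighted sum of the stored samples, with the weights given by the kernel evaluated at each sample's distance from the read position. A triangular kernel gives plain linear interpolation; swapping in a Gaussian gives the non-interpolating smoothing case.

    import numpy as np

    def tri(t):
        # triangular kernel: linear interpolation, support [-1, 1]
        return np.maximum(0.0, 1.0 - np.abs(t))

    def gauss(t, width=0.4):
        # Gaussian kernel: smooths, doesn't pass exactly through the points
        return np.exp(-0.5 * (t / width) ** 2)

    def read(x, t, kernel, radius):
        # weighted sum of the samples around the fractional index t
        n0 = int(np.floor(t))
        ks = np.arange(n0 - radius + 1, n0 + radius + 1)
        ks = ks[(ks >= 0) & (ks < len(x))]
        w = kernel(t - ks)
        return np.sum(x[ks] * w) / np.sum(w)   # normalised weights -> no overshoot

    x = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
    print(read(x, 1.5, tri, 1))     # 0.5, exactly halfway between x[1] and x[2]
    print(read(x, 1.0, gauss, 3))   # close to x[1], but pulled towards its neighbours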
Another related topic is resampling. This thread may also be helpful: Digital Audio Demystified
-
manuels
@porres Sorry, I'm not good at explaining ...
There is nothing special about the shift register in those patches, and you are right: doing linear interpolation with six points is of course a waste, since only two points are used. I guess I did it that way because I wanted to switch between different interpolation/smoothing functions in order to compare the results. Only in the case of the Gaussian kernel might the precalculation actually make sense. Maybe not even then.
-
manuels
@ddw_music Just to throw in another way of doing the same thing …
Shift register with precalculated basis functions (or rather: basis functions read from a precalculated kernel): interpolated-noise.pd
I used this for an interpolated version of the Gendyn stochastic synthesis algorithm, which can be used as a simple noise generator as well … gendyn-interp.pd
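In case it helps, here is roughly what I mean by reading the basis functions from a precalculated kernel, as a numpy sketch (the table resolution, Gaussian width and tap positions are made-up numbers, not the values used in the patch):

    import numpy as np

    TAPS = 6                      # length of the shift register
    RES = 64                      # table entries per one-sample offset
    offs = np.arange(-TAPS / 2, TAPS / 2, 1.0 / RES)
    table = np.exp(-0.5 * (offs / 0.8) ** 2)       # the precalculated kernel

    def weights(frac):
        # kernel value for each stored point; the points sit at integer
        # offsets -2..3 around the fractional read position frac in [0, 1)
        p = np.arange(TAPS) - 2
        idx = ((frac - p + TAPS / 2) * RES).astype(int)
        w = table[np.clip(idx, 0, len(table) - 1)]
        return w / w.sum()        # normalise so the output stays bounded

    points = np.random.uniform(-1, 1, TAPS)        # e.g. Gendyn breakpoint values
    print([round(weights(f) @ points, 3) for f in np.arange(0, 1, 0.25)])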
-
manuels
@spoidy23 You mean something like a variable kernel density estimation (with adaptive bandwidth)? I guess that would be difficult ...
-
manuels
@spoidy23 Interesting ... I didn't know about KDE, but it seems to be more or less the same thing. The iterative approach I proposed could be seen as a way to find the "right" bandwidth (whatever that is). To illustrate this, I added a plot for the corresponding kernel, which is just the impulse response of the filter: filtered-histogram-kernel.pd
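For anyone who wants to see this outside of Pd, a small numpy sketch of the same idea (the 3-point moving average and the number of passes are placeholders, not the actual filter from the patch): run an impulse through the same smoothing procedure as the histogram, and what comes out is the kernel; its width is the effective bandwidth.

    import numpy as np

    def smooth(x, passes=20):
        # the same zero-phase smoothing the histogram would get:
        # a symmetric 3-point average, applied repeatedly
        k = np.array([0.25, 0.5, 0.25])
        for _ in range(passes):
            x = np.convolve(x, k, mode='same')
        return x

    impulse = np.zeros(101)
    impulse[50] = 1.0
    kernel = smooth(impulse)          # the kernel = impulse response of the filter
    i = np.arange(len(kernel))
    mu = np.sum(i * kernel) / np.sum(kernel)
    bandwidth = np.sqrt(np.sum(kernel * (i - mu) ** 2) / np.sum(kernel))
    print(bandwidth)                  # standard deviation as an effective bandwidth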
-
manuels
@jameslo said:
It even looks like what I was proposing, a moving RMS window, might be another instance of a zero-phase lowpass filter
Yes, indeed! Or to be exact, that's only true as long as there are data points available for the whole RMS window. At the beginning and end of your dataset, you still have a phase shift (of half the effective window size). But that's always a problem, and I don't have a solution for that ...
BTW you could also try using a weighted average. I did that indirectly by repeating the process many times (which has the effect of approaching a Gaussian filter).
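The effect of the repetition can be checked numerically; a small numpy sketch (the window length and the number of passes are arbitrary choices, just for illustration): convolving a plain averaging window with itself over and over gives an effective window that quickly becomes almost indistinguishable from a Gaussian.

    import numpy as np

    box = np.ones(5) / 5                 # a plain moving-average window
    kernel = np.array([1.0])
    for _ in range(8):                   # repeating the averaging 8 times ...
        kernel = np.convolve(kernel, box)

    # ... yields an effective window very close to a Gaussian of matching width
    n = np.arange(len(kernel)) - (len(kernel) - 1) / 2
    sigma = np.sqrt(8 * (5 ** 2 - 1) / 12)    # variance of one box, times 8 passes
    gauss = np.exp(-0.5 * (n / sigma) ** 2)
    gauss /= gauss.sum()
    print(np.max(np.abs(kernel - gauss)))     # small residual difference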
-
manuels
@jameslo What's wrong with low-pass filtering? Not being a data scientist, I would have thought that this is what they are doing all the time.
But yes, of course, there's a problem with low-pass filtering that you have to be aware of (and that you are maybe referring to): if the output of your filter depends only on the current input and previous input or output values, then you will always have some phase shift, which is certainly not what you want when dealing with a histogram! So you have to use a zero-phase filter.
Here's how I would do it ... filtered-histogram.pd
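In case a non-Pd illustration is useful, this is the generic forward-backward trick I mean (a minimal Python sketch, not a literal translation of filtered-histogram.pd): filtering the data once forward and once backward makes the two phase shifts cancel, so the smoothed histogram stays aligned with the original bins.

    import numpy as np

    def lowpass(x, a=0.25):
        # simple one-pole lowpass; causal, so on its own it lags behind the data
        y = np.zeros(len(x))
        acc = x[0]
        for i, v in enumerate(x):
            acc += a * (v - acc)
            y[i] = acc
        return y

    def zero_phase(x, a=0.25):
        # forward pass, then a second pass over the reversed result
        return lowpass(lowpass(x, a)[::-1], a)[::-1]

    hist = np.random.poisson(5, 64).astype(float)    # some noisy histogram
    print(zero_phase(hist))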
-
manuels
@Element-S I can't help you with the Pultec EQP-1 implementation, but since you're talking about being late to the party it might be worth pointing out that at least some of [bob~]'s weaknesses mentioned in this thread may have been the result of a bug that was fixed around three years ago. If I remember correctly, the output wasn't taken after the fourth stage of the filter but before the first (= input + feedback). So it's no surprise that this was considered a bad implementation of the Moog ladder filter.
Edit: Had to look it up ... It was actually a bit different from what I remembered: the incorrect output was the first state variable instead of the fourth. So the output was taken AFTER the first stage.
-
manuels
@trummerschlunk Here's my implementation of the 2nd (more accurate) version: dynamic-smoothing.pd
Not tested yet, so I'm not sure that it actually works ...
-
manuels
@trummerschlunk I made a little test patch for the different options that you described, hope it's helpful in some way ... crossover-filter-test.pd
The version with shelving filters shouldn't be more CPU-expensive (it might even be cheaper), and the math for the gains is quite simple. Or am I missing something?
But I don't think any of these approaches is suitable for as many as 32 bands! Wouldn't you need much steeper filter slopes for that? Of course, you could use higher order Butterworth shelving filters, but that's gonna be really expensive! So maybe, as @jameslo suggested, you should go for frequency domain techniques in that case.
-
manuels
@melter If I understand correctly what your abstraction is doing, you would have to apply its output (the desired rms value) to the analyzed signal after you have divided it by the actual rms value ....
BUT: That's much too complicated! You don't even have to analyze the signal with [env~]. Under the link that you posted there's a table from which you can see that doubling the distance corresponds to a decrease in sound pressure of 6 dB, which is 1/4 power or 1/2 amplitude. So I guess all you have to do is multiply the signal by d1/d2.
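Just to double-check the arithmetic (d1 is the reference distance, d2 the new one):

    import math

    d1, d2 = 1.0, 2.0               # doubling the distance
    gain = d1 / d2                  # amplitude factor applied to the signal
    print(gain)                     # 0.5 -> half the amplitude, a quarter of the power
    print(20 * math.log10(gain))    # about -6.02 dB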
-
manuels
@jameslo said:
Teach a man to fish: you know this because you looked at source code, or some other way?
I just compared the output of both filters, and it turned out to be exactly the same.
Oh look, and they take the cube root for vcf_hp6~ when they chain 3 filters. I wasn't aware that Q worked that way. Do you know what they are doing with the cutoff freq signal, [iem_cot4~]?
Not sure if you can say in general that Q works this way. It might depend on how the filter is normalized. [vcf~], for example, has unit gain at the resonant frequency, but other filters produce higher gains at the resonant frequency. So if you put them in series, the gains multiply, and there has to be some compensation for that. What the [iem_cot4~] does, I don't know. Unfortunately I can't read C code.
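As a rough illustration of why the cube root makes sense in that situation (assuming each 2nd-order section has a peak gain of roughly Q at resonance, which holds for some normalizations but not for unit-gain designs like [vcf~]):

    Q = 8.0
    n = 3                               # three chained sections, as in vcf_hp6~
    per_section_q = Q ** (1.0 / n)      # the "cube root" compensation
    print(per_section_q)                # 2.0
    print(per_section_q ** n)           # the combined peak gain is back to ~Q = 8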
-
manuels
@jameslo With just vanilla objects that's gonna be difficult, because you'd have to use the raw filters like [cpole~]. Since the iemlib filter is a series of two 2nd order filters, my first guess would be to do the same with ELSE's [highpass~], which is also 2nd order. Its exact formula might be different though.
Edit: It actually seems to be the same, but you also have to take the square root of the Q factor!
-
manuels
So here's the fixed version: velvet-noise-fixed.pd
It wasn't just the issue with small floating point numbers; more importantly, it turned out that it didn't work with integer periods.
In this case, of course, the period must never be reduced by one sample!
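For reference, the basic idea in a few lines of numpy (this is just the textbook definition of velvet noise as I understand it, one randomly placed, randomly signed impulse per period; it is not a transcription of the patch and doesn't enforce the constraint on consecutive non-zero samples discussed elsewhere in the thread):

    import numpy as np

    def velvet(nsamples, period, seed=None):
        # one impulse with random sign at a random position inside each period;
        # the period may be non-integer, so the grid is kept in float samples
        rng = np.random.default_rng(seed)
        out = np.zeros(nsamples)
        t = 0.0
        while t < nsamples:
            pos = int(t + rng.uniform(0.0, period))   # anywhere in [t, t + period)
            if pos < nsamples:
                out[pos] = rng.choice([-1.0, 1.0])
            t += period
        return out

    print(np.flatnonzero(velvet(48, 2.5, seed=1)))    # roughly one impulse per 2.5 samples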
-
manuels
@ben.wes Thanks again for testing, really helpful!
Yes, it's true that I didn't take care of consecutive non-zero samples. For now I'm not sure what's more important: to comply with this constraint or to behave properly above Nyquist.
The problems you found with frequencies above Nyquist probably have to do with the representation of small floating point numbers. I should have used [expr~ int($v1)] to get real zero values. Can't fix it right now, but I'll try tomorrow ...
-
manuels
@porres said:
I'm trying to find a better and more sophisticated idea for the algorithm. We need to find the number of samples in a period and then randomly choose where to place it.
I'm not in C, but I think I kind of did exactly that in my latest version, trying to solve the problem with non-integer periods. Here it is (commented in the patch) ...
Could certainly be optimized in some ways (especially using ELSE objects). But does it even work? I don't have a good method to test it, unfortunately.
-
manuels
@ben.wes Thanks! So no surprise that it doesn't work ....
-
manuels
@seb-harmonik.ar said:
I do think @manuels original idea of having a sample-increment offset still works without missing periods? Why don't you think it works with non-integer period lengths? A phasor~ is always guaranteed to have its last sample be within the last sample increment, regardless of whether the period is an integer or not, and [wrap~] will pretty much perfectly wrap the phasor~'s phase.
With non-integer period lengths you effectively have a changing number of samples per period. That in itself I don't think would be a problem, but adding a random value and wrapping the result can give you sample increments that are either smaller or bigger than the calculated value.
Maybe an extreme example can help to clarify: Consider a period length of 2.5 samples. The sample increment is in this case 0.4. If you have a period with the sample values 0.3 and 0.7, add 0.8 as a random value and wrap, what you get is 0.1 and 0.5, so the last sample doesn't get above 1 when you add the calculated sample-increment of 0.4. Am I missing something?
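Or the same example in a few lines of Python, just to spell out the numbers:

    inc = 1.0 / 2.5                              # period of 2.5 samples -> increment 0.4
    phase = [0.3, 0.7]                           # one (short) phasor~ period
    shifted = [(p + 0.8) % 1.0 for p in phase]   # add the random value 0.8 and wrap
    print(shifted)                               # approximately [0.1, 0.5]
    print([p + inc >= 1.0 for p in shifted])     # [False, False] -> no impulse in this period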
@ben.wes Did you do the testing with non-integer fractions of your sampling frequency? You mentioned 3kHz at 48kHz SR as an example, which shouldn't cause any problems ...
-
manuels
@manuels said:
By multiplying the frequency by 1/SR you get the phase increment of the phasor. Since there is exactly one sample in each cycle where the sum of the current phase and the increment gets bigger than (or equal to) 1, you get the impulse. Now all you have to do is to randomize the phase to get the impulse at a random position.
Just realised that I didn't consider one important thing here: if the period isn't an integer number of samples, then the method of adding a random number and wrapping doesn't work anymore!
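For reference, the quoted method in a few lines of numpy (an arbitrary phase offset of 0.37 stands in for the random phase; it works here because 1 kHz at 48 kHz SR is an integer number of samples per period):

    import numpy as np

    sr, freq = 48000, 1000.0                     # 48 samples per period
    inc = freq / sr                              # phase increment of the phasor
    phase = (np.arange(200) * inc + 0.37) % 1.0  # phasor~ with a constant phase offset
    impulses = (phase + inc) >= 1.0              # exactly one hit per period here
    print(np.flatnonzero(impulses))              # [ 30  78 126 174]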
-
manuels
@porres said:
Never liked using the samplerate to add an increment, and wrap~ and everything, so what we need is just being able to catch the transition above 1 and make an impulse out of it, so [op~ >= 1] into [status~] solves that!
Well, it may not be nice, but it was my attempt to solve the problem of missing periods (see further up in the thread). If you have a phasor cycle of, say, 0.1 - 0.3 - 0.5 - 0.7 - 0.9 - 0.1 - etc., then adding small random values to it won't give you any transition above 1.
Edit: ... and the critical case of a cycle 0 - 0.2 - 0.4 - 0.6 - 0.8 - 0 - etc. may illustrate why you need >= instead of > in the [op~].
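The arithmetic of that critical case, spelled out:

    inc = 0.2
    last = 0.8                  # last sample of the cycle 0 - 0.2 - 0.4 - 0.6 - 0.8
    print(last + inc >= 1.0)    # True  -> the impulse is caught
    print(last + inc > 1.0)     # False -> with ">" this period would be missed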