• ### Lower limit to phasor~ frequency?

Hi!
Is there a limit to how low of a frequency argument can be sent to phasor~?

I'm currently finding that if I send my phasor~ objects an argument of 0.00005 (5e-5) it works fine, but if I send an argument of 0.000005 (5e-6) it doesn't work at all; the output just stops.

any thoughts?

• Posts 7 | Views 1849
• @yannseznec yes, pd uses a 32-bit fixed-point phase accumulator for phasor~. If the phase increment per sample is less than 2^-32, the change cannot be represented at all. The frequency is samplerate/(period in samples), and the period in samples is 1/(phase increment), so the frequency is samplerate*(phase increment). If we set the phase increment to 2^-32 and the samplerate to 44100, you get a frequency of 0.000010267831385 Hz; that is the lowest possible frequency for phasor~ @ a samplerate of 44100.

• ok interesting! thanks for the great explanation. I'm pretty terrible at math - does that mean that if I use a higher sample rate then I can send slower frequency arguments to phasor~?

• @yannseznec no, the opposite: the higher the samplerate, the higher the lowest possible frequency, since the lowest possible frequency is (2^-32)*samplerate.

think about it like this: you have a fixed minimum phase increment you can step each sample (2^-32). Since there are fewer steps per second at lower samplerates, the sum of those fixed-size steps over a second is smaller at lower samplerates, which means a lower frequency, since the phase accumulates more slowly.

• oh right yes ok! that makes sense.

I'm looking at other approaches to generating very long signal-rate ramps now...is there a similar limit to how slowly I can drive the line~ object, for example? I've found that I'm able to make it go from 0 to 1 over the course of 2 hours without any issue, but is there a limit to that? Can I make it go for 2 days, or 2 months, or 2 years?!

• @yannseznec there is a limit in precision for any finite numerical representation. However, for floating point that becomes more complicated to calculate. The relevant code is

```c
x->x_biginc = (x->x_target - x->x_value)/(t_float)nticks;
x->x_inc = x->x_1overn * x->x_biginc;
```

and in the dsp function:

```c
x->x_1overn = 1./sp->s_n;
```

so inc will be (target value - current value)/(total time in samples), with the time in samples rounded to the block size. Whether this number actually increments line~ depends on how large the current value of line~ is. (again, it's complicated since it's floating point)
for instance, if the current value of line~ is 1, then inc would have to be greater than about 2^-24 to increment at all, I think. That corresponds to going from 1 to 2 over 2^24 = 16777216 samples, or ~6 minutes 20 seconds @ 44100 samplerate. (so you couldn't go from 1 to 2 any slower than that and have it represent the correct values within a block) Every time the value of line~ doubles, so does the smallest representable increment.
however, line~ also uses the biginc variable, which means that after every block it can update with a larger step. This means line~ can still increment on ramps up to blocksize times longer than the calculation above, though the values inside each block would all be the same. (so ~6 hours 46 minutes @ blocksize 64, according to the above calculation, I think)

if going from 0 to 1, all of those times would be doubled (it can represent increments corresponding to twice that duration, bounded by the smallest representable increment for going from 0.5 to 1)

there are other precision considerations as well. If the increment can only be kept to a certain number of binary digits when added to the current value, the generated values will accumulate round-off error. (but if you need values at that precision you would probably hit round-off errors somewhere else anyway)

another numerical bound on the use of line~ is the use of an int to represent ticksleft. If we assume this is a 32-bit signed integer then there can only be 2,147,483,647 blocks, which is ~36 days @ 44100 samplerate and a blocksize of 64. (though I think this is longer than whatever limit the floating point would impose)

this is all assuming that the size pd uses for samples and floats is 32-bit floating point. If pd is compiled to use 64-bit doubles instead, then all of those times would be 2^29 times longer.

edit: actually, looking at the code, vline~ does use doubles for everything, so if you need really long ramps you should have no problem using vline~ instead of line~, even in normal non-double pd. It would take more than 6,472 years for a vline~ going from 1 to 2 to stop incrementing within a block of 64 samples @ 44.1k. (and about 414,216 years to stop incrementing at all across blocks)
In the case of vline~ the bounding factor might actually be the representation of time, since it doesn't use ticksleft.
edit 2: it couldn't represent an increment of ~1.45 ms (the duration of a 64-sample block @ 44100) once the current time reached ~2^53 ms, i.e. 9007199254740992 ms, which is roughly 285,421 years before it stops working completely.

long story short: you should be able to use vline~ (but not line~) for ramps of at least a few years long (depending on the range of its values) before it stops incrementing within a block. For the specific case of going from 0 to 1 @ 44.1k, you should be able to run a vline~ for ~129,000 years before it stops incrementing within a block (though it would still increment between blocks)

• ah that's too bad, I was really hoping for 130,000 years but seriously - thanks so much for the brilliant explanation. vline~ seems like the easiest approach. It also occurred to me that I could just divide a very long timeframe into smaller chunks and run line~ the requisite number of times, adding offsets to stitch the outputs together. It wouldn't be perfectly accurate, but it would probably work fine too.
