\[zerox~\]
The reason using bangs to reset [phasor~] doesn't work is that the phase only updates at block boundaries. So if you reset [phasor~] in the middle of a block, the phase won't change at that exact sample; the reset takes effect at the beginning of the block.
I made an abstraction that gets around this limitation and so allows the phase update to be sample-accurate. While this might seem more intuitive, I'm not sure it would be more efficient than the above method. And, you won't be able to smoothly change the sync rate. You can download the abstraction here:
http://puredata.hurleur.com/sujet-4039-phasor-sample-accurate-phase-reset
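The block-boundary quantization can be illustrated with a toy block-processing loop (illustrative Python, not Pd's actual scheduler; object and parameter names are made up):

```python
BLOCK = 64  # Pd's default block size

def phasor_blocks(freq, sr, nblocks, reset_at=None):
    """Toy block-based phasor: a phase reset requested mid-block only
    takes effect at the boundary of the block containing it, as in Pd."""
    phase, out = 0.0, []
    for b in range(nblocks):
        start = b * BLOCK
        if reset_at is not None and start <= reset_at < start + BLOCK:
            phase = 0.0  # applied at the block start, not at reset_at
        for _ in range(BLOCK):
            out.append(phase)
            phase = (phase + freq / sr) % 1.0
    return out
```

Running this with a reset requested at sample 100 shows the phase actually hitting zero at sample 64 (the block boundary), not at sample 100.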
Swept sine deconvolution
Hi Guys,
I was referred to this thread by Serafino Di Rosario, and I will test this Pd patch for performing ESS measurements. This is all very interesting to me, and it seems that Katja did a very good job!
Regarding the problems encountered, here is some information:
- Sine-phase-matched sweep: this method is very useful when performing distortion measurements, or when computing multiple-order IRs to be used in a nonlinear convolution processor (for emulating the nonlinearities of a device). For the method to work, it is mandatory that the sine sweep is sine-phased not only at the beginning, but also at the end of each octave. This way, each harmonic-order IR will be phase-matched with the linear IR. The provided formulation solves this problem, and it is very good to see it explained here so simply. The importance of using a phase-synced exponential sweep was first discovered by Antonin Novak, a Ph.D. student at the universities of Prague and Le Mans.
- The ripple at low frequencies can be controlled by a proper fade-in. The choice of the "optimal" fade law is still a big subject of scientific discussion. Hann windowing is just a very initial, suboptimal approach. I plan to investigate the choice of the optimal fade law further, and to publish something on this topic soon.
- The concept of cutting away everything before the arrival of the direct sound is wrong, in my opinion. The "silence" before the arrival of the direct sound has a very important physical meaning: it is the "time of flight" of the sound, and provides an accurate measurement of the distance between the source and the receiver. Furthermore, it contains the background noise, which is a very important quantity to know, for example when deriving STI from the IR measurement. So PLEASE, do not cut away this initial silence! If the IR has to be used as a filter for a convolution-based reverb plugin, the plugin must be intelligent enough to analyze the IR and give the user the possibility of keeping this initial silence or cutting it away. For example, IR-1 from Waves gives these possibilities. In any case, a measured IR of a room should always contain the time of flight... Publishing "pre-cut" IRs is wrong, and in the long run will cause a lot of trouble...
- The "fractional delay", for the same reasons, should NOT be corrected! If the time of flight is fractional, good, let's stay with this fact. As pointed out, cutting (time-shifting) the measured IR improperly can alter its spectrum. So please, keep every measured IR as it comes out of the convolution with the inverse sweep... If the higher-order distortion products are not needed, it makes sense to keep only the linear part, but always starting from the true "zero time". Let's make an example: I generate a 20 s long sweep at 48 kHz, that is 960,000 samples.
The inverse sweep will also be 960,000 samples long.
I play the sweep, and record the room response for, say, 1,200,000 samples, to be sure of capturing the complete reverberant tail even at the highest frequencies.
Now I convolve the recorded signal (1,200,000 samples) with the inverse sweep (960,000 samples), and I get a convolved signal which is 2,159,999 samples long.
If I want to keep a 4 s long IR containing only the linear response, I should throw away the first 959,999 samples, and keep the following 192,000 samples.
As this signal starts from the true "zero time", the main peak will not be at the very beginning, but delayed by an amount corresponding to the source-receiver distance. If it was 10 m, that is 10/340 = 0.0294 seconds...
- For performing convolution with very long filters efficiently (in the example above, the inverse sweep was nearly 1 million points), it is advisable to employ a partitioned convolution scheme. That is, the filter is split into a number of blocks, so that instead of performing a single, very long FFT, a number of shorter FFTs is performed instead. On my web site you will find a couple of papers explaining the partitioned convolution algorithm. This is the same algorithm employed in the well-known BruteFIR open-source program by Anders Torger.
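The sample arithmetic in the example can be checked with a few lines of Python (numbers taken from the post; the slicing helper is hypothetical):

```python
# Sketch of the sample bookkeeping in the ESS example above.
fs = 48000                 # sample rate
sweep_len = 20 * fs        # 20 s sweep -> 960,000 samples
rec_len = 1_200_000        # recorded room response
ir_keep = 4 * fs           # keep a 4 s linear IR -> 192,000 samples

# Linear convolution of the recording with the inverse sweep:
conv_len = rec_len + sweep_len - 1   # 2,159,999 samples

# The "zero time" of the linear IR sits after the first
# sweep_len - 1 = 959,999 samples, which are thrown away.
zero_time = sweep_len - 1

def extract_linear_ir(convolved):
    """Slice out the linear IR, preserving the time-of-flight silence."""
    return convolved[zero_time : zero_time + ir_keep]
```

This reproduces the figures quoted above: a 2,159,999-sample convolution, 959,999 samples discarded, 192,000 samples kept.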
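A toy uniformly partitioned convolution can be sketched in Python/NumPy. This version partitions only the filter and sums the block convolutions at their delays; real implementations such as BruteFIR also stream the input in blocks and cache FFTs, so treat this purely as an illustration of the idea:

```python
import numpy as np

def partitioned_convolve(x, h, block=256):
    """Split the long filter h into blocks of size `block`, convolve each
    block with x via a (much shorter) FFT, and add the results at the
    block's delay. Equivalent to np.convolve(x, h)."""
    n_out = len(x) + len(h) - 1
    y = np.zeros(n_out)
    for k in range(0, len(h), block):
        hk = h[k:k + block]
        n = len(x) + len(hk) - 1
        # short FFT convolution; the delay k handles block placement
        seg = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(hk, n), n)
        y[k:k + n] += seg
    return y
```

Each FFT here covers only one filter block instead of the whole ~1M-point filter, which is the point of the scheme.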
Bye!
Angelo Farina
Phasor~ as index to tabread~ with del and line~ envelope glitch
Hey
I'm using phasor~ as an index to a tabread~ to play a sample.
I'm also using line~ as an envelope to control audio output.
The timing for the envelope is set by the sample size and samplerate~, as well as the frequency of the phasor~.
The magnitude of the phasor is adjusted to the sample size.
The sample player can be re-triggered and when this happens a line~ is set to go to 0 in 5ms,
a delay is set for 5ms,
then bangs another line~ to go to velocity in 0,
as well as setting phasor~ frequency to 1/t and phase to zero.
At which time another delay is set up at the sample length in ms.
After the sample is played the phasor~ frequency is set to 0 then
another line~ to 0 in 5ms is sent to the [*~] .
This causes a glitch when the sample is retriggered because the phasor~ is reset to zero and starts replaying the sample.
This glitch cannot be heard when the sample is not re-triggered, so maybe it's a control vs signal timing issue.
Using vline~, I did hear the glitch at the end of the sample, re-triggered or not.
So my question is: how do you do audio-rate triggering of envelopes? I would post the patch but it is a mess. A good answer or a pointer to some reference material would be greatly appreciated. I haven't quite wrapped my head around the sample-and-hold sampler examples yet.
Phasor~ vs metro
The choice really depends on how you want to work. With [phasor~] it's (generally) pretty easy to use one master [phasor~] and divide the signal into whatever subdivisions you may need. You can also do smoother tempo changes since the frequency can be updated at audio rate. But while you have a sample-accurate clock, converting it into messages/bangs loses that accuracy. [edge~] and [threshold~] are the only objects that I've used to convert a [phasor~] clock to bangs, and both conform to block boundaries and have a minimum limit of 64 samples. So if you need sample accuracy with [phasor~], then you have to find a way to keep your events triggered with audio signals, which is not very easy.
[metro] is really just easier, and for a drum machine it's probably the best way to go. Pd's [metro] is pretty damn accurate (it's the one in Max that is prone to drift). Most of the time you'll be triggering events at message rate, and so using [metro] with objects like [vline~] will actually be more accurate than [phasor~]. And for a drum machine, you might find that pretty important.
\[zerox~\]
If you plan on using [zerox~] to phase-sync two oscillators, it probably won't cut it. Generally, you want those things to be sample-accurate. [zerox~] will give you a click corresponding to zero crossings out its right outlet, but, as far as I know at least, Pd's oscillators can't really use that for phase syncing ([zerox~] is actually based on a Max object, yet strangely Max's oscillators can't use it either). It would require a conversion to message rate to reset the phase, which kills sample accuracy, not to mention the fact that the phase input of [phasor~] quantizes to block boundaries (default 64 samples in Pd), which also kills sample accuracy.
However, if you know the ratio between your two oscillators, phase syncing can be achieved using a master [phasor~] to run both oscillators. Use the master sync frequency to drive the [phasor~], then multiply the output of the [phasor~] by the ratio between the synced (slave) oscillator and the master one. In other words, it should be multiplied by:
slave frequency / master frequency
Then, you just [wrap~] the signal and voilà, you have a new synced phasor signal to drive the slave oscillator. The attached patch should hopefully clarify.
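In plain numbers, the master-phase trick looks like this (a sketch of the arithmetic, not the patch itself; function names are made up):

```python
def slave_phase(master_phase, ratio):
    """Derive a synced slave phasor from the master phasor: multiply by
    slave_freq / master_freq, then wrap back into [0, 1) as [wrap~] does."""
    return (master_phase * ratio) % 1.0

# Example: master at 100 Hz, slave at 300 Hz (ratio 3). Whenever the
# master phase wraps to 0, the slave phase is exactly 0 too, so the
# two oscillators stay phase-locked.
```

Because the slave phase is computed from the master phase sample by sample, the lock is sample-accurate with no message-rate conversion involved.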
Transdetect~ and transcomp~: transient shaping and detection
transcomp~ uses transdetect~ to shape the initial attack and release of a signal.
Requires IEM's FIR~, fexpr~ and dbtorms~ which are provided in PD-Extended.
To work properly the transdetect folder should be added to PD's path.
Start by opening help-transcomp~.pd
01 Implementation:
transdetect~ works by using two pairs of envelope followers. The first pair
subtracts an envelope follower with a slow attack from an accurate follower,
the result of which is a signal containing the initial attack. For the initial
release, the second pair subtracts an accurate envelope follower from one with
a slow release.
An envelope follower measures the mean square power of a signal over time
(see 3.audio.examples/H06.envelope.follower.pd for details on implementing an
envelope follower). To do this we must use a low pass filter at a very low
frequency. In order to achieve an accurate follower a linear phase FIR filter
was used (using IEM's FIR~ external). Unfortunately this introduces a phase
delay.
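As a rough illustration of the two-follower idea, here is a Python sketch with simple one-pole followers standing in for the FIR filters (the coefficient values are made up, not the abstraction's actual filters):

```python
def follower(x, attack, release):
    """One-pole envelope follower on |x| with separate attack/release
    smoothing coefficients (0 <= coef < 1; larger = slower)."""
    env, out = 0.0, []
    for s in x:
        s = abs(s)
        coef = attack if s > env else release
        env = coef * env + (1.0 - coef) * s
        out.append(env)
    return out

def attack_signal(x):
    """Accurate (fast) follower minus slow-attack follower: the
    difference is a signal containing only the initial attack."""
    fast = follower(x, attack=0.0, release=0.9)
    slow = follower(x, attack=0.99, release=0.9)
    return [f - s for f, s in zip(fast, slow)]
```

Feeding in a step signal, the difference spikes at the onset and decays as the slow follower catches up, which is exactly the transient envelope transdetect~ extracts (the release pair works the same way with the roles swapped).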
In order to facilitate the use of different envelope follower implementations,
transdetect~ requires a filter type as a creation argument implemented in
followernameTransDetectEF~.pd. Four linear-phase FIR implementations are provided:
181-, 251-, 451- and 501-tap filters. The 501-tap filter is the most
accurate, but has a phase delay of 5.668 ms at 44.1 kHz (raise the
sampling rate to lower the phase delay). They were all generated using
http://www.dsptutor.freeuk.com/FIRFilterDesign/FIRFiltDes102.html with a
cutoff frequency between 5 and 10 Hz.
A compromise between accuracy and phase delay might be achieved by using
minimum phase FIR filters. A 5th implementation using PD's native lop~ object
is also provided under the designation iir (FIR~ not required).
Along with the different possible envelope follower implementations, transdetect~
also requires an attack and hold type implemented in
attacknameTransDetectAttackShape~.pd and holdnameTransDetectHoldShape~.pd
respectively. These implementations dictate the kind of attack and release
curves used on the envelope followers (linear, slow[er|est] and fast[er|est]).
All implementations provided use fexpr~. A more efficient external could be
made to take fexpr~'s place.
02 Use
In the help-transcomp~.pd patch, enable start and pay attention to the snap in
the hit. Disable the green toggle button to disable the compression and make
the snap go away. Check out the tables on the left to see the results of the
transient compression.
transcomp~ is useful with recorded drums, to maximize or minimize their
transients (to make them punchier or to make snare drums less clappy).
transcomp~ uses transdetect~. By itself, transdetect~ can be used to synthesize
hits from a recording. For example, take a bass drum recording and use the
signals generated by transdetect~ to shape the frequency and envelope of a
synthesized kick drum.
Would love to have some feedback and some help in turning the linear phase filters into minimum phase filters.
Bank of oscillators - most efficient method
> Obi - doesn't the timbre suffer if all the phases are equal (sounds a bit static)? I
> think in the expensive version they are all running at different phases, which
> makes the result richer. Can [wrap~] be used to offset the phases and still be
> cheaper than a bank of [osc~] objects?
Yes, but it depends on how the wave is used. "Suffer" might not be the best word. Some sounds thrive on phase synchrony.
Having free running (independent) oscillators or a common phase makes very little difference to a constant timbre, a drone/sustained note. The ear doesn't pick out any phase relationships, even if they change slowly within the sound.
But if you want a very percussive sound, like a struck string, to sound correct and trigger reliably on each hit, you need to sync the phases.
The method given above is equivalent to using [sinesum( messages with [tabosc~] or waveshaping: the component phases are governed entirely by the driving waveform. In a polyphonic instrument each voice would be identical, and the total result would sound dry/sampled/2D. But with independent oscillators each voice would start with subtly different component phases, and the total result is much deeper/richer/fatter.
To balance efficiency and quality, it's good to supplement very terse methods like the one shown with some chorus/flanger/phaser effects.
About [wrap~]: it is unnecessary in this case because [cos~] is a periodic function which already wraps its domain. In fact the domain offset _is_ the phase offset, but in our case the offsets are all integers (multiples of 2 * pi, if Pd's functions weren't rotation-normalised), so each is a harmonic that aligns with the one beneath it. [wrap~] could be used to align phases from a line; in fact a [line~] plus a [wrap~] is a [phasor~]. But we wouldn't get different frequencies by taking the cosine of shifted copies; instead we need to multiply each new phase by a constant to change its slope. The slope multiplier, m in the equation y = mx + c, gives the rate of change and hence the frequency. Interestingly, that means if the phases are synced perfectly there's always a transient at the start, t = 0, where every cosine must simultaneously be 1, which is great for struck body sounds.
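A sketch of that idea in Python: one master phase drives every partial, partial n just multiplies the phase by n, and at phase 0 all the cosines line up at 1 (the amplitudes here are arbitrary):

```python
import math

def bank_sample(phase, amps):
    """One output sample of a harmonic bank driven by a single master
    phase in [0, 1): partial n uses phase * n (cycles, [cos~]-style)."""
    return sum(a * math.cos(2 * math.pi * phase * n)
               for n, a in enumerate(amps, start=1))

# At phase 0 every cosine equals 1 simultaneously, so the bank peaks
# at sum(amps): a sharp transient, good for struck/percussive sounds.
```

With amplitudes summing to 1, the output at phase 0 is exactly the transient peak described above; away from phase 0 the partials decorrelate and the instantaneous value drops.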
Loop Slices
To get reverse playback you could subtract the (phasor output x sample size) from sample size, ie. for sample size of 44100:-
(phasor~)
|
(*~ -44100)
|
(+~ 44100)
|
(tabread~)....etc
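In index arithmetic, the reverse chain above computes the following (a trivial sketch, with the 44100-sample table from the example):

```python
def reverse_index(phase, sample_len=44100):
    """Map a rising phasor (0..1) to a falling table index:
    phase * -sample_len + sample_len, as in the [*~ -44100] / [+~ 44100]
    chain above."""
    return phase * -sample_len + sample_len
```

As the phasor ramps from 0 toward 1, the index falls from the end of the table toward 0, playing the sample backwards.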
To jump around slices you could use a sample + hold object to add chunks of a specified number of samples at each phase wraparound, like this:-
{100} {200} {300} {400} etc... messages to send to samphold~
|
| (phasor~)
| |
(samphold~)
|
(+~ ) ... add the held slice offset to the phasor output times the chunk size (eg. 100 in this case)
The phasor output triggers the samphold object to output its current value at the start of each phase, so that each time a chunk has finished being played back, the index for tabread can be forced to jump to a new location. It shouldn't be too hard to send an automatically cycling pattern of index numbers to the tabread object, and then introduce reverse in there too...
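The trigger-on-wraparound behaviour can be simulated in Python (an illustrative sketch of the message flow, with made-up names; [samphold~] itself triggers whenever its right-inlet signal decreases, which is exactly the phasor's wraparound):

```python
def slice_indices(phases, offsets, chunk=100):
    """Simulate the patch: a sample-and-hold grabs the next slice offset
    each time the phasor wraps, and the phasor, scaled to chunk samples,
    is added on top to give the table read index."""
    out, held, prev, it = [], 0, 1.0, iter(offsets)
    for p in phases:
        if p < prev:            # phasor wrapped: trigger the hold
            held = next(it)
        prev = p
        out.append(held + p * chunk)
    return out
```

Each wraparound pulls in the next offset (100, 200, ...), so successive chunks of the table are read back-to-back while the phasor supplies the within-chunk ramp.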
Brett