Convolve effect???
@ClaudiusMaximus said:
-> convolve ->
=
-> rfft ->
          * -> irfft ->
-> rfft ->
where * is complex multiplication: (a+bi) * (c+di) = (a*c - b*d) + (a*d + b*c)i
So if I understand right, programming:

in1 -> rfft -> A + Bi
in2 -> rfft -> C + Di

and then

(A*C - B*D) + (A*D + B*C)i -> irfft -> out

should make a convolution of in1 and in2?
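If it helps, here is a minimal NumPy sketch of that scheme (function and variable names are mine, not from the thread): rfft both inputs, do the complex multiplication on the real and imaginary parts exactly as written above, and inverse-transform the product. Note that without zero-padding this gives a circular convolution of the two blocks:

```python
import numpy as np

def spectral_convolve(in1, in2):
    n = len(in1)                  # assumes both blocks have the same length
    f1 = np.fft.rfft(in1)         # in1 -> rfft -> A + Bi
    f2 = np.fft.rfft(in2)         # in2 -> rfft -> C + Di
    A, B = f1.real, f1.imag
    C, D = f2.real, f2.imag
    # complex multiplication spelled out on the parts, as in the diagram:
    prod = (A * C - B * D) + 1j * (A * D + B * C)
    return np.fft.irfft(prod, n)  # -> irfft -> out
```

The result equals np.convolve of the two blocks wrapped around modulo their length, i.e. a circular convolution; zero-padding (discussed elsewhere in the thread) is what turns it into the linear one.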
Zero pad an array
n+m-1, because when I convolve a signal (length m) with a filter (length n, e.g. a long impulse response of a room), the result will be length m+n-1.
And if not zero-padded, the excess samples will be wrapped around.
I got this idea from: http://www.dspguide.com/CH9.PDF
where it is stated:
"Now consider the more general case in Fig. 9-9. The input signal, (a), is 256
points long, while the impulse response, (b), contains 51 nonzero points. This
makes the convolution of the two signals 306 samples long, as shown in (c).
The problem is, if we use frequency domain multiplication to perform the
convolution, there are only 256 samples allowed in the output signal. In other
words, 256 point DFTs are used to move (a) and (b) into the frequency domain. After the multiplication, a 256 point Inverse DFT is used to find the
output signal. How do you squeeze 306 values of the correct signal into the
256 points provided by the frequency domain algorithm? The answer is, you
can't! The 256 points end up being a distorted version of the correct signal.
This process is called circular convolution. It is important because you want
to avoid it."
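The fix described above can be sketched in a few lines of NumPy (names are mine): transform both arrays at length m+n-1, so the product of the spectra yields the full linear convolution instead of a circularly wrapped one:

```python
import numpy as np

def fft_convolve(signal, ir):
    m, n = len(signal), len(ir)
    size = m + n - 1                 # full length of the linear convolution
    X = np.fft.rfft(signal, size)    # rfft zero-pads both arrays to `size`
    H = np.fft.rfft(ir, size)
    return np.fft.irfft(X * H, size)

x = np.random.randn(256)             # 256-point input, as in the quote
h = np.random.randn(51)              # 51-point impulse response
y = fft_convolve(x, h)               # 306 samples, no wrap-around
```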
8-band parametric EQ. Best options.
Hi,
I have been working on this problem (with zero phase distortion, in offline processing for the moment) to perform audio measurements and, notably, audio scene cartography, using speech, music or any complex stimuli (either monophonic or stereophonic) rather than linear or exponential chirps or "pure" sinusoids.
I have been trying to write a Pd external that performs a fast but FFT-free temporal convolution, to be used (in the future) for a 10-band analysis and partial-to-complete re-synthesis in real time, if Pd and the computer can supply the CPU power needed for 10-band sharp FIR filtering.
With two assistants (doing a huge and great job), I have been working to port my Scilab prototype of the perceptive analyser to C++, to be able to offer cross-platform sources and binaries by the end of the year. Sorry, there is a lot of stuff to do!
If you need more information, you can try to access the following Audio Engineering Society (AES) convention papers (I am not sure I can send the papers or put them on a web site for free download, as the AES sells the convention proceedings):
- Millot L., Pelé G., and Elliq M., Using perceptive subbands analysis to perform audio scenes cartography, 118th Convention of the Audio Engineering Society, Barcelona, Spain, 2005 May 28-31, Paper 6340.
- Millot L., Some clues to build a sound analysis relevant to hearing, 116th Convention of the Audio Engineering Society, Berlin, Germany, 2004 May 8-11, Paper 6041.
Another paper will be available by 05/21, after the AES Convention in Amsterdam, dealing with the FFT-free but fast temporal convolution algorithm. And maybe I will also have finished some externals to make real-time demonstrations by Monday 05/19:
Millot L. and Pelé G., An alternative approach for the convolution in time-domain: the taches-algorithm, 124th Convention of the Audio Engineering Society, Amsterdam, Netherlands, 2008 May 17-20.
Other papers dealing with the use of the perceptive analyser have also been presented, not only within the AES conventions. Let me know if you are interested.
Sincerely yours,
Laurent Millot
Using arrays and signals in the same external
@ClaudiusMaximus said:
Generally, it's useful to look at the source of Pd and other externals to see how it's done. I recommend looking at pd-0.41-4/src/d_array.c lines 686-839 ([tabsend~] and [tabreceive~]).
Note that in pd-0.41-4 the array access API has changed: the old API was not 64-bit safe. If you have issues with that change, I can show you how I handled it in my code.
Hi, ClaudiusMaximus
Thanks a lot for your answers.
I have already worked with the "HOWTO write an external for Pure Data" and had some difficulties with it, notably a potential bug in the code for pan~ (I put the first compiled version of the external in a subfolder to use it within Pd, while trying to see the code differences when compiling the library elsewhere, without noticing any difference, obviously...):
Within the text:

typedef struct _pan_tilde {
  t_object x_obj;
  t_sample f_pan;
  t_float f;
} t_pan_tilde;

Within the code:

typedef struct _pan_tilde {
  t_object x_obj;
  t_sample f_pan;
  t_sample f;  // here is the difference: t_sample rather than t_float (better for the use?)
} t_pan_tilde;
I have made a first partial version of a French translation (I have not translated the whole appendix), adding some information about cross-compilation found in another external example.
I will consider posting it, if it is of interest, as soon as the translation is good enough in my opinion. I will try to report some typing errors and remaining German formulations to the author within the next weeks, as for the moment I must first finish several projects.
I have also studied another external that processes audio signals, and begun to study the "m_pd.h" file and the d_array.c and d_delay.c files (quite quickly, as I was in a hurry last night). I was searching for commented versions and for other simple examples.
And I did not think, at the time, to look at the code for the tabsend~ or tabreceive~ functions: when you have your head down in the handlebars, you do not see further than the end of your nose...
So, thanks a lot for the advice.
At this time I will just consider data in 16- or 24-bit fixed point or 32-bit floating point, so I should not need the 64-bit representation: I do not use the FFT, working with an alternative, potentially fast temporal method for convolution, which needs far fewer requantization operations and thus a less precise format for a more satisfying result.
But I could be quite interested in 64-bit operations in the future, so I would be happy to see how you manage this in your code.
I must finish the related external(s) before next Sunday, as I will present the algorithm at the Audio Engineering Society Convention on Monday 2008/05/19 at 10 am, and I would really like to be able to perform a real-time demonstration of the algorithm using Pd...
I have much more experience with Scilab and Matlab prototyping, and I have been working since mid-January, with two assistants (doing a great and heavy job), on a cross-platform C++ port of a Scilab prototype of a perceptive audio scene analyser I have designed, which uses the temporal convolution algorithm. So I have been studying how to write Pd externals to have light but powerful real-time demonstrators.
I will publish the sources (with an LGPL-like licence) and offer the binaries (Mac OS X 10.4 and 10.5.x for PPC or Intel-based processors, Windows XP and Vista at least, maybe Linux if it works directly) for download by the end of the year, once the documentation and the code are satisfying and operational enough.
Once again, thanks a lot for your help. I will give feedback and, maybe, ask for some more help in the next days... ;O)
Sincerely yours,
Laurent Millot
Convolution
Hello!
I'd like to know how to implement a convolution in Pd. More explicitly, I have a table with the coefficients of an FIR filter and I'd like to convolve it with audio samples coming from [adc~]. It's much like the [FIR~] external, but I'm not interested in using that.
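One way to see what such an object has to do internally (a sketch in NumPy, not the actual [FIR~] source; the names are invented) is a block-based direct convolution that keeps the last n-1 input samples as state between audio blocks:

```python
import numpy as np

def fir_block(x, coeffs, state):
    """Filter one audio block with FIR coefficients from a table.
    `state` carries the last len(coeffs)-1 input samples between
    blocks, the way a real-time object keeps history across ticks."""
    buf = np.concatenate([state, x])
    y = np.convolve(buf, coeffs, mode="valid")  # one output per input sample
    return y, buf[-(len(coeffs) - 1):]          # new state for the next block
```

Calling it block after block produces exactly the same samples as convolving the whole signal with the coefficients at once.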
Frozen reverb
"Frozen reverb" is a misnomer. It belongs in the Chindogu section along with real-time timestretching, inflatable dartboards, waterproof sponges and ashtrays for motorbikes. Why? Because reverb is by definition a time-variant process: a convolution of two signals, one of which is the impulse response and one the signal, both changing in time. What you really want is a spectral snapshot.
- Claude's suggestion above: a large recirculating delay network running at 99.99999999% feedback.
Advantages: Sounds really good; it's a real reverb with a complex evolution that's just very long.
Problems: It can go unstable and melt down the warp core. Claude's trick of zeroing the feedback is foolproof, but it does require you to have an appropriate control-level signal. Not good if you're feeding it from an audio-only source.
Note: the final spectrum is the sum of all spectra the sound passes through, which might be a bit too heavy. The more sound you add to it, with a longer, more changing sound, the closer it eventually gets to noise.
- A circular scanning window of the kind used in a timestretch algorithm.
Advantages: It's indefinitely stable, and you can slowly wobble the window to get a "frozen but still moving" sound
Problems: Sounds crap, because some periodicity from the windowing is always there.
Note: The Eventide has this in its Infiniverb patch. The final spectrum is controllable; it's just some point in the input sound "frozen" by stopping the window from scanning forwards (usually when the input decays below a threshold). Take the B.14 Rockafella sampler and write your input to the table. Use an [env~]-[delta] pair to find when the input starts to decay, then set the "precession percent" value to zero; the sound will freeze at that point.
- Resynthesised spectral snapshot.
Advantages: Best technical solution; it sounds good and is indefinitely stable.
Problems: It's a monster that will eat your CPU's liver with some fava beans and a nice Chianti.
Note: The 11.PianoReverb patch is included in the FFT examples. The description is something like "it punches in new partials when there's a peak that masks what's already there". You can only do this in the frequency domain. The final spectrum will be the maxima of the unique components in the last input sound that weren't in the previous sound. Just take the 11.PianoReverb patch in the FFT examples and turn the reverb time up to lots.
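The first option above, the recirculating delay at near-unity feedback, can be sketched as a toy per-sample loop (illustrative Python, not an actual Pd patch; names are mine):

```python
import numpy as np

def freeze_delay(x, delay, feedback=0.9999999999):
    """Recirculating delay line: whatever enters keeps circulating.
    With feedback close to 1.0 the loop barely decays ("frozen");
    zeroing `feedback` is the trick that empties it again."""
    buf = np.zeros(delay)
    out = np.empty(len(x))
    idx = 0
    for i, s in enumerate(x):
        out[i] = buf[idx]                 # read the delayed sample
        buf[idx] = s + feedback * out[i]  # write input plus feedback
        idx = (idx + 1) % delay
    return out
```

An impulse fed in keeps reappearing every `delay` samples at (almost) full amplitude, which is exactly why the network can also build up towards noise as more material is added.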
Blur in Gem
A blur can be described by a convolution, I would have thought. [pix_convolve] will do it, maybe.
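A box blur really is just a 2D convolution of the image with a small averaging kernel. A minimal sketch of that per-pixel operation (my own naive implementation, zero-padded borders; the kernel is symmetric, so correlation and convolution coincide):

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 2D convolution with zero-padded borders (the per-pixel
    operation a convolution object applies to an image)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

box_blur = np.ones((3, 3)) / 9.0  # each pixel becomes its 3x3 neighbourhood mean
```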
Anyone interested in hacking plugin~ for OSX?!?
Running pd on the mac has upsides and downsides...
I use both. The PC is handy for running weird stuff like the VST~ objects, but my Mac has better audio support, plus it's a laptop so it's more handy for gigs etc... An added bonus is my Mac doesn't sound like a Harrier jump jet, unlike the fan on my PC. However, the PC does keep my feet nice and warm under the desk at this time of year.
Sad to say it, but the Mac actually crashes more often than the PC nowadays, something I never thought I would find myself saying.
As for running Linux on the Mac: you could run a Linux build with a nice GUI, but having tried a Mac-Linux install myself in the past, I wouldn't recommend it unless you really know what you're doing and don't mind messing about with various bits of your Mac's built-in kit (especially wifi cards) for a few weeks to get the damn things to work...
A simple problem regarding PD patch initialization
I really don't know why it is like this.
Anyway, it has been quite a nice surprise to use [expr] instead of anything else, because when an expr block is banged it seems to be "reading" (?) the number2 objects feeding the expr inlets, without even touching the number2 objects once you open a patch.
I now use the expr object with a small bang object beside it. Every bang of this type receives a global loadbang. And it works... while keeping the number inlet open. ...
OK, back to work. I am just trying to catch up on something with fractional delays and block convolution, which drives me crazy because it consumes time!!!