FFT Abstraction
In the subpatch [pd fft] you have these multiplications between [rfft~] and [rifft~]. The real and imaginary coefficients of the input signal's spectrum are multiplied by the values in table 'fft', which in this case acts as the amplitude spectrum of a filter. That is exactly what's called (zero-phase) filtering in the frequency domain, the equivalent of convolution in the time domain. Filters cannot add frequency components which were not in the signal originally; they can only amplify or attenuate existing components, that is, the non-zero real and imaginary coefficients in this case. Any spectrum coefficient which is zero will remain zero after multiplication.
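For anyone who wants to try the idea outside Pd, a minimal numpy sketch of exactly this multiplication (the block size, cutoff bin and input are made-up illustration values):

import numpy as np

N = 512                           # one block, like [block~ 512]
x = np.random.randn(N)            # stand-in for one signal block
X = np.fft.rfft(x)                # complex spectrum, N/2+1 bins

amp = np.zeros(N // 2 + 1)        # amplitude spectrum of the filter (the 'fft' table)
amp[:40] = 1.0                    # crude low-pass: keep only the first 40 bins

# a real gain scales real and imaginary parts alike, so the phase is
# untouched -> zero-phase filtering (per block, i.e. circular convolution
# unless you window and overlap-add)
y = np.fft.irfft(X * amp)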
For inserting new spectral components you could use [+~] instead of [*~] there, for a start. Then you are really creating new spectrum coefficients, which translate to the signal domain as new frequency components, albeit by a rough method. But it would be wise to first normalise the real and imaginary coefficients from [rfft~], then insert the new coefficients at a level in accordance with the signal's own spectrum (check with [scope~] for example; [snapshot~] will only show the first value in each block, which here happens to be the DC component). You'd also need to auto-erase the values in array 'fft' after each new pitch detection.
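And the additive variant as a sketch (the bin number and level are arbitrary choices): instead of scaling existing coefficients, write a new one in, scaled against the block's own spectrum:

import numpy as np

N = 512
x = np.random.randn(N)
X = np.fft.rfft(x)

k = 25                            # arbitrary bin to insert a new partial at
level = np.abs(X).mean()          # normalise against the signal's own spectrum
X[k] += level                     # [+~] instead of [*~]: a genuinely new coefficient
                                  # (added to the real part only -- rough, as said)

y = np.fft.irfft(X)               # the new component now exists in the time domain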
One more remark: the blocksize should be set to a power of two, as Slur mentioned, otherwise Pd will reduce the blocksize to a power of two without notice and your fft array length will not match the spectrum size.
Happy to see you're still busy with it. Good luck!
Katja
Meditation background generator
Thank you all for your replies; I didn't expect such feedback. Special thanks to jamesmcn - everything was described well, I just have a few things to add:
1 and 2. The top section performs two functions: the leftmost part is something like a "probability generator" which, following an LFO, defines how often a droplet sound is generated at any moment, and the rightmost part is an array containing the tones at which the [vcf~]s of the individual droplets resonate (the values 0, 2, 3, 5, 7, 8, 10 in it spell a C-minor scale). Every 5 seconds [route] picks these tones (notes) from the array at random and sends them to the [vcf~] objects after the [stream~] abstractions. The tones are converted to frequencies in Hz to control the bandpass filters.
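(If it helps, the note-to-Hz conversion is just the standard [mtof] formula; a quick sketch, with the octave chosen arbitrarily:)

# C-minor scale degrees from the array, mapped to Hz via the [mtof] formula
scale = [0, 2, 3, 5, 7, 8, 10]
root = 60                                   # arbitrary example: middle C
for degree in scale:
    midi = root + degree
    hz = 440.0 * 2 ** ((midi - 69) / 12.0)  # same formula [mtof] uses
    print(midi, round(hz, 2))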
3. Yes, [stream~] is the meat of the synthesis, but it is not "very carefully" filtered noise, although it should be. I just tried to make it sound like water droplets, and made some variations in cutoff frequency (first creation argument) and stream density (second creation argument) to make the droplets sound more diverse. stream~.pd itself is actually a very simple noise generator, and it is the oldest part of the patch. I originally wanted to make something like rain noise, built this abstraction, and shelved it until it came into play here.
4 and 5. You know, if you delete the mixer/processor section (except the reverb), you may not notice much difference: it just makes the left and right channels slightly different from time to time, only for the merciless freezeverb to mix the left and right channels into one stream anyway. By the way, [freezeverb] is just an enhanced version of the one you can see in help -> browser -> G08.reverb.pd. The only serious difference is that it uses delay_time_counter.pd, which calculates the times for the delay lines according to the formula t = t1 / 2^(n / numlines) - t1/2, where t1 is the largest early-reflection delay time, numlines is the total number of delay lines (28 here), and n is the current delay number (starting from 0). I found this algorithm at http://musicdsp.org/archive.php?classid=4#44 but changed it a bit (actually, added the "- t1/2" to make the echoes appear earlier). I still don't completely understand how [freezeverb~] works. To be more precise, I don't understand what [early_reflection_delay_line] actually does, but Miller Puckette used a similar [reverb-echo-del] abstraction in his example, and it works well! It makes a "power-preserving" mix, a very useful thing in recirculating reverbs.
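If the formula is easier to read as code, this is how the 28 delay times come out (t1 = 100 ms is just an example value, not what the patch necessarily uses):

t1 = 100.0                                    # example: largest early-reflection delay (ms)
numlines = 28
for n in range(numlines):
    t = t1 / 2 ** (n / float(numlines)) - t1 / 2
    print(n, round(t, 2))                     # falls from t1/2 down towards 0 ms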
6. The two sine wave oscillators take their frequencies from the first two randomly picked notes from the array (see items 1 and 2). There are also two frequency modulators, 1846 Hz and 4 Hz sine waves, to saturate their spectrum. So it sounds a bit like noise; since there are already so many sine waves coming from the [stream~] abstraction, I thought it worth adding something at the higher frequencies. The reverb smooths these oversaturated sine waves, making them sound noisy.
7. How a reverb similar to [freezeverb] works is described in the help browser; I just can't understand why the power-preserving mix works. I also tried to build a stereo reverb based on Miller Puckette's model, but a couple of experimental ones failed. This one is my best reverb ever.
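If I sketch the mix in numpy, at least the arithmetic checks out (random signals as stand-ins; the 1/sqrt(2) matches the ~0.7071 gains in the example patch):

import numpy as np

a = np.random.randn(64)           # two stand-in signals
b = np.random.randn(64)

s = 2 ** -0.5                     # 1/sqrt(2), i.e. ~0.7071
out1 = (a + b) * s
out2 = (a - b) * s

# ((a+b)^2 + (a-b)^2) / 2 = a^2 + b^2, so total power in = total power out
print(np.sum(a**2 + b**2), np.sum(out1**2 + out2**2))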
8. Yes, and the [master] abstraction is just a place where things like volume control or spectrum analysis are handy to perform. Somewhere to run all the wires to, and to listen to the output of.
Visualizing the connection-order!!..
Hey,
I want to build a little "patch analyser" (in Pd, since I don't know about "deeper" programming in C and so on), mainly because I dislike [trigger]s and just want to avoid using them.
It has never ever happened to me that the order in which objects are computed suddenly changed without triggers, but triggers slow things down a lot. <- Just measure the [realtime] of banging e.g. [until] 100000 times with and without [trigger]s...
So as far as I can see there "technically" is no need for those triggers.
The order is saved within the save-file!
They just help visualize the processing structure! Some people consider using triggers rather clean, which may be true for looking at the patch but not for processing it.
..in short: without [trigger]s I build patches from right to left (because right inlets are the passive ones most of the time), so that the rightmost objects (the ones you placed first) are computed first!
It's different with "audio" data (the vectors). There an object's output is only computed after all inlets have received data... (this makes sense, since each element of one vector can "interact" with each element of another vector, and they are tied to the sample rate)
Anyways, avoiding triggers while trying to keep a clear visual structure may "uglify" the appearance of the patch sometimes, or just doesn't work (more than one active inlet and so on...)
...Sooo the easiest way to keep the information about the order visualized is "on demand".
Imagine this: you select an object (msg etc.) and its outgoing (and incoming) connections get numbers written next to them according to the order in which they are processed.
This can be programmed in Pd itself, since Pd is capable of creating objects via text (msg) and saves the processing order in the save-file (hopefully even before saving).
The next step would be being able to change the processing order just by changing those numbers (that would help a lot with multiple connections; even [trigger]s are not comfortable in this situation...) ... but that's far future...
What do you think of this?!
-Or a switchable extra graphical layer containing those numbers for all objects (at once)... I'm gonna upload a patch that demonstrates this soon...
Now here comes the important part (still reading?! ):
Does anyone know if and where data (e.g. a pre-savefile) is stored as long as it's not saved??? I ask because I managed to analyse the text of save-files somehow, just by reading the file out with some text-reading objects in Pd... but I don't know where to find the data of unsaved patches. (And it's a little annoying to save your patch every time just to analyse the connection order.)
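For the saved case, by the way, the connection records are plain text; here's a rough sketch in Python rather than Pd (the filename is made up) that lists the "#X connect" records in file order — messages containing escaped semicolons would need more careful parsing:

# list the "#X connect <src> <outlet> <dst> <inlet>" records of a saved patch
with open("mypatch.pd") as f:     # hypothetical filename
    records = f.read().split(";")

n = 0
for record in records:
    fields = record.split()
    if fields[:2] == ["#X", "connect"]:
        src, outlet, dst, inlet = fields[2:6]
        print(n, "object", src, "outlet", outlet, "->", "object", dst, "inlet", inlet)
        n += 1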
Aaand is there a method to identify a selected object?? (For now I use a unique message that is sent to the object I want to "select" and then trace that connection in the save-file)
Actually I found something like this once (I just have to look up where that was...), but it didn't work since it was made for earlier versions and required changing some Pd DLLs (yep - Windows...).
As usual don't hesitate to correct me if I'm wrong.
And please respond even if you just like or dislike this idea...
Bye Flipp
How to find frequency of real time sound...?? Please help me
[fiddle~]
You could use the FFT to create a signature of each vocalist's voice, then perform spectral analysis on the input to determine which FFT signature the input most resembles.
But if you can do that, you may as well perform your pitch analysis alongside your FFT analysis, since [fiddle~] itself is doing the same thing.
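If anyone wants to prototype that signature idea outside Pd, a rough numpy sketch (the frame averaging and cosine similarity are my own arbitrary choices, not the only way to do it):

import numpy as np

def signature(frames):
    """Average magnitude spectrum over a list of equal-length frames."""
    mags = [np.abs(np.fft.rfft(f)) for f in frames]
    return np.mean(mags, axis=0)

def similarity(sig_a, sig_b):
    """Cosine similarity: 1.0 means identical spectral shape."""
    return np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))

# usage: build one signature per vocalist from training frames, then pick
# the vocalist whose signature is most similar to the live input's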
New tutorial on PD externals for real-time audio analysis
Hello all,
I would like to announce that our project on writing audio analysis externals, together with a programming tutorial, has been published.
The examples contain the following objects:
-sound intensity vector analysis
-histogram function
-Mel-cepstrum coefficient analysis
Source code and documentation are currently available for download at
http://www.tml.tkk.fi/~tervos/downloads.html
Thank you!
FFT and DWT for any kind of signal (25Hz, 200Hz, 1kHz) ???
Well
I wanted to put zeros in between to do the interpolation.
During upsampling, converters generally put zeros between the samples, and then a low-pass filter does the actual interpolation.
This is why I wanted to put zeros in between. Yet you might be right; I'm gonna try another piece of software to do the interpolation the way you suggest and see if it changes the spectrum a lot, and if it works, do it back in "real time" with Pd.
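Here is the zero-stuffing idea sketched outside Pd, in Python (the factor and the butterworth filter are arbitrary choices, just to show the two steps side by side):

import numpy as np
from scipy import signal

x = np.random.randn(1000)         # stand-in input signal
L = 4                             # upsampling factor, arbitrary example

up = np.zeros(len(x) * L)
up[::L] = x                       # step 1: put zeros in between the samples

# step 2: low-pass at the original Nyquist does the actual interpolation
b, a = signal.butter(8, 1.0 / L)
y = signal.lfilter(b, a, up) * L  # gain of L compensates for the inserted zeros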
As for the DWT, it's a "discrete wavelet transform": quite close to a Fourier transform, but with some very important differences.
The projection basis is not made of sinusoids but of a "family of wavelets", which are translated and dilated versions of a "mother wavelet".
This makes the transform able to work on signals whose spectrum changes over time, among other things (it is used for compression too, etc.... very powerful, and it will have to do as long as Laurent Millot's IDS analysis isn't available in Pd...).
It is a very useful way to analyse very low frequencies (much better than Fourier), and the phenomenon I'm working on seems to happen at very low frequencies (between 0 and 20 Hz).
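To try the DWT quickly outside Pd, assuming the PyWavelets package is installed, something like this works:

import numpy as np
import pywt                               # assumes the PyWavelets package

x = np.random.randn(4096)                 # stand-in for the measured signal
coeffs = pywt.wavedec(x, "db4", level=6)  # 'db4' as mother wavelet, arbitrary choice

# one approximation band plus 6 detail bands; the deepest levels cover the
# lowest frequencies, which is why the DWT suits the 0-20 Hz range
for i, c in enumerate(coeffs):
    print(i, len(c))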
I thought [block~] was just there to decide the number of samples in a block (which lets you calculate the FFT on different numbers of samples, with the consequences that has for the precision of the frequency axis: the bin spacing is samplerate/N). But then I looked, and there is indeed something about up- and downsampling... gonna look into that too.
Sound from video signal
Hello,
I'm a newbie too.
My wish is the same as yours : I'm working on a "cinema-concert".
So, I've got a film (black and white); I load it in [gemhead], I play it, and what I want is to generate data from variations in this film (real-time black-and-white analysis, real-time brightness analysis, number of pixels...).
After that, I'd like to use these data variations to control audio effects through MIDI.
I've already put [pix_info] in my patch.
First, it doesn't analyse what I want from the film.
Second, it's not real-time (I mean, it outputs one piece of info and then nothing).
I've found some info about motion analysis on the net, but people often use other software to produce this real-time data.
I hope my English is not too bad, and that someone can help me.
Many thanks in advance,
Damien
Spectrum Analyzer
Hi, I'm trying to build a spectrum analyzer for a multi-band parametric equalizer in Pure Data, but I can't find any information anywhere on how to make one. I was wondering if anyone could explain how to make a spectrum analyzer or direct me to a link with the relevant information.
Thanks!
Isolating a specific frequency-band
So, if you have Pd-extended you should check out all of the info that Miller Puckette has put together - mainly the audio examples, where he shows you nifty ways of graphing the spectrum. You'll need this so that you can see how successful your efforts are. In my version you have to right-click on the Pd app, choose "show package contents", and it's under documentation, 3) audio examples.
You won't need [line] unless you want to change the cutoff frequency, and in that case you will want to use [vcf~] anyway (it's also a bandpass filter).
Reading Puckette's chapter on filters has me pretty confused (real/imaginary, anyone?), but the main thing I took from it is that digital filters are bullshit compared to analog filters.
You couldn't get [bp~] to work at all? What you want is [bp~ 700 5], though you may have better luck with [vcf~ 700 5].
It sounds as if you want a very steep curve at 400 Hz and 1000 Hz. In analog circuitry a steeper cutoff would be achieved by basically stacking high-pass filters [hip~] at 400 Hz and stacking low-pass filters [lop~] at 1000 Hz. You can do the same in Pd, though I'm not sure how well it will work; see the sketch below. Again, digital filters are shit, which is why you need to study up on spectrum graphing, so that you can see just how bad they are. If I were you, I would get creative and just throw tons of the built-in filters together ([hip~], [lop~], [bp~] and [vcf~]) until you get something that "sounds" like what you want. For spectrum testing, feed your filter network with a [noise~] object.
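Here's a numerical sketch of the stacking idea with scipy (the 400/1000 Hz corners are from above; the filter orders and the four passes are arbitrary choices):

import numpy as np
from scipy import signal

sr = 44100
noise = np.random.randn(sr)       # one second of test noise, like [noise~]

# "stacking" filters = cascading them; each pass steepens the rolloff
y = noise
for _ in range(4):                # four passes, arbitrary example
    b, a = signal.butter(2, 400.0 / (sr / 2), "highpass")
    y = signal.lfilter(b, a, y)
    b, a = signal.butter(2, 1000.0 / (sr / 2), "lowpass")
    y = signal.lfilter(b, a, y)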
What are you trying to do anyway? If you are thinking "extract voices from a mix", you might want to study up before you post a response.
Feel free to ask questions about how to make Pd make sound and other basic stuff; it's really a pain in the ass but really fun once you understand it.
Audio recognition with FFT
Check this post
http://puredata.hurleur.com/sujet-2508-frequency-analyzer
I posted a patch in there, "spect.pd", which analyzes a sound and writes the spectrum data into a table, which you can then save, recall, and so on.
There is a problem with what you have in mind though: you want to compare the spectra of two sounds, whereas the spectrum of a sound is NOT so stable that you get the "exact" same result each time you analyze even exactly the same sound. You should focus on the specific partials a particular sound has dominating over the others in its spectrum; then you can compare two sounds that share the same characteristics in their timbre.
There is another patch in there called "spect2.pd", which has a threshold you can set so that only the partials with an amplitude above the threshold pass through. Using such a function, you can detect which partials are dominant in the sound you have in mind as a basis for comparison, and make a patch that looks for those in the spectral data you provide to it.
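The thresholding step boils down to something like this numpy sketch (the set-overlap score is my own arbitrary choice, not necessarily what spect2.pd does):

import numpy as np

def dominant_partials(block, threshold):
    """Bin numbers whose magnitude exceeds the threshold."""
    mags = np.abs(np.fft.rfft(block))
    return set(np.nonzero(mags > threshold)[0])

def match_score(partials_a, partials_b):
    """Fraction of partials the two sounds share (1.0 = identical sets)."""
    if not partials_a and not partials_b:
        return 1.0
    return len(partials_a & partials_b) / float(len(partials_a | partials_b))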