Swept sine deconvolution
@lead said:
maybe process the lists from cartesian to polar coordinates (or visa versa) then *-1 +1?
Not sure what you mean exactly. Admittedly, my suggestion was rather cryptic as well.
Anyway, I was thinking about the practical situation of a test signal: how long is that signal? Probably too long to fit in one FFT frame of reasonable length. In that case, I don't think you can invert its phase spectrum in the frequency domain. But no problem: just do a time reversal in the time domain, and its phase spectrum will be inverted automatically. To be explicit here: I mean you must time-reverse the samples of the original test sweep (not the test result signals). Time reversal does not change the magnitude spectrum, only the phase spectrum.
That leaves inverting the magnitude spectrum of the time-reversed test sweep. A log sweep does not have a flat magnitude spectrum, so you need to compute the multiplicative inverse of the magnitude coefficients (in the frequency domain, using polar coordinates indeed here). Beware of possible zero-magnitude points in the original; they cannot be inverted, of course. After magnitude inversion, go back to cartesian coordinates and from there to the time domain. Now you have your deconvolution kernel in the time domain.
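Outside Pd, the recipe is easy to check offline. A minimal numpy sketch of the kernel construction, assuming the whole sweep fits in memory (the function name and the eps guard for zero-magnitude bins are mine, not part of any Pd object):

```
import numpy as np

def deconvolution_kernel(sweep, eps=1e-8):
    # Time-reverse the sweep: this inverts its phase spectrum but leaves
    # the magnitude spectrum untouched.
    reversed_sweep = np.asarray(sweep)[::-1]
    spectrum = np.fft.rfft(reversed_sweep)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    # Invert the magnitude spectrum, guarding against zero-magnitude bins,
    # which cannot be inverted.
    inv_magnitude = 1.0 / np.maximum(magnitude, eps)
    # Back to cartesian coordinates, then back to the time domain.
    return np.fft.irfft(inv_magnitude * np.exp(1j * phase), n=len(reversed_sweep))
```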
The deconvolution kernel is the test sweep with phase and magnitude spectrum inverted. It can be considered a (very long) FIR filter. With this kernel you can do fast convolution of the test results to deconvolve them. For this, you can use Ben Saylor's [partconv~] object. It does 'partitioned convolution' in the frequency domain, with zero padding to avoid circular convolution.
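For an offline sanity check, an ordinary FFT-based convolution gives the same result that [partconv~] computes block by block in real time; a rough sketch, assuming `recording` holds the measured sweep response and `kernel` is the deconvolution kernel from above:

```
import numpy as np
from scipy.signal import fftconvolve

def deconvolve(recording, kernel):
    # Fast (FFT-based) convolution of the recorded sweep response with the
    # deconvolution kernel; the result contains the impulse response.
    impulse_response = fftconvolve(recording, kernel, mode="full")
    return impulse_response / np.max(np.abs(impulse_response))  # normalise
```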
Good luck. Please let us know if you have success. It is an interesting technique for creating impulse responses of acoustic conditions. Expensive reverb simulators probably rely on similar techniques. What is your actual purpose for it?
Katja
Meditation background generator
Thank you all for your replies, I didn't expect such feedback.
Special thanks to jamesmcn - everything was described well, I just have a few things to add:
1 and 2. The top section performs two functions: the leftmost part is something like a "probability generator", which, driven by an LFO, defines how often a droplet sound is generated at the moment; the rightmost part is an array containing the tones at which the [vcf~]s of the individual droplets resonate (0, 2, 3, 5, 7, 8, 10 in it spell a C-minor scale). Every 5 seconds [route] picks these tones (notes) from the array at random and sends them to the [vcf~] objects after the [stream~] abstractions. The tones are converted into frequencies in Hz to control the bandpass filters.
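For reference, the note-to-Hz step is just the standard MIDI-to-frequency conversion that [mtof] performs; a small Python sketch of the mapping, where the base note of 60 (middle C) is my assumption about how the patch offsets the scale degrees:

```
import random

C_MINOR_DEGREES = [0, 2, 3, 5, 7, 8, 10]   # the values stored in the array

def degree_to_hz(degree, base_note=60):
    # Standard MIDI-to-frequency conversion, as [mtof] does it.
    midi_note = base_note + degree
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# e.g. pick a random note for a droplet's [vcf~] centre frequency
cutoff_hz = degree_to_hz(random.choice(C_MINOR_DEGREES))
```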
3. Yes, the [stream~] is the meat of the synthesis, but it is not "very carefully" filtered noise, although it should be.
I just tried to make it sound like water droplets, and introduced some variation in cutoff frequency (the first creation argument) and stream density (the second creation argument) to make the droplets sound more diverse. The stream~.pd itself is actually a very simple noise generator, and it's the oldest part of the patch. I wanted to make something like rain noise, made this abstraction, and put it aside until it came into play.
4 and 5. You know, if you delete the mixer/processor section (except the reverb), you may not notice much difference: it just makes the left and right channels slightly different from time to time - only for the merciless freezeverb to mix the left and right channels into one stream anyway. By the way, [freezeverb] is just an enhanced version of what you can see in help -> browser -> G08.reverb.pd. The only serious difference is that it uses delay_time_counter.pd, which calculates the times for the delay lines according to this formula: t = t1 / 2^(n / numlines) - t1/2, where t1 is the largest early-reflection delay time, numlines is the total number of delay lines (28 here), and n is the current delay number (starting from 0). I found this algorithm here: http://musicdsp.org/archive.php?classid=4#44 but changed it a bit (actually, added the "-t1/2" to make the echoes appear earlier).
I still don't completely understand how [freezeverb~] works. To be more precise, I don't understand what [early_reflection_delay_line] actually does - but Miller Puckette applied a similar [reverb-echo-del] abstraction in his example, and it works well! It makes a "power-preserving" mix, a very useful thing in recirculating reverbs.
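To make the delay-time formula above concrete, here it is as a couple of lines of Python (the variable names and the 90 ms example value are mine, not the actual values used in the patch):

```
def delay_times(t1, numlines=28):
    # t = t1 / 2**(n / numlines) - t1/2 for n = 0 .. numlines-1.
    # n = 0 gives t1/2; larger n gives progressively shorter delays,
    # approaching 0 as n approaches numlines.
    return [t1 / 2 ** (n / numlines) - t1 / 2 for n in range(numlines)]

times_ms = delay_times(90.0)   # e.g. a largest early reflection of 90 ms
```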
6. The two sine wave oscillators take their frequencies from the first two randomly picked notes from the array (see items 1 and 2). There are also two frequency modulators, 1846 Hz and 4 Hz sine waves, to saturate their spectrum. So it sounds a bit like noise - mainly because there are already too many sine waves from the [stream~] abstraction, and I thought it was worth adding something at higher frequencies. The reverb smooths these oversaturated sine waves, making them sound noisy.
7. How a reverb similar to [freezeverb] works is described in the help browser; I just can't understand why the power-preserving mix works. I also tried to make a stereo reverb based on Miller Puckette's model, but a couple of experimental ones failed. This one is my best reverb so far.
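For what it's worth, one way to see why a power-preserving mix works: the echo abstraction combines two signals with a rotation, and a rotation never changes the total power. A tiny numeric check in Python (the cos/sin weights are the general form; I'm not claiming these are the exact coefficients used in G08.reverb.pd):

```
import numpy as np

def power_preserving_mix(a, b, theta=np.pi / 4):
    # A rotation of the pair (a, b).  With theta = pi/4 this is the familiar
    # sum/difference mix: (a + b)/sqrt(2) and (b - a)/sqrt(2).
    c, s = np.cos(theta), np.sin(theta)
    return c * a + s * b, c * b - s * a

a, b = np.random.randn(1024), np.random.randn(1024)
x, y = power_preserving_mix(a, b)
# The total power is unchanged: x^2 + y^2 == a^2 + b^2 for every sample.
print(np.allclose(x**2 + y**2, a**2 + b**2))
```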
8. Yes, and the [master] abstraction is just a place from which it's handy to do volume control or spectrum analysis. Somewhere to put all the wires, and to listen to the output.
FreeBeats
I transformed this patch into a RjDj Scene. You can find it here: http://rjdj.me/sharescene/free-beats/
It will soon be available among the other free scenes you can download from the rjdj server.
For now it uses environment sounds only, so you need an iPhone or an iPod Touch with a microphone.
Just tap your tempo on the touch screen, go around, sing, make some noise... It'll play along with you.
enjoy
Pd/rjdj skillshare @ Eyebeam, NYC, Dec 5th
http://eyebeam.org/events/rjdj-skillshare
December 5, 2009
12:00 -- 1:30 PM : Introductory workshop on Pd with Hans-Christoph Steiner
2:00 -- 6:00 PM : SkillShare w/Steiner and members of RjDj programming team
Free, capacity for up to 30 participants
RSVP HERE: http://tinyurl.com/ykaq3l3
Hans-Christoph Steiner returns to Eyebeam with members of the RjDj programming team from Europe to help turn your iPhone or iPod-Touch into a programmable, generative, and interactive sound-processor! Create a variable echo whose timing varies with the phone's tilt sensor, or an audio synthesizer that responds to your gestures, accelerations, and touches. Abuse the extensive sound capabilities of the Pure Data programming language to blend generative music, audio analysis, and synthy goodness. If you're familiar with the awesome RjDj, then you already know the possibilities of Pure Data on the iPhone or iPod Touch (2nd and 3rd generation Touch only).
Creating and uploading your own sound-processing and sound-generating patches can be as easy as copying a text file to your device! In this 4-hour hands-on SkillShare, interactive sound whiz and Pure Data developer Hans-Christoph Steiner and several of the original RjDj programmers will lead you through all the steps necessary to turn your phone into a pocket synth.
How Eyebeam SkillShares work
Eyebeam's SkillShares are Peer-to-Peer working/learning sessions that provide an informal context to develop new skills alongside leading developers and artists. They are for all levels and start with an introduction and overview of the topic, after which participants with similar projects or skill levels break off into small groups to work on their project while getting feedback and additional instruction and ideas from their group. It's a great way to level-up your skills and meet like-minded people. This SkillShare is especially well-suited for electronic musicians and other people who have experience programming sound. Some knowledge of sound analysis and synthesis techniques will go a long way.
We'll also take a lunch break in the afternoon including a special informal meeting about how to jailbreak your iPhone!
Your Skill Level
All levels of skill are OK as long as you have done something with Pd or Max/MSP before. If you consider yourself a beginner, it would help a lot to run through the Pd audio tutorials before attending.
NOTE: On the day of the SkillShare we will hold an introductory workshop from 12:00 until 1:30 PM, led by Steiner, for those who want to make sure they're up to speed before the actual SkillShare starts at 2:00. The introductory workshop is for people who have done something in Pd or Max/MSP but are still relative beginners in the area of electronic sound programming.
What You Should Bring
You'll need to bring your iPhone or iPod Touch (2nd or 3rd generation Touch only), your own laptop, a headset with a built-in mic (especially if using an iPod Touch) and the data cable you use to connect your device to your laptop. Owing to a terrific hack, you won't even need an Apple Developer License for your device!
More Information
RjDj is an augmented reality app that uses the power of the new generation of personal music players, like the iPhone and iPod Touch, to create mind-blowing hearing sensations. The RjDj app makes a number of downloadable scenes from different artists available, as well as the opportunity to make your own and share them with other users. RjDj.me
Pd (aka Pure Data) is a real-time graphical programming environment for audio, video, and graphical processing. Pd is free software and works on multiple platforms, and is therefore quite portable; versions exist for Win32, IRIX, GNU/Linux, BSD, and MacOS X running on anything from a PocketPC to an old Mac to a brand new PC. Recent developments include a system of abstractions for building performance environments, and a library of objects for physical modeling for sound synthesis.
kill your television
PD trouble
It could well be the 8 freeverbs that you're running... probably definitely.
I reckon you should just have one instance and send the output of each subpatch to it. I recommend using throw/catch to send the signals: attach a [throw~ reverb] to each subpatch, then attach a [catch~ reverb] to the reverb.
Also, you don't need multiple [dac~] objects, just one. Feed the output of the reverb to that single [dac~].
This should free up resources
Live loop
Have a look at the source for the rjdj (Pure Data) scene World Quantizer:
http://trac.rjdj.me/browser/trunk/rjdj_scenes/WorldQuantizer.rj
It's pretty complex, but that's what you'd expect from something that does something amazing!
Alternatively, if you need something simple, record into a table, play it back however you want, and do whatever you like to its output. It's all explained in the help browser.
Convolve effect???
Hi,
I was just wondering if you managed to get convolution reverb working using this method? My maths knowledge is next to none, so it would be great if you could explain how to achieve this effect with FFTease. (I've installed it and the examples sound great, but how do I make it work as a convolution reverb...?) I just need a processor that takes a clean sample and an impulse response and performs convolution on the two.
By the way, this is my first post and I've been using Pure Data for about two weeks! I'm simply loving it!
Many thanks!
Vytis
Frozen reverb
"Frozen reverb" is a misnomer. It belongs in the Chindogu section along with real-time timestretching, inflatable dartboards, waterproof sponges and ashtrays for motorbikes. Why? Because reverb is by definition a time variant process, or a convolution of two signals one of which is the impulse response and one is the signal. Both change in time. What you kind of want is a spectral snapshot.
-
Claude's suggestion above: a large recirculating delay network running at 99.99999999% feedback.
Advantages: Sounds really good; it's a real reverb with a complex evolution that's just very long.
Problems: It can go unstable and melt down the warp core. Claude's trick of zeroing the feedback is foolproof, but it does require you to have an appropriate control-level signal. Not good if you're feeding it from an audio-only source.
Note: the final spectrum is the sum of all the spectra the sound passes through, which might be a bit too heavy. The more sound you add to it, with a longer, more changing sound, the closer it eventually gets to noise.
-
A circular scanning window of the kind used in a timestretch algorithm
Advantages: It's indefinitely stable, and you can slowly wobble the window to get a "frozen but still moving" sound
Problems: Sounds crap because some periodicity from the windowing is always there.
Note: The Eventide has this in its infiniverb patch. The final spectrum is controllable; it's just some point in the input sound, "frozen" by stopping the window from scanning forwards (usually when the input decays below a threshold). Take the B.14 Rockafella sampler and write your input to the table. Use an [env~]-[delta] pair to detect when the input starts to decay, then set the "precession percent" value to zero; the sound will freeze at that point.
-
Resynthesised spectral snapshot
Advantages: Best technical solution, it sounds good and is indefinitely stable.
Problems: It's a monster that will eat your CPU's liver with some fava beans and a nice Chianti.
Note: The 11.PianoReverb patch is included in the FFT examples. The description is something like "it punches in new partials when there's a peak that masks what's already there". You can only do this in the frequency domain. The final spectrum will be the maxima of the unique components in the last input sound that weren't in the previous sound. Just take the 11.PianoReverb patch from the FFT examples and turn the reverb time up to lots.
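Very roughly, that frequency-domain freeze amounts to holding the per-bin maximum magnitude and resynthesising from it. An offline numpy sketch of the bin-maximum idea (this is not taken from 11.PianoReverb itself, just the principle):

```
import numpy as np

def spectral_freeze(x, framesize=2048, hop=512):
    # Accumulate the per-bin maximum magnitude over all frames of x,
    # then resynthesise one frame of that "frozen" spectrum with random phases.
    window = np.hanning(framesize)
    frozen = np.zeros(framesize // 2 + 1)
    for start in range(0, len(x) - framesize, hop):
        frame = np.fft.rfft(window * x[start:start + framesize])
        frozen = np.maximum(frozen, np.abs(frame))      # keep the peaks
    phases = np.exp(2j * np.pi * np.random.rand(len(frozen)))
    return np.fft.irfft(frozen * phases, n=framesize)
```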
Frozen reverb
See /pd/doc/3.audio.examples/G08.reverb.pd -- if you set the feedback to 100%, it lasts forever. The problem is that if you keep feeding audio into it, it gets louder and louder...
Attached is a really simple reverb abstraction based on G08.reverb.pd, next post will be an example of adjusting the feedback level so it doesn't blow up.
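The attachment is a Pd abstraction, but the feedback-adjustment idea can be sketched in a few lines of Python as a toy single-delay "reverb" (the sample counts and the 0.7 feedback value are made up for illustration):

```
import numpy as np

def freezing_delay(x, delay=4800, freeze_after=48000, live_feedback=0.7):
    # Toy recirculating delay: moderate feedback while input is coming in,
    # then feedback is raised to 1.0 and the input is cut, so the loop
    # sustains indefinitely without getting louder and louder.
    y = np.zeros(len(x))
    for n in range(len(x)):
        feedback = live_feedback if n < freeze_after else 1.0
        recirculated = y[n - delay] if n >= delay else 0.0
        dry = x[n] if n < freeze_after else 0.0
        y[n] = dry + feedback * recirculated
    return y
```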
Good reverb?
hi all
i'm looking for a simple stereo reverb. i've been using the abstraction from the reverb tutorial patch that comes with pd but it distorts a little. in the patch it says that some of the included libraries contain nicer reverb implementations. i am using pd extended. can anyone suggest a good reverb lib/object?
cheers
nay


