Acreil - I'll admit that some of that was a little over my head, but some aspects of it sound a little like what ZynAddSubFX (a softsynth) does in its 'padsynth' algorithm? Basically it takes a simple waveform and spreads/"smears" each of its harmonics, in a Gaussian distribution, over a range of frequencies, with some slightly complex-looking frequency-domain mathematics. As you suggest, it's quite similar to a chorus effect, really...
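If it helps, here's roughly what I understand the Gaussian-smearing part to look like, in numpy terms. This is just a sketch of the published Padsynth idea, not ZynAddSubFX's actual code; the fundamental, bandwidth and harmonic count are made-up values:

```python
import numpy as np

sr = 44100.0
N = 1 << 15            # number of positive-frequency bins (illustrative size)
f0 = 220.0             # fundamental (assumed)
bw_hz = 30.0           # bandwidth of each Gaussian bump (assumed)

bins = np.arange(N) * (sr / (2 * N))   # approximate bin centre frequencies

# Build the amplitude spectrum: each harmonic becomes a Gaussian bump,
# wider for higher partials, with a simple 1/h amplitude rolloff.
amp = np.zeros(N)
for h in range(1, 32):
    amp += (1.0 / h) * np.exp(-(((bins - f0 * h) / (bw_hz * h)) ** 2))

# Random phases + inverse FFT = one perfectly looping wavetable.
phase = np.random.uniform(0.0, 2.0 * np.pi, N)
table = np.fft.irfft(amp * np.exp(1j * phase))
table /= np.abs(table).max()
```

Looping `table` end to end is seamless, since the IFFT output is one period by construction.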
I'll have to admit it's partly over my head too. I don't really know that much about frequency domain stuff. I just read some papers, made some connections between them, and dicked around with example I09. But I guess I hit on something a little more unique than I expected. I'll upload the patch if I get it cleaned up a little more. It should be more efficient too...
I think as far as Padsynth goes, you can imagine playing a sound into a reverberator with an infinite decay time, then sampling and looping the output. Only it leaves out the reverberator (and the coloration, etc. that it can add) and produces the result (randomized phase) directly. The output is inherently periodic, so it loops perfectly with no additional effort. I think Image Line's Ogun uses the Padsynth algorithm, and NOTAM's Mammut program can do much the same thing (I think it actually illustrates the effect really nicely). Padsynth does smear out the frequency components a little (I guess windowing sorta does that for the STFT...), but the phase randomization is the important part if you're processing arbitrary audio input.
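In numpy terms, the Mammut-style "FFT the whole file, randomize the phases, inverse FFT" trick is something like this (just a sketch; the function name is mine):

```python
import numpy as np

def phase_randomize(x):
    """Keep the magnitude spectrum of x, replace every phase with a
    random one, and resynthesize. The result sounds like the input
    frozen/smeared in an infinite reverb, and it loops seamlessly."""
    spec = np.fft.rfft(x)
    phases = np.random.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0          # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0     # keep the Nyquist bin real too
    out = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))
    return out / (np.abs(out).max() + 1e-12)
```

Because the magnitude spectrum is untouched, the output keeps the input's overall coloration; all the "reverb" comes from scrambling the phases.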
Here are some samples of my patch (I probably should have posted them before):
http://www.mediafire.com/?6nf525vnv1xew58 (wet and dry mix)
(this actually makes for surprisingly good test material since it's mono, pretty dry, and includes vocals, guitar and transient sounds)
http://www.mediafire.com/?c62tr5ox07r4tdc (wet only, you may be able to guess the original)
Both use time-variant decorrelation, feedback (reverb), pitch shifting, nonlinear filtering and random filtering. But they don't demonstrate some of the more extreme effects, since those favor very, very slow and sparse source material. I didn't use any other effects, just the one patch.
You can pitch shift the FFT data with [vd~], just as in the time domain, only it works the opposite way, i.e. stretching the spectrum shifts the pitch up and compressing it shifts it down. [rifft~] ignores the second (redundant) half of the frame, but if you're doing extreme pitch shifting you have to take care not to spill over into the next frame. Or you can just write the IFFT's output to a table and transpose it like you would any sampled data (as Padsynth does). I'm no expert in this department either; I just messed around until it worked. And I'm not entirely sure what you're going for, either.
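For what it's worth, the "stretch the spectrum to shift up" behavior is easy to show in numpy (sketch only; the linear interpolation here stands in for what reading the FFT frame with [vd~] does, and anything pushed past the last bin just gets dropped, which is the spill-over issue):

```python
import numpy as np

def spectral_pitch_shift(x, ratio):
    """Shift pitch by resampling the half-spectrum: ratio > 1 stretches
    the spectrum (pitch up), ratio < 1 compresses it (pitch down)."""
    spec = np.fft.rfft(x)
    n = len(spec)
    src = np.arange(n) / ratio       # where each output bin reads from
    idx = src.astype(int)
    frac = src - idx
    valid = idx < n - 1              # content past the last bin is discarded
    out = np.zeros(n, dtype=complex)
    out[valid] = (spec[idx[valid]] * (1.0 - frac[valid])
                  + spec[idx[valid] + 1] * frac[valid])
    return np.fft.irfft(out, n=len(x))
```

So a sine sitting at bin 10 comes out at bin 20 with a ratio of 2, and at bin 5 with a ratio of 0.5.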