Convolution improvement
Hi guys,
I'm trying to vocalize whispered speech using FFT convolution; starting from two distinct inputs (a sampled vocal and a whisper recording), I have seen that it is possible to generate a sort of normal-sounding vocal.
I wonder if there is a way to improve the quality of the convolution process so that the result sounds closer to the timbre of the input vocal than it does now.
Thanks in advance.
wts.zip
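Not the patch from the zip, but for reference, the core operation is only a few lines in Python/NumPy: convolving two signals by multiplying their spectra. The `whisper` and `vocal` arrays below are random stand-ins for the actual recordings.

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution of two 1-D signals via the FFT."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()      # next power of two, for speed
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b, nfft)
    # multiplication in the frequency domain == convolution in time
    return np.fft.irfft(A * B, nfft)[:n]

# stand-ins for the actual whisper and vocal recordings
whisper = np.random.randn(4096)
vocal = np.random.randn(1024)
out = fft_convolve(whisper, vocal)
```

One practical consequence: the output takes on the spectral coloring of both inputs at once, which is why the result only partly resembles the vocal's timbre.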
Perfect Filter / Square Shape Filter?
@Kitty-Dyson it is also possible to do this with an FFT, as gsagostinho does in the link whale-av shared. I think this is equivalent to convolution with a windowed-sinc FIR filter (though I'm unsure whether the timing would be identical, with the overlapping windows and all).
edit: here's more info on how to use the overlap-add fft to implement windowed sinc convolution if you're interested:
https://ccrma.stanford.edu/~jos/sasp/Overlap_Add_OLA_STFT_Processing.html
https://www.dspguide.com/ch18/2.htm
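A rough Python/NumPy sketch of what those two links describe: a windowed-sinc lowpass applied by overlap-add FFT convolution. Block size, tap count and cutoff below are arbitrary illustration values, not anything from the linked pages.

```python
import numpy as np

def windowed_sinc(fc, numtaps):
    """Hamming-windowed sinc lowpass; fc is the normalized cutoff (0..0.5)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n)
    h *= np.hamming(numtaps)
    return h / h.sum()                    # unity gain at DC

def overlap_add(x, h, block=256):
    """Convolve x with FIR h by FFT-processing x in blocks (overlap-add)."""
    m = len(h)
    nfft = 1 << (block + m - 2).bit_length()   # >= block + m - 1
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        yseg = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        # each block's convolution tail overlaps the next block's head
        y[start:start + len(seg) + m - 1] += yseg[:len(seg) + m - 1]
    return y

y = overlap_add(np.random.randn(10000), windowed_sinc(0.1, 101))
```

The result is sample-identical to direct convolution with the same FIR, which answers the timing question for the non-overlapping-input case; an STFT with overlapping *analysis* windows is a different scheme.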
Fast Walsh-Hadamard Transform
@paulspignon Wow, that's crazy! What were you working on, if I may ask?
About music applications, do you know what the guy meant?
It seems to me you could achieve a lot of the same things as with the FFT, but with different-sounding artifacts? That could be interesting.
Also, since you can achieve convolution with it, it could greatly improve the performance of, say, partitioned convolution algorithms, I don't know.
Anyway, I really hope we can prove this Signal Processing Guru wrong!
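For anyone curious, the transform itself is only a few lines. Here's an unnormalized fast Walsh-Hadamard transform in Python (a sketch, not tuned for speed); applying it twice returns the input scaled by N, since the Hadamard matrix satisfies H·H = N·I.

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform (natural order).
    len(a) must be a power of two."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # butterfly: sum and difference, no multiplications at all
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

The complete absence of multiplications is exactly why it is so much cheaper than an FFT, and also why its "basis functions" are square waves, hence the different-sounding artifacts.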
ELSE 1.0 beta 17 & Live Electronics Tutorial 1.0 beta 7 Released
17th Beta release of ELSE 1.0 (needs Pd 0.49-0 or above)! See https://github.com/porres/pd-else/releases/tag/v1.0-beta17 - get binaries also directly from Pd (Help => Find Externals).
I haven't been posting every release here, so you may want to check the earlier changelogs to catch up on what's new. The main new thing is that there's a [conv~]* object that performs partitioned convolution, and with that my Live Electronics Tutorial now depends 100% on the ELSE library, which is what makes this release special! Check it out: https://github.com/porres/Live-Electronics-Tutorial/releases/tag/v-1.0beta-7
* The [conv~] object is a partitioned convolution abstraction for now, but a compiled object should come along sooner or later.
cheers
Messages and numbers to ascii numbers and viceversa with Pyext.
@ingox first of all, thanks so much for the vanilla toascii abstraction. It works wonders! Other than thanking you I am here to ask you two things:
- I find it mind-blowing that to convert a symbol to a list (Unicode, not ASCII) in vanilla we have to go through that whole convoluted process, while the zexy library's [symbol2list] object did it elegantly in one move. Most probably all the convoluted work was done inside the object itself, but still: am I missing something? Why doesn't vanilla offer an easy and straightforward way to convert a symbol exactly as-is to a Unicode list? Just thinking out loud; I've been banging my head for the last hour trying to find an easier way.
- If I were to use your toascii.pd abstraction in an Arduino communication example patch to share on GitHub, how would you like me to reference you in the patch? Do you have a standard ref. to stick in a comment inside the patch? I tried to see if you had already uploaded it on GitHub but it seems like you didn't!
Thanks again!
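For what it's worth, what zexy's [symbol2list] does (a symbol in, its code points out) is a one-liner in a general-purpose language; here is the idea in Python, just to illustrate what the vanilla abstraction has to reproduce message by message.

```python
def to_codepoints(s):
    """Symbol -> list of Unicode code points (like zexy's [symbol2list])."""
    return [ord(c) for c in s]

def from_codepoints(codes):
    """Inverse: list of code points -> symbol."""
    return "".join(chr(c) for c in codes)

print(to_codepoints("Pd"))   # [80, 100]
```

Vanilla has no equivalent single object, which is why the abstraction has to iterate over the symbol with list and message operations instead.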
Scrambled Hackz - how did he do it?
@oystersauce very nice project. it seems to be made with the timbreID library.
i would also like to know how to do something like that.
regarding your questions:
1:
the patch is mentioned in this paper: http://williambrent.conflations.com/papers/timbreID.pdf
"4.2. Target-based Concatenative Synthesis
Some new challenges arise in the case of comparing a constant stream of input features against a large database in real-time. The feature database in the vowel recognition example only requires about 20 instances. To obtain interesting results from target-based concatenative synthesis, the database must be much larger, with thousands rather than dozens of instances. This type of synthesis can be achieved using the systems mentioned in section 1, and is practiced live by the artist sCrAmBlEd?HaCkZ! using his own software design [5]. The technique is to analyze short, overlapping frames of an input signal, find the most similar sounding audio frame in a pre-analyzed corpus of unrelated audio, and output a stream of the best-matching frames at the same rate and overlap as the input."
2:
Just discovered that Pure Data can read CSV files with [binfile] from mrpeach.
Perhaps that is a way to read an external database?
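The technique quoted in the paper can be sketched in a few lines of Python/NumPy. This is a naive reading of it (magnitude spectra as features, brute-force nearest neighbour), not the timbreID or sCrAmBlEd?HaCkZ! implementation:

```python
import numpy as np

def best_match_resynthesis(target, corpus, frame=1024, hop=512):
    """For each overlapping frame of `target`, find the corpus frame with
    the most similar magnitude spectrum and overlap-add it into the output."""
    win = np.hanning(frame)
    # pre-analyze the corpus: windowed frames plus their spectral features
    starts = range(0, len(corpus) - frame, hop)
    frames = np.array([corpus[s:s + frame] * win for s in starts])
    feats = np.abs(np.fft.rfft(frames, axis=1))
    out = np.zeros(len(target) + frame)
    for s in range(0, len(target) - frame, hop):
        f = np.abs(np.fft.rfft(target[s:s + frame] * win))
        idx = np.argmin(np.sum((feats - f) ** 2, axis=1))  # nearest neighbour
        out[s:s + frame] += frames[idx]                    # overlap-add
    return out[:len(target)]
```

A real-time version would replace the brute-force search with a tree or hash index, and better features (MFCCs, as timbreID provides) would match perceived timbre much more closely than raw magnitude spectra.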
vanilla partitioned convolution abstraction
Ok, here's the working vanilla partitioned convolution patch!
https://www.dropbox.com/s/05xl7ml171noyjq/convolution~.zip…
There are two subpatches for testing: one is light, with a relatively big window partition (1024) and a short impulse response (2 secs).
The other is quite heavy: an 8-second IR with a window size of 512! This one takes about 57-58% of my CPU power, and I'm on a latest-generation MacBook Pro (2.6 GHz processor)... and I need to increase the Delay (msec) from 5 to 10 in the audio settings, otherwise I get terrible clicks!
William Brent's convolve is ridiculously more efficient: the same parameters take about 14% of my CPU power, and I can use a delay of 5 ms in the audio settings.
But anyway, this is useful for teaching, and for apps that implement a light convolution reverb (short IR / not-too-short window) and need pure vanilla (libpd/Camomile and such).
PS: known bug, for some reason you may need to recreate the object before the sound comes out. I have no idea yet why...
Cheers!
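For anyone who wants to see the algorithm outside of Pd, here is a minimal uniform partitioned convolution in Python/NumPy. It's an unoptimized sketch (a real-time implementation keeps a frequency-domain delay line and accumulates spectra before a single inverse FFT, which is where most of the CPU savings come from):

```python
import numpy as np

def partitioned_convolve(x, ir, part=512):
    """Uniform partitioned convolution: the IR is split into `part`-sample
    blocks, each FFT-convolved with the input blocks and summed at the
    right delay, so latency is one block instead of the whole IR length."""
    nfft = 2 * part                       # room for part + part - 1 samples
    # pre-compute the FFT of every IR partition
    parts = [np.fft.rfft(ir[i:i + part], nfft)
             for i in range(0, len(ir), part)]
    y = np.zeros(len(x) + len(ir) + 2 * part)
    for s in range(0, len(x), part):
        X = np.fft.rfft(x[s:s + part], nfft)
        for k, H in enumerate(parts):
            # partition k contributes k*part samples later than partition 0
            seg = np.fft.irfft(X * H, nfft)
            y[s + k * part : s + k * part + nfft] += seg
    return y[:len(x) + len(ir) - 1]
```

The abstraction-vs-compiled CPU gap reported above is expected: doing the spectral multiply-accumulate with message-rate patching costs far more than the same loop in C.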
Purr Data and timbreID, can't load library
Hi all,
I'm trying to use the timbreID library in Purr Data. I downloaded the Mac release here (http://williambrent.conflations.com/pages/research.html) and added it to the startup search paths, but to no avail: the Pd window shows the error 'timbreID: can't load library' every time. I've been scratching my head and reading forums all afternoon but can't find a definite answer. Is there something I'm missing? Is it a compilation problem?
Thanks in advance for any advice you can offer!
streamstretch~ abstraction not working
Tried using the streamstretch~ abstraction by William Brent (http://williambrent.conflations.com/pages/research.html) recently.
However, when I load the patch from the associated help file in Pd-extended, it won't work. I get the following error messages:
clone ./lib/streamstretch-buf-writer-abs 100 2415
... couldn't create
text define $0-streamstretch-chord-text
... couldn't create
text get $0-streamstretch-chord-text
... couldn't create
text tolist $0-streamstretch-chord-text
... couldn't create
text fromlist $0-streamstretch-chord-text
... couldn't create
clone ./lib/streamstretch-buf-writer-abs 100 1536
... couldn't create
Tried loading the patch in Pd Vanilla and I still get error messages:
clone ./lib/streamstretch-buf-writer-abs 100 2415
... couldn't create
clone ./lib/streamstretch-buf-writer-abs 100 1004
... couldn't create
I'm just wondering if anyone else gets the same kinds of error messages when they try to load this abstraction.
Spectral convolution
Hi everyone,
Here is a patch I developed for a specific project, but I think it's quite interesting in itself to get strange soundscapes and so I thought I'd share it.
The idea developed from this spectral delay patch, with the difference that the selected frequency bands of the input signal, rather than being delayed, are sent to two convolution reverbs (which can be set to work in series or in parallel). With the right kind of impulse responses, and with the ability to select which part of the spectrum is being convolved, the results can be very interesting, though hard to predict most of the time.
The patch can save presets, which include all settings, the frequency band tables and the impulse responses.
spectral.convolution1.0.zip
(requires zexy and bsaylor externals.)
Here is a recording I made while working on it (with the addition of a pitched delay on the output)
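As a rough illustration of the band-selection idea (not the actual patch, which uses band tables and two reverbs in series/parallel), here is a single-band version in Python/NumPy: a brick-wall FFT mask picks the band that gets convolved with the impulse response, while the rest of the spectrum passes through dry.

```python
import numpy as np

def band_convolve(x, ir, sr, lo, hi):
    """Convolve only the lo..hi Hz band of x with impulse response `ir`;
    the remainder of the spectrum is passed through unprocessed."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    mask = (freqs >= lo) & (freqs < hi)       # brick-wall band selection
    band = np.fft.irfft(X * mask, len(x))     # the part that gets convolved
    n = len(x) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()
    wet = np.fft.irfft(np.fft.rfft(band, nfft) * np.fft.rfft(ir, nfft), nfft)[:n]
    dry = x - band                            # everything outside the band
    wet[:len(x)] += dry
    return wet

# hypothetical usage: convolve only 500-2000 Hz with a 1-second noise "IR"
sr = 44100
y = band_convolve(np.random.randn(sr), np.random.randn(sr), sr, 500, 2000)
```

With a smoother mask (or the patch's drawable band tables) instead of a brick wall, the transitions between wet and dry regions of the spectrum become much less audible.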