-
elden
posted in technical issues
@alexandros I've never heard of this website. Maybe I'm living on the moon or something... Thanks! Got it now.
-
elden
posted in technical issues
@whale-av Oh, great! Thanks a lot! And where do I get the whole library?
-
elden
posted in technical issues
Hi,
I used [list-random] in an old patch of mine, but I don't know where I got it from. Does anyone know which external it is part of? Thanks
-
elden
posted in technical issues
@lacuna I'm afraid I won't find anything there that's useful in my scenario...
-
elden
posted in technical issues
@nicnut No, I need to calculate warping paths between two audio files to measure the tonal distance between them. Not by ear. Please google "dynamic time warping".
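For anyone landing here and wondering what the warping-path calculation actually does, here is a minimal numpy sketch of classic dynamic time warping between two feature sequences (for example one spectrum or MFCC frame per analysis window). It only illustrates the algorithm, not any particular Pd external, and the feature extraction itself is left out:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two feature sequences.

    seq_a: (n, d) array, one d-dimensional feature frame per row
    seq_b: (m, d) array, may have a different number of frames
    Returns the accumulated cost of the optimal warping path.
    """
    n, m = len(seq_a), len(seq_b)
    # Local cost: Euclidean distance between every pair of frames.
    cost = np.linalg.norm(seq_a[:, None, :] - seq_b[None, :, :], axis=2)

    # Accumulated cost matrix with the usual step pattern
    # (match, insertion, deletion).
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j - 1],
                                                 acc[i - 1, j],
                                                 acc[i, j - 1])
    return acc[n, m]

# Two toy "sounds": similar curves, slightly different lengths.
a = np.sin(np.linspace(0, 6, 50))[:, None]
b = np.sin(np.linspace(0, 6, 58))[:, None]
print(dtw_distance(a, b))   # small value -> tonally similar
```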
-
elden
posted in technical issues
@nicnut In my case, dynamic time warping is used to measure the tonal similarity of audio files with slightly different lengths.
-
elden
posted in technical issues
Hi, in order to compare sounds with each other, I need dynamic time warping. I know this has been possible in Pd for at least ten years, but I don't know which externals I have to use for it. Any suggestions?
-
elden
posted in technical issues
Ok, thanks. Great work! That should do the job I need it for. I'll get back to you if I run into any hurdles. Thanks a lot!
-
elden
posted in technical issues
@alexandros By the way: do you have a diagram of the internal operations of a single neuron inside [neuralnet] that I could check out?
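While waiting for a proper diagram: assuming [neuralnet] follows the usual multilayer-perceptron model (I have not checked its source), each neuron boils down to a weighted sum of its inputs plus a bias, passed through an activation function, roughly like this:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed through an activation function (sigmoid here)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))      # sigmoid activation

# Example: three inputs feeding a single neuron.
print(neuron([0.2, 0.5, 0.1], [0.8, -0.3, 1.2], bias=0.05))
```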
-
elden
posted in technical issues
Yes, it does. Now that makes me think about leaving the FFT realm and turning to [TimbreID] to directly extract features from a sample over time and have them compared using [neuralnet]. Then it's no longer an FFT issue, but simply a comparison of several X-Y curves. I'll see what I can achieve. Thanks for your help!
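To make the "comparison of X-Y curves" idea concrete, here is a tiny numpy sketch that compares two per-frame feature trajectories (say, spectral centroid curves exported from a timbre analysis) with a plain mean squared difference. The numbers and names are made up, and a trained [neuralnet] would of course replace the simple distance used here:

```python
import numpy as np

def curve_distance(curve_a, curve_b):
    """Compare two feature trajectories of equal length
    (one feature value per analysis frame), frame by frame."""
    curve_a = np.asarray(curve_a, dtype=float)
    curve_b = np.asarray(curve_b, dtype=float)
    return float(np.mean((curve_a - curve_b) ** 2))

# Example: two hypothetical spectral-centroid curves, one value per frame.
target    = [410.0, 620.0, 880.0, 700.0, 520.0]
candidate = [400.0, 640.0, 860.0, 710.0, 500.0]
print(curve_distance(target, candidate))   # small -> similar evolution
```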
-
elden
posted in technical issues
Ok, thanks a lot! Yes, the audio signal will be recorded and encapsulated as a sample of the same length as the target sample. No recurrent network needed. I just don't get how I could distribute the inputs of the network across the FFT waterfall spectrum, you know? Every single input has to detect an amplitude value at a certain position in time and frequency, and I don't know how [neuralnet] does that. Or is that already implemented somehow?
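Assuming [neuralnet] simply expects a flat list of input values (I have not verified this for the external; it is how most plain perceptron-style tools work), the usual trick is to flatten the waterfall yourself so that each network input corresponds to one fixed (frame, bin) cell:

```python
import numpy as np

def flatten_spectrogram(spectrogram):
    """Turn an FFT waterfall (frames x bins magnitude matrix) into one
    flat input vector: input k = frame (k // n_bins), bin (k % n_bins)."""
    frames, n_bins = spectrogram.shape
    return spectrogram.reshape(frames * n_bins)

# Toy waterfall: 4 analysis frames x 5 frequency bins.
spec = np.arange(20, dtype=float).reshape(4, 5)
inputs = flatten_spectrogram(spec)
print(len(inputs))        # 20 network inputs
print(inputs[2 * 5 + 3])  # amplitude at frame 2, bin 3
```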
-
elden
posted in technical issues
@alexandros I checked out your [neuralnet] external and it looks really promising. Now, let's say I wanted to create a complex perceptron that recognizes the spectral evolution of a recorded sound of a certain length with high FFT bin resolution over time and classifies it as a certain sound "debit", so to speak. That would require the input layer to have thousands of inputs scanning for the amplitude at certain positions in the frequency domain as well as in the time domain of the FFT spectrum. Is that possible with [neuralnet]?
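To put a number on "thousands of inputs", here is a quick back-of-the-envelope calculation with assumed analysis settings (44.1 kHz, FFT size 1024, 50% overlap, a one-second sound); the real figures depend entirely on the resolution chosen:

```python
sample_rate = 44100          # Hz
fft_size    = 1024           # samples per analysis window
hop_size    = fft_size // 2  # 50% overlap
duration    = 1.0            # seconds of recorded sound

n_bins   = fft_size // 2 + 1                       # 513 usable bins
n_frames = int(duration * sample_rate / hop_size)  # ~86 frames
n_inputs = n_bins * n_frames

print(n_bins, n_frames, n_inputs)   # 513 86 44118 input neurons
```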
-
elden
posted in technical issues
@lacuna The flucoma guys referred me to their forum "discourse.flucoma.org", in which, I just noticed, I had already posted my concern a couple of weeks ago. I had forgotten about them, as nobody could help me there. I'll try [neuralnet] now, but I'm pretty sure I won't understand a bit of how to use it. Help with it is highly appreciated, @alexandros.
-
elden
posted in technical issues
Hello,
in order to patch a target drive for my synth sound breeding genetic algorithm "Ewolverine", I need to measure the tonal distance of different sounds to a target sample. Is there a way to do this with an artificial neural network in Pd? If yes, who knows how and could help me?
Regards
Elden
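For readers skimming this thread: the "target drive" is essentially a fitness function for the genetic algorithm, where every bred sound is scored by its tonal distance to the target sample and closer sounds survive. A toy sketch of just that scoring step (the distance function and feature lists are placeholders, not any specific Pd object):

```python
def fitness(candidate_features, target_features, distance):
    """Target drive: the smaller the tonal distance to the target,
    the higher the fitness of a bred sound."""
    return -distance(candidate_features, target_features)

# Example with a trivial distance on two toy feature lists.
def abs_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

target = [0.2, 0.8, 0.5]
print(fitness([0.25, 0.75, 0.5], target, abs_distance))  # close -> high fitness
print(fitness([0.9, 0.1, 0.0], target, abs_distance))    # far   -> low fitness
```
-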
elden
posted in patch~
I noticed that breeding MIDI CC# parameter values under 10 is very difficult, as minimal values tend to blend back into the population, but minimal values are important for envelope parameters like attack and decay. I'll have to patch an option to weight the randomization of parameter values for each CC# so they tend to generate smaller numbers (see the sketch after this post).
And a much bigger population of at least 12 sounds would be helpful. Will take a while...
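One simple way to weight the randomization toward small values is to raise a uniform random number to a power greater than 1 before scaling it to the CC range, with a per-CC exponent acting as the weight. A small sketch of the idea (the exponent values are made up):

```python
import random

def biased_cc_value(weight=3.0, cc_max=127):
    """Uniform random in [0, 1], raised to 'weight' and scaled to 0..cc_max.
    weight = 1.0 is plain uniform; larger weights favor small values."""
    return int(round((random.random() ** weight) * cc_max))

# E.g. attack/decay CCs could get a strong bias, others stay uniform.
print([biased_cc_value(weight=4.0) for _ in range(8)])  # mostly small numbers
print([biased_cc_value(weight=1.0) for _ in range(8)])  # spread over 0..127
```
-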
elden
posted in patch~
Exactly, but you don't have to create a loop if you route a keyboard into your synth. You just have to make sure that the synth only gets note events from your keyboard - not the CCs, as those come from Ewolverine (a minimal filtering sketch follows below).
You can also control Ewolverine with your keyboard. Just scroll to the right to read the MIDI implementation chart.
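Outside of Pd, that routing rule is easy to state in code: forward everything from the keyboard to the synth except control changes, because those come from Ewolverine. A rough illustration using the Python mido library (the port names are made up; in Pd you would patch the equivalent with [notein]/[noteout]):

```python
import mido

# Hypothetical port names; list yours with mido.get_input_names().
keyboard = mido.open_input("My Keyboard")
synth    = mido.open_output("My Synth")

for msg in keyboard:
    # Pass notes (and other channel messages) through,
    # but drop CCs so they don't collide with Ewolverine's CCs.
    if msg.type != "control_change":
        synth.send(msg)
```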