-
elden
Ok, thanks. Great work! That should do the job I need it for. I'll get back to you if I run into some hurdles. Thanks a lot!
-
elden
@alexandros By the way: do you have a diagram of the internal operations of a single neuron inside [neuralnet] for me to check out?
-
elden
Yes, it does. Now that makes me think about leaving the FFT realm and turning to [TimbreID] to directly extract features from a sample over time and compare them using [neuralnet]. Then it's no longer an FFT issue, but simply a comparison of several X-Y curves. I'll see what I can achieve. Thanks for your help!
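Sketched outside Pd, the X-Y-curve comparison I mean might look like this (a minimal numpy illustration with hypothetical names, not timbreID's actual method):

```python
import numpy as np

def curve_distance(curve_a, curve_b):
    """Mean squared distance between two feature curves of equal length.
    Stands in for comparing two feature trajectories frame by frame."""
    a = np.asarray(curve_a, dtype=float)
    b = np.asarray(curve_b, dtype=float)
    return float(np.mean((a - b) ** 2))

# Two hypothetical brightness-over-time curves sampled at the same frame rate
target = [0.9, 0.7, 0.5, 0.3, 0.2]
candidate = [0.8, 0.7, 0.4, 0.3, 0.1]
print(curve_distance(target, candidate))  # small value = similar evolution
```

Both curves have to be extracted with the same frame rate and length for a frame-by-frame comparison like this to make sense.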
-
elden
Ok, thanks a lot! Yes, the audio signal will be recorded and encapsulated as a sample of the same length as the target sample. No recurrent network needed. I just don't get how I could distribute the inputs of the network across the FFT waterfall spectrum, you know? Every single input has to detect an amplitude value at a certain position in time and frequency, and I don't know how [neuralnet] does that. Or is that already implemented somehow?
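The way I picture the mapping (just my own sketch in Python, not anything [neuralnet] actually does internally): the network only sees a flat list of inputs, so the time-frequency grid has to be flattened into one vector first.

```python
import numpy as np

def waterfall_to_inputs(magnitudes):
    """Flatten a (frames x bins) magnitude matrix into one input vector,
    so input number i*bins+j carries the amplitude at frame i, bin j."""
    m = np.asarray(magnitudes, dtype=float)
    frames, bins = m.shape
    return m.reshape(frames * bins)

# toy waterfall: 3 analysis frames x 2 frequency bins
spec = np.array([[0.1, 0.5],
                 [0.2, 0.4],
                 [0.0, 0.9]])
print(waterfall_to_inputs(spec))  # one amplitude per network input
```

With real FFT sizes this is where the "thousands of inputs" come from: frames times bins values, one per input neuron.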
-
elden
@alexandros I checked out your [neuralnet] external and it looks really promising. Now, let's say I wanted to create a complex perceptron that recognizes the spectral evolution of a recorded sound of a certain length with high FFT-bin resolution over time and classifies it as a certain sound "debit", so to speak. That would require the input layer to have thousands of inputs scanning for the amplitude at certain positions in the frequency domain as well as in the time domain of the FFT spectrum. Is that possible with [neuralnet]?
-
elden
@lacuna The flucoma guys referred me to their forum "discourse.flucoma.org", where, I just noticed, I had already posted my concern a couple of weeks ago. I had forgotten about them, as nobody could help me there. I'll try [neuralnet] now, but I'm pretty sure I won't understand a bit of how to use it. Help with it is highly appreciated, @alexandros.
-
-
elden
Hello,
in order to patch a target drive for my synth-sound-breeding genetic algorithm "Ewolverine", I need to measure the tonal distance of different sounds to a target sample. Is there a way to do this with an artificial neural network in Pd? If yes, who knows how and could help me?
Regards
Elden
-
elden
I noticed that breeding MIDI CC# parameter values under 10 is very difficult, as minimal values tend to blend back into the population, but minimal values are important for envelope parameters like attack and decay. I'll have to patch an option to weight the randomization of parameter values for each CC# so they tend to generate smaller numbers.
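The weighting I have in mind could work like this (a Python sketch of the idea only; the function name and the exponent trick are mine, not anything in Ewolverine yet):

```python
import random

def biased_cc(weight, rng=random):
    """Random MIDI CC value in 0..127. A weight > 1 skews the draw toward
    small values (useful for attack/decay); weight = 1 is uniform."""
    return int(127 * rng.random() ** weight)

random.seed(0)
skewed = [biased_cc(4) for _ in range(1000)]
uniform = [biased_cc(1) for _ in range(1000)]
# with weight 4, roughly half the draws land under 10
print(sum(v < 10 for v in skewed), sum(v < 10 for v in uniform))
```

A per-CC# weight table would then let each parameter get its own bias.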
And a much bigger population of at least 12 sounds would be helpful. Will take a while...
-
elden
Exactly, but you don't have to create a loop if you route a keyboard into your synth. You just have to make sure that the synth only gets note events from your keyboard, not the CCs, as those are coming from Ewolverine.
You can also control Ewolverine with your keyboard. Just scroll to the right to read the MIDI implementation chart.
-
-
elden
@alexandros
it could very well be that these objects work. Being a Pd rookie, all my patches depend on external help. Thank you very much.
-
elden
Hello again,
I need to compare audio against a target sample. I heard that MFCC error calculation does a good job in this field. Is there any object in Pd that does that for audio recordings that are multiple seconds long?
regards
-
elden
Hello everyone,
as most of you know, I'm continuously developing my Ewolverine patch, with which you can genetically breed sounds out of your MIDI gear.
In order to automatically approximate synthesizer parameters, Ewolverine must compare different synth sounds to a target sample. The problem is that sounds match the target sample differently depending on the comparison criterion.
Examples:
Case 1:
If the selection criterion is the length of the synthesized sounds in comparison to the target, the selection mechanism may pick synth parameters that generate sounds as long as the target sample, but may pay no attention to its timbre.
Case 2:
If the selection criterion is the onset, the generated sounds may all have equal onsets, but differ in length and timbre.
What I need is a way of multi-objective optimization that takes all criteria into account and tells Ewolverine's selection mechanism which synthesized sound is, overall, nearest to the target sample.
Is there anything in Pd that I could use? Do you have any idea what I could do, or do you know anyone who could help me?
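The simplest scheme I can think of is a weighted sum over the per-criterion distances; here is a minimal Python sketch of that idea (the names, weights, and numbers are made up for illustration, this is not Ewolverine's actual selection code):

```python
def overall_distance(criteria, weights):
    """Weighted-sum scalarization of several distances to the target.
    Each entry in `criteria` is one criterion's distance (0 = perfect
    match), already normalized to 0..1; `weights` says how much each
    criterion matters. The candidate with the lowest result wins."""
    assert len(criteria) == len(weights)
    total = sum(w * c for c, w in zip(criteria, weights))
    return total / sum(weights)

# hypothetical candidate: close in length, off in timbre, close in onset
# weights: timbre counts double
print(overall_distance([0.1, 0.6, 0.2], [1.0, 2.0, 1.0]))  # 0.375
```

Normalizing each criterion first matters; otherwise a criterion measured in, say, milliseconds would swamp one measured in 0..1 spectral units. A true Pareto-front approach would be fancier, but a weighted sum is easy to patch.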
-
elden
EWOLVERINE v.7.1 beta by Henry Dalcke 6.pd
...changed some default settings of the Target Drive and corrected the Help-subpatch a little...
-
elden
Good thought, thanks. Maybe it's a start. I'll check it out as soon as I can.
-
elden
Hey guys,
I just checked this little thing
Seems to me that they switch between different wavetable oscillators within the span of one wavelength at the key's frequency.
In general, one could easily do this using an audio input switch that cycles through the different inputs at the rate of the wave frequency of the triggered MIDI note. If you then manipulate the different audio streams connected to the different audio inputs of the switch, you can edit the different waveform segments separately.
My question: how can I switch between different audio inputs at the rate of a MIDI note frequency?
Or might it be less complicated to just concatenate different wavetables into one wavetable whose length equals one period at the note frequency? How do you think Waverazor works?
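The concatenation variant is easy to try offline; here is a small numpy sketch of what I mean (my own toy tables and names, nothing to do with Waverazor's real engine):

```python
import numpy as np

def concat_wavetables(tables):
    """Join several single-cycle wavetables end to end. Reading the result
    once per N note periods (N = number of tables) plays each waveform
    segment in turn, like switching oscillators every cycle."""
    return np.concatenate([np.asarray(t, dtype=float) for t in tables])

# two single-cycle waveforms of 64 samples each
saw = np.linspace(-1, 1, 64, endpoint=False)
square = np.sign(np.sin(2 * np.pi * np.arange(64) / 64))
combined = concat_wavetables([saw, square])
print(combined.shape)  # one table holding both cycles back to back
```

To keep the perceived pitch, the combined table would have to be read at the note frequency divided by the number of segments, which is probably where the tricky part in Pd lies.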