• elden

    @lacuna I'm afraid I won't find anything there that's useful in my scenario...

    posted in technical issues
  • elden

    @nicnut No, I need to calculate warping paths between two audio files to measure the tonal distance between them. Not by ear. Please google "dynamic time warping".

    posted in technical issues
  • elden

    @nicnut in my case dynamic time warping is used to measure the tonal similarity of audio files of slightly different lengths.

    posted in technical issues
  • elden

    I need to compare sequences of different lengths with each other, therefore I need dynamic time warping. I know that a guy called Pedro Lopes made an external for Linux, but I can't get in touch with him, so does anyone here know an external that does this?
    Regards

    posted in extra~
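For reference, the dynamic time warping asked about above can be sketched in a few lines of plain Python. This is the textbook DTW distance (accumulated-cost matrix over two sequences of possibly different lengths), not the Pedro Lopes external:

```python
# Minimal dynamic time warping (DTW) sketch in plain Python.
# a and b are 1-D feature sequences that may differ in length.
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = accumulated cost of the cheapest warping path
    # aligning the first i elements of a with the first j of b
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# Two sequences of different lengths but similar shape:
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))  # 0.0 -- they align perfectly
```

A plain sample-by-sample comparison of these two lists would report a difference; DTW warps the time axis so the shared shape is matched first, which is exactly what makes it suitable for files of slightly different length.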
  • elden

    Hi, in order to compare sounds with each other, I need dynamic time warping. I know this has been possible in Pd for at least 10 years, but I don't know which externals I have to use for it. Any suggestions?

    posted in technical issues
  • elden

    Ok, thanks. Great work! That should do the job I need it for. I'll get back to you if I run into some hurdles. Thanks a lot!

    posted in technical issues
  • elden

    @alexandros By the way: do you have a scheme of the internal operations of a single neuron inside [neuralnet] for me to check out?

    posted in technical issues
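For what it's worth, the standard textbook model of a single artificial neuron is a weighted sum plus a bias, passed through a nonlinearity. This is the generic scheme, not necessarily [neuralnet]'s exact internals:

```python
import math

# Generic single-neuron sketch: output = f(sum_i(w_i * x_i) + b),
# with f a squashing nonlinearity such as tanh.
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return math.tanh(z)                                     # activation

print(neuron([1.0, 2.0], [0.5, -0.5], 0.5))  # tanh(0.5 - 1.0 + 0.5) = 0.0
```

A layer is just many of these units sharing the same input vector, each with its own weights and bias.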
  • elden

    Yes, it does. Now that makes me think about leaving the FFT realm and turning to [TimbreID] to directly extract features from a sample over time and compare them using [neuralnet]. Then it's no longer an FFT issue, but simply a comparison of several X-Y curves. I'll see what I can achieve. Thanks for your help!

    posted in technical issues
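The "comparison of several X-Y curves" mentioned above could look like this in plain Python. The frame layout here is hypothetical (a list of per-frame feature vectors), not timbreID's actual output format:

```python
import math

# Illustrative comparison of two feature trajectories of equal length:
# each curve is a list of per-frame feature vectors.
def curve_distance(curve_a, curve_b):
    assert len(curve_a) == len(curve_b), "curves must have the same frame count"
    total = 0.0
    for frame_a, frame_b in zip(curve_a, curve_b):
        # Euclidean distance between corresponding feature frames
        total += math.sqrt(sum((x - y) ** 2 for x, y in zip(frame_a, frame_b)))
    return total / len(curve_a)  # mean per-frame distance

a = [[0.1, 0.5], [0.2, 0.4], [0.3, 0.3]]
b = [[0.1, 0.5], [0.2, 0.4], [0.3, 0.3]]
print(curve_distance(a, b))  # identical curves -> 0.0
```

If the curves ended up with different frame counts after all, the DTW approach from the earlier posts could replace the frame-by-frame `zip` here.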
  • elden

    Ok, thanks a lot! Yes, the audio signal will be recorded and encapsulated as a sample of the same length as the target sample, so no recurrent network is needed. I just don't get how I could distribute the inputs of the network across the FFT waterfall spectrum, you know? Every single input has to detect an amplitude value at a certain position in time and frequency, and I don't know how [neuralnet] does that. Or is that already implemented somehow?

    posted in technical issues
  • elden

    @alexandros I checked out your [neuralnet] external and it looks really promising. Now, let's say I wanted to create a complex perceptron that recognizes the spectral evolution of a recorded sound of a certain length with high FFT-bin resolution over time and classifies it as a certain sound "debit", so to speak. That would require the input layer to have thousands of inputs scanning for the amplitude at certain positions in the frequency domain as well as in the time domain of the FFT spectrum. Is that possible with [neuralnet]?

    posted in technical issues
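On the question of distributing inputs across the FFT waterfall: the usual trick with a plain feed-forward net is to flatten the time-by-frequency grid into one long input vector, so every (frame, bin) position gets its own input unit. A toy sketch with made-up sizes (real spectrograms are far larger, hence the "thousands of inputs"):

```python
import random

# A "waterfall" spectrogram is a time x frequency grid of magnitudes.
# Flattening it row by row gives one network input per (frame, bin)
# position -- the net itself never needs to know which axis is which.
N_FRAMES, N_BINS = 4, 8  # toy sizes; a real analysis would be much bigger

def flatten_spectrogram(spec):
    # spec[t][k] = magnitude at frame t, frequency bin k
    return [mag for frame in spec for mag in frame]

random.seed(0)
spec = [[random.random() for _ in range(N_BINS)] for _ in range(N_FRAMES)]
inputs = flatten_spectrogram(spec)
print(len(inputs))  # 32 = N_FRAMES * N_BINS input units needed
```

Because the recorded sample has the same length as the target sample, the grid size is fixed, so the input-layer size is fixed too, which is exactly the situation a non-recurrent perceptron handles.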
