• elden

    Hello again,

I need to compare audio against a target sample. I've heard that MFCC error calculation works well for this. Is there any object in pd that does that for audio recordings several seconds long?
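For context: as far as I know there is no single built-in pd object that scores whole multi-second recordings against each other (externals such as timbreID provide frame-wise analysis like [mfcc~]). The calculation itself can be sketched in Python; this is a simplified illustration (numpy assumed, no pre-emphasis or liftering) that averages frame-wise MFCC distances over the whole recording:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
    return fb

def mfcc(signal, sr=44100, n_fft=1024, hop=512, n_filters=26, n_coeffs=13):
    """Frame-wise MFCCs of a mono signal (simplified)."""
    window = np.hanning(n_fft)
    fb = mel_filterbank(n_filters, n_fft, sr)
    # DCT-II basis, used to decorrelate the log mel energies
    dct = np.cos(np.pi / n_filters * (np.arange(n_filters) + 0.5)[None, :]
                 * np.arange(n_coeffs)[:, None])
    coeffs = []
    for i in range(0, len(signal) - n_fft, hop):
        spec = np.abs(np.fft.rfft(signal[i:i + n_fft] * window)) ** 2
        coeffs.append(dct @ np.log(fb @ spec + 1e-10))
    return np.array(coeffs)

def mfcc_error(a, b, sr=44100):
    """Mean frame-wise Euclidean distance between two recordings' MFCCs."""
    ma, mb = mfcc(a, sr), mfcc(b, sr)
    n = min(len(ma), len(mb))
    return float(np.mean(np.linalg.norm(ma[:n] - mb[:n], axis=1)))
```

(Truncating to the shorter recording is the crudest possible alignment; something like dynamic time warping would be more robust for sounds of different lengths.)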

    regards

    posted in technical issues
  • elden

    Hello everyone,

    as most of you know, I'm continuously developing my Ewolverine patch with which you can genetically breed sounds out of your MIDI-gear.
    In order to automatically approximate synthesizer parameters, Ewolverine must compare different synth-sounds to a target sample. The problem is that sounds match differently to the target sample depending on the comparison-criterion.

    Examples:

    Case 1:
    If the selection criterion is the length of the synthesized sounds in comparison to the target, the selection mechanism may choose synth parameters that generate sounds as long as the target sample, but pay no attention to its timbre.

    Case 2:
    If the selection criterion is the onset, the generated sounds may all have equal onsets, but differ in length and timbre.

    What I need is a way of multi-objective optimization which takes all criteria into account and tells Ewolverine's selection mechanism which synthesized sound is generally nearest to the target sample.
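For reference, the two textbook approaches to this kind of multi-objective selection are Pareto ranking and weighted-sum scalarization. A minimal Python sketch (the error tuples are hypothetical, e.g. (length error, onset error, timbre error), lower is better):

```python
def pareto_front(candidates):
    """Indices of candidates that no other candidate beats on all criteria."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [i for i, c in enumerate(candidates)
            if not any(dominates(o, c) for j, o in enumerate(candidates) if j != i)]

def overall_nearest(candidates, weights=None):
    """Scalarize: normalize each criterion over the population, then take a
    weighted sum.  The smallest score is 'generally nearest' to the target."""
    n_crit = len(candidates[0])
    weights = weights or [1.0] * n_crit
    lo = [min(c[k] for c in candidates) for k in range(n_crit)]
    hi = [max(c[k] for c in candidates) for k in range(n_crit)]
    def norm(v, k):
        return 0.0 if hi[k] == lo[k] else (v - lo[k]) / (hi[k] - lo[k])
    scores = [sum(w * norm(c[k], k) for k, w in enumerate(weights))
              for c in candidates]
    return scores.index(min(scores))
```

The normalization step matters: without it, a criterion measured in seconds would drown out one measured in normalized spectral distance.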

    Is there anything in pd that I could use or do you have any idea what I could do or do you know anyone who could help me?

    posted in technical issues
  • elden

    EWOLVERINE v.7.1 beta by Henry Dalcke 6.pd

    ...changed some default settings of the Target Drive and corrected the Help-subpatch a little...

    posted in patch~
  • elden

    Good thought, thanks. Maybe it's a start. I'll check it out as soon as I can.

    posted in technical issues
  • elden

    Hey guys,

    I just checked this little thing csm_Waverazor_Arp_Presets_8fbc7616c5.jpg

    Seems to me that they switch between different wavetable oscillators in the time of the wavelength at key frequency.
    In general, one could easily do this using an audio input switch that cycles through the different inputs at the rate of the wave frequency of the triggered midi note. If you now manipulate the different audio streams that are connected to the different audio inputs of the switch, you can edit the different waveform segments separately.

    My question: How can I switch between different audio inputs at the rate of a MIDI note's frequency?
    Or might it be less complicated to just concatenate different wavetables into one wavetable whose length equals the period of the note frequency? How do you think Waverazor works?
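The second option (concatenating segments into one cycle) can be sketched abstractly in Python; this is only an illustration of the splicing idea, not how Waverazor actually works internally. Each wavetable contributes its own slice of the period, read at the corresponding phase:

```python
def spliced_cycle(wavetables, note_hz, sr=44100):
    """Build one cycle at note_hz by giving each wavetable its own slice of
    the period: segment k plays the phase range [k/N, (k+1)/N) of
    wavetable k (each table a list of floats in -1..1), with linear
    interpolation between table samples."""
    cycle_len = int(round(sr / note_hz))
    n = len(wavetables)
    seg_len = cycle_len // n
    cycle = []
    for k, table in enumerate(wavetables):
        for i in range(seg_len):
            phase = (k * seg_len + i) / (seg_len * n)  # global phase 0..1
            pos = phase * len(table)
            j = int(pos)
            frac = pos - j
            a = table[j % len(table)]
            b = table[(j + 1) % len(table)]
            cycle.append(a + frac * (b - a))           # linear interpolation
    return cycle
```

Looping the returned buffer plays at the note frequency; in pd the equivalent would be filling an array this way and reading it with [tabosc4~] or [phasor~]/[tabread4~].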

    posted in technical issues
  • elden

    hello guys,
    I used the last few days to analyze the Authentic Expression Technology (AET) filter in Native Instruments' "Kontakt" and to make my own remake of it, in the form of a multiple-input audio source morphing tool that functions exactly like the AET filter in NI Kontakt. I'll upload a video demo of how it's done soon. For now, here's the description. The following little patch is a recreation of how Kontakt's modwheel behaves in relation to key velocity. You need it for an authentic AET experience.

    Henry velocity plus modwheel merger.pd

    What Kontakt's channel vocoder does is this: it swaps the carrier and modulator input signals every time a morph has finished, while at the same time routing another audio source into the respectively muted input.
    This can easily be done with freeware, too - with up to 12 audio sources!
    Here is how:

    modwheel morpher needed softwares.jpg This picture shows all the software needed to fake the AET filter's functioning: a modular host / VST wrapper; Midicurve; RD switch 6×6; DtblkFXs

    routing and parameter assignment:

    1. MIDI keyboard into Pd, from Pd into your DAW; inside your DAW, route it into all 'Midicurve' plug-ins
    2. route the first and second Midicurve into a 6×6 switch each, assigning the 6×6's input-switching parameter to the MIDI CC coming from the respective Midicurve plug-in
    3. route the third Midicurve to DtblkFXs, assigning its "0.Val" parameter to the MIDI output CC coming from that respective Midicurve
    4. route your audio sources (synths / samplers / microphones) into the RD switches - instruments 1, 3, 5, 7 & 9 into one switch and instruments 2, 4, 6 & 8 into the other (each instrument gets its own input on the RD switches - don't put them all into the first audio channel, otherwise they won't morph)
    5. draw the transfer functions of the Midicurves as seen in the picture and make sure to tick the "CC" box and select the CC of your modwheel!
      If you turn your modwheel up now, the switches should change their input channels exactly at the moment a full morph from one source to the next has finished.
    6. adjust DtblkFXs as seen in the picture!
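The swap-on-morph-complete behaviour behind steps 1-6 can be sketched abstractly (Python, with hypothetical source names; the gains are a plain linear crossfade, where Kontakt uses a vocoder-style spectral morph):

```python
class MorphChain:
    """Round-robin two-slot morpher: whenever a morph from one slot to the
    other completes, the now-silent slot is re-loaded with the next source,
    so one controller sweep chains through many sources."""
    def __init__(self, sources):
        self.sources = sources               # e.g. instrument names
        self.slot = [sources[0], sources[1]] # the two live inputs
        self.next_idx = 2                    # next source to load
        self.active = 0                      # slot at full gain when x == 0

    def set_morph(self, x):
        """x in 0.0..1.0 fades from the active slot to the other one.
        At x == 1.0 the morph is complete: swap roles and pre-load."""
        gain_a = 1.0 - x if self.active == 0 else x
        out = (gain_a, 1.0 - gain_a)         # gains for slot 0 and slot 1
        if x >= 1.0:
            silent = self.active             # the slot we just faded out
            self.slot[silent] = self.sources[self.next_idx % len(self.sources)]
            self.next_idx += 1
            self.active = 1 - self.active
        return out
```

This is exactly what the Midicurve transfer functions encode in the real setup: they translate one rising modwheel into alternating up/down ramps for the two switches.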

    Much fun with your own totally free AET morphing tool!

    posted in patch~
  • elden

    Some depictions of what Ewolverine does to your synthesizers:
    EWOLVERINE DEMO PIC.jpg
    klangpotential darstellung.jpg

    posted in patch~
  • elden

    Oh, thank you! That's a good start - although I don't really get the meaning of the numbers in the [connect( message yet, and what influences the positioning of an object in a patch?

    posted in technical issues
  • elden

    Hi,
    I want to replicate certain objects a variable number of times and patch them to others automatically. Is that possible, and if yes - how?

    Thanks!
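For reference, this kind of dynamic patching boils down to generating Pd's textual patch format (or sending the equivalent obj/connect messages to a canvas). A hypothetical Python generator as a sketch - the object indices and the four numbers of a connect line are explained in the comments:

```python
def make_patch(n, freqs):
    """Write a Pd patch with n [osc~] objects all wired into one [*~].

    In a .pd file every object gets an index in order of creation (0-based).
    '#X connect A a B b;' wires outlet a of object A to inlet b of object B.
    The two numbers after '#X obj' are the x/y position on the canvas.
    """
    lines = ["#N canvas 0 0 450 300 10;"]
    for i in range(n):
        lines.append(f"#X obj {20 + i * 80} 50 osc~ {freqs[i]};")  # objects 0..n-1
    lines.append(f"#X obj 20 150 *~ {1.0 / n};")                   # object n
    for i in range(n):
        lines.append(f"#X connect {i} 0 {n} 0;")  # osc~ i, outlet 0 -> *~, inlet 0
    return "\n".join(lines) + "\n"

# e.g.: open("generated.pd", "w").write(make_patch(3, [220, 330, 440]))
```

The same connect numbers apply when sending [connect A a B b( messages to a live canvas, which is how you'd do it from inside pd itself.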

    posted in technical issues
  • elden

    Hello everyone,
    I want to connect a long short-term memory (LSTM) neural network to MIDI parameters and let it find out which assigned synth parameters influence the generated sound in which way, for a sound-matching application. How would you do this?
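Before reaching for an LSTM, a simpler baseline for "which parameter influences the sound in which way" is a finite-difference sensitivity map: perturb one parameter at a time and measure the change in an audio feature. A sketch, where `render` and `feature` are placeholders for your synth and analysis chain:

```python
def sensitivity(render, feature, params, delta=1):
    """Estimate each parameter's influence on the sound.

    render(params) -> signal and feature(signal) -> float are placeholders
    for the actual synth rendering and audio analysis.  Returns, per
    parameter, the approximate change in the feature per unit change of
    that parameter (a one-sided finite difference)."""
    base = feature(render(params))
    result = {}
    for name, value in params.items():
        p = dict(params)
        p[name] = value + delta        # perturb one parameter at a time
        result[name] = (feature(render(p)) - base) / delta
    return result
```

Parameters with near-zero sensitivity across several features can then be dropped from the search space, which is useful even if a learned model is added later.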

    Regards

    posted in technical issues
  • elden

    Looks pretty convincing, actually. I need to check it out in more detail. Stay tuned.

    posted in technical issues
  • elden

    Do you know my patch "Ewolverine"? The functionality is limited to MIDI for now, but I have already experimented with a pdvst version and the results were similar. The only problem was that I couldn't figure out how to make a VST out of a pd patch. I was also thinking of the stuff you mentioned. I don't think that's very hard to do, but first I need to convert a patch into a VST with MIDI and audio inputs as well as outputs. I don't use artificial neural nets. The audio analysis of sampled sounds is based on spectral comparison to a target sample; ANNs are not necessary for such operations. If I wanted to check adapted sounds against some special features I want them to provide, I'd surely use ANNs, but that would probably cost me many months or even years of development. I don't think I have the necessary endurance for that - at least not in pd. The patches would be highly complex, and I surely wouldn't be able to keep track of them myself.

    posted in technical issues
  • elden

    Hey there,

    I want to program a VST wrapper that allows for the genetic breeding of presets for any VST synth or FX, and that I can load into a DAW. Any advice?

    Cheers

    posted in technical issues
  • elden

    Hello Jona, I'm sorry, I cannot recall - it's been about two years since then. I found a better way to do what I wanted, so the artificial neural net approach became obsolete for me pretty soon. I think you need to find out by trial and error.

    posted in technical issues
  • elden

    I'm not using anything like Live or the supported software. I actually want to use pd stand-alone for my purposes. Nevertheless, it's great news to see pd integrate with or connect to the current standard in music production!

    What about recording MIDI, or converting MIDI files to arrays?

    posted in technical issues
  • elden

    And is this just a link between Live and pd, or is it a suite of patches that function like Live?

    posted in technical issues
  • elden

    I'm working on a live arranger for MIDI phrases, like those of the accompanying arranger keyboards, but not limited to a few patterns including intros, fills and endings. I want it >bigger< .

    My question:
    Are there synchronizable and tempo adjustable midi players for pd available?

    posted in technical issues
  • elden

    New version 7 (currently testing)

    • added automatic loosening of the minimum fitness limit for the case that a population of sounds gets stuck on a local maximum in the fitness landscape (really nerdy jabbering, but trust me, it's useful ^^)
    • added automatic "jumping" out of local maxima after a certain number of fruitless climbing trials

    EWOLVERINE v.7 by Henry Dalcke.pd

    plans:
    • bugfix: prevent a newly audible sound from being selected after manually stopping the target drive
    • simulated annealing in target drive mode: span a "temperature" value over the fitness landscape and decrease the step length (modwheel) and the probability value in the splice-pattern generator the closer the fitness gets to the optimal fitness value
    • interactive mode: automatic narrowing of the range of generated parameter values around a mean value that's derived from the repeated selection of similar values of individual parameters throughout the generations (increases the number of similar sounds per generation that are located around a certain coordinate in parameter space; increases the likelihood of the generation of the desired sound in a smaller amount of time)
    • stop-condition for automatic stopping of target drive
    • make default settings for modwheel-position, splice-pattern-generator's probability, anti-stuck and allowed minimum fitness value in target drive adjustable from GUI
    • adjustable MIDI output message blocker (useful for instruments with a fixed MIDI implementation; for instance, if you want to breed a bass drum in a drum synth with multiple instruments, you may not want to ruin the parameter adjustments of the snare drum while you're selecting for good bass drums)
    • storage for self-created splicer patterns (maybe in connection to the MIDI output message blocker)
    • low-value-weighted probability for the generation of MIDI-CC-values in new populations; switchable per MIDI-CC either manually or randomly (increases the probability for the generation of short attack and decay values in synth's envelopes)
    • bigger populations for each sound-set: 4 more random sounds per set (A/B) to select from
    • discontinuous MIDI messaging interrupted by assignment switching CC events (special build for FM-Heaven) - low priority
    • possibility to interpolate between new random population's sounds to smoothly re-direct the modwheel-morphing path while morphing
    • selection-history recorder that one can use to re-load the selected sounds of each past generation
    • a visualizer that generates a "tree of life" from directions (keys C,D,E,F) and steplengths (modwheel) of formerly selected individuals and their respective distances to their parent sounds
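The simulated-annealing and anti-stuck ideas from the plans above can be sketched generically (Python; `fitness`, `neighbor` and `jump` are hypothetical stand-ins for the patch's sound rating, small mutation and large mutation):

```python
import math, random

def evolve(fitness, start, neighbor, jump, steps=500, max_stuck=20, t0=1.0):
    """Maximize `fitness` by hill climbing with two tricks: simulated-
    annealing acceptance (a worse candidate is accepted with probability
    exp(gain/T), T cooling linearly to ~0) and a random 'jump' out of a
    local maximum after max_stuck fruitless trials."""
    current, best, stuck = start, start, 0
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9      # linear cooling schedule
        cand = neighbor(current)                  # small mutation
        gain = fitness(cand) - fitness(current)
        if gain > 0 or random.random() < math.exp(gain / t):
            current = cand
            if gain > 0:
                stuck = 0
        else:
            stuck += 1
        if stuck >= max_stuck:                    # stuck: jump far away
            current, stuck = jump(current), 0
        if fitness(current) > fitness(best):      # remember the best ever
            best = current
    return best
```

Early on (high T) almost any move is accepted, which matches the "loosened minimum fitness limit"; late (low T) only improvements survive, which matches decreasing the modwheel step length.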

    posted in patch~