-
ddw_music
posted in technical issues
OK, here's a basic live input granulator -- no really fancy features, just pitch shifting.
The handling of grain playback rate in the one-grain abstraction is a neat trick I had worked out some time ago. For example, if you want to play the grain an octave higher (2x speed), then you need to span 2 * dur ms of audio in dur ms of time. You can do that by modulating the delay time as a straight line, from dur ms down to 0 ms -- adding dur to the delay time at the beginning adds that many ms to the amount of audio being used: dur * (rate - 1). Or, to play slower, start with a shorter delay and move toward a longer delay; the excursion is again dur * (rate - 1). If rate = 1, then the delay time goes from 0 to 0 = no change = normal speed. That might look a bit funky in the patch, but you can try it with different transposition intervals, which will show that it's correct.
For sound file processing, replace the delay line with a soundfiler-filled array, and use [tabread4~] for the audio source (and the [line~] driving it will have to be different too).
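The delay-time trick can be sanity-checked numerically. Here's a minimal Python sketch (the function names are mine, not from the patch) modeling the linear delay ramp and confirming the effective playback rate:

```python
def delay_ramp(t, dur, rate):
    """Delay time (ms) at grain-local time t (0..dur).
    The ramp's total excursion is dur * (rate - 1); for rate < 1
    it runs the other way so the delay stays non-negative."""
    span = dur * (rate - 1.0)
    if rate >= 1.0:
        return span * (1.0 - t / dur)  # dur*(rate-1) ms down to 0
    else:
        return -span * (t / dur)       # 0 up to dur*(1-rate) ms

def read_position(t, dur, rate):
    # The grain reads the input stream at "now minus delay";
    # the slope of this position w.r.t. t is the playback rate.
    return t - delay_ramp(t, dur, rate)

# Over a 100 ms grain at rate 2, the read position spans 200 ms of audio.
dur = 100.0
span = read_position(dur, dur, 2.0) - read_position(0.0, dur, 2.0)
print(span / dur)  # -> 2.0
```

The same check at rate 0.5 spans 50 ms of audio in 100 ms, and at rate 1 the delay stays at 0, matching the text above.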
IMO granular processing is 99% refinement and more advanced modulation of these basic parameters, which you should be able to tailor to your needs. I think the pitch shifting is more-or-less smooth here, though I'm not sure it matches your comparison plugins -- this is 66.6667 grains per second, with 4x overlap.
one-delay-grain.pd
live-granular.pd
hjh
-
ddw_music
posted in technical issues
@Moddmo said:
I don't want to spend months making a patch and end up with crap sound.
Well, that's hard to promise because I'm not sure exactly what you mean by crap sound.

What I can say is that granular synthesis is made up of short clips of audio under envelopes. Pd can do both:
- clips: [tabread4~] for a sound file loaded into memory, [delread4~] for live input.
- envelopes: you can fill an array with a Hann (or any other type of) window, and stream it out using [tabread4~] as well.
Pd has one edge over Max here, in that Pd's [metro] is sub-block accurate. In both Pd and Max, there's an audio block size (default 64 samples), and control messages execute only on those block boundaries. In Max, last time I tried to do granular synthesis driven by control messages, I could hear the timing inaccuracy due to the messages being quantized to these block boundaries. (Maybe that's changed in Max 9, but that's my recollection from Max 8.) In Pd, control messages are processed on block boundaries, but they also carry sub-block timing information, so grains will start on the right sample, not just the right block. IMO Pd's timing was noticeably smoother. (In Max, multichannel signals get a better result.)
For sound quality, it's very helpful to introduce a little bit of randomness into the grain time position, to avoid "machine gun" effects.
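As a rough illustration (my own sketch, not from any patch), jittering each grain's onset by a few milliseconds is enough to break up the periodicity:

```python
import random

def grain_onsets(n, period_ms, jitter_ms, seed=1):
    """Nominal onset at i * period_ms, plus a small random offset.
    A jitter of a few ms is usually enough to avoid the 'machine
    gun' effect without audibly smearing the grain rhythm."""
    rng = random.Random(seed)
    return [i * period_ms + rng.uniform(-jitter_ms, jitter_ms)
            for i in range(n)]

# 8 grains at a nominal 15 ms period, each onset nudged by up to +-3 ms.
onsets = grain_onsets(8, period_ms=15.0, jitter_ms=3.0)
```

The same idea applies to the read position within the source buffer, not just the onset time.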
Again, "close to the available software," I'm not sure exactly what you mean. With proper tuning, I was able to get a pretty smooth sound out of it. Maybe an example later.
hjh
-
ddw_music
posted in technical issues
@jamcultur said:
If the code on Windows was the same as the code on Mac, they would work the same.
One source of confusion here is the difference between source and object code.
Most of the time, humans never look at the object code produced by a compiler. We only look at the source code. So, when porres says there's no difference in the code between platforms, this is talking about source code.
The source code gets compiled into object code. In Mac vs Windows, the compilers are different, and the CPUs (architectures and instruction sets) are different. If the Mac is using an M-series CPU, then it's impossible for the object code to be the same as Windows (Intel or AMD chip), because the instruction sets are completely different. (That's also not considering the differences in OS function calls, which of course will not be the same between different OSes.) So in fact, "the code" isn't the same -- but this isn't porres's fault, and there's no way for the code not to be different.
Ideally, the same source code compiled for different chips should produce equivalent results. Programmers usually take this as a safe assumption. But there are edge cases where it might not work out that way (we just saw one of those over in SC-land, related to floating-point rounding). These cases can be extremely difficult to debug, and at the end of the day, one is at the mercy of the CPU and the compiler's behavior.
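A tiny illustration of why this matters: floating-point addition isn't associative, so if a compiler (or a different chip's instruction selection, e.g. fused multiply-add) reorders or contracts operations, the same source expression can round differently. Even within one language on one machine:

```python
# Floating-point addition is not associative: these two groupings
# round differently, even though they are algebraically equal.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # -> False
print(a, b)    # -> 0.6000000000000001 0.6
```

The discrepancy is one unit in the last place -- usually inaudible, but enough to make bit-exact cross-platform comparisons fail.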
In such cases, it isn't helpful to accuse a developer of writing different code for Windows (this is implausible for DSP code in any case, which is mostly math operations that are well abstracted -- you don't need #ifdefs for std::xxx() math functions) or of "not caring" enough.
We want to assume that the compiler and CPU are transparent with respect to the source code's meaning. When that isn't the case, it's necessary to inspect every operation. It's painful, and if porres doesn't have access to a machine where the problem occurs, it can be very slow (test builds, relying on other people to run the specific tests). A little patience goes a long way.
hjh
-
ddw_music
posted in technical issues
@porres said:
cyclone does not have a [spectrogram~] object
ELSE has [spectrograph~] though.
Thanks -- I didn't look closely enough at the help patch.
About formant synthesis with FM, I know Miller includes something like that in the audio examples (see F10). Is it related maybe?
It probably is, and his implementation is probably more elegant than mine (though it's jammed into a small space on the screen so it's a bit tough for me to read quickly).
It takes some tuning -- the FM index isn't a simple analog to formant bandwidth (it seems to need to be scaled down at higher pitches). But it's computationally cheap and gets a useful result, and it seemed to fit "approaches to formants other than bandpass filtering."
hjh
-
ddw_music
posted in technical issues
There's also the John Chowning "Phoné" FM formant approach, where the carrier is at the formant center frequency and the modulator is at the fundamental. It's "formant-ish" I suppose, but sweeping the fundamental while holding the formant frequency steady does produce a vocal-ish sound.
Here I'm crossfading between two formants, to make smooth transitions between integer carrier-mod frequency ratios.
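For reference, a bare-bones sketch of the idea (my own reduction, not the patch itself): carrier at the formant center, modulator at the fundamental, with the FM index standing in roughly for bandwidth:

```python
import math

def fm_formant(formant_hz, fundamental_hz, index, dur_s, sr=48000):
    """Phone-style FM voice: sin(2*pi*fc*t + index*sin(2*pi*f0*t)).
    Sidebands appear at fc +/- k*f0, i.e. at harmonics of the
    fundamental clustered around the formant -- which is what
    makes it formant-ish."""
    n = int(dur_s * sr)
    out = []
    for i in range(n):
        t = i / sr
        phase = (2 * math.pi * formant_hz * t
                 + index * math.sin(2 * math.pi * fundamental_hz * t))
        out.append(math.sin(phase))
    return out

# 100 ms of an 800 Hz "formant" over a 110 Hz fundamental.
sig = fm_formant(800.0, 110.0, 1.5, 0.1)
```

The crossfade in the patch exists because the carrier is snapped to integer multiples of the fundamental; crossfading two such voices smooths the jumps between ratios.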

Oops, no:
~~([spectrogram~] is from cyclone -- not essential to the patch's operation.)~~
([spectrograph~] is from ELSE -- not essential to the patch's operation.)
hjh
-
ddw_music
posted in technical issues
@willblackhurst Tilde objects don't have anything to do with voltage inside the computer.
hjh
-
ddw_music
posted in technical issues
Interesting that both responses assumed that atux wants to send pitch bend messages out, but "modulate the pitch in real time by moving a slider" says nothing about MIDI being the target.
The question might just as easily be, "How to map a slider onto a frequency ratio, to multiply with the note's main frequency?"
1. Normalize the slider value. Because pitch bend normally goes both up and down, I normalize to -1 .. +1.
2. [line~] for smoothing. (Also, I'm stealing the mouse-release logic from porres -- good tip!)
3. Pitch bend range is given in semitones. We need a fraction of an octave = pbrange / 12.
4. Scale the normalized pb value onto the fraction of the octave = [*~].
5. The ratio for one octave = 2. So, the ratio for the fraction of the octave = 2 ^ fraction.
6. Now you have a ratio that you can multiply by any frequency to apply the bend.
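The whole chain boils down to one line of arithmetic; here's a sketch (the function name is mine):

```python
def bend_ratio(normalized_bend, pbrange_semitones=2.0):
    """normalized_bend in -1..+1; returns a frequency multiplier.
    Full deflection moves the pitch by pbrange_semitones."""
    fraction_of_octave = normalized_bend * pbrange_semitones / 12.0
    return 2.0 ** fraction_of_octave

# Bend fully up with a 2-semitone range: a whole tone above A440.
print(440.0 * bend_ratio(1.0))  # -> ~493.88 Hz
```

A slider at center gives a ratio of exactly 1.0, so the note is untouched.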

(More generally, almost any kind of exponential modulation (e.g. of frequency) can be expressed as baselineValue * (modRatio ** modulator). Pitch bend is a specific case of this, where modRatio = 2 and the modulator is scaled to the range +-pbrange / 12. Linear modulation just demotes the math ops: baselineValue + (modFactor * modulator). With these 2 formulas you can handle a large majority of modulation scenarios.)
hjh
-
ddw_music
posted in technical issues
@fer FWIW this is a formula I've used for abstraction init defaults for a while now -- only vanilla objects, no externals needed.

If the object is used as [defaults-test] with no args specified, it prints:
abstraction_defaults: list default symbols 0 1 2 3
= the values that I put into the [pack].
If it's [defaults-test xyz ttt 34 72], then it prints:
abstraction_defaults: list xyz ttt 34 72 2 3
where the 4 values overwrote the [pack] values from the start... generally what you'd assume from defaults.
(If all the args are numbers, then you don't need any funny business, just [pdcontrol] --> [pack].)
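The overwrite behavior is plain prefix-replacement over a defaults list; as a Python analogy (not Pd code, function name is mine):

```python
def apply_args(defaults, args):
    """Creation args overwrite the leading defaults, the way values
    arriving at [pack] replace its stored atoms from the left."""
    out = list(defaults)
    out[:len(args)] = args
    return out

print(apply_args(["default", "symbols", 0, 1, 2, 3], []))
# -> ['default', 'symbols', 0, 1, 2, 3]
print(apply_args(["default", "symbols", 0, 1, 2, 3], ["xyz", "ttt", 34, 72]))
# -> ['xyz', 'ttt', 34, 72, 2, 3]
```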
hjh
-
ddw_music
posted in technical issues
you did spread misinformation about it
So just correct it and move on.
When people try to reinvent what already exists... I built this for the community. Ignoring existing tools and efforts misses the spirit of open source
Both Pd and SC have a systemic problem wherein there is no good way for new users to know which extensions exist. Recent versions of Deken improve the situation somewhat for Pd, and there's a similar effort underway for SC, but "missing the spirit of open source" is quite a burden to lay on somebody who might have been using the tool for just a couple of weeks or months.
So I'm out of this thread. I like a lot of the stuff in ELSE, really, and I wish I'd known about it from the start. (Btw "when it's there in plugdata already" -- when I started using Pd in classes, there was no plugdata and there was no pd-extended, and no way to discover ELSE by chance.)
hjh
-
ddw_music
posted in technical issues
@porres said:
nah, I'll just leave as it is, the object is already too much complicated and I don't know how to deal with it (if anyone has a suggestion, please let me know).
Maybe like this? Instead of velocity --> envelope, derive a gate by way of [change]. Then multiply the envelope by the velocity value. The volume will change if the velocity changes on a slurred note. If you don't want that, it should be possible to close the spigot when slurring to a note, and open it only when a brand-new note is being played.
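In control terms, the idea is: convert the velocity stream to an on/off state, let [change] pass only state flips, and scale the envelope by velocity separately. A rough event-level sketch (Python, not Pd):

```python
def gate_events(velocities):
    """Map a stream of note velocities to gate events (1/0),
    suppressing repeats the way [change] does: a slurred note
    (new nonzero velocity while already on) emits nothing, so
    the envelope is not retriggered."""
    events, prev = [], None
    for v in velocities:
        g = 1 if v > 0 else 0
        if g != prev:
            events.append(g)
            prev = g
    return events

# note-on 100, slur to 80 (no event), note-off, new note 64, note-off:
print(gate_events([100, 80, 0, 64, 0]))  # -> [1, 0, 1, 0]
```

The velocity value itself then multiplies the envelope output, so loudness still tracks velocity even though the gate ignored the slur.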

hjh