Real-time (note-off-triggered) ADSR envelope
All those things in life we take for granted
Since I started playing synths in the early 1990s, every synth I've played, hardware or software, free or commercial, has had a release segment triggered by a MIDI note-off event (i.e. a gradual fade-out after releasing a key).
So... I'm wondering how people do this when they create (keyboard-operated) synths with Pd.
There would have to be a standard method.
But then maybe I'm making assumptions.
Thanks in advance for your advice.
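For what it's worth, the usual Pd idiom (as far as I know) is to watch the velocity output of [notein]: a nonzero velocity fires the attack/decay messages into [vline~], and the velocity-0 note-off fires the release ramp. Here is the same logic as a per-sample state machine, in a rough C sketch (all names and rate values are hypothetical):

    /* Minimal note-off-triggered ADSR: a sketch, not any particular synth's code. */
    typedef enum { ENV_IDLE, ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE } env_stage;

    typedef struct {
        env_stage stage;
        double level;        /* current output, 0..1 */
        double attack_inc;   /* per-sample rise during attack */
        double decay_dec;    /* per-sample fall during decay */
        double sustain;      /* sustain level, 0..1 */
        double release_dec;  /* per-sample fall during release */
    } adsr;

    void adsr_note_on(adsr *e)  { e->stage = ENV_ATTACK; }
    void adsr_note_off(adsr *e) { e->stage = ENV_RELEASE; } /* the note-off trigger */

    double adsr_tick(adsr *e)
    {
        switch (e->stage) {
        case ENV_ATTACK:
            e->level += e->attack_inc;
            if (e->level >= 1.0) { e->level = 1.0; e->stage = ENV_DECAY; }
            break;
        case ENV_DECAY:
            e->level -= e->decay_dec;
            if (e->level <= e->sustain) { e->level = e->sustain; e->stage = ENV_SUSTAIN; }
            break;
        case ENV_RELEASE:
            e->level -= e->release_dec;
            if (e->level <= 0.0) { e->level = 0.0; e->stage = ENV_IDLE; }
            break;
        default:
            break;
        }
        return e->level;
    }

The key point is that the note-off does not cut the output to zero; it only switches the stage, so the level ramps down from wherever it happens to be.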
Wavetable/Shaper Synthesis
Hey All,
I've got a few questions; mods, please let me know if I should split them into different posts.
1. Does anyone have a wavetable synth that can look up external wavetables (in .wav format)?
2. I'd like to really test the latency/synthesis capabilities of Pd, so can someone point me to the BIGGEST (i.e. most CPU-intensive) Pd synth around?
3. How does the Pd engine handle itself on 64-bit systems?
Thanks so much for any answers!
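On question 1: no finished patch to point to here, but in Pd you can load a .wav into an array with [soundfiler] and scan it with [tabosc4~] or [tabread4~]. The lookup itself is tiny. A rough C sketch with linear interpolation, assuming the file has already been decoded into a float array (all names are made up):

    #include <math.h>

    typedef struct {
        const float *table; /* one waveform cycle decoded from the .wav */
        int size;           /* number of samples in the cycle */
        double phase;       /* current position, 0..1 */
    } wt_osc;

    float wt_tick(wt_osc *o, double freq, double samplerate)
    {
        double pos = o->phase * o->size;
        int i = (int)pos;
        double frac = pos - i;
        float a = o->table[i % o->size];        /* current sample */
        float b = o->table[(i + 1) % o->size];  /* next sample, wrapping around */
        o->phase += freq / samplerate;          /* advance by one period fraction */
        o->phase -= floor(o->phase);            /* keep phase in 0..1 */
        return (float)(a + frac * (b - a));     /* linear interpolation */
    }

Note that [tabosc4~] wants its table sized as a power of two plus three guard points, so resize or pad imported waveforms accordingly.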
BECAUSE you guys are MIDI experts, you could well help on this...
Dear Anyone who understands virtual MIDI circuitry
I'm a disabled wannabe composer who has to use a notation package and mouse, because I can't physically play a keyboard. I use Quick Score Elite Level 2 - it doesn't have its own forum - and I'm having one HUGE problem with it that's stopping me from mixing - literally! I can see it IS possible to do what I want with it; I just can't get my outputs and virtual circuitry right.
I've got 2 main multi-sound plug-ins I use with QSE: Sampletank 2.5 with Miroslav Orchestra, and Proteus VX. Now if I choose a bunch of sounds from one of them, each sound comes up on its own little stave and slider, complete with places to insert plug-in effects (like EQ and stuff). So far, so pretty.
So you've got - say - 5 sounds. Each one is on its own stave, so any notes you put on that stave get played by that sound. The staves have controllers so you can control the individual sound's velocity/volume/pan/aftertouch etc. They all work fine. There are also a bunch of spare controller numbers. The documentation with QSE doesn't really go into how you use those. It's a great program but its customer relations need sorting - no forum, and the Canadian guys who wrote it very rarely answer e-mails in a meaningful way, hence my having to ask this here.
Except the sliders don't DO anything! The only one that does anything is the one the main synth is on. That's the only one that takes any notice of the effects you use. Which means you're putting the SAME effect on the WHOLE SYNTH, not just on one instrument sound you've chosen from it. Yet the slider the main synth is on looks exactly the same as all the other sliders. The other sliders just slide up and down without changing the output sounds in any way. Neither do any effects plug-ins you put on the individual sliders change any of the sounds in any way. The only time they work is if you put them on the main slider that the whole synth is sitting on - and then, of course, the effect's applied to ALL the sounds coming out of that synth, not just the single sound you want to alter.
I DO understand that MIDI isn't sounds; it's instructions to make sounds. But if the slider the whole synth is on works, how do you route the instructions to the other sliders so they accept them, too?
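That's exactly right, and the routing handle in those instructions is the channel: every MIDI message carries its channel in the low four bits of the status byte, which is what lets a multitimbral synth apply a controller to just one of its sounds. A minimal sketch of building a control-change message (the helper name is hypothetical):

    #include <stdio.h>

    /* Build a 3-byte MIDI control change for a given channel (1-16). */
    void make_cc(unsigned char msg[3], int channel, int controller, int value)
    {
        msg[0] = (unsigned char)(0xB0 | ((channel - 1) & 0x0F)); /* status + channel */
        msg[1] = (unsigned char)(controller & 0x7F);             /* controller number */
        msg[2] = (unsigned char)(value & 0x7F);                  /* controller value */
    }

    int main(void)
    {
        unsigned char msg[3];
        make_cc(msg, 2, 7, 100);                            /* channel 2, CC7 (volume) = 100 */
        printf("%02X %02X %02X\n", msg[0], msg[1], msg[2]); /* prints B1 07 64 */
        return 0;
    }

So if the individual sliders are sending on channels the VSTi isn't actually listening to, or aren't sending channel messages at all, the sounds will ignore them, which would match the symptoms described here.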
Anyone got any idea WHY the sounds aren't obeying the sliders they're sitting on? Oddly enough, single-shot plug-ins DO obey the sliders perfectly. It's just the multi-sound VSTs whose sounds don't individually want to play ball.
Now when you select a VSTi, you get 2 choices - assign it to a track or use All Channels. If you assign it to a track, of course only instructions routed to that track will be picked up by the VSTi. BUT - they only go to the one instrument on that VST channel. So you can then apply effects happily to the sound on Channel One. I can't work out how to route the effects for the instrument on Channel 2 to Channel 2 in the VSTi, and so on. Someone told me on another forum that because I've got everything on All Channels, the effects signals are cancelling each other out, but I can't find out anything more about this at the moment.
I know, theoretically, if I had one instance of the whole synth and just used one instrument from each instance, that would work. It does. Thing is, with Sampletank I got Miroslav Orchestra, and you can't load PART of Miroslav. It's all or nothing. So if I wanted 12 instruments that way, I'd have to have 12 copies of Miroslav in memory, and you just don't get enough memory in a 32-bit PC for that.
To sum up: what I'm trying to do is set things up so I can send separate effects - EQ etc - to separate virtual instruments from ONE instance of a multi-sound sampler (Proteus VX or Sampletank). I know it must be possible because the main synth takes the effects OK; it's just routing them to the individual sounds that's thrown me. I know you get one-shot sound VSTi's, but - no offence to any creators here - the sounds usually aren't that good from them. Besides, all my best sounds are in Miroslav/Proteus VX and I just want to be able to create/mix pieces using those.
I'm a REAL NOOOB with all this so if anyone answers - keep it simple. Please! If anyone needs more info to answer this, just ask me what info you need and I'll look it up on the program.
Yours respectfully
ulrichburke
Bandlimited~ - bandlimited signal generator (square, triangle, saw)
IMPLEMENTATION DETAILS:
Adding up harmonics for each phase can use a lot of CPU power, especially at lower frequencies. Even in the current optimized state, if you try playing a low frequency such as 2 or 3 Hz it'll probably go above 100% usage.
So I decided to trade memory for CPU. When you create the first instance of bandlimited~, it builds 138 wavetables of size 2051 for each waveform type (square, triangle, saw, saw-triangle; rsaw is just saw inverted). Each table holds a waveform generated with a different number of harmonics, stepping by 8.
So the tables will look like this:
8, 16, 24, 32, 40 ... 1104 harmonics
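As a rough illustration of how such a bank might be built (additive synthesis per table; this is a sketch of the idea, not the actual bandlimited~ source, and it shows only the saw case):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define NTABLES 138   /* harmonic counts 8, 16, ..., 1104 */
    #define TABSIZE 2051  /* 2048 points plus 3 guard points for 4-point interpolation */

    static float saw_tables[NTABLES][TABSIZE];

    /* Fill one sawtooth cycle that is bandlimited to 'nharm' harmonics. */
    static void fill_saw(float *tab, int nharm)
    {
        for (int i = 0; i < 2048; i++) {
            double x = (double)i / 2048.0, s = 0.0;
            for (int k = 1; k <= nharm; k++)
                s += sin(2.0 * M_PI * k * x) / k; /* Fourier series of a saw */
            tab[i] = (float)(s * 2.0 / M_PI);     /* scale to roughly -1..1 */
        }
        /* duplicate the cycle start so the interpolator can read past the end */
        tab[2048] = tab[0]; tab[2049] = tab[1]; tab[2050] = tab[2];
    }

    void build_tables(void)
    {
        for (int t = 0; t < NTABLES; t++)
            fill_saw(saw_tables[t], (t + 1) * 8); /* 8, 16, ..., 1104 harmonics */
    }

At 4 bytes per float, 138 tables of 2051 points come to roughly 1.1 MB per waveform type, which is the memory half of the trade.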
In the perform loop it calculates the number of harmonics needed for the specified frequency: cutoff / frequency (where cutoff is the Nyquist limit), rounded down to the nearest integer.
It then checks which wavetable is closest to this number and calculates the missing components. For example:
44.1 kHz sample rate, Nyquist = 22050 Hz
generated frequency = 330 Hz
number of harmonics = floor(22050 / 330) = 66
the closest wavetable has 64 harmonics
It then uses 4-point interpolation (borrowed from tabread4~) to generate a signal from this wavetable and adds harmonics 65-66 directly.
This way the correction loop is guaranteed never to exceed 4 iterations: since the stored harmonic counts step by 8, the nearest table is at most 4 harmonics away.
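In code, the selection step presumably boils down to something like this (a paraphrase of the description above, not the actual source; names are made up):

    #define MAX_HARM 1104 /* highest harmonic count stored in the bank */

    /* Pick the nearest stored table and the harmonics left to sum per sample. */
    void select_table(double freq, double nyquist,
                      int *table_index, int *first_extra, int *last_extra)
    {
        int nharm = (int)(nyquist / freq);      /* e.g. 22050 / 330 -> 66 */
        if (nharm > MAX_HARM) nharm = MAX_HARM; /* lower notes get costly anyway */

        int nearest = ((nharm + 4) / 8) * 8;    /* closest multiple of 8: 66 -> 64 */
        if (nearest < 8) nearest = 8;
        if (nearest > MAX_HARM) nearest = MAX_HARM;

        *table_index = nearest / 8 - 1;         /* 0-based index into the bank */
        *first_extra = nearest + 1;             /* harmonics 65..66 in the example */
        *last_extra  = nharm;                   /* empty range if nearest >= nharm */
        /* how the external handles a nearest table that overshoots nharm is not
           described above, so this sketch just leaves the extra range empty */
    }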
The sine function itself is also stored in a wavetable of size 2051 (with 3 guard points), but when building the waveform wavetables it uses the real sin() function.
Because the wavetables only go up to 1104 harmonics, any frequency that requires more will start to eat up the CPU; these are the frequencies below the Nyquist limit divided by 1104.
That would be about 20 Hz at 44.1 kHz, 40 Hz at 88.2 kHz, and so on.
Superbordelor, a polyphonic synth for gamepad.
Hello,
I am currently working on a synth for my game controller; this is my first patch and I am still learning the basics.
For the moment the patch is an 8-voice polyphonic synthesizer working with a bunch of phasors. On my controller I have 8 keys representing the major scale, the up and down buttons change the octave, and right and left raise or lower the note by a semitone. Since this is a PS2 controller, the central joysticks make it possible to slide between tones, a bit like a theremin.
I do have a few questions, though:
I want to adapt this little thingy for a MIDI keyboard with a 16-voice polyphonic synth.
What is the best way to build a polyphonic synth?
Is there a way to have a dispatcher that will send the notes to those synth voices? (See the sketch below.)
There should be a way to switch from [phasor~] to [osc~]. I do not want to overload the CPU, and I wonder: when 0 is sent to the frequency of [osc~], does it still use a lot of CPU?
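On the dispatcher question: Pd ships with [poly], which takes pitch/velocity pairs and tags each note with a voice number that you can [route] to your voices. The core allocation logic, as a C sketch (an illustration, not [poly]'s actual implementation):

    #define NVOICES 16

    /* Note-on claims a free voice; note-off frees the voice holding that pitch. */
    typedef struct { int pitch; int busy; } voice;
    static voice voices[NVOICES];

    int note_on(int pitch)
    {
        for (int v = 0; v < NVOICES; v++)
            if (!voices[v].busy) {
                voices[v].busy = 1;
                voices[v].pitch = pitch;
                return v;   /* route this note to synth voice v */
            }
        return -1;          /* all voices busy: steal one or drop the note */
    }

    int note_off(int pitch)
    {
        for (int v = 0; v < NVOICES; v++)
            if (voices[v].busy && voices[v].pitch == pitch) {
                voices[v].busy = 0;
                return v;
            }
        return -1;
    }

As for [osc~] at frequency 0: a signal object still computes every block regardless of its frequency, so the usual way to actually stop an idle voice's DSP is [switch~] inside the voice abstraction.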
Thanks.
Separate control of two midi devices
I've been learning Pd for a couple months now, piecing things together from other people's questions on this forum, but now I have to ask my own question... (which happens to be my first post)
I have been able to successfully control my Novation X-Station synth and/or my Electribe sampler with MIDI messages from Pd. I just bought a second MIDI-to-USB cable (some M-Audio thing) and that works too. But what I can't seem to figure out is how to send one set of MIDI events to the sampler and a separate set of events to the synth.
Specifically, I want to be able to program a beat and control that with one set of MIDI notes, but not have them go through to the synth. And I want to program a synth line, but not have those notes triggering the sampler.
I tried the "use multiple devices" option and both devices show up, but I don't know how to do what I'm trying to do. I read something about maybe using [noteout 1] and [noteout 2] or something like that... I'm kinda stuck. Any suggestions? Thanks.
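What you read about is probably Pd's channel stacking: with multiple MIDI devices open, channels 1-16 address the first device and 17-32 the second, so [noteout 17] should reach channel 1 of your second device. The mapping as arithmetic (a sketch, not Pd code):

    #include <stdio.h>

    /* Pd-style mapping: global channels 1-16 -> device 1, 17-32 -> device 2, ... */
    void split_channel(int pd_channel, int *device, int *midi_channel)
    {
        *device = (pd_channel - 1) / 16 + 1;       /* 1-based device index */
        *midi_channel = (pd_channel - 1) % 16 + 1; /* channel on that device */
    }

    int main(void)
    {
        int dev, ch;
        split_channel(17, &dev, &ch);
        printf("[noteout 17] -> device %d, channel %d\n", dev, ch); /* device 2, channel 1 */
        return 0;
    }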
MIDI devices not showing in 0.40.2
hi,
I'm new to Pd and started working in 0.38.4-extended under WinXP. I just switched to version 0.40.2 (for easier handling of graph-on-parent) and ran into a problem: although my MIDI devices are listed properly when invoking pd with the -listdev %1 %2 %3 %4 %5 %6 %7 %8 %9 command line option, they are not selectable in the MIDI settings (only #1 is shown for input, and #1, 3 and 7 for output). Do you have any idea what the solution might be?
(those are the devices:)
MIDI input devices:
1. MPU-401
2. SB Audigy MIDI IO [E000]
MIDI output devices:
1. Microsoft MIDI Mapper
2. SB Audigy Synth A [E000]
3. MPU-401
4. SB Audigy Sw Synth [E000]
5. SB Audigy Synth B [E000]
6. SB Audigy MIDI IO [E000]
7. Microsoft GS Wavetable SW Synth
Using Pd to edit external synths like Oberheim Xpander
Again, not much activity here because of problems in sysex/MIDI.
What would be nice here for people building external synth editors is working raw MIDI on all platforms and a set of abstractions to dump, store and examine sysex messages. Have you looked through all of extended/gridflow/etc. and still not found something to do it?
Manufacturers' sysex charts differ for each synth, but they basically assign controllers to internal parameters that just aren't realtime controller messages, and you fill out the variable field, usually at least two MIDI bytes (14 bits) or longer variable-length parts.
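For the two-byte case, the packing is simple because MIDI data bytes carry only 7 bits each; a sketch (LSB-first here, but byte order varies by manufacturer, so check the chart):

    #include <stdio.h>

    /* Split a 14-bit parameter value (0..16383) across two 7-bit data bytes. */
    void pack14(int value, unsigned char *lsb, unsigned char *msb)
    {
        *lsb = (unsigned char)(value & 0x7F);        /* low 7 bits */
        *msb = (unsigned char)((value >> 7) & 0x7F); /* high 7 bits */
    }

    int unpack14(unsigned char lsb, unsigned char msb)
    {
        return (msb << 7) | lsb; /* back to 0..16383 */
    }

    int main(void)
    {
        unsigned char lsb, msb;
        pack14(10000, &lsb, &msb);
        printf("10000 -> %02X %02X -> %d\n", msb, lsb, unpack14(lsb, msb));
        return 0;
    }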
They're all published, so setting up new synths is no worry. I would actually cannibalise the old SoundDiver data files, I reckon.
Patco did something with the idea of a "universal synth editor" not too long ago...
IIRC the Xpander is essentially half a Matrix-12 (the Matrix-12 is TWO Xpander voice boards plus a keyboard), and it's a beast of a synth with hundreds of parameters... so yikes! Best to start with the data in a machine-readable form that you can inject your variables into.
> it scares me to use it in case it wipes out all my patches forever!!
Do a sysex dump of your setup and restore it once, to feel confident about hacking the Oberheim via sysex.
Announcing bagoftricks-0.2.8
Yep... stn, bot is a great immediate, self-contained Pd environment. I really like the examples (tutorial) you provided with it: great sounds and clear examples.
Apart from the sequencing side, I looked a bit into the "expandability" of bot, since I really like the approach of its pre-made (and candy-colored) tools like filters, synths and FX. But this is not really easy: for example, adding a synth to the bot arsenal (hardoff's great Juno synth comes to mind right now) is not immediate, nor is taking an FX out of bot and using it in other environments (at the very least you need to make some, mmh, heavy mods). So in the end, if you want some special features, you may feel a bit "narrow" within bot's walls.
On the other side, I think that once one is used to it, bot *is* one of the most powerful environments I've seen in Pd for direct, straight music composing out of the box. (To Andy: looking at your songs, I have the impression of a slightly different approach: whereas in bot the sequencer is completely separated from the synth/effect part, like orchestra and score in Csound, in your songs the division is not so "clear", and the sequencing side seems a bit more integrated with the synth/effect side. Am I right? I still need to find my way.)
When I first started with Pd last year, bot was a great discovery ("wow, this is really cool!"), and for a Pd starter like I was, it helped me a lot in staying inside the Pd world. So, even if it is not complete, even if you are not completely satisfied with it... put it online again!
I saw that later last year you prepared a tool called "mmm": are you still developing it? That was a different approach, and personally I had a bit more difficulty understanding it compared to bot...
All the best,
Alberto
PD SYNTH - PLEASE HELP!
Hello, my name is Martin. I am building a simple Pd patch, but I have a lot of doubts and problems with it.
What I need is a simple synth with ADSR control, 4 presets, simple modulation (AM and FM, sketched below), and a free improvised composition.
What I have done is this (please see my Pd patch, synth.pd).
What I really want to have is something like this:
http://music.ucsd.edu/~tre/ - a simple 6-voice synth with presets
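On the AM and FM requirement: each fits in one line of math, and the audio examples bundled with Pd demonstrate both. A rough C sketch of the equations, not a full synth (all parameter names are mine):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* FM: a sine carrier whose phase is modulated by a sine modulator. */
    double fm_sample(double t, double fc, double fm, double index)
    {
        return sin(2.0 * M_PI * fc * t + index * sin(2.0 * M_PI * fm * t));
    }

    /* AM: the carrier's amplitude is scaled by the modulator. */
    double am_sample(double t, double fc, double fm, double depth)
    {
        return sin(2.0 * M_PI * fc * t) * (1.0 + depth * sin(2.0 * M_PI * fm * t));
    }

In Pd terms, FM is typically patched as a modulator [osc~] scaled and added to the carrier's frequency inlet, and AM is just a [*~] between two signals.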
Can anyone help?
Thank you very much,
Martin.