Managing voices
I get around the cords by using sends and receives. When I create a voice, I create it with two arguments, a name and the parent's ID:
[voice v1 $0]
The only inlets the voice has are for note and velocity.
From within the voice I can access values from the parent:
for instance, $2modfactor would reference the parent's $0modfactor,
so I can change $0modfactor in the main patch and all the voices will receive the value.
For sends that are local to a voice, use $0 inside the voice.
Let's say you want all the voices to use the same table for an LFO.
You would use [tabosc~ $2lfo] inside the voice,
while inside the main patch the table's name is $0lfo.
You can even trigger the LFO in each voice so that it is gated on by a note-on, or leave it free-running. Each voice could in effect scale the frequency of the LFO by some parameter like keyboard note tracking. It gets a little complicated keeping track of everything that is going on, but I think it is simpler than having so many wires.
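The parent/voice pattern described above can be mimicked in plain Python (an analogy I'm adding for illustration only; names like Parent, Voice and registry are hypothetical, and in real Pd the dollar-sign expansion happens at object-creation time):

```python
# Python analogy (not Pd) of the $0/$2 trick: the parent patch owns a
# unique id, each voice gets that id as a creation argument and uses it
# to build "receive names", so setting one parent-scoped value reaches
# every voice without patch cords.

registry = {}  # stands in for Pd's global send/receive namespace

class Parent:
    _next_id = 1000  # mimics Pd handing out a unique $0 per patch

    def __init__(self):
        self.id = Parent._next_id
        Parent._next_id += 1

    def send(self, name, value):
        # like [s $0modfactor] in the main patch
        registry[f"{self.id}{name}"] = value

class Voice:
    def __init__(self, name, parent_id):
        # like [voice v1 $0]: $1 = name, $2 = parent id
        self.name = name
        self.parent_id = parent_id

    def recv(self, name):
        # like referencing $2modfactor inside the voice
        return registry.get(f"{self.parent_id}{name}")

main = Parent()
voices = [Voice(f"v{i}", main.id) for i in range(3)]
main.send("modfactor", 0.5)
print([v.recv("modfactor") for v in voices])  # every voice sees 0.5
```

The point of the analogy is that the unique parent ID namespaces the send/receive names, which is why no cords are needed between the main patch and its voices.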
Managing voices
I have a problem that I cannot figure out.
Say I have a synth with two oscillators, each of them with, for example, 3 voices. Now I want osc 1 to modulate osc 2's frequency by a constant amount of, for example, 100.
It seems that I have to use 3 multipliers and 3 signal cords, so that every voice of osc 1 can be multiplied by 100 and then routed to its voice "twin" in osc 2.
In my case I have 12 voices and I want to add some other modulation sources and targets, so you can imagine that I'd run into problems if I have to use millions of cords for a modulation matrix.
Is there any way around this?
I know a comparison to Reaktor is not possible, but in Reaktor the voices are all transported via one cable and automatically routed to their target.
In Pd, the routing of messages is pretty easy, but I have no clue how I could build a signal bus transporting 3 or more signals with a voice index.
Can someone help?
Edit: I tried converting from the signal to the message domain to benefit from the packing and routing options of messages, but operations on such a bus have to be done with list objects/abstractions, and this is unbelievably CPU-hungry. I attached a patch with two different signal sources, but it takes about 30 percent of CPU usage on my machine.
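For what it's worth, the bus idea can be sketched outside Pd. A minimal Python model (my own illustration, not a working Pd solution) treats a polyphonic signal as a list indexed by voice, so one elementwise operation stands in for N multipliers and N cords:

```python
# Toy model of the "one cable per bus" idea: a polyphonic signal is a
# list with one entry per voice, so a single function call replaces
# N separate multipliers and N separate signal cords.

def modulate(osc1_voices, depth):
    # scale every voice of osc 1 by the same modulation depth
    return [v * depth for v in osc1_voices]

osc1 = [0.5, 0.25, 0.125]   # one sample per voice
mod = modulate(osc1, 100)   # the whole bus in one operation
print(mod)                  # [50.0, 25.0, 12.5]
```

This is exactly the routing Reaktor does implicitly; in Pd it has to be built by hand (or with a multichannel abstraction), which is the poster's complaint.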
Help for filter objects
@toxonic said:
Is it correct that I have to use filters and an amp envelope for each voice in a synth?
Well, you don't have to, but if your aim is creating a "true" polyphonic synth then the answer is yes: each voice has its own envelopes and filters. I'm also working on my main synth (12 voices max) and there's only one resonant filter, because I'm afraid 12 VCFs would not be friendly to my CPU.
But then I find it's OK to have the filter after the actual voices. It would certainly be nice to have each voice have its own filter, but I can live with the limitation.
BECAUSE you guys are MIDI experts, you could well help on this...
Dear Anyone who understands virtual MIDI circuitry
I'm a disabled wannabe composer who has to use a notation package and mouse, because I can't physically play a keyboard. I use Quick Score Elite Level 2 - it doesn't have its own forum - and I'm having one HUGE problem with it that's stopping me from mixing - literally! I can see it IS possible to do what I want with it, I just can't get my outputs and virtual circuitry right.
I've got 2 main multi-sound plug-ins I use with QSE. Sampletank 2.5 with Miroslav Orchestra and Proteus VX. Now if I choose a bunch of sounds from one of them, each sound comes up on its own little stave and slider, complete with places to insert plug-in effects (like EQ and stuff.) So far, so pretty.
So you've got - say - 5 sounds. Each one is on its own stave, so any notes you put on that stave get played by that sound. The staves have controllers so you can control the individual sound's velocity/volume/pan/aftertouch etc. They all work fine. There are also a bunch of spare controller numbers. The documentation with QSE doesn't really go into how you use those. It's a great program but its customer relations need sorting - no forum, Canadian guys who wrote it very rarely answer E-mails in a meaningful way, hence me having to ask this here.
Except the sliders don't DO anything! The only one that does anything is the one the main synth is on. That's the only one that takes any notice of the effects you use, which means you're putting the SAME effect on the WHOLE SYNTH, not just on one instrument sound you've chosen from it. Yet the slider the main synth is on looks exactly the same as all the other sliders. The other sliders just slide up and down without changing the output sounds in any way. Neither do any effects plugins you put on the individual sliders change any of the sounds. The only time they work is if you put them on the main slider that the whole synth is sitting on - and then, of course, the effect is applied to ALL the sounds coming out of that synth, not just the single sound you want to alter.
I DO understand that MIDI isn't sounds, it's instructions to make sounds, but if the slider the whole synth is on works, how do you route the instructions to the other sliders so they accept them, too?
Anyone got any idea WHY the sounds aren't obeying the sliders they're sitting on? Oddly enough, single-shot plug-ins DO obey the sliders perfectly. It's just the multi-sound VSTs whose sounds don't individually want to play ball.
Now when you select a VSTi, you get 2 choices - assign it to a track or use All Channels. If you assign it to a track, of course only instructions routed to that track will be picked up by the VSTi. BUT - they only go to the one instrument on that VST channel. So you can then apply effects happily to the sound on Channel One. I can't work out how to route the effects for the instrument on Channel 2 to Channel 2 in the VSTi, and so on. Someone told me on another forum that because I've got everything on All Channels, the effects signals are cancelling each other out; I can't find out anything more about this at the moment.
I know, theoretically, if I had one instance of the whole synth and just used one instrument from each instance, that would work. It does. Thing is, with Sampletank I got Miroslav Orchestra and you can't load PART of Miroslav. It's all or nothing. So if I wanted 12 instruments that way, I'd have to have 12 copies of Miroslav in memory and you just don't get enough memory in a 32 bit PC for that.
To round up: what I'm trying to do is set things up so I can send separate effects - EQ etc. - to separate virtual instruments from ONE instance of a multi-sound sampler (Proteus VX or Sampletank). I know it must be possible because the main synth takes the effects OK; it's just routing them to the individual sounds that's thrown me. I know you get one-shot sound VSTis, but - no offence to any creators here - the sounds usually ain't that good from them. Besides, all my best sounds are in Miroslav/Proteus VX and I just wanted to be able to create/mix pieces using those.
I'm a REAL NOOOB with all this so if anyone answers - keep it simple. Please! If anyone needs more info to answer this, just ask me what info you need and I'll look it up on the program.
Yours respectfully
ulrichburke
Help with a midi-in issue
OK then... I've got another question.
This thing works pretty well, although it's unfinished.
The problem I am having now is that I would like to be able to use chords with different numbers of voices. Right now it is set up to use 3 voices, and that works pretty well. Using a different number of voices doesn't really work right now because of the way the higher octaves of the arpeggio are handled. I've set it up to do chords with 4 or 5 voices and they work fine, but I'm not sure how to set it up to handle chords with a variable number of voices. I'd like to give it the ability to take 8 voices.
In the patch, after it unpacks the chord, it populates number boxes with the voices that come in. When changing to a chord with 3 voices from one with a higher number, the boxes that are unused stay at their previous values. If I could figure out how to reset them to 0, I could use booleans to do the switch... but I don't know how to do that.
Any ideas?
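The reset idea can be sketched in plain Python (a hypothetical illustration, not the actual patch): keep a fixed pool of voice slots and overwrite the unused ones with 0 on every new chord, so stale values never linger and a 0 can serve as the "unused" boolean.

```python
# Fixed pool of 8 voice slots; every incoming chord is padded out to
# the full slot count with zeros, so slots left over from a bigger
# chord are always reset.

MAX_VOICES = 8

def load_chord(notes):
    slots = list(notes[:MAX_VOICES])          # clip to the pool size
    slots += [0] * (MAX_VOICES - len(slots))  # zero the unused slots
    return slots

print(load_chord([60, 64, 67, 71, 74]))  # 5-note chord
print(load_chord([60, 64, 67]))          # smaller chord: rest reset to 0
```

In a Pd patch the equivalent move would be sending 0 to the leftover number boxes before unpacking each new chord.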
Interaction Design Student Patches Available
Greetings all,
I have just posted a collection of student patches for an interaction design course I was teaching at Emily Carr University of Art and Design. I hope that the patches will be useful to people playing around with Pure Data in a learning environment, installation artwork and other uses.
The link is: http://bit.ly/8OtDAq
or: http://www.sfu.ca/~leonardp/VideoGameAudio/main.htm#patches
The patches include multi-area motion detection, colour tracking, live audio looping, live video looping, collision detection, real-time video effects, real-time audio effects, 3D object manipulation and more...
Cheers,
Leonard
Pure Data Interaction Design Patches
These are projects from the Emily Carr University of Art and Design DIVA 202 Interaction Design course for Spring 2010 term. All projects use Pure Data Extended and run on Mac OS X. They could likely be modified with small changes to run on other platforms as well. The focus was on education so the patches are sometimes "works in progress" technically but should be quite useful for others learning about PD and interaction design.
NOTE: This page may move, please link from: http://www.VideoGameAudio.com for correct location.
Instructor: Leonard J. Paul
Students: Ben, Christine, Collin, Euginia, Gabriel K, Gabriel P, Gokce, Huan, Jing, Katy, Nasrin, Quinton, Tony and Sandy
GabrielK-AsteroidTracker - An entire game based on motion tracking. This is a simple arcade-style game in which the user must navigate the spaceship through a field of oncoming asteroids. The user controls the spaceship by moving a specifically coloured object in front of the camera.
Features: Motion tracking, collision detection, texture mapping, real-time music synthesis, game logic
GabrielP-DogHead - Maps your face from the webcam onto different dogs' bodies in real time, with an interactive audio loop jammer. Fun!
Features: Colour tracking, audio loop jammer, real-time webcam texture mapping
Euginia-DanceMix - Live audio loop playback of four separate channels. Loop selection is random for first two channels and sequenced for last two channels. Slow volume muting of channels allows for crossfading. Tempo-based video crossfading.
Features: Four channel live loop jammer (extended from Hardoff's ma4u patch), beat-based video cross-cutting
Huan-CarDance - Rotates 3D object based on the audio output level so that it looks like it's dancing to the music.
Features: 3D object display, 3D line synthesis, live audio looper
Ben-VideoGameWiiMix - Randomly remixes classic video game footage and music together. Uses the wiimote to trigger new video by DarwiinRemote and OSC messages.
Features: Wiimote control, OSC, tempo-based video crossmixing, music loop remixing and effects
Christine-eMotionAudio - Mixes together video with recorded sounds and music depending on the amount of motion in the webcam. The intensity of the music and the speed of video playback increase with more motion.
Features: Adaptive music branching, motion blur, blob size motion detection, video mixing
Collin-LouderCars - Videos of cars respond to audio input level.
Features: Video switching, audio input level detection.
Gokce-AVmixer - Live remixing of video and audio loops.
Features: video remixing, live audio looper
Jing-LadyGaga-ing - Remixes video from Lady Gaga's videos with video effects and music effects.
Features: Video warping, video stuttering, live audio looper, audio effects
KatyC_Bunnies - Triggers video and audio using multi-area motion detection. There are three areas on each side to control the video and audio loop selections. Video and audio loops are loaded from directories.
Features: Multi-area motion detection, audio loop directory loader, video loop directory loader
Nasrin-AnimationMixer - Hand animation videos are superimposed over the webcam image and chosen by multi-area motion sensing. Audio loop playback is randomly chosen with each new video.
Features: Multi-area motion sensing, audio loop directory loader
Quintons-AmericaRedux - Videos are remixed in response to live audio loop playback. Some audio effects are mirrored with corresponding video effects.
Features: Real-time video effects, live audio looper
Tony-MusicGame - A music game where the player needs to find how to piece together the music segments triggered by multi-area motion detection on a webcam.
Features: Multi-area motion detection, audio loop directory loader
Sandy-Exerciser - An exercise game where you move to the motions of the video above the webcam video. Stutter effects on video and live audio looper.
Features: Video stutter effect, real-time webcam video effects
Serious problem with poly object!!
@skatias said:
But when I press a note twice and I want it sustained by the finger while I release the pedal, the pedal sends all notes off and the finger sustain doesn't work.
I don't think that should happen unless the previous note has been stolen. But if that is the case, you might want to consider handling this inside the voice itself. I don't have time to make an example right now, but I'll try to explain. Right now I imagine it's set up to go into its release stage when it receives a velocity 0. You could put a [ctlin] inside the voice and have it check whether the sustain pedal is down when it receives velocity 0. If it is, have the voice wait for the pedal to be released before going into its release stage; otherwise let the note-off pass. This should definitely prevent new voices from being turned off.
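In plain Python, that per-voice sustain check might look like this (a rough sketch added for illustration; the names and the CC 64 threshold are my assumptions, not the poster's patch):

```python
# A velocity-0 note-off is deferred while the sustain pedal (CC 64)
# is down; deferred notes are flushed into their release stage when
# the pedal comes up.

pedal_down = False
held = set()      # notes whose note-off arrived while the pedal was down
released = []     # notes actually sent into their release stage

def ctlin(cc, value):
    """Mimics [ctlin]: watch controller 64 for pedal state."""
    global pedal_down
    if cc == 64:
        pedal_down = value >= 64
        if not pedal_down:          # pedal up: flush deferred releases
            released.extend(sorted(held))
            held.clear()

def note_off(pitch):
    if pedal_down:
        held.add(pitch)             # wait for the pedal
    else:
        released.append(pitch)      # go straight to the release stage

ctlin(64, 127)      # pedal down
note_off(60)        # deferred
note_off(64)        # deferred
ctlin(64, 0)        # pedal up: both notes released now
print(released)     # [60, 64]
```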
As for making sure a specific note isn't duplicated, you'll have to keep track of which notes have been sent to which voices. Here's what I'm thinking: for every note-on/note-off coming out of [poly], store the note value in a table that's the same length as the number of voices. Use [poly]'s voice number as the table index; for every note-on store the note number, and for every note-off store something out of range, like -1. When a new note-on comes through, run a quick check through the table to see if the same pitch is already in use before [route]ing it. If it finds a match, change the voice number to the table index so that the event goes to the same voice, and retrigger the note, making sure to reset it if it's being held by the sustain pedal.
Or something like that.
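Here is a rough Python model of that voice table (my own sketch of the idea, not actual Pd code; the sustain-pedal retrigger case is left out for brevity):

```python
# [poly]'s voice number indexes a table of sounding pitches; a note-on
# whose pitch is already in the table is redirected to the voice that
# is holding it, and a note-off marks its voice free with -1.

N_VOICES = 4
table = [-1] * N_VOICES  # -1 means "this voice is free"

def route(voice, pitch, velocity):
    """Return the voice index that should actually get this event."""
    idx = voice - 1  # [poly] numbers voices from 1
    if velocity == 0:                 # note-off: free the voice
        table[idx] = -1
        return idx
    if pitch in table:                # duplicate: retrigger same voice
        idx = table.index(pitch)
    table[idx] = pitch                # note-on: claim the voice
    return idx

print(route(1, 60, 100))  # 0: voice 1 takes note 60
print(route(2, 60, 100))  # 0: duplicate 60 retriggers voice 1
print(route(2, 64, 100))  # 1: voice 2 takes note 64
print(route(1, 60, 0))    # 0: note-off frees voice 1
```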
Filtering speech from adc
You mean you have a stereo music signal, e.g. a complete band with guitars, bass, drums and vocals, on the inlet, and you only want the voices on the outlet? No, I'm pretty sure that's not possible, or in any case not a trivial task.
If you aim at the opposite of those effects that filter out a voice from a music file, so you get the instruments only as a result: those effects do not really filter the voice, they filter everything in the middle of the stereo field, such as the voice. Unfortunately the bass guitar and other instruments are often panned to the middle too, so they will be filtered as well.
Edit: if you have a constant noise in a voice recording, then it is possible to reduce it more or less without big changes to the voice. This is a task for FFT analysis and FIR filters. There's a nice freeware plugin called ReaFIR; look here:
http://www.reaper.fm/reaplugs/
"winds of doom" soundscape-generator
@toxonic said:
I have to improve some little things in my structure to save some CPU power (for example, I use a noise~ object in every(!) voice, but I only need one for all voices) - at the moment it's not possible to run 100 voices on my machine. But I find this many voices are not needed to get a good and fat sound.
Yes, a single instance of noise~ would help in reducing the CPU load. Another (maybe useful) suggestion is to use a [switch~] for the voices that are not in use at a given time. Anyway you're right, a nice sound does not need too many voices (96 are a lot, 128 too many).
@toxonic said:
What I found out in your patch is that reducing the voices leads to a reduction of mainly high voices - was this your intention?
I approached that problem by using a "frequency spacing" parameter, so even with few voices I can span the 20-20000 Hz region (each "voice" has a centre frequency f = f0 x n x dF, where f0 is the base frequency, n is the voice number and dF is the frequency spacing). In that case, if you reduce the number of voices and want to preserve the full spectrum span, you need to increase the frequency spacing.
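As a quick numeric check of that formula (a Python sketch I'm adding; f0, n and dF as defined above), halving the voice count while doubling the spacing dF keeps the top of the spectrum in the same place, just with coarser coverage:

```python
# Each voice n sits at f = f0 * n * dF (f0 = base frequency,
# dF = frequency spacing), per the description above.

def centre_freqs(f0, n_voices, dF):
    return [f0 * n * dF for n in range(1, n_voices + 1)]

many = centre_freqs(20, 100, 10)  # 100 voices, spacing 10
few = centre_freqs(20, 50, 20)    # 50 voices, doubled spacing
print(many[-1], few[-1])          # both top out at 20000 Hz
```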
Anyway, I really enjoyed winds of doom, thanks for sharing.
I particularly like the "drops" effect (preset 2, if I'm not wrong); it's really good.
@toxonic said:
P.S.: a Skrewell adaptation would be a nice idea too, but I haven't gotten into it yet.... I hate reverse engineering...
Skrewell is an instrument for which I'd pay to have it ported to Pd (OK, kudos, not real euros)...
In case you change your mind, let me know; maybe sometime we can join efforts on this...
Anyone else? 
Alberto
"winds of doom" soundscape-generator
Cool, nice-sounding patch too! It sounds a little different from mine, but great too. I have to improve some little things in my structure to save some CPU power (for example, I use a noise~ object in every(!) voice, but I only need one for all voices) - at the moment it's not possible to run 100 voices on my machine. But I find this many voices are not needed to get a good and fat sound.
What I found out in your patch is that reducing the voices leads to a reduction of mainly high voices - was this your intention? I solved that by creating the voices as abstractions dynamically with their basic pitch as an argument - but I create them in steps of ten and use modulo to begin again at the lowest voice.... sorry, my English is too poor.
What I mean is, when I have a count of ten voices I have already covered the whole frequency spectrum.
Thank you for the link, I'll have a look at your other creations later!
P.S.: a Skrewell adaptation would be a nice idea too, but I haven't gotten into it yet.... I hate reverse engineering...