Schenkerian generation of music box music
I like the weighted random idea —
I think I'm going to start by implementing the bare-bones core structure of a piece, starting with the root note and mode, and then generating a chord structure based on this mode. The idea of Schenkerian analysis is to start by analyzing the large-scale chord structure of a piece (to find the overarching I-V-I motion) and then move inwards layer by layer in terms of complexity until the whole piece is analyzed down to the individual notes. I'm going to approach generation in the same way — when I get to the point where I am generating the smaller chord progressions I will probably use a weighted random scheme of commonly used motives.
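To make the weighted-random-motive idea concrete, here's a minimal sketch. The motive table and its weights are made-up placeholders, not taken from any actual analysis; a real version would key the table on which structural chords it has to connect.

```python
import random

# Hypothetical weighted table of small progressions ("motives") used to
# elaborate the background I-V-I motion; the weights are placeholders.
MOTIVES = [
    (["I", "IV", "V", "I"], 4),
    (["I", "ii", "V", "I"], 3),
    (["I", "vi", "IV", "V", "I"], 2),
]

def pick_motive(rng=random):
    """Weighted random choice of one motive from the table."""
    progressions, weights = zip(*MOTIVES)
    return rng.choices(progressions, weights=weights, k=1)[0]

def generate_middleground(n_phrases=4):
    """Elaborate the background into a chord sequence, phrase by phrase."""
    chords = []
    for _ in range(n_phrases):
        chords.extend(pick_motive())
    return chords
```

The next inward layer would then elaborate each chord of this sequence into notes the same way.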
2016_05_06 - Trip Tech, a Trio for Three Strings and one Voice
Part of me wishes I had not given this piece a name, but what's done is done; it is now time for me to release it Into The Wild.
I recorded it in 4 separate improv takes, each one adding a new layer to the previous ones as I listened with Audacity passthru, with vocals last.
The first track used only vcompander and the DIY2 compressor.
The second track used the previous settings plus Puckette's reverb2.
The third track used the first track's settings (without the reverb) and added mmb's resonant filter.
The final (vocal) track was as clean as I could get it with just a little reverb.
The intention was to play as straight from the Heart as I could and see if I could layer/fold my musical ideas onto themselves and still have them resonate as a cohesive whole.
In the end that will be (mostly) for others to decide, but for me I Did achieve that goal with about 95-96% accuracy.
Peace and good will to all of you.
-svanya
2016_05_06 - Trip Tech, a Trio for Three Strings.mp3
p.s. I deducted the 4-5% for the sound quality of the first track, because it sounded really garbled: any advice/suggestions on that front would be much appreciated.
How to make a simple loop pedal?
Here was my take on it.
I had my guitar input via adc~, and I gutted an old usb keyboard so it only had a few keys left so I could use my foot to control it.
Hold down the rec loop1 button to lay down the first layer, then use loop2 to stack as many layers on top of it as you like.
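For anyone who wants the record/overdub logic spelled out, here is a rough sketch in plain code, with Python lists standing in for Pd's audio buffers and one sample processed at a time (the class and its flags are my own illustration, not the actual patch):

```python
# Sketch of the two-button loop pedal logic: loop1 records the first
# layer and fixes the loop length; loop2 overdubs on top of it.

class Looper:
    def __init__(self):
        self.loop = []             # recorded layers, mixed together
        self.pos = 0               # current playback position
        self.recording_first = False  # held down = rec loop1
        self.overdubbing = False      # held down = rec loop2

    def process(self, sample):
        """Feed one input sample, get one output sample."""
        if self.recording_first:
            self.loop.append(sample)   # loop1: lay down the first layer
            return sample
        if not self.loop:
            return sample              # nothing recorded yet, pass through
        out = self.loop[self.pos]
        if self.overdubbing:           # loop2: stack a new layer on top
            self.loop[self.pos] += sample
            out += sample
        self.pos = (self.pos + 1) % len(self.loop)
        return out
```

In the Pd patch the same thing happens with [tabwrite~]/[tabread~] on a table whose length is set when the loop1 button is released.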
Flatten GEM and remanipulate it?
Here is what I hope to do:
Chroma key out parts of one gemhead film layer,
to reveal parts of a film playing behind it (another gemhead layer)...
(Can I) flatten these two gemheads and then manipulate the result further, i.e.:
chroma key out more parts (originally from the top gemhead layer),
to reveal another film playing in the background?
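To make the intended compositing concrete, here's the per-pixel math I'm after, sketched outside GEM (the key colours, threshold, and tiny three-pixel "frames" are just placeholders):

```python
# Chained chroma keying on tiny "frames" (lists of RGB tuples), purely to
# illustrate flattening two keyed layers and keying the result again.

def close_to(c1, c2, thresh=30):
    """True if two RGB colours are within the key threshold."""
    return all(abs(a - b) <= thresh for a, b in zip(c1, c2))

def chroma_key(top, bottom, key):
    """Where the top frame matches the key colour, show the bottom frame."""
    return [b if close_to(t, key) else t for t, b in zip(top, bottom)]

GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

top = [GREEN, (200, 50, 50), GREEN]
middle = [BLUE, BLUE, (50, 50, 200)]
background = [(255, 255, 255)] * 3

flat = chroma_key(top, middle, GREEN)       # first key: top over middle
final = chroma_key(flat, background, BLUE)  # key the flattened frame again
```

So the question is really whether GEM can give me that intermediate `flat` frame as something I can key a second time.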
Any help would be much appreciated!
Thanks
Loading a folder of audio files
Interesting topic, guys!
Do I understand correctly that you're trying to assign numbers to samples, which are then used to select them out of a certain folder?
Loading audio files like that could easily be used for a sampler in the style of
"ParamDrum" http://reaktortips.com/2011/03/paramdrum-3-is-here.html
or "S-Layer" http://twistedtools.com/shop/reaktor/s-layer/ .
I could also imagine loading sample combinations using an interactive genetic algorithm that lets you listen back to new sample combinations in a certain mixture... really inspiring.
Think of combining that sample loading mechanism with a sequencer using a Markov-chain-matrix like used in
"sector" http://createdigitalmusic.com/2014/02/sector-stuttering-stochastic-sample-slicer-using-probability-curving-lines-ipad/ !
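A rough sketch of the Markov-chain-matrix idea in code (the matrix below is an arbitrary example of mine, not taken from sector): each row gives the probabilities of jumping from one sample slot to each possible next slot.

```python
import random

# Hypothetical transition matrix over 4 sample slots: row i gives the
# probabilities of moving from slot i to each next slot.
MATRIX = [
    [0.1, 0.6, 0.2, 0.1],
    [0.0, 0.1, 0.8, 0.1],
    [0.3, 0.0, 0.2, 0.5],
    [0.7, 0.1, 0.1, 0.1],
]

def step(current, rng=random):
    """Pick the next sample slot according to the current row."""
    return rng.choices(range(len(MATRIX)), weights=MATRIX[current], k=1)[0]

def sequence(start, n, rng=random):
    """Generate a sequence of n sample slots starting from 'start'."""
    seq = [start]
    for _ in range(n - 1):
        seq.append(step(seq[-1], rng))
    return seq
```

Feeding such a sequence into the sample-loading mechanism would give you a sequencer that stutters probabilistically between the loaded samples.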
wicked...
Yepp...I guess I'll use this stuff, too.
Great work!
Need help to slightly modify a PD project (Rhythmboy)
The samples in Traktor only trigger with the length of the note that comes out of the Rhythmboy, so especially for longer samples, the length should be high enough, something like 5000 ms.
But if the length of one note is longer than the time until the next note, the next note will not be played - or at least it seems like some notes are randomly skipped, and the chance of a skip rises with the length:
10 ms -> everything fine, but even the bass drum is just some ticktickticktick
5000 ms -> the bass drum is a whoompf, as it should be, but is only triggered once every few seconds
So I need to make sure that the note length is always long enough to play the sample as long as it should (until the sample is over, or the same sample is triggered again), but still short enough not to disturb the triggering of the next sample. The problem is that we can't use fixed or calculated lengths, because the rhythm may not be absolutely straight: we need the exact length at the moment the first note is played, but at that point we don't yet know when the next note will be triggered.
Because of this I think it is better to concentrate on properly retriggering the notes.
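The retriggering idea boils down to: whenever a note-on arrives for a pitch that is still sounding, emit its note-off first, so long note lengths can never block the next hit. A rough sketch of that event logic (just the bookkeeping, no MIDI I/O; the function name is mine):

```python
# Sketch of "note-off before retrigger": track which pitches are sounding
# and emit a note-off immediately before a repeated note-on.

def retrigger_stream(events):
    """events: list of ('on'|'off', pitch) tuples. Returns a cleaned list."""
    sounding = set()
    out = []
    for kind, pitch in events:
        if kind == "on":
            if pitch in sounding:
                out.append(("off", pitch))  # kill the old note first
            sounding.add(pitch)
        else:
            sounding.discard(pitch)
        out.append((kind, pitch))
    return out
```

In Pd this would sit between the note generator and [noteout], with something like a per-pitch flag doing the job of the set.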
Here's the modified patch, my modifications are mainly on the first layer (MIDI clock in, metronome, syncing "play" and "reset" with the clock), and on the track layers (the part you already made a screenshot from, adapted to the midiclock, the [send subdivision_track] part is only responsible for triggering the lights and thus is not that time critical).
Furthermore, I've flipped the rotaries upside down (the QuNeo sends values like a clock face, with 0 and 127 at the top, not like a normal rotary, where both are at the bottom), and changed the output notes and ranges to work with Traktor.
Maybe I didn't find the cleanest solutions, but at least it works.
Noise filter for microphone (Live Audio)
Well, the forum crash seems to have eaten my last post.
I have made a noise filter to clean up audio signals live.
Other noise removal filters need to have a noise sample selected and try to remove that noise from the complete track - they only work offline.
This patch works online. It removes any stationary noise from the signal and needs no user adjustment other than being told how much to reduce the noise.
"Stationary noise" is a signal whose frequency content and amplitude stay (more or less) constant for over 1 second. Fan hum is a good example, as is the more or less "white" noise of the fan's airflow. The sound of a car motor travelling at constant speed is also a good example.
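To illustrate the general principle behind such an estimator (this is my own sketch of the idea, not the actual contents of NoiseLevelDetector.pd): track a running minimum of smoothed band energy over roughly a second. Speech raises the energy only in bursts, so the minimum keeps tracking the noise floor underneath.

```python
from collections import deque

# Running-minimum noise-floor estimator over a window of smoothed
# per-frame energies; window length and smoothing are placeholders.

class NoiseLevelDetector:
    def __init__(self, window=50, smooth=0.9):
        self.window = deque(maxlen=window)  # ~1 s of frames at 20 ms each
        self.smooth = smooth
        self.energy = 0.0

    def update(self, frame_energy):
        """Feed one frame's energy, get the current noise-floor estimate."""
        self.energy = self.smooth * self.energy + (1 - self.smooth) * frame_energy
        self.window.append(self.energy)
        return min(self.window)  # stationary floor survives speech bursts
```

The filter stage then attenuates each band in proportion to how close the band's signal is to this floor.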
It will also kill feedback squeal cold, even at the lowest settings.
The patch is built in layers, and the lower layers can be used independently or combined and used to build different filters.
The attached zip file includes all the components from the lowest level up to a complete demonstration that takes in audio from a microphone and puts out filtered audio on the line out. It also includes a set of help files that describe the function and use of the various modules.
Included modules:
NoiseLevelDetector.pd - estimates the amplitude of the stationary noise
NoiseFilter.pd - attenuates the signal based on the amplitude from NoiseLevelDetector.pd. Since it is more effective at high frequencies, it is best to feed it band-limited signals and use multiple filters to cover the desired audio range.
BandLimitedNoiseFilter.pd - a NoiseFilter that only works on a specified frequency range.
MultibandNoiseFilter.pd - a complete filter that covers the range from 40 Hz up to 22000 Hz to filter the complete audio spectrum.
Test.pd - demo program that demonstrates the use of MultibandNoiseFilter.pd
It works best for speaking voices; singing tends to be more stationary, so it risks being treated as noise. It could be adapted to singing voices by changing a single value in one of the lower-level blocks.
The original idea was to create a filter for removing car noise from microphone audio for two-way radios. When used to cover just the range from 300 Hz to 3000 Hz, it does a very good job.
The biggest disadvantage is that it will start making "musical noise" if there is a lot of noise and the attenuation is set high. It also adds a slight echoing quality to the filtered audio.
The project is hosted on GitHub: PureData NoiseFilter project.
Output non audio and non MIDI signals outside Pd
Hello,
Thanks for your help. Ok I see how to operate.
I'm wondering if it's possible to do something like Bome's MIDI Translator or MIDI Stroke: with those two, the keyboard shortcut is sent "somewhere in the system", i.e. into a layer that is directly accessible by any active application. I don't think they use a specific API for specific applications; this is my feeling, as I have used them with various applications without any additional stuff to install.
So do you think it would be possible to make an API between the MIDI->keyboard patch and any application? An API that takes the keyboard order and puts it into the "basic system layer = keyboard input", where any application is constantly listening?
(you have probably realized by now that I'm not a computer scientist...).
Thanks bye
Help with Pd controlling effects in Logic using ctlout
Hi, don't know if this is still relevant but anyways, there is a way without Logics automation menu.
Here is what I do:
(I'm not sure if it is the slickest thing to do, but it works)
Logic uses its own "control language" for its own and external VST/AU plugIns.
You will have to translate your MIDI CC into Logic's language.
In Logic's "environment", on the "mixer" layer, simply connect a "monitor" behind the channel strip in which your filter plug-in is located (the cable icon in the upper right corner of the channel object), so that you can see the messages going out of your channel.
If you now turn a knob in your plugIn (let's say, the cut-off frequency), you should see a message in the monitor that looks just like a cc message, except that it is marked with an "F" instead of the cc icon.
Now, on the "click&ports"-layer grab the port on which your cc-data is coming in from the "physical input" and connect it to a "transformer". Configure the "transformer" so that it takes the controller you want to use to control the cut-off frequency and changes it into the "fader"-control data you need for the cut-off-parameter in your plugIn. You will have to change the "Channel" and "Data Byte 1".
The mapping function in the transformer is the way to go, if you want to control more than one parameter in the plugIn.
Connect the "transformer" output to the "Channel" (alt-click on the output cable icon to connect between layers) and voilà! When you send the cc to Logic, you should see the parameter change in the plugIn GUI.
In this patch, the incoming data goes only to the cable and not to the "SUM" output of the "physical input" object. So it no longer reaches the "Sequencer" by default, and you won't see any incoming data in the transport bar. (This is due to the hierarchy of Logic, in which the "Environment" is like a shell around the "Sequencer".) To Y-split the incoming data, simply patch a "Monitor" as the first object, which will give you multiple outputs - in case you want to control several channel strips via the same port.
This is of course a "fixed" patch, but Logic lets you patch in a way that you can change the destination of the cc data within Logic (in the environment menu, New->Fader->Specials is the awesome "cable switcher").
Hope this helps.
If anyone has a better solution, please let me know.
Regards,
j
(If this is nothing new, please excuse me. I added some explanations in case someone who doesn't know that much about Logic's environment finds this.)
GEM and multiple video signals
Is there a way to take multiple video signals using GEM, sample only portions of them, then recombine them into a single frame?
For example, I need to divide the video into 4 separate sections across the screen - think of layers stacked on top of one another. The video will be sampled from multiple input streams, such as camcorders, with each camcorder covering one zone in the stack.
So, simply put: can you select a region in the GEM output to display? Something like a layer mask?
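In plain code, the compositing I'm imagining looks like this (frames as nested lists of rows, just to make the zones concrete; GEM would presumably do the equivalent with pix objects or texture coordinates):

```python
# Combine 4 source "frames" into one output frame by taking one
# horizontal band from each source - a stacked layer mask in effect.

def combine_bands(frames):
    """frames: list of 4 frames (each a list of rows), all the same size."""
    height = len(frames[0])
    band = height // len(frames)
    out = []
    for i, frame in enumerate(frames):
        # take band i from source i
        out.extend(frame[i * band:(i + 1) * band])
    return out
```

So each camcorder would only ever contribute its own horizontal slice to the final frame.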



