Conway's Game of Life implementation with data structures
I implemented a third voice plus a scale change from @weightless. The concept of the third voice is that the second voice gets transformed into the third voice after it is played. If the third voice is not deleted by new living cells, it disappears after it is played. (The voices only send MIDI values at the moment; there's no sound synthesis yet...): conway_2.0e_test.zip
Kangtaum - Scriptophonic microtonal metal generator
Kangtaum.zip
Kangtaum is an attempt at a real-time implementation of the text-to-(Black Metal)music algorithm proposed by Dave Tremblay.
The algorithm was intended for use with Tolkien’s writing; this version will produce a microtonal scriptophonic ‘metal’ composition from any text file that is input. The default is the poem She Bomber by Eliza Gregory. The text currently being converted is printed to the Pd window.
Here's how it works:
·The octave is divided into 26 equal steps (26 microtones, one for each letter).
·The ordering of the letters, from low to high, is:
E-O-V-I-Q-C-F-A-J-Z-P-H-B-Y-S-R-K-D-T-L-X-M-N-G-W-U, based on their frequency of use in the sample text (Tolkien).
·One letter represents a duration of 1/8th note.
·A comma, parenthesis or semicolon is a ‘tie’: the last note played (normally 1/8th duration) is extended to a ¼.
·A full stop, colon, exclamation or question mark is a pause of 1/8th measure.
·There are three transcription tracks:
- The melody: It is each letter and punctuation in the text played one after the other.
- The chords: a chord lasts as long as a word in the melody (for example, a three-letter word will last 3 eighth notes, and a nine-letter word will last 9 eighths). It comprises all the different notes/letters of the word played at once.
- The bass: the bass track or chord root consists of the first letter of each word, played for the duration of the word. For example, the words THAN and NATH have the same notes and length, but the root and melody of the chords will be different.
·Reverberation for the composition is inversely proportional to the size of the current text chunk or sentence being converted into music. The bigger the chunk, the smaller the virtual space.
·The stereo position of the melody is controlled by the length of the word currently being sonified.
·A counterpoint line is generated from a reordering of the current word/chord’s notes from low to high, which is arpeggiated. The balance between melody and counterpoint is controlled by word length. The longer the word the more melody is present.
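The transcription rules above can be sketched in a few lines of Python. This is only an illustration of the mapping, not the patch itself; the base frequency and the event format are my own assumptions:

```python
# Sketch of the letter-to-microtone rules described above.
# BASE_FREQ is a hypothetical reference pitch, not taken from the patch.

# Letter ordering from low to high, by frequency of use in the sample text.
ORDER = "EOVIQCFAJZPHBYSRKDTLXMNGWU"

BASE_FREQ = 110.0  # assumed pitch for the lowest letter, E


def letter_freq(ch):
    """Map a letter to one of 26 equal divisions of the octave (26-EDO)."""
    step = ORDER.index(ch.upper())          # 0..25
    return BASE_FREQ * 2 ** (step / 26.0)


def melody(text):
    """Produce (frequency, duration-in-eighths) events for the melody track.
    None as a frequency stands for a pause."""
    events = []
    for ch in text:
        if ch.upper() in ORDER:
            events.append((letter_freq(ch), 1))   # one 1/8th note
        elif ch in ",();" and events:
            freq, dur = events[-1]
            events[-1] = (freq, dur + 1)          # tie: 1/8th becomes 1/4
        elif ch in ".:!?":
            events.append((None, 1))              # pause of 1/8th
    return events
```

The chord and bass tracks would follow the same mapping, grouped per word instead of per letter.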
This isn't a finished patch; I've kind of reached a dead end with it, so I thought I'd open it up and see if anyone else would like to chip in. I'm really interested in developing the synthesis side of things and making it sound more METAL, I suppose.
If nothing else, hopefully some of the list processing is useful to someone working to convert text to music. Many thanks to those on the forum who helped me work some of it out or provided solutions.
This is a video of a slightly earlier version:
Here is an example of Dave's music produced with the original algorithm.
Polyphonic voice management using [poly]
@Fauveboy said:
but then if I load the subpatch gridSampler 1 and all the $1 inside are 1 what does that mean for the pd voice and the numbers it gets?
If the "pd voice" instances are subpatches (like they are now), the argument to gridSampler is passed down to them as well. You can see this if you open one of the pd voice subpatches; they are called "voice (1)".
@Fauveboy said:
is it possible for it still to read the array from gridSampler 1?
If you turn the pd voice into an abstraction, it is still possible to read from the same table located in the gridSampler patch. The problem you have here, as far as I understand it, is not that the subpatches are addressing the same table, but that they are doing so with the same phase. To solve this, I think you need abstractions for the voices, and the reading mechanism for each of them (i.e. the note each voice plays) needs to be unique; you do that by using $0 inside the voice abstractions.
Pándinus: Who is up for collaboration? New synth, plus adding and fixing features in Iannix (source code) (experienced); cash involved.
So, I really love Iannix and spend a lot of time on it, but a lot of work must be done. I've asked the Iannix people to help me out more, but they don't have the time...
I need to fix the 3D and add color the way I want it from the Iannix source code, among a few other things, and make a program compatible with Iannix to make sound sculptures.
Another of my projects is to make a physical synth, starting with Pd, for multi-parametric music.
This involves continuums only: the synth should have up to 30 voices, with as many parameters as possible per voice via continuum (sliding parameters with as much resolution as possible). The synth will be a drawing synth to be used with Iannix. Please, serious interest only.
Pay for support.
Noob Trying to Create a MIDI Chorder/Harmonizer
Yet more progress!
But still stuck on sending note-off messages to a note after its number has changed. Maybe there's something to do with a cold inlet working as memory? Wait! Might have found part of the solution…
Followed the first two parts on the synth creation tutorial on Libre Music Production,
(The third and last part of the LMP tutorial has to do with filters and UI, so it shouldn’t have an answer to my noteoff issue.)
Through that tutorial, was able to make a simple polyphonic synth which takes MIDI in and outputs ADSR-enveloped notes to the DAC. So far, so good.
Added a fifth to the mix. Still works. No stuck note.
Then tried adding a third note which progressively goes up with a counter… Boom, note-off problem again. It does make some sense: need to trigger a velocity of zero for the previous note. But this is where memory would come in handy.
Found part of a solution in using the right inlet of a [float] object,
libremusic-synth.pd
Now, the synth produces the correct effect, even with multiple incoming notes.
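The idea of using [float]'s right (cold) inlet as memory can be sketched in Python. The class and the `send_midi` callback are hypothetical, purely to illustrate the "remember the last note so its note-off can still be sent" pattern:

```python
# Sketch of "cold inlet as memory": store the previous note number so a
# note-off can be emitted before the next note-on replaces it.

class VoiceMemory:
    def __init__(self, send_midi):
        self.send_midi = send_midi  # callable(note, velocity), hypothetical sink
        self.last_note = None       # the stored value, like [float]'s cold inlet

    def play(self, note, velocity=100):
        if self.last_note is not None:
            self.send_midi(self.last_note, 0)   # note-off for the previous note
        self.send_midi(note, velocity)          # note-on for the new note
        self.last_note = note                   # update the memory

    def release(self):
        if self.last_note is not None:
            self.send_midi(self.last_note, 0)
            self.last_note = None
```

In the patch, the cold inlet plays the role of `last_note`: it holds the old note number until a bang fetches it for the note-off.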
In fact, doing this with [poly] may bring us closer to the original effect created by Robby Kilgore on the Oberheim Xpander! Adding more polyphony than the notes which are produced internally, getting a rotation of notes… and a comeback of the noteoff problem.
libremusic-synth-rot.pd
So, getting closer, but my learning path is still winding around. Will search for known solutions, as it’s surely a common problem. Don’t necessarily want to go all the way to a minimal sequencer with [tabwrite] and [tabread], but it could be a solution and would have the added advantage of leaving a trace on which notes have been generated.
Will get it eventually!
Noob Trying to Create a MIDI Chorder/Harmonizer
Made a bit of progress. Just in case it helps someone, here are a few more details.
Adapted the patch to work in harmonic minor mode and streamlined a bit by routing lists of intervals instead of connecting each wire individually.
harmo-chorder.pd
As there are only four types of seventh chords in the major mode, it does make things a bit clearer.
major-chorder.pd
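The "only four types of seventh chords in the major mode" fact can be checked by stacking thirds within the scale. This Python sketch is my own illustration, not taken from the patch:

```python
# Sketch: diatonic seventh chords in a major key, built by stacking
# scale-wise thirds. Only four chord qualities ever appear.

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale


def seventh_chord(root_midi, degree):
    """Diatonic seventh chord on scale degree 0-6 of a major key."""
    notes = []
    for step in (0, 2, 4, 6):                 # root, 3rd, 5th, 7th
        octave, pos = divmod(degree + step, 7)
        notes.append(root_midi + 12 * octave + MAJOR[pos])
    return notes


QUALITIES = {
    (4, 7, 11): "major 7",            # degrees I and IV
    (3, 7, 10): "minor 7",            # degrees ii, iii and vi
    (4, 7, 10): "dominant 7",         # degree V
    (3, 6, 10): "half-diminished 7",  # degree vii
}


def quality(chord):
    root = chord[0]
    return QUALITIES[tuple(n - root for n in chord[1:])]
```

Routing a list of intervals per chord type, as in the patch, maps directly onto the four entries of `QUALITIES`.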
Among my next steps is a generalization of the whole thing to work in different modes, and with different root notes. Not completely sure how to make that work but it should also help me in terms of randomizing chord selection.
Might also tweak voicings to fit a couple of concepts from classical harmony/counterpoint (avoiding parallel fifths, etc.). Eventually, it’d be great to choose a given voicing based on the preceding chord (for instance, the F in a G7 chord could resolve on the E in a C Major 7 chord). Should be doable, but will probably require more learning.
Something which should be relatively easy to tweak is to decrease the velocity of the inner voices, to emphasize the top part.
Another thing which will be quite fun is to produce sounds directly in Pd. Wavetable synth might be especially fitting for a wind controller. In that case, it should be easier to run it on a Raspberry Pi, without requiring an external synth. Would make for a very compact setup.
Been experiencing some issues with some scale notes not producing chords. In fact, Pd doesn’t show that incoming note, but it plays on my external synth. Probably an issue with my current MIDI setup (WX-11 wind controller hooked to a laptop which controls the iWavestation app on an iPad using MIDImux). In some ways, it makes for a bit more variety.
It’s a fun learning experience, so far. Been occasionally thinking about a project like this for some years, actually. But kept being too overwhelmed to take it on. Today’s experience really energized me.
Thanks, @shindeibrauns!!
ADSR clips when triggering new note while old one is releasing
I am trying to make a monophonic synth in Pd. I have yet to add an LFO, VCF, or a second oscillator, but I have created a waveform switcher (sawtooth-triangle-pulse) for the first one. One quirk I have found so far is that when triggering a new note, if there is an older note still releasing, the new note will cut it off and begin playing.
This isn't an issue with the sawtooth, but with the triangle (and to a lesser extent the pulse) this old note will pop. I am unsure how to fix this.
The synth.pd file is the main file; adsr.pd and note.pd are both required to run the synth. The waveform switcher is in synth.pd.
$0 differences between objects and messages
Hi guys,
I want to store multiple $0 send values in a single message box that is to be sent to multiple $0- receive parameters on a synth. I want to do this for each 'voice' in the synth to make it polyphonic. I also would like my final polyphonic synth to be an abstraction.
I know that $0- sends won't work in message boxes. So is there a way I can store a list of $0- parameters for each voice of my synth, so these can be selected as a 'sound' by the user?
How do I get around the fact that $0- doesn't work in a message box sent to an object?
Any suggestion would be very helpful (sorry if this is very obvious)
kind regards
Casper 
Polyphonic voice management using [poly]
Keeping track of note-ons and note-offs for a polyphonic synth can be a pain. Luckily, the [poly] object can be used to take care of that for you. However, the nuts and bolts of how to use it may not be immediately obvious, particularly given its sparse help patch. Hopefully this tutorial will clarify its usefulness. It will probably be easier to follow along with this explanation if you open the attached patch. I'll try to be thorough, which hopefully won't actually make it more confusing!
To start, [poly] accepts a MIDI-style message of note number and velocity in its left and right inlets, respectively...
[notein]
| \
[poly 4]
...or as a list in its left inlet.
[60 100(
|
[poly 4]
The first argument is the maximum number of voices (or note-ons) that [poly] will keep track of. When [poly] receives a new note-on, it will assign it a voice number and output the voice number, note number, and velocity out its outlets. When [poly] gets a note-off, it will automatically match it with its corresponding note-on and pass it out with the same voice number.
By [pack]ing the outputs, you can use [route] to send the note number and velocity to the specified voice. For those of you not familiar, [route] will take a list, match the first element of the list to one of its arguments, and send the rest of the list through the outlet that goes with that argument. So, if you have [route 1 2 3], and you send it a list where the first element is 2, then it will pass the rest of the list to the second outlet because 2 is the second argument here. It's basically a way of assigning "tags" to messages and making sure they go where they are assigned. If there is no match, it sends the whole list out the last outlet (which we won't be using here).
[poly 4]
| \ \
[pack f f f] <-- create list of voice number, note, and velocity
|
[route 1 2 3 4] <-- send note and velocity to the outlet corresponding to voice number
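The tag-dispatch behavior of [route] can be sketched in Python; the function name and handler interface here are illustrative, not anything from Pd itself:

```python
# Sketch of [route]-style dispatch: match a list's first element against
# fixed tags and forward the rest of the list to the matching branch.

def route(tags, message, handlers, unmatched=None):
    """Send message[1:] to handlers[tag] when message[0] matches a tag;
    otherwise pass the whole message to the unmatched handler (if any),
    like [route]'s rightmost outlet."""
    head, rest = message[0], message[1:]
    if head in tags:
        handlers[head](rest)
    elif unmatched is not None:
        unmatched(message)
```

With tags 1-4 and a handler per voice, a `[2, 64, 100]` message delivers note 64 at velocity 100 to voice 2, exactly the "tagging" described above.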
At each outlet of [route] (except the last) there should be a voice subpatch or abstraction that can be triggered on and off using note-on and note-off messages, respectively. In most cases, you'll want each voice to be an exact copy of the others. (See the attached for this; it's not very ASCII-art friendly.)
The last thing I'll mention is the second argument to [poly]. This argument is to activate voice-stealing: 1 turns voice-stealing on, 0 or no argument turns it off. This determines how [poly] behaves when the maximum number of voices has been exceeded. With voice-stealing activated, once [poly] goes over its voice limit, it will first send a note-off for the oldest voice it has stored, thus freeing up a voice, then it will pass the new note-on. If it is off, new note-ons are simply ignored and don't get passed through.
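The allocation and voice-stealing behavior just described can be sketched in Python. This mimics the behavior explained above; it is not Pd's actual implementation:

```python
# Sketch of [poly]-style voice allocation with optional voice stealing.

class Poly:
    def __init__(self, n_voices, steal=False):
        self.n_voices = n_voices
        self.steal = steal
        self.active = {}   # note number -> voice number
        self.order = []    # note numbers in note-on order (oldest first)

    def note(self, pitch, velocity):
        """Return the (voice, pitch, velocity) messages [poly] would emit."""
        out = []
        if velocity > 0:                          # note-on
            if len(self.active) >= self.n_voices:
                if not self.steal:
                    return out                    # no stealing: ignore it
                oldest = self.order.pop(0)        # steal the oldest voice
                voice = self.active.pop(oldest)
                out.append((voice, oldest, 0))    # note-off frees the voice
            else:
                used = set(self.active.values())
                voice = next(v for v in range(1, self.n_voices + 1)
                             if v not in used)
            self.active[pitch] = voice
            self.order.append(pitch)
            out.append((voice, pitch, velocity))
        elif pitch in self.active:                # note-off
            voice = self.active.pop(pitch)
            self.order.remove(pitch)
            out.append((voice, pitch, 0))
        return out
```

With two voices and stealing on, a third note-on first emits a note-off for the oldest held note on voice 1, then reuses voice 1 for the new note, matching the description above.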
And that's it. It's really just a few objects, and it's all you need to get polyphony going.
[notein]
| \
| \
[poly 4 1]
| \ \
[pack f f f]
|
[route 1 2 3 4]
| | | |
( voices )



