@porres The lists are stored in a text, and the markov chains are simply built upon the line numbers of the text, i.e. over the indices.
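Outside of Pd, the idea could be sketched roughly like this (a hypothetical Python illustration, not the actual patch; the variable names are made up): each unique event gets one line in the stored text, and the chain is built and walked over the line indices rather than the events themselves.

```python
from collections import defaultdict
import random

# events as they occur in the source material (could be note lists/chords)
source = ["C", "E", "G", "E", "C", "E"]

# store each unique event once, like the lines of a [text]
lines, index = [], {}
for ev in source:
    if ev not in index:
        index[ev] = len(lines)
        lines.append(ev)

# build the chain over the line indices, not the events themselves
ids = [index[ev] for ev in source]
transitions = defaultdict(list)
for a, b in zip(ids, ids[1:]):
    transitions[a].append(b)

# walk the chain: pick a random successor of the current index
state = ids[0]
out = []
for _ in range(5):
    state = random.choice(transitions[state])
    out.append(lines[state])  # translate the index back into an event
```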
-
MIDI into [seq] and Markov chains
-
yeah, now that I had a deeper look into it, I get it. Nice way to make multidimensional arrays!
anyway, I was working on an object based on cyclone/anal + cyclone/prob for ELSE, but I'm gonna try and make something like this now! I just miss the possibility to set a probability transition matrix like in [prob], but I may be able to work on this patch and get there.
great work and thanks for this
-
@porres Well, the [markov] object takes source material and generates something like an implicit probability transition matrix from it. If you already have a probability matrix, you only need a starting point and can play the markov chains immediately; a much simpler abstraction can do that. Mixing the two approaches seems complicated, since [markov] follows a different philosophy, i.e. it allows adding more source material later in the process.
It could be interesting to have two separate abstractions: one to generate a probability matrix from source material and another that plays markov chains from it. That way you could have both approaches and combine them.
This would also be similar to the combination of [anal] and [prob], but as a generalized approach to markov chains of arbitrary length.
The question is rather whether it is a realistic scenario to have a complex probability matrix for markov chains of higher order. [markov] is built as a basic machine learning tool.
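To illustrate the split (a hypothetical Python sketch, not a Pd patch; the function names are made up): one function learns a count matrix from source material, the other plays chains from any matrix, whether learned or written by hand.

```python
from collections import Counter, defaultdict
import random

def build_matrix(source):
    """Count transitions in the source: {state: {successor: count}}."""
    matrix = defaultdict(Counter)
    for a, b in zip(source, source[1:]):
        matrix[a][b] += 1
    return matrix

def play(matrix, start, length):
    """Walk the chain; counts act as weights and need not sum to 100."""
    state, out = start, []
    for _ in range(length):
        nexts = matrix[state]
        state = random.choices(list(nexts), weights=list(nexts.values()))[0]
        out.append(state)
    return out

# learned from source material...
m = build_matrix(["C", "E", "G", "E", "C", "E", "G", "C"])
# ...or written down by hand, like a [prob]-style matrix
hand = {"A": {"B": 3, "A": 1}, "B": {"A": 1}}
melody = play(m, "C", 8)
```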
-
I have been working with the Markov object with great success, but my patch fails in loading the saved state.
I get this when I reload my patch:
savestate ... couldn't create
Using Pd 0.48.1.
Any ideas on how to debug the savestate function?
-
@MikkelM Since [savestate] is a newer object, I would try using the current Pd version (it has existed since 0.49: http://msp.ucsd.edu/Pd_documentation/x5.htm).
-
Ok! I am using Pd on a Tiny Core Linux platform that runs 0.48.1, and I don't think I dare to touch it again soonish.
-
@ingox I see now that the approach of defining a probability matrix is hard with this abstraction. I guess we'd need to generate a third [text define] that properly encodes a probability matrix (of any order). This could be generated from the input like the $0-markov text, but once we're editing and creating it as the source of the chain, we'd need to create $0-memory and $0-markov from it...
so yeah, seems like a lot of trouble.
Anyway, hope you don't mind, I'm including a variation of this abstraction in my ELSE library
-
@porres Sure thing, it is public domain.
And yes, this abstraction basically creates a form of probability matrix out of the source material. If you already have the matrix, you don't need most of the steps and can basically play the chains directly...
This abstraction is something like a very basic machine learning approach: Play some notes or read a midi file into it and get new stuff out of it that is computer generated, but based on human creativity.
-
@ingox said:
If you already have the matrix you don't need most of the steps and can basically directly play the chains...
the thing is that I was talking about another form of matrix, like the one from [prob], which is kinda intuitive, unlike the one we have here
-
@porres Maybe you can post some sample data of your matrix?
-
just check cyclone/prob please, that's it
-
@porres This uses [array random] to move through the chains:
In markov_matrix_demo.pd you can see that the probabilities actually do match up.
The first value of the prob matrix could also be an index of a larger chain, the second value could also be the index of a chord. This could be incorporated or left outside the object. Only the length of the chain cannot be recalculated from within the system.
This does not include any checks for duplicates or consistency, so [markov_matrix] should be reset first, and the matrix data needs to be correct.
-
The basic idea is:
- read all the data into a [text]
- choose a random starting point and take the second value of the prob list as current state
- find all lines with the current state as first value, read the second values into a list and the probabilities into an array
- use [array random] and take the corresponding value from the list as new current state
- repeat from step 3.
(In the implementation another column for counting is added, so first value becomes second and so on.)
This should also work with more probabilities and also if the probabilities don't add up to 100. They don't actually have to be percentages at all (untested).
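The weighted pick in the fourth step could be sketched like this in Python (a rough, hypothetical analog of [array random], not the object itself): an index is drawn with probability proportional to the array values, so the weights don't need to be normalized.

```python
import random

def array_random(weights):
    """Return an index with probability proportional to each value.
    The weights need not add up to 100 or be percentages at all."""
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # guard against floating-point edge cases

# candidates for the current state and their (unnormalized) weights
candidates = [60, 64, 67]
weights = [3, 1, 1]  # 60 is three times as likely as 64 or 67
next_state = candidates[array_random(weights)]
```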
-
@porres I think that [markov] is just more flexible than [anal] + [prob], or [anal] + [markov_matrix] for that matter, so I assume that the whole prob matrix approach is just going nowhere.
-
I still have to check. The thing is that I like the approach where you can design your own probability matrix without the "learning" process. There's an advantage if you can just sit and write down how many times you want "A" to be followed by "B", etc...
As for the machine learning approach, your markov design is flawless and extremely versatile, and I've put it in my library already
-
@ingox would it be possible to choose the most similar value if there is no identical value (of course, the similarity needs to be defined first)?
-
@Jona In the sense that markov chains define similarity as one note having followed another in the source material?
Maybe you can describe a bit more how notes would be selected, maybe with an example?
-
@ingox maybe it does not make so much sense for single notes (not sure how many different notes an average song has, but not too many), but perhaps for chords and for velocity. One problem is that, if we leave out the beginning and the end of a song, there is always an identical value in a markov chain. So maybe the identical value just gets a higher probability than a similar value? I have to think about a concept...
-
@Jona Generally speaking, a markov chain can be created not only over notes or chords, but also over abstract values. For example, in the [markov] object, the chains are created over ids. So you could for example put your notes/chords/velocity values in a [text] and use the row numbers to build the chains using [markov]. [markov] would in turn output row numbers and you could decide what to play from there.
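A hypothetical Python sketch of that idea (made-up data, not from [markov] itself): the chords live in a table like the rows of a [text], the chain runs over row numbers, and the ids are translated back into chords on output.

```python
import random

# chords stored one per row, like lines in a [text]
rows = [
    [60, 64, 67],  # row 0: C major
    [62, 65, 69],  # row 1: D minor
    [67, 71, 74],  # row 2: G major
]

# successors per row id, as a markov process might have learned them
transitions = {0: [1, 2], 1: [2], 2: [0, 0, 1]}

state = 0
played = []
for _ in range(4):
    state = random.choice(transitions[state])
    played.append(rows[state])  # map the id back to an actual chord
```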