• ddw_music

    @ingox

    $0 is not a symbol. It is internally rendered to a number.

    OK, got it.

    Is this documented anywhere?

    You can also give preset values to many objects, e.g. [pack 10 $0].

    Ah, that's very nice. It hasn't entered my vocabulary yet. Maybe it will sink in now.

    oid:

    If you want that preset value to be a constant you should always precede the left inlet with something which will keep out accidental lists...

    In fact, I have two of them: [unpack 0] --> [f], and an f outlet of a [trigger].

    Also, when naming things with $0, it seems best to always do name-$0 instead of $0-name as most every example does,

    Understood, but... again, is this documented anywhere? "13.locality.pd" demonstrates $0 as a prefix -- if this is out of date, shouldn't somebody put in a pull request to update the help file?

    Anyway, with that, I was able to wrap up this moving-average abstraction:

    pd-moving-avg.png

    moving-avg.pd

    hjh

    posted in technical issues
  • ddw_music

    pd-arguments-again.png

    -history: no such object
    -history: no such object
    

    I know you can't get $0 in a message box.

    Looked through the subpatch/abstraction/locality help patches -- no relevant example. It's kind of an important topic, but... where should I look it up?

    So what am I doing wrong?

    hjh

    posted in technical issues
  • ddw_music

    @jancsika

    Imagine you've got a single sequencer in the top level of a patch.

    Somehow, from your first description, I didn't get this idea at all :laughing: Basically it's the same as jameslo's second example here and my second example here -- the latter of which explicitly strips the [send] name off of a list btw.

    @ingox

    The simulated function via setting [send] to the caller of the "function" is possible, but it somewhat contradicts the data flow paradigm. It is important to see that in Pd, every operation is done sequentially, in a strict linear order of operations. So you are actually always in control over what is happening.

    I do see the point -- I think that concern is somewhat mitigated by the fact that the intention here is to send to one and only one [receive], and (deterministically) this [receive] object will trigger its chain before the [send] yields control.

    If there are accidentally two [receive] boxes with the same name, that would indeed be non-deterministic, but also user error.

    Patching is just more fun.

    I do find it fun in the sense of learning a different way of thinking. I don't know that I will ever really love patching in the same way that I love coding though.

    hjh

    posted in technical issues
  • ddw_music

    So, to sum up the strategies presented so far:

    • Global storage -- Pro: Super-easy for numbers (with [value]), fairly easy for other entities with [text] as storage. Con: Potential for name collision, but that can be avoided.
      1. Trigger the sequencer/counter/whatever and save the value in a global location.
      2. The caller reads from the global location.
    • Changing [send] target -- Pro: Caller directly specifies the return point. Con: A bit "magical."
      1. Caller sends the request with a [receive] name tag.
      2. The pseudo-function updates a [send] with the name, and passes the data there.
    • Abstraction, globally-stored index -- Pro: Call and return are all in the same place. Con: Global storage name is hidden in the abstraction -- be careful of collision. Also, in the provided example, redundant memory usage (three copies of the [text], all with the same contents).
      1. Abstraction increments a counter, which is shared across all instances.
      2. Gets from [text] at that index and returns directly.

    It seems to me that any or all of these could be useful in different cases -- the globally-stored index might be perfect in one case, while a name-switching [send] might be better in another.

    Tbh I don't see this as "clunky" at all: if you have widely spatially disparate references to the same source of data, as may easily be the case in a large and complex program, a simulated function call might be the cleanest way to represent it. Also, I think all of the "shared storage" approaches carry some risk of name collision, whereas that risk essentially doesn't exist with the send/receive approach, because the caller is completely in control of the location where the data will be returned. (Oh, and I'm seeing a bit late that this is one of the options jameslo suggested.)

    As you've probably gathered, I'm much more comfortable with text languages. (With apologies in advance... tbh I think in most cases, you can go farther, faster, with text languages. With the exception of slider --> number box, pretty much the only thing I can think of that Pd can do better than SC is [block~], which SC simply doesn't have. Especially when complexity scales up, I find text to be more concise and less troublesome. E.g., if the synthesis code is structured correctly, I can create 3, or 10, or 100 parallel oscillators just by changing one number in the code, and without saving extra files to use as [clone] abstractions. I would love to see some counterexamples here! What am I missing? Being proven wrong means that I learned something.)
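
    For what it's worth, here is the "change one number" idea as a rough SuperCollider sketch (the frequencies and scaling are invented for illustration, not taken from any patch in this thread):

    // change n to get 3, 10, or 100 parallel oscillators
    (
    {
        var n = 10;
        Mix.fill(n, { |i|
            SinOsc.ar(200 + (i * 37), 0, 1 / n)
        }) ! 2   // duplicate to stereo
    }.play;
    )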

    WRT Pd, after about a year of use in a couple of courses, I feel like I'm starting to get a bit better at it. For me, the most important point of this thread is to learn dataflow vocabulary for the text-programming concept of function calls. This discussion has been really valuable for me.

    hjh

    posted in technical issues
  • ddw_music

    @jancsika

    It decouples the output of your state machine from one and only one path.

    I'm afraid I'm not quite following you.

    Returning to a sequencer as an example -- if I want the output of a [text sequence something] to be directed to different places depending on the system's state, the one thing I can't do is to have multiple instances of [text sequence] -- because each instance has its own position that is independent of other instances.

    You get N paths where N is the number of abstraction instances you use to, say, implement the Supercollider state machine thingy you linked to. (Just to clarify-- each abstraction only generates output when a message gets sent to its own inlet.)

    ... which means each abstraction instance must have a separate sequencer instance.

    That isn't going to work.

    The only way I can think of to make it work is to have one abstraction containing one sequencer, and use the [send] to reroute to the appropriate [receive].

    Here, the upper construction is wrong; the lower one gets the result I want.

    pd-multiple-sequencers.png

    20-1024-multiseq.pd

    hjh

    PS Ergonomics do matter. The systems we use exert some influence over the ideas we're willing to consider (or, in extreme cases, the ideas we can even conceive). If something is inconvenient enough to express, then it restricts the range of thoughts that are practical within the system. In part this means we should choose systems that are closer in line with the thoughts we want to express, but it also means there's no harm in extending the system's reach.

    posted in technical issues
  • ddw_music

    @jancsika

    You can set up an abstraction which just forwards the message on to your global sequencer along with the abstraction's "$0" value. You use that "$0" value to set the symbol to the right inlet of a [send] that connects to the output of the sequencer-- something like that "$0" value prepended to "_the_output". Now just let that output flow from the sequencer to that [send] object.

    Ok, this is promising.

    Note that "sequencer" is just an example. I'm referring to a general problem which currently has no general solution in Max or Pd. Maybe other dataflow environments have found a solution.

    Inside your abstraction, you have [receive $0_the_output]. Just hook that to an [outlet] and you're done.

    Hmm, no, the point is to decouple the output from one and only one path. I think I would just have the wrapper abstraction do the "send" and the caller is responsible to have a receiver for it (and to request that the output be sent to the specific receiver).

    hjh

    posted in technical issues
  • ddw_music

    Thanks for all the suggestions. Using [text] as a list-register is not a bad idea.

    The other side of the topic is, however, pondering not only workarounds, but also ways to improve graphical patching itself to reduce the need for workarounds. I sometimes feel when using Pd that I'm spending a nontrivial amount of time working around limitations -- and not only in Pd's core object set, but also in the design of graphical patching in general.

    This is a concrete case where text programming languages' concept of a function/method call is richer and more robust than patch cables, because the target that will receive the function's output is not fixed, but rather depends on the place that called it. Multiple references are then entirely idiomatic. I think this might be one reason why graphical patching has not supplanted text languages in computer science generally: while it's very well adapted to trigger responses, and describing signal processing, the lack of a function call that brings the result back here is a pretty hefty obstacle to many types of computing. (And, when something is difficult, users tend to avoid it... so the problem doesn't really get solved.)
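
    To make that concrete with a small SuperCollider sketch (the function and scales below are invented for illustration): the same "function" is called from two different places, and each result comes back to its own call site -- nothing has to be rerouted to say where the answer goes.

    (
    var nextPitch = { |scale, degree| scale.wrapAt(degree) + 60 };
    var a = nextPitch.(#[0, 2, 4, 5, 7, 9, 11], 3);   // result returns here
    var b = nextPitch.(#[0, 2, 3, 5, 7, 8, 10], 6);   // ...and here
    [a, b].postln;
    )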

    I'm using Pd in teaching because I have a few students who have really picked up the graphical-patching ball and run with it. (SuperCollider, despite being more richly featured, has largely intimidated almost every student of mine who has approached it.) So I fully appreciate that graphical patching has advantages... but I can't shake the feeling that there's a complexity wall at some point, and it's hard to climb it. I wonder what it would take to reduce the slope of this wall.

    @jameslo

    Would you be able to side-step this issue if [text sequence] had a post-wait mode? Or is the pre-wait requirement a consequence of loading data from standard MIDI files?

    One, I'm not considering MIDI files anymore. That's off the table. Also, pre-wait is a recommendation of [text], isn't it? (Personally I disagree with the pre-wait design, but it seems baked into [text] for whatever reason.)

    Two, I did make an abstraction to output post-deltas -- but I didn't arrive at this (more elegant) solution. I got close to it, but mine wasn't as clean. I probably got confused at the time by advice from the forum that the wait time should always be "pre." So... I'll rewrite that abstraction sometime.
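
    (For reference, "post-wait" here just means emitting the event first and waiting afterwards -- sketched as a SuperCollider routine with made-up note/delta pairs, rather than as a patch:)

    Routine {
        [[60, 0.25], [64, 0.25], [67, 0.5]].do { |pair|
            pair[0].postln;   // emit the event (here: just print the note)
            pair[1].wait;     // then wait the delta to the next event
        };
    }.play;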

    Thanks!

    hjh

    posted in technical issues
  • ddw_music

    In a course, I asked the students to try vstplugin~, which turned out to be a little fiddly until we figured out the right options to use. (That's not a criticism -- it's a complex external and it took a little while to figure out the most streamlined approach. Cheers to @spacechild1 for making [vstplugin~] pretty much bulletproof.)

    Having figured that out, I thought I would make it easier for the students by making a wrapper, to:

    • load a plug-in automatically with -e (currently this is not configurable, which is a potentially controversial choice, but my purpose is to simplify the process for the students, and they will universally prefer to see the familiar VST GUI);
    • show the GUI automatically (optionally) after loading;
    • provide a couple of conveniences (show/hide GUI, program_list, MIDI panic).

    I didn't do much to clean up the layout inside the abstraction, but it seems helpful, so I thought I would share.

    This assumes vstplugin~/ is a directory immediately beneath one of your search paths ([declare -path vstplugin~] inside).

    hjh

    vstplug-hjh~.zip

    posted in abstract~
  • ddw_music

    Something I was thinking about yesterday, while adding to my [text sequence]-based abstractions:

    [text sequence] is a stateful object, in that the result of [step( depends on the previous state.

    What if you need the result to be handled differently in different cases?

    Specifically, I'm rejiggering the data so that it outputs the time delta to the next event alongside the data list, rather than the time delta from the previous event. As a step sequencer, in pseudocode, it looks like this:

    coroutine
        t = text sequence blah blah;
        list = t.next;
        #time ... data = list;  // [list split 1]
        time --> time outlet;  // withhold data now
        wait for bang;
        while
            list = t.next;
            list isn't empty
        do
            #time ... data2 = list;
            time --> time outlet;
            data --> data outlet;
            data = data2;
            wait for bang;
        loop;
    end
    

    So there's an "initializer" handling of the [text sequence] result, and then a "normal" handling within the loop.

    For stateless operations like [+ 1], you can just replicate the operator for the different contexts. But because [text sequence] is stateful, you can't do that.
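
    (A quick SuperCollider illustration of that difference, with invented data: duplicating a stateless operation changes nothing, but duplicating a stateful stream gives each copy its own independent position.)

    (
    var s1 = Pseq([10, 20, 30]).asStream;
    var s2 = Pseq([10, 20, 30]).asStream;

    // stateless: any copy of "+ 1" gives the same answer
    (3 + 1).postln;              // -> 4

    // stateful: two streams from the same pattern do NOT share a position
    [s1.next, s2.next].postln;   // -> [10, 10], not [10, 20]
    )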

    I could actually think of a nice, clean solution for numeric results -- I'm OK with this as it would scale up to any number of references:

    pd-shared-ref.png

    But [value] can hold only a single number (why? wouldn't the equivalent of a 'variant' type be useful?), and [text] is dealing with lists.

    I ended up using the [list prepend] --> [route 0 1] trick -- fair enough as the above routine uses the sequence in only two places. It might not scale when things get more complicated (for instance, in https://github.com/jamshark70/ddwChucklib-livecode/blob/master/parsenodes.sc I'm passing a stateful string iterator through dozens of functions, haven't counted them but I guess it's easily 50 or 60 places -- from a computer science perspective, a parser is not an exotic use case -- quite common really -- [route 0 1 2 3 4 5 6 7 8 9 10 11 12] oh, I give up already :laughing: ).

    Wondering if there are other options. It would be really nice if graphical patchers had a concept of a shared object reference, replicating the 't' variable in the pseudocode, but I guess that won't happen soon.
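
    (For comparison, the 't' in the pseudocode above is just an ordinary shared reference in a text language -- e.g., a SuperCollider stream held in one variable and advanced from several places, all of which see the same position; the data is invented:)

    (
    var t = Pseq([[0.25, 60], [0.25, 64], [0.5, 67]], 1).asStream;
    var first = t.next;          // "initializer" read advances the shared state...
    var rest = Array.new;
    var item;
    while { (item = t.next).notNil } {
        rest = rest.add(item);   // ...and so does every read inside the loop
    };
    [first, rest].postln;
    )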

    hjh

    posted in technical issues
