• ddw_music

    You've entered the fun and exciting world of floating point fractions.

    0.2 is a finite decimal, but in binary it's infinitely repeating. (1/3 in base 10 is infinitely repeating because the denominator has a prime factor, 3, that is not a prime factor of the base. A decimal fraction is finite only if its denominator has no prime factors other than 2 or 5: denominators like 2, 4, 5, 8, 10, 20 are OK; 3, 7, 11, 15 are not.)

    In binary, the base has only one prime factor: 2. So a binary fraction is exact only if the denominator is a power of two. 0.2 = 1/5, and 5 is not a power of two.

    So when you write 0.2, the floating-point number you get is something very close to 1/5, but not exactly 1/5. And the rounding algorithm is coming up with some other floating-point number that is also very close to 1/5 (but again, not exact, because 1/5 is impossible to represent exactly in binary).

    TL;DR "equals" comparisons for floating-point numbers are not necessarily safe.

    You could multiply the [osc~] output by 100, round to the nearest whole number, and [select 20]. Then the denominator is 1 -- the value is an integer -- and integers of this size are exactly representable, so the comparison is safe.
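
    A quick C++ sketch of both halves -- the textbook case of a failing float comparison, and the scaled-integer workaround. (Doubles here; Pd's numbers are 32-bit floats, but the principle is identical.)

        #include <cmath>
        #include <cstdio>

        int main() {
            double x = 0.1 + 0.2;  // two inexact binary fractions

            // Neither 0.1, 0.2 nor 0.3 is exact in binary, so:
            printf("x = %.17g\n", x);              // prints 0.30000000000000004
            printf("x == 0.3 -> %d\n", x == 0.3);  // 0 (false)

            // The workaround: scale to an integer, round, then compare.
            // Integers of this size are exactly representable.
            long scaled = std::lround(x * 100.0);
            printf("round(x*100) == 30 -> %d\n", scaled == 30);  // 1 (true)
            return 0;
        }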

    hjh

    posted in technical issues
  • ddw_music

    GEM's [pix_data] perhaps?

    hjh

    posted in technical issues
  • ddw_music

    @jancsika said:

    Where is the specification for ableton link?

    Relevant bit is: https://github.com/Ableton/link#latency-compensation

    "In order for multiple devices to play in time, we need to synchronize the moment at which their signals hit the speaker or output cable. If this compensation is not performed, the output signals from devices with different output latencies will exhibit a persistent offset from each other. For this reason, the audio system's output latency should be added to system time values before passing them to Link methods."

    abl_link~, by default, doesn't do this. But at https://github.com/libpd/abl_link/issues/20, I was told that [abl_link~] responds to an "offset $1" message, where a positive number of milliseconds pushes the timing messages earlier.
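
    (For example -- and this is my reading of that thread, not documented behavior -- a message box [offset 30( wired to [abl_link~] should push its timing messages 30 ms earlier.)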

    Using that, it's actually easy to tune manually.

    This was undocumented -- intentionally undocumented, for a reason that I can't say I agree with. So I'll put in a PR to document it.

    Also-- Assuming that arbitrary devices are to be able to connect through Ableton Link, I don't see how there could be any solution to the design of abl_link that doesn't require a human user to choose an offset based on measuring round-trip latency for the given arbitrary device/configuration. You either have to do that, or have everyone on high-end audio interfaces (or perhaps homogeneous devices, like all iPhones or something).

    As far as I know (and I haven't gone deeply into Link's sources), Link establishes a relationship between the local machine's system time and a shared timebase that is synchronized over the network. Exactly how the shared timebase is synchronized, I couldn't tell you in detail, but linear regressions and Kalman filters are involved -- so I imagine it could make a prediction, based on the last n beats, of the system time when beat 243.5 is supposed to happen, and adjust the prediction by small amounts, incrementally, to keep all the players together.

    Then, as quoted above, it stipulates that the sound for beat 243.5 should hit the speakers at the system time associated with that beat. The client app knows what time it is, and knows the audio driver latency, and that's enough.

    So, imagine one machine running one application on one soundcard with a 256-sample hardware buffer, and another app on a different soundcard with a 2048-sample hardware buffer. The system times will be the same. If both apps compensate for audio driver latency, then they play together -- and because the driver latency figure is provided by the driver, the user doesn't have to configure it manually.
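
    To make that concrete, here is a rough C++ sketch of the client side of the compensation, loosely modeled on the pattern in Link's own example code. The callback wiring and the outputLatency parameter (which the app would get from its audio driver) are my assumptions for illustration, not abl_link~'s actual code:

        #include <ableton/Link.hpp>
        #include <chrono>

        // Sketch: which session beat does the buffer we're about to render
        // correspond to? (A real app would derive hostTime from a
        // sample-clock filter rather than reading the clock in the callback.)
        double beatForThisBuffer(ableton::Link& link,
                                 std::chrono::microseconds outputLatency,
                                 double quantum)
        {
            // System time now, i.e. when the buffer is being computed.
            const auto hostTime = link.clock().micros();

            // The step quoted above: reason about when the audio will
            // actually hit the speaker, not when it is computed.
            const auto timeAtSpeaker = hostTime + outputLatency;

            auto sessionState = link.captureAudioSessionState();
            return sessionState.beatAtTime(timeAtSpeaker, quantum);
        }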

    The genius of Link is that they got system times (which you can't assume to be the same on multiple machines) to line up closely enough for musical usage. Sounds impossible, but they have actually done it.

    Put another way-- if you can figure out an automated way to tackle this problem for arbitrary Linux configurations/devices, please abstract out that solution into a library that will be the most useful addition to Linux audio in decades.

    Ableton Link actually is that library.

    https://github.com/Ableton/link/blob/master/include/ableton/Link.hpp#L5-L9

    license:
    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    posted in extra~
  • ddw_music

    Is anyone maintaining abl_link~?

    It's fundamentally broken (not accounting for audio driver latency = incorrect sync with non-Pd clients) but nobody is listening.

    https://github.com/libpd/abl_link/issues/20

    hjh

    posted in extra~
  • ddw_music

    @jancsika said:

    Searching for "biquad" does turn up biquad~ help, but the problem is that the help patch illustrates nothing about calculating filter coefficients.

    Try the search again using the settings I gave in the comment under the issue you added to the tracker.

    Seen that, thanks.

    That Purr-Data for Mac doesn't support GEM is, unfortunately, a deal-breaker.

    I'm not sure what your original terms were. Are you in search of

    1. the most suitable software
    2. the most suitable free/open source software
    3. the most suitable gratis software
    4. something else?

    Pd-vanilla plus a few specific external libraries will meet the needs of that course. Purr Data's GUI is more attractive, and the zoom feature is extremely useful in the classroom, but these are not critical. The ability to use GEM or Ofelia on Mac, Windows and Linux platforms is critical for me.

    I suppose really "the most suitable software" is Ableton Live + Max4Live. But I can't in good conscience require students to use a specific commercial software package when I know, practically speaking, most of the students here will not pay for it. So FLOSS is preferable, yes.

    But that's not really relevant to the comment. The point is that Purr Data is, unfortunately, not currently suitable for multimedia on an OS platform that is widely used for digital arts, while Pd-vanilla is.

    hjh

    posted in technical issues
  • ddw_music

    @jancsika "Please hammer away at the <ctrl-b> help browser in Purr Data by adding issues to the tracker"

    OK, good to know. Actually I just reported one (https://git.purrdata.net/jwilkes/purr-data/issues/570) -- comport-help.pd is not found in the help browser, but the object is available.

    Searching for "biquad" does turn up biquad~ help, but the problem is that the help patch illustrates nothing about calculating filter coefficients. It's just not useful for musical purposes without being able to specify cutoff frequency and Q. Sure, you can "build it" with what's given -- actually, I looked at that a couple of months ago, and decided it would take me at least a whole afternoon to hack up my own coefficient calculator, and then I just went looking for other people's abstractions. (Compare Max, where filtergraph is built-in, or SC where RLPF is built-in.)

    I should also say WRT Purr-Data, I'm afraid I'm going to have to stop using it next semester. GEM is necessary for this two-semester course, and the media lab is all Mac. That Purr-Data for Mac doesn't support GEM is, unfortunately, a deal-breaker.

    hjh

    posted in technical issues
  • ddw_music

    @toddak "While I have read the helpfile, can you explain to me in terms of 'PD for Dummies' why in this instance the argument is -u?"

    A "listening" connection can be either TCP or UDP.

    TCP is more robust but heavier-weight and a bit slower. It's a good idea if you need to send messages outside of a LAN, but not necessary if you're sending from one process to another on the same machine (and probably overkill within a LAN).

    UDP is faster but slightly more fragile. I've seen it drop messages over a wifi LAN (though never to/from localhost, btw) -- which makes it OK for something like touch controllers (if you're sending dozens of messages per second, dropping one or two won't be fatal), but not ideal when you need to be certain of successful transmission (e.g. a large dataset split into multiple packets, where losing one packet ruins the whole transfer -- TCP's handshaking is essential in that case).

    For control signals in multimedia apps, usually UDP is enough.
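
    To make the UDP case concrete, here is a minimal POSIX/C++ sketch of roughly what [netsend -u] does: fire one FUDI message (plain text ending in a semicolon) at [netreceive -u] as a single datagram -- no connection, no delivery guarantee. The port and message are made up for the example:

        #include <arpa/inet.h>
        #include <cstring>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main() {
            int sock = socket(AF_INET, SOCK_DGRAM, 0);  // UDP: datagram socket
            if (sock < 0) return 1;

            sockaddr_in dest{};
            dest.sin_family = AF_INET;
            dest.sin_port = htons(3000);                // [netreceive -u 3000]
            inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

            // FUDI: plain text, semicolon-terminated.
            const char msg[] = "volume 0.8;\n";
            sendto(sock, msg, std::strlen(msg), 0,
                   reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
            close(sock);
            return 0;
        }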

    hjh

    posted in technical issues
  • ddw_music

    [oscformat] and [oscparse] are probably the droids you're looking for.

    hjh

    posted in technical issues
  • ddw_music

    @Jona "....better than to code" actually that was a reason why i started with pure data.

    Sure -- different people have different cognitive styles, which is why it's good to have multiple tools with different approaches. I had a couple of students who really took to Pd, which never happened in the classroom with SuperCollider for me -- and that's more important to me than my own personal biases.

    I can also see that some of what I'm experiencing with Pd is simply lack of fluency -- such as, there are efficient and inefficient ways to express complex math expressions in data flows. I simply haven't practiced those skills, so of course my perception is going to be that "it's really hard." (In SC, wiring up GUIs is admittedly harder than in Pd.)

    so your main audio programming language is super collider?

    Yes. Features that I feel are better developed in SC than in other audio programming environments are: polyphony (and parallel signal graphs -- not the same as parallel processing btw), glitch-free instantiation of audio UGens, and feature-complete implementations of standard data structures (multidimensional arrays whose elements are "variant" types, hash tables, queues and linked lists). These data structures have been invaluable in my big project over the last few years (a live-coding dialect for SC).

    On the downside, both synthesis and GUIs are "heavy" in that there's more background involved -- hslider --> [osc~] is easy; SC sliders are not exactly hard to use, but there's more to learn at the beginning. So a lot of potential users probably give up.

    of course it would be nice to have a pure data library database where you can search for vocoder for example, and it shows all available vocoder objects and the corresponding libraries.

    Indeed... but maintaining such a list is a huge effort. SC has the same problem with third-party UGen plug-ins and class library extensions -- some great stuff out there, but how do you find it?

    can i ask what you use for graphics programming besides GEM and Ofelia?

    I've done almost nothing with graphics (no time), so only those two.

    hjh

    posted in technical issues
  • ddw_music

    Should also say: I'm writing deliberately provocatively. I've actually enjoyed digging into a different way of thinking in Pd, and the students take to it better than to code. GEM is fun, and Ofelia is even fun-ner. Just, no harm in noting where it could improve. (And, I did notice the comment about [text] -- tried it here a couple of weeks ago, much nicer than previous sequencing options.)

    Thanks for listening.
    hjh

    posted in technical issues
