Order of ops confusion (troubleshooting vanilla demux abstraction)
@lacuna said:
one more demux, currently I think it is perfect
Nice! Definitely works for my little test patch, so unless there's an exotic testing situation I'm not considering here, I think you nailed it.
And Zexy [mux] of course acts just the same, so no problem here. Nothing wrong with your example.
Well shucks, you're absolutely correct. I hadn't tried this test with zexy/mux, but doing so now, it gives the same result as my [vmux], including the extra unwanted output on click #1. I suppose this is less of a concern than with demux, because execution-order problems seem easier to spot earlier in the chain (before the message is sent to the abstraction), but I'll still need to go through my patches to make sure I didn't stupidly misuse [mux] this way in the past.
@oid Thanks for that mux idea as well! As expected, it also behaves the same as my vmux and zexy/mux in my little test patch, so the main lesson I'm taking here is that pre-mux order of operations is particularly important. Your solutions are super slick and inspiring to see; in this case I think I'll go with @lacuna's demux (with the extra layer of spigots) and stick with my simple vmux, mainly just to not have to worry about whether I'm anywhere close to drowning in symbols.
Performance of [t f] vs [pd nop]
Thanks for your responses! Now that's interesting about the s/r performance -- in this case, I get the following results (repeatedly tested):
[perf_meter]: pd_nop
bangs time(ms) bangs/ms
9603563 1000.0 9603.55
[perf_meter]: t_f
bangs time(ms) bangs/ms
9987069 1000.0 9987.06
[perf_meter]: send_receive
bangs time(ms) bangs/ms
10147908 1000.0 10147.90
[perf_meter]: connected
bangs time(ms) bangs/ms
10711894 1000.0 10711.88
Here's the patch if you want to check these on your side: test-performance-nop.pd
I updated perf_meter.pd with this additional newline for clarity - but as I said, I'm also not completely sure if this is a valid way of checking performance.
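For what it's worth, the same kind of throughput measurement can be sketched outside Pd. This is a rough Python analogue of the perf_meter idea (the function name is mine, not from the patch); like the patch, it measures busy-loop throughput, so as noted above the numbers are only meaningful relative to each other on the same machine:

```python
import time

def bangs_per_ms(fn, duration_ms=100):
    """Call fn in a tight loop for duration_ms and report calls per
    millisecond. Like the perf_meter patch, this measures busy-loop
    throughput, so results are only comparable to one another on the
    same machine under the same load."""
    deadline = time.perf_counter() + duration_ms / 1000.0
    count = 0
    while time.perf_counter() < deadline:
        fn()
        count += 1
    return count / duration_ms

# e.g. compare a no-op against a slightly more expensive callable:
# bangs_per_ms(lambda: None) vs bangs_per_ms(lambda: 1 + 1)
```

The same caveat applies here as to the patch: a busy loop measures per-message overhead, not real-world scheduling behavior.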
timing events in pd
I am working on a piece that will turn multiple motors on and off at certain times, based on certain variables.
What I'm trying to do is a sequence like this:
sequence 1:
button is pressed
motor 1: start at min 00: random pulse (ON time 35 ms) every 7-10 sec
motor 2: start at min 01: random pulse (ON time 35 ms) every 5-11 sec
motor 3: start at min 04: random pulse (ON time 35 ms) every 3-13 sec
motor 4: start at min 06: random pulse (ON time 35 ms) every 7-10 sec
motor 5: start at min 08: random pulse (ON time 35 ms) every 5-12 sec
motor 6: start at min 10: random pulse (ON time 35 ms) every 4-10 sec
all motors run
when the button is pressed again (button count = 2), sequence 2 starts:
motor 1: continue random pulse (ON time 35 ms) every 5-10 sec, stop after 01 min
motor 2: continue random pulse (ON time 35 ms) every 3-12 sec, stop after 04 min
motor 3: continue random pulse (ON time 35 ms) every 7-12 sec, stop after 10 min
motor 4: continue random pulse (ON time 35 ms) every 3-8 sec, stop after 16 min
motor 5: continue random pulse (ON time 35 ms) every 5-16 sec, stop after 18 min
motor 6: continue random pulse (ON time 35 ms) every 7-10 sec
-- now only motor 6 is running >> button pressed >> the last motor still running (motor 6) stops.
sequence 3:
all motors pulse together every 7 seconds for 2 minutes
after 2 minutes have elapsed, all motors stop.
How can I implement the above in Pd?
And is it possible to time events without using [metro] -- something more accurate?
Thanks
count~ pause option?
The problem with a [metro]-based approach is that the [metro] puts a hard limit on time granularity. You can get more accuracy by increasing the [metro] speed, but this uses more CPU.
IMO if you want real accuracy, then [metro] isn't the way to go.
At first I was going to keep a [timer] running continuously, and also accumulate a "total pause time" value to subtract from the [timer].
But then I realized... at the moment of pausing, it could "freeze" the pause time -- then, when you resume, reset the timer to 0 and add the last frozen time -- so at that moment, 0 + frozen time = last-paused value, 1000 ms later = 1000 + frozen time etc.
This type of thing is IMO much easier to do in a programming language, but with some clever traffic policing, it does work:
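That freeze-and-resume logic can be sketched in Python (class and method names are mine; time.monotonic stands in for Pd's [timer]):

```python
import time

class PausableClock:
    """Clock that can be paused and resumed without losing elapsed time.

    Mirrors the [timer] trick described above: on pause, freeze the
    elapsed value; on resume, restart the timer at 0 and add the
    frozen value to every subsequent reading.
    """
    def __init__(self, now=time.monotonic):
        self._now = now          # injectable time source, handy for testing
        self._start = now()      # moment the timer was (re)started
        self._frozen = 0.0       # accumulated time from before the last pause
        self._paused = False

    def elapsed(self):
        if self._paused:
            return self._frozen                  # "still paused at 350"
        return self._frozen + (self._now() - self._start)

    def pause(self):
        if not self._paused:
            self._frozen = self.elapsed()        # freeze the current value
            self._paused = True

    def resume(self):
        if self._paused:
            self._start = self._now()            # reset the timer to 0...
            self._paused = False                 # ...and keep adding _frozen
```

With an injected fake time source, this reproduces the unit-test transcript below: pause at 350, poll later and still read 350, resume, and 100 ms after resuming read 450.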
For this type of abstraction, it's good to have a unit test -- provide a controlled input and make sure the output is as expected.
1, clear; 2, init the sequence; 3, "0 1000" to go.
input: 0 stop
input: 0 reset
input: 0 1
input: 0 bang
print: 0 -- OK
input: 250 bang
print: 250 -- OK (250 ms later)
input: 350 0 -- pause at 350
input: 450 bang -- poll 100 ms later
print: 350 -- still paused at 350, OK!
input: 600 1 -- resume 250 ms later (time should not have changed)
input: 700 bang -- +100 ms after the moment of resuming, then poll
print: 450 -- result = 350 ms + 100 ms = 450 ms, OK!
input: 900 0
Unit test passed: the paused duration between 350 and 600 ms disappears as expected.
So you'll get full [timer] resolution without running a busy-counter loop.
hjh
filtering jittering numbers - avoiding floats
@KMETE said:
I copy a max version patch...
A hidden source of trouble with this approach is that Max has integer-only operators while Pd only has floats.
In Max (IIRC), 3.1 --> [+ 2] produces 5, not 5.1. 3.1 --> [+ 2.] (with a decimal point) produces 5.1.
In Pd, the "dot" thing doesn't exist. If you're copying a Max patch that depends on integer-math behavior, you can't just copy it directly -- you need to handle the integer conversion explicitly.
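To illustrate the difference in plain Python (hypothetical helper functions; the Max truncation behavior is as recalled above, so treat that part as an assumption):

```python
import math

def max_style_add(x, arg="2"):
    """Imitate Max's [+ 2] vs [+ 2.] (as recalled above -- treat this
    as an assumption about Max): an argument without a decimal point
    makes the operator integer-valued, truncating the incoming float."""
    if "." in arg:
        return x + float(arg)            # [+ 2.]: 3.1 -> 5.1
    return math.trunc(x) + int(arg)      # [+ 2]:  3.1 -> 5

def pd_style_add(x, y=2):
    """Pd only has floats, so [+ 2] never truncates: 3.1 -> 5.1."""
    return x + y
```

Porting a Max patch that relies on the truncating form means inserting the truncation explicitly in Pd (e.g. with [int ]), as the paragraph above says.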
I've gotten bitten by this multiple times when going the other direction (building something in Max and expecting float ops).
hjh
Not clipping! (?) OMG, MY BAD, NEVERMIND
@whale-av said:
And 32bit floating point has a dynamic range of 1528dB....! ... be careful out there...!...
That's a joke, right?
Let's do a thought experiment. Let's say you have binary floating-point samples where the maximum exponent is 4 and you have 2 bits below the point. Then the maximum value is 1.11 * 2^4 = 11100 = 28. If you had all the bits of precision, the maximum would be 31 (the largest integer less than 2^(4+1)). So the maximum quantization error due to the limited precision of the mantissa is 3 = 2 ^ (max exponent - mantissa bits) - 1.
If we increase the maximum exponent to 5, the same holds: max possible value = 111111 = 63, max encodable = 111000, max error = 7.
If we increase the maximum exponent to 127 (max allowed in single precision), and allow 23 bits below the point, then the error is 2 ^ (127 - 23) - 1 = 2 ^ 104 - 1, or on the order of 2 * 10^31.
This is the noise part of the signal-to-noise ratio -- specifically, the maximum noise. The overall noise level is the integral of the quantization-error function (the absolute value of the difference between y and quantized y, where quantized y is a piecewise function) divided by the x range (something like that), and this will be lower than the maximum.
Of course, if you have an audio signal scaled up to 2^127, then some samples might reach the peak, but many will be lower amplitude, and their quantization error will be lower. So the overall SNR should be higher than the worst case at the top of the range. But quantization error must integrate over the absolute value, so the lower error at small amplitudes does not cancel out the astronomical error at high amplitudes.
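A quick numeric check of the exponent/mantissa arithmetic above (the helper function is mine):

```python
def quant_step(exponent, mantissa_bits=23):
    """Distance between adjacent representable floats with the given
    unbiased exponent, following the thought experiment above.
    Defaults to single precision's 23 mantissa bits."""
    return 2.0 ** (exponent - mantissa_bits)

# Toy example from above: max exponent 4, 2 mantissa bits -> step 4,
# so the worst truncation error on integers is step - 1 = 3 (28 vs 31).
#
# Near 1.0, float32's grid is fine (~1.2e-7), but at the top of the
# range (exponent 127) the step is 2^104, on the order of 2e31.
#
# The step is proportional to the value itself -- which is exactly why
# assuming a constant error of "the smallest representable value"
# produces the bogus 1528 dB figure.
```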
Yes, I've seen the articles producing that bogus 1528 dB figure. The mistake they make is to assume that quantization error is equally distributed throughout the range, and that the magnitude of this quantization error is proportional to the smallest representable value. This is a fundamental reasoning error. To understand how to think properly about quantization error, read https://www.analog.com/media/en/training-seminars/tutorials/MT-229.pdf .
I'm quite bothered by this, actually. Some marketing bros use faulty math to claim that "our soundcard can give you 1528 dB dynamic range!!1!1!!" and then this goes to the top of search results, and gets repeated as folklore. But it's nonsense. It needs to be stopped.
hjh
Clock precision
@20har I suspect others know better than I, but haven't responded.
This is one way to test:
On my system, I observe jitter in "realtime" on the order of about 5 ms. This holds true at shorter intervals -- if I move the slider to 6 or 7 ms, I see the realtime number jumping between 5 and 10 ms (error within 5 ms, then).
I tried two different audio hardware buffer sizes, same behavior.
Up to 5 ms timing jitter doesn't seem very nice.
To sync with Ableton, I would use the abl_link~ external instead.
hjh
Pd as an accessible programming environment?
@JoshuaACNewman said:
My AI professor back decades ago wished that he could start programming with MAX, but it was completely out of the range of the students. I'm hoping to pick up his dream, particularly since he's at a new institution that requires that he teach beginning computer science in Java (?!?)
While Java is joyless and corporate-desiccated, it's a better choice for computer science instruction than either Max or Pd.
Max and Pd are good at reacting to incoming messages (whether generated by GUI objects, MIDI devices, OSC apps etc) -- the top-down "push" model is often more intuitive for this than, say, SuperCollider's callback function paradigm. This suits the purpose for which Max was designed in the 80s.
"Push" messaging is also handy for signal flow graphs.
This messaging model is terrible at expressing complex algorithms. (I don't have any plans to back down from that word.) And, Pd and Max both suffer from woefully underdeveloped data structures (Pd's "data structures" do not quite replicate computer science's hash tables, heaps etc -- which computer science formalized because they're optimal solutions for certain problems) -- I suspect this is because they're designed principally for reactivity, not for data work.
They look cool, and I think I've gained a lot actually by figuring out how to do some non-trivial stuff in Pd (e.g. my tick-scheduler abstraction), but I would never use either of them for hard core programming tasks. They are really not built for it.
hjh
Why does Pd look so much worse on linux/windows than in macOS?
Howdy all,
I just found this and want to respond from my perspective as someone who has spent by now a good amount of time (paid & unpaid) working on the Pure Data source code itself.
I'm just writing for myself and don't speak for Miller or anyone else.
Mac looks good
The antialiasing on macOS is provided by the system and utilized by Tk. It's essentially "free," and you can enable or disable it on the canvas. This is by design, as I believe Apple pushed antialiasing at the system level starting with Mac OS X.
There are even some platform-specific settings to control the underlying CoreGraphics settings which I think Hans tried but had issues with: https://github.com/pure-data/pure-data/blob/master/tcl/apple_events.tcl#L16. As I recall, I actually disabled the font antialiasing as people complained that the canvas fonts on mac were "too fuzzy" while Linux was "nice and crisp."
In addition, the last few versions of Pd have had support for "Retina" high resolution displays enabled and the macOS compositor does a nice job of handling the point to pixel scaling for you, for free, in the background. Again, Tk simply uses the system for this and you can enable/disable via various app bundle plist settings and/or app defaults keys.
This is why the macOS screenshots look so good: antialiasing is on and it's likely the rendering is at double the resolution of the Linux screenshot.
IMO a fair comparison is: normal screen size in Linux vs normal screen size in Mac.
Nope. See above.
It could also just be Apple holding back a bit of the driver code from the open-source community, to make certain Linux/BSD never gets quite as nice as OSX on their hardware. They seem to like to play such games: that one key bit of code that is not free, which you must license from them if you want it, and which they only license out in high volume and at high cost.
Nah. Apple simply invested in antialiasing via its accelerated compositor when OS X was released. I doubt there are patents or licensing on common antialiasing algorithms which go back to the 60s or even earlier.
tkpath exists, why not use it?
Last I checked, tkpath is long dead. Sure, it has a website and screenshots (uhh Mac OS X 10.2 anyone?) but the latest (and only?) Sourceforge download is dated 2005. I do see a mirror repo on Github but it is archived and the last commit was 5 years ago.
And I did check on this, in fact I spent about a day (unpaid) seeing if I could update the tkpath mac implementation to move away from the ATSU (Apple Type Support) APIs which were not available in 64 bit. In the end, I ran out of energy and stopped as it would be too much work, too many details, and likely to not be maintained reliably by probably anyone.
It makes sense to help out a thriving project but much harder to justify propping something up that is barely active beyond "it still works" on a couple of platforms.
Why aren't the fonts all the same yet?!
I also despise how linux/windows has 'bold' for default
I honestly don't really care about this... but I resisted because I know so many people do and are used to it already. We could clearly and easily make the change but then we have to deal with all the pushback. If you went to the Pd list and got an overwhelming consensus and Miller was fine with it, then ok, that would make sense. As it was, "I think it should be this way because it doesn't make sense to me" was not enough of a carrot for me to personally make and support the change.
Maybe my problem is that I feel a responsibility for making what seems like a quick and easy change to others?
And this view comes after having put an inordinate amount of time into getting (almost) the same font on all platforms, including writing and debugging a custom C Tcl extension just to load arbitrary TTF files on Windows.
Why don't we add abz, 123 to Pd? xyzzy already has it?!
What I've learned is that it's much easier to write new code than it is to maintain it. This is especially true for cross-platform projects, where you have to figure out platform intricacies and edge cases even when mediated by a common interface like Tk. It's true for any non-native wrapper like Qt, wxWidgets, web browsers, etc.
Actually, I am pretty happy that Pd's only core dependencies are Tcl/Tk, PortAudio, and PortMidi, as it greatly reduces the number of vectors for bitrot. That being said, I just spent about 2 hours fixing the help browser for Mac after trying Miller's latest 0.52-0test2 build. The end result is 4 lines of code.
For a software community to thrive over the long haul, it needs to attract new users. If new users get turned off by an outdated surface presentation, then it's harder to retain new users.
Yes, this is correct, but first we have to keep the damn thing working at all. I think most people agree with you, including me when I was teaching with Pd.
I've observed, at times, when someone points out a deficiency in Pd, the Pd community's response often downplays, or denies, or gets defensive about the deficiency. (Not always, but often enough for me to mention it.) I'm seeing that trend again here. Pd is all about lines, and the lines don't look good -- and some of the responses are "this is not important" or (oid) "I like the fact that it never changed." That's... thoroughly baffling to me.
I read this as "community" = "active developers." It's true, some people tend to pooh-pooh the same recurring ideas, but this is largely out of years of hearing discussions and decisions and treatises on the list or the forum or Facebook or wherever, with nothing more coming of them. In the end, code talks -- even better, a working technical implementation that is honed with input from the people who will most likely end up maintaining it, probably without understanding it completely at first.
This was very hard back on Sourceforge as people had to submit patches(!) to the bug tracker. Thanks to moving development to Github and the improvement of tools and community, I'm happy to see the new engagement over the last 5-10 years. This was one of the pushes for me to help overhaul the build system to make it possible and easy for people to build Pd itself, then they are much more likely to help contribute as opposed to waiting for binary builds and unleashing an unmanageable flood of bug reports and feature requests on the mailing list.
I know it's not going to change anytime soon, because the current options are a/ wait for Tcl/Tk to catch up with modern rendering or b/ burn Pd developer cycles implementing something that Tcl/Tk will(?) eventually implement or c/ rip the guts out of the GUI and rewrite the whole thing using a modern graphics framework like Qt. None of those is good (well, c might be a viable investment in the future -- SuperCollider, around 2010-2011, ripped out the Cocoa GUIs and went to Qt, and the benefits have been massive -- but I know the developer resources aren't there for Pd to dump Tcl/Tk).
A couple of points:
-
Your point (c) already happened... you can use Purr Data (or the new Pd-L2ork etc). The GUI is implemented in Node/Electron/JS (I'm not sure of the details). Is it tracking Pd vanilla releases?... well that's a different issue.
-
As for updating Tk, it's probably not likely to happen as advanced graphics are not their focus. I could be wrong about this.
I agree that updating the GUI itself is the better solution for the long run. I also agree that it's a big undertaking when the current implementation is essentially still working fine after over 20 years, especially since Miller's stated goal is 50-year project support, i.e. pieces composed in the late 90s should still work in 2040. This is one reason why we don't just "switch over to Qt or Juce so the lines can look like Max." At this point, Pd is aesthetically more Max than Max, at least judging by the original Ircam Max documentation in an archive closet at work.
A way forward: libpd?
In my view, the best way forward is to build upon Jonathan Wilkes's work in Purr Data on abstracting the GUI communication. He essentially replaced the raw Tcl commands with abstracted drawing commands such as "draw a rectangle here of this color and thickness" or "open this window and put it here."
For those who don't know, "Pd" is actually two processes, similar to SuperCollider: the "core" manages the audio, the patch dsp/msg graph, and most of the canvas interaction event handling (mouse, key). The GUI is a separate process which communicates with the core over a localhost loopback networking connection. The GUI basically just opens windows, shows settings, and forwards interaction events to the core. When you open the audio preferences dialog, the core sends the current settings to the GUI; the GUI then sends everything back to the core after you make your changes and close the dialog. The same goes for working on a patch canvas: your mouse and key events are forwarded to the core, and drawing commands are sent back, like "draw an object outline here; draw the osc~ text inside it," etc.
So basically, the core has almost all of the GUI's logic while the GUI just does the chrome like scroll bars and windows. This means it could be trivial to port the GUI to other toolkits or frameworks as compared to rewriting an overly interconnected monolithic application (trust me, I know...).
Basically, if we take Jonathan's approach, I feel adding a GUI communication abstraction layer to libpd would allow for making custom GUIs much easier. You basically just have to respond to the drawing and windowing commands and forward the input events.
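As a hypothetical illustration of that abstraction layer (the command names and handler interface here are invented for the sketch, not Purr Data's actual protocol):

```python
# Sketch of the "abstracted drawing commands" idea: the core emits
# high-level commands, and any GUI toolkit only needs to implement a
# small handler interface. All names below are invented, not the real
# Purr Data or libpd protocol.
class TkLikeGui:
    def __init__(self):
        self.log = []   # stand-in for actual toolkit drawing calls

    def draw_rect(self, x, y, w, h, color, thickness):
        self.log.append(("rect", x, y, w, h, color, thickness))

    def open_window(self, title, x, y):
        self.log.append(("window", title, x, y))

def dispatch(gui, command):
    """Route one core->GUI message to the toolkit-specific handler."""
    op, *args = command
    handlers = {"rect": gui.draw_rect, "window": gui.open_window}
    handlers[op](*args)

gui = TkLikeGui()
dispatch(gui, ("window", "Untitled-1", 100, 100))
dispatch(gui, ("rect", 10, 10, 60, 20, "black", 1))
```

The point of the sketch: a Qt, Cocoa, or web GUI would only need to supply its own handler object; the core and the command stream stay untouched.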
Ideally, each fork could then use the same Pd core internally and implement its own GUI or platform-specific version, such as a pure Cocoa macOS Pd. There is some other reorganization that would be needed in the C core, but we've already ported a number of improvements from Pd-extended and Pd-L2ork, so it is indeed possible.
Also note: the libpd C sources are now part of the pure-data repo as of a couple months ago...
Discouraging Initiative?!
But there's a big difference between "we know it's a problem but can't do much about it" vs "it's not a serious problem." The former may invite new developers to take some initiative. The latter discourages initiative. A healthy open source software community should really be careful about the latter.
IMO Pd is healthier now than it has been for as long as I've known it (since 2006). We have so many updates and improvements in every release over the last few years, with many contributions by people in this thread. Thank you! THAT is how we make the project sustainable and work toward finding solutions for deep issues and aesthetic issues and usage issues and all of that.
We've managed to integrate a great many changes from Pd-extended into vanilla and to open up and decentralize the externals, all in a collaborative manner. For this I am also grateful every time I install an external for a project.
At this point, I encourage more people to pitch in. If you work at a university or institution, consider sponsoring student work on specific issues which volunteering developers could help supervise, organizing a Pd conference or developer meetup (these are super useful!), or setting up some sort of paid residency or focused project for artists using Pd. A good amount of my own work on Pd and libpd has been sponsored in these ways, and that has helped encourage me to continue.
This is likely to be more positive toward the community as a whole than banging back and forth on the list or the forum. Besides, I'd rather see cool projects made with Pd than keep talking about working on Pd.
That being said, I know everyone here wants to see the project continue and improve, and it will. We are still largely opening up the development and figuring out how to support/maintain it. As with any such project, this is an ongoing process.
Out
Ok, that was long and rambly and it's way past my bed time.
Good night all.
Audio clicks occur when changing the start point and end point using [phasor~] and [tabread4~]
Hi Junzhe
I often like to think through something like this with a concrete example.
Let's start with 10 seconds of audio: phasor = 0 --> time 0, phasor = 1 --> time 10000 (ms).
Then, at 5000 ms, you change the end time to 7000 ms.
So, now, phasor = 0.5, time = 5000. And you need phasor = 1 now to map onto 7000 ms.
So two things need to happen:
- The phasor, running at its current speed, would reach its end 5000 ms later -- but you actually need it to reach the end in 2000 ms. So the phasor's speed has to increase by a factor of 5/2 = current_time_to_end / new_time_to_end.
- The linear mapping currently locates phasor = 1 at time = 10000, but now you need phasor = 1 --> time 7000. So the slope of the linear mapping will have to adjust by a factor of 2/5 = new_time_to_end / current_time_to_end (and the linear function's intercept would have to change too).
The changes in phasor frequency and linear slope should cancel out.
Then, at the start of the next cycle (which can be detected by [samphold~]), you would have to recalculate the slope for the entire start <--> end segment.
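The arithmetic in the worked example can be sketched in Python (function name and signature are mine; times in ms, phasor in [0, 1], frequency in cycles per ms):

```python
def remap(p_now, t_now, new_end, old_freq, old_slope):
    """Recompute the phasor frequency and the linear map when the end
    point changes mid-cycle, per the worked example above.

    p_now:     current phasor value (0.5 in the example)
    t_now:     current mapped time in ms (5000 in the example)
    new_end:   new end time in ms (7000 in the example)
    old_freq:  current phasor frequency in cycles/ms (1/10000)
    old_slope: current slope of the phasor->time map (10000)
    """
    old_time_to_end = (1.0 - p_now) / old_freq  # ms left at current speed: 5000
    new_time_to_end = new_end - t_now           # ms it should take now: 2000
    factor = old_time_to_end / new_time_to_end  # 5/2 in the example
    new_freq = old_freq * factor                # phasor speeds up by 5/2
    new_slope = old_slope / factor              # map slope shrinks by 2/5
    new_intercept = new_end - new_slope         # so that p = 1 -> new_end
    return new_freq, new_slope, new_intercept
```

The assertions worth checking: the mapped time is continuous at the moment of the change, p = 1 now lands on the new end, and the product frequency * slope (the audible playback rate) is unchanged -- which is the "cancel out" claim above.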
I might be overthinking it, but e.g. jameslo admits that in a simpler solution "the pitch is not steady as you adjust the start and end" so I think there aren't many shortcuts to be taken.
BTW there is another way to integrate a rate: rpole~. With an integrator, you don't have to worry about rejiggering slopes -- it just continues on its way, and reaches the end value in its own time. But, you would be responsible for triggering the reset at that point. This version uses control messages for triggering, so it would be quantized to block boundaries. To do it sample-accurately involves feedback and I never worked that out.
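As a sketch of the integrator idea (a control-rate stand-in for [rpole~ 1], where y[n] = y[n-1] + x[n]; the function name is mine):

```python
def integrate_until(increment, end, y0=0.0, max_steps=10**6):
    """Accumulate a fixed per-sample increment, in the spirit of
    [rpole~ 1]: y[n] = y[n-1] + x[n]. Instead of remapping a phasor's
    slope, the integrator just runs until it crosses the end value,
    at which point the caller is responsible for triggering the reset.
    Returns (sample index of the crossing, value at that sample)."""
    y = y0
    for n in range(max_steps):
        y += increment
        if y >= end:
            return n + 1, y
    raise RuntimeError("end not reached within max_steps")
```

Changing the end value mid-run needs no slope recomputation here -- the integrator "just continues on its way" and the crossing test picks up the new end; the cost, as noted above, is that you must handle the reset yourself.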
And a quick test patch (array "a" is 44100 samples, vertical range -10 to +50):
hjh
PS Hm, now I guess the [route float bang] isn't needed after all.