Dear all, the tabread4~ object does a 4-point interpolated read, but it has no means of smoothing changes in the delay time to avoid clicks and artifacts. I have added a first-order lowpass filter to smooth the read head, but the artifacts are still audible, even if smaller.
What's the best way to obtain a smooth change in the delay time (e.g. with an LFO signal) without artifacts? Is there an object in Pd for this, or should one modify the source code of tabread4~?
All the best
I'm considering creating an audio networking prototype application in PD on Android.
I already have networking externals I wrote in C a few years ago (not unlike netsend~ and netreceive~), but I'm new to libpd. The networking externals use POSIX sockets.
So questions are:
- are sockets OK under Android? Since Android is Linux-based I believe they will work when compiled with the NDK, but I fear they may interfere with the Android networking layer (which I don't know, and which I don't think I can call from plain C code anyway).
- how do I embed my externals in libpd? Since libpd is basically vanilla Pd, I guess I'll somehow have to let it know that there are additional external binaries somewhere...
Thank you guys
Thank you both.
Seb-harmonik, thank you for the big effort!
The code looks clearer, although I should take more time to study the double/int trickery. If I get it right, UNITBIT32 and the combined use of int and double are meant to get double-precision arithmetic on fixed-point machines, or at least to avoid needing a double-precision ALU. Is that right?
I'll try to fix my "porting"; if I still get it wrong I might look at tabread4~, which also achieves higher accuracy thanks to its Lagrange interpolation.
BTW: would the osc~ output lose enough accuracy to be audible if it used only single-precision floats?
I've written some externals in the past, but I never ran into the basic problem of writing oscillators, since Pd already has plenty of them. It's surely a well-known topic, but I've never faced it myself. I want to write my own jack client to generate a sine wave, test audio from the command line, and use it on an ARM Cortex-A8 board.
I've been looking at some code, for instance Fons Adriaensen's jack_delay, where he generates multiple sine waves for delay-measurement purposes. Unfortunately, his code relies on sinf from math.h and has a branch with two AND-ed condition checks. It costs 50% of my 1 GHz ARM core... a bit too much!
I found that replacing sinf with a Taylor approximation (up to x^13) lowers the computational cost, but it doesn't work well for sound synthesis for various reasons.
In the end I decided to look at [osc~] and port it to my bare-bones jack client. The cosine table creation is easy, and so is the linear interpolation, but I'm not sure I fully understand how the lookup table is read: the way the table address is calculated looks totally weird and unrelated to the frequency argument of [osc~]. In fact x_f (the frequency) is unused, and since the input signal in my case is zeroed, I can't see how the code is supposed to generate a fixed-frequency sine. Could anyone shed some light? Anyway, I copied the code and adapted it to my client, and the CPU load dropped by a factor of 10 compared to the sinf implementation. Of course it doesn't play any sound yet, so I need to understand the code and fix what is wrong with it.
BTW: I read the [phasor~] code, and it looks similar, just with no LUT of course. It points to a paper by Hoelderich (ICMC 1995), but looking in the ICMC archives, his paper seemingly has nothing to do with generating signals...