-
johnwbyrd
@whale-av I saw readsf~, and I also noticed that it can take parameters specifying ranges of the input sound file. But I don't see any indication that readsf~ runs asynchronously from the main thread -- it seems to block the rest of Pd while the file system wakes up and delivers more audio data. I think it might make more sense to tell readsf~, or some equivalent, to asynchronously fill an array range with data from an audio file and then send a bang once that particular read is complete. Do I misunderstand the nature of readsf~? I can dig around in the source code, but I bet I'm not the first person to consider this problem.
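The pattern I have in mind, sketched in plain Python rather than Pd (the function name and parameters are hypothetical, and it assumes a mono 16-bit WAV -- this is the wished-for behaviour, not readsf~'s actual interface):

```python
import threading
import wave

def async_fill(path, dest, start_frame, on_done):
    """Fill `dest` with frames read from a WAV file starting at
    `start_frame`, doing the I/O on a worker thread; call `on_done`
    (the 'bang') once the read completes. Assumes mono 16-bit PCM.
    All names here are hypothetical illustrations."""
    def worker():
        with wave.open(path, "rb") as wf:
            wf.setpos(start_frame)
            raw = wf.readframes(len(dest))
        # Unpack little-endian 16-bit samples into the destination array.
        for i in range(min(len(dest), len(raw) // 2)):
            dest[i] = int.from_bytes(raw[2 * i:2 * i + 2], "little", signed=True)
        on_done()  # the 'bang': this particular read is complete
    threading.Thread(target=worker, daemon=True).start()
```

The main thread never touches the disk; it just gets notified when the requested range has landed in the array.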
-
johnwbyrd
There are some things I would have expected to be implemented in vanilla Pd by now, but I don't see them in any of the archives. Perhaps you could advise.
Among them:
- An audio file player which does not keep an entire sound file in memory, but rather streams in blocks from the file system as needed and discards them once played.
- A timeline editor for sample sequencing, ideally based on such an audio player.
- A method for keeping multiple parallel mono audio tracks in sync.
Most of the attempts at these that I've found seem to depend on old externals that are no longer maintained. Advice appreciated, thank you.
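To be concrete about the first item, the player I mean would read fixed-size blocks on demand and discard each one once consumed, something like this Python sketch (assuming WAV input; `block_frames` is a made-up parameter name):

```python
import wave

def stream_blocks(path, block_frames=4096):
    """Yield successive blocks of raw frames from a sound file,
    holding only one block in memory at a time -- the streaming
    behaviour described above, not any existing Pd object."""
    with wave.open(path, "rb") as wf:
        while True:
            data = wf.readframes(block_frames)
            if not data:
                break
            yield data  # caller consumes the block, then it is discarded
```

Memory use stays constant no matter how long the file is, which is the whole point for sample sequencing.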
-
johnwbyrd
Nanotonality is a bit like saying "if we increment a counter from zero to one, increasing it by epsilon each time, then we will have to increment it 1/epsilon times, and that's a really big number, woo." I suggest that if you really care about generating a signal that beats once every trillion years, go get yourself a bignum library and write your own DSP code instead of fussing over 32-bit vs. 64-bit floats. There are well-studied psychoacoustic limits on humans' ability to discriminate pitch. Here's a typical study: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003336
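To put rough numbers on that (plain Python; the 440 Hz reference and the one-ulp frequency difference are my own illustrative assumptions):

```python
# One float32 ulp near 1.0 -- the "epsilon" in the counter analogy.
eps32 = 2.0 ** -23            # ~1.19e-7
steps = 1.0 / eps32           # 8388608 increments to get from 0 to 1

# If two oscillators near 440 Hz differ by one float32 ulp of frequency,
# they beat with period 1 / delta_f. That frequency difference is already
# orders of magnitude below any human pitch-discrimination threshold.
delta_f = 440.0 * eps32       # ~5.2e-5 Hz
beat_period_s = 1.0 / delta_f  # ~19065 s, about 5.3 hours
```

So even single-precision resolution sits far beyond what the studies above say listeners can hear.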