[SOLVED] video still remains after disconnecting [ofelia d] script
I've modified some abstractions created by @60hz for prototyping fast with ofelia (BTW, thanks a million for your abstractions, they're great! And of course a million thanks to @cuinjune for creating Ofelia!). Things seem to work OK, but I've run into the following issue. I have this patch (I've renamed [gl.draw] to [ofelia.draw], and likewise for the other abstractions):
[ofelia.draw]
|
[ofelia.cube]
and I see a white cube in the Ofelia window. Then I make the following connections:
[ofelia.draw]
|
[ofelia.movie]
|
[ofelia.cube]
and I get the video playing on the same cube. Up to this point everything is fine. But if I go back to the first patch, I no longer get a white cube; instead I get a still frame of the video file loaded by [ofelia.movie]. Here are the scripts of the abstractions (almost identical to @60hz's abstractions):
[ofelia.draw] (I'm returning an ofColor because I want all abstractions to output pointers rather than bangs; maybe there's a better solution for this):
ofelia d draw$0;
local c = ofCanvas(this);
local args = c:getArgs();
local depth = true;
local screenwidth;
local screenheight;
local widthHeigh;
;
function M.new();
if args[2] == nil then depth = true;
else;
M.depth(args[2]);
end;
if ofWindow.exists then;
screenwidth = ofGetWidth();
screenheight = ofGetHeight();
end;
end;
;
function M.screensize(list);
screenwidth, screenheight = list[1], list[2];
end;
;
function M.depth(string);
if string == "3d" then;
depth = true;
else;
depth = false;
end;
end;
function M.bang();
ofSetDepthTest(depth);
ofTranslate(screenwidth*0.5, screenheight*0.5);
return ofColor(255, 255, 255);
end;
[ofelia.movie]:
ofelia d -c17 videoplayer-$0;
local canvas = ofCanvas(this);
local args = canvas:getArgs();
local videoplayer = ofVideoPlayer();
local filename, start, loop = args[1], args[2], args[3];
local loaded = 0;
;
function M.new();
ofWindow.addListener("setup", this);
if args[1] == nil then print("No file found");
else M.open(filename);
end;
if args[2] == 1 then M.play();
end;
if args[3] == nil then loop = 0;
end;
end;
;
function M.free();
ofWindow.removeListener("setup", this);
end;
;
function M.setup();
M.open(filename);
end;
;
function M.open(string);
if ofWindow.exists then;
videoplayer:close();
videoplayer:load(string);
if (videoplayer:isLoaded()) then;
print("loaded " .. string);
videoplayer:update();
end;
end;
end;
function M.url(string);
if ofWindow.exists then;
videoplayer:close();
videoplayer:load(string);
if (videoplayer:isLoaded()) then;
print("loaded " .. string);
videoplayer:update();
end;
end;
end;
function M.play() videoplayer:play() end;
function M.stop() videoplayer:stop() end;
function M.pause() videoplayer:setPaused(true) end;
function M.speed(float) videoplayer:setSpeed(float) end;
function M.frame(float) videoplayer:setFrame(float) end;
function M.volume(float) videoplayer:setVolume(float) end;
function M.loop(float);
if float == 0 then videoplayer:setLoopState(OF_LOOP_NONE);
elseif float == 1 then videoplayer:setLoopState(OF_LOOP_NORMAL);
elseif float == 2 then videoplayer:setLoopState(OF_LOOP_PALINDROME);
end;
end;
function M.get()
return ofTable (videoplayer, videoplayer:isLoaded(), videoplayer:isPlaying(), videoplayer:getCurrentFrame(), videoplayer:getTotalNumFrames(), videoplayer:getWidth(), videoplayer:getHeight());
end;
;
function M.pointer(p);
videoplayer:update();
videoplayer:bind();
return videoplayer;
end;
function M.bang();
videoplayer:update();
videoplayer:bind();
return videoplayer;
end;
Inside [ofelia.movie] there's this patch:
[ofelia d movie_script] <- this is the ofelia object that loads the script above
|
[t a a]
| |
| [outlet]
|
[ofelia d videoplayer_unbind;] <- this and the next three lines are a single object
[function M.pointer(p); ]
[p:unbind(); ]
[end; ]
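In other words, relying on the right-to-left firing order of [t a a], the intended order per rendered frame is roughly this (a Lua sketch only, not an actual ofelia script; drawBox() just stands in for the [ofelia.cube] drawing code):
function renderFrame()
  ofSetDepthTest(true)                      -- [ofelia.draw] M.bang()
  ofTranslate(screenwidth * 0.5, screenheight * 0.5)
  videoplayer:update()                      -- [ofelia.movie] M.bang(): update and bind the texture
  videoplayer:bind()
  drawBox()                                 -- [ofelia.cube] M.pointer(p): the box is drawn with the bound texture
  videoplayer:unbind()                      -- the pointer coming back through the chain hits the unbind script last
end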
and this is [ofelia.cube]:
ofelia d $0-box;
local c = ofCanvas(this);
local args = c:getArgs();
local width, height, depth, resw, resh, resd, drawmode, strokeweight = args[1], args[2], args[3], args[4], args[5], args[6], args[7], args[8];
local position, orientation, scale = ofVec3f(0, 0, 0), ofVec3f(0, 0, 0), ofVec3f(1, 1, 1);
;
function M.new();
ofWindow.addListener("setup", this);
if args[1] == nil then width = 100 end;
if args[2] == nil then height = 100 end;
if args[3] == nil then depth = 100 end;
if args[4] == nil then resw = 5 end;
if args[5] == nil then resh = 5 end;
if args[6] == nil then resd = 5 end;
if args[7] == nil then drawmode = "fill" end;
if args[8] == nil then strokeweight = 1 end;
M.setup();
end;
;
function M.free();
ofWindow.removeListener("setup", this)
end;
;
function M.setup();
box$0 = ofBoxPrimitive();
end;
;
function M.resw(float) resw = float end;
function M.resh(float) resh = float end;
function M.resd(float) resd = float end;
function M.width(float) width = float end;
function M.height(float) height = float end;
function M.depth(float) depth = float end;
function M.draw(string) drawmode = string end;
function M.stroke(float) strokeweight = float end;
function M.position(list) position = ofVec3f(list[1], list[2], list[3]) end;
function M.orientation(list) orientation = ofVec3f(list[1], list[2], list[3]) end;
function M.scale(list) scale = ofVec3f(list[1], list[2], list[3]) end;
;
function M.pointer(p);
ofSetLineWidth(strokeweight);
box$0:setPosition (position:vec3());
box$0:setOrientation (orientation:vec3());
box$0:setScale (scale:vec3());
box$0:set(width, height, depth, math.abs(resw), math.abs(resh), math.abs(resd));
if drawmode == "fill" then box$0:drawFaces() end;
if drawmode == "point" then box$0:drawVertices() end;
if drawmode == "line" then box$0:drawWireframe() end;
return p;
end;
function M.bang();
ofSetLineWidth(strokeweight);
box$0:setPosition (position:vec3());
box$0:setOrientation (orientation:vec3());
box$0:setScale (scale:vec3());
box$0:set(width, height, depth, math.abs(resw), math.abs(resh), math.abs(resd));
if drawmode == "fill" then box$0:drawFaces() end;
if drawmode == "point" then box$0:drawVertices() end;
if drawmode == "line" then box$0:drawWireframe() end;
return anything;
end;
This is a lot of information, but I think it's necessary for anyone to be able to help. There's probably something I'm missing here. @cuinjune, any hints?
Windowed-sync oscillator: Style questions
- Is [fexpr~] the best way to check for the phasor reset?
I did it in the same way, or with Cyclone.
@jameslo said:
I think you and I were posting on another topic when @alexandros suggested using [rzero_rev~ 0] to get the previous sample.
cool
- The [rpole~ 1] is essentially a phasor with a signal-rate reset (as opposed to [phasor~], which can be reset, but only with control messages). Is there a better way?
There are [vphasor~], [vphasor2~] and [vsamphold~] from @Maelstorm. I have not tried them yet.
https://forum.pdpatchrepo.info/topic/10192/vphasor-and-vphasor2-subsample-accurate-phasors/
I also patched my own [vphasor~] with [fexpr~], and another one using a [pd] subpatch with [block~ 1] and [tabwrite~] and another [pd] subpatch with [block~ 1] and [tabreceive~] in a feedback loop, which basically forms a ramp by adding itself up.
Both might be expensive? I have not compared their CPU load yet.
Your [rpole~ 1] is neat.
Why not in vanilla?
I am wondering about this, too. I am working on sample-accurate "audio control" and slowly making progress.
In your patch, [rpole~ 1] works with a signal inlet, doesn't it?
Or what was it you were trying to say?
Of course you can deform the ramp with [*~] [+~] [samplerate~] etc.
Which is the other forum you mentioned? I am keen to learn more about DSP techniques.
(Tbh I'm a little bit proud of this one
)
yeay!
Windowed-sync oscillator: Style questions
Hi,
The topic of windowed sync oscillators came up on another forum. For fun, and also to improve my Pd chops -- this is what I came up with. (One reason for sharing is that I feel like this is a new level of Pd-idiomatic patching for me.)
I have a couple of questions, below the patch.
-
Is [fexpr~] the best way to check for the phasor reset? With cyclone I could do [rzero~ 1] --> [>~ 0] I think. The goal is a signal that is 1 while the phasor is incrementing, and 0 for exactly one sample when it wraps around. It must be 0 for exactly one sample, because this is used to reset the [rpole~] accumulator (syncing the sine oscillator).
(I've struggled with the lack of signal-rate comparators before. Yes, they're in cyclone, but... aren't these fundamental operators? Why not in vanilla? From past conversations, I gather that often, when a "basic" feature isn't present in vanilla, it's because you can build it from objects that do exist in vanilla. But I never figured out a good way, apart from [expr~] / [fexpr~], to do that for signal comparisons -- which feels like cheating in a way.)
-
The [rpole~ 1] is essentially a phasor with a signal-rate reset (as opposed to [phasor~], which can be reset, but only with control messages). Is there a better way? (Tbh I'm a little bit proud of this one
)
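For reference, per sample the whole thing boils down to roughly this (a Lua-style sketch of the signal math, not a Pd patch; I'm assuming the gate signal goes to [rpole~ 1]'s right-hand coefficient inlet, and inc is the synced oscillator's phase increment, i.e. frequency / samplerate):
local prev = 0                                  -- previous sample of the master [phasor~]
local acc = 0                                   -- the [rpole~ 1] accumulator (the synced ramp)

function tick(phase, inc)                       -- one sample: master phasor value, slave increment
  local gate = (phase >= prev) and 1 or 0       -- 1 while rising, 0 for exactly one sample at the wrap
  prev = phase
  acc = inc + gate * acc                        -- y[n] = x[n] + gate[n] * y[n-1]
  return acc                                    -- restarts from inc on every wrap of the master phasor
end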
Thanks,
hjh
CPU usage of idle patches, tabread4~?
@zigmhount said:
Good advice on the discontinuities. I was kind of hoping that [phasor~] would handle this better than restarting [line~] from 1 to 0, but I suppose that it also just jumps from 1 to 0?
Yep. Regardless of whether you're using [line~] or [phasor~] to drive [tabread4~], discontinuities can happen anytime you abruptly jump from one spot to another. This isn't unique to Pd: if you perform edits to a waveform in any DAW without crossfades at the edit points, you will get clicks/pops (unless you get lucky & happen to edit at a zero crossing). So, if the first & last values stored in your array (loop) are not the same & the loop restarts (either beginning a new 0->1 ramp with [line~] or letting [phasor~] wrap around), you'll get a pop unless you use windowing.
In your example, you record the ramp up and ramp down into the array itself, right? Is that not audible when looping the same array over and over? Thanks to this ramp in the array, I guess that [tabread4~] may not click even if started without a volume ramp in, would it?
Yes indeed, that example essentially records the fade in/out into the array, so you wouldn't hear clicks when the loop wraps even without using a window with [tabread4~]. However, note that this is only one of the causes I mentioned... if you're eventually planning to add any playback controls with abrupt changes (such as pause/stop, start from the middle, rewind, jump to a new position, etc.), you'll need a fade out before the change and a fade in after the change. FYI, my personal reason for recording the fade into the array itself is that I sometimes use a phase vocoder for time stretching of my loops, which seems to misbehave if I have extreme values at the start/end of the array.
And, yes, the windowing can be audible, but it really depends on the nature of the audio that you're recording into the array. I randomly chose a 10ms fade in/out for the example above, but that could be any duration you like (you might want it to be adjustable if you're looping many different types of sounds, to experiment with shorter/longer fade times). There are also ways of shaping the curve of the fades if you really want to minimize their chances of being audible. But even if the fades are obvious, I think you'll still find them to be a million times less strident than a loud speaker pop.
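Written out as a Lua-style sketch rather than a Pd patch (the 10 ms length and the array handling are just illustrative), the idea of recording the fades into the loop is simply:
local function applyFades(loop, samplerate)       -- loop is a table of samples
  local fadelen = math.floor(0.010 * samplerate)  -- 10 ms fade in/out, in samples
  local n = #loop
  for i = 1, fadelen do
    local g = (i - 1) / fadelen                   -- linear gain, 0 .. just under 1
    loop[i] = loop[i] * g                         -- fade in at the start
    loop[n - i + 1] = loop[n - i + 1] * g         -- fade out at the end
  end
  return loop
end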
CPU usage of idle patches, tabread4~?
@beep.beep Very useful, thanks again! Note that some of the glitches I hear (like the one I mentioned with the emoji window) can also be heard when Pd is not running at all, so while I'm certain that a higher delay will help, the root cause is somewhere else. I will try.
Good advice on the discontinuities. I was kind of hoping that [phasor~] would handle this better than restarting [line~] from 1 to 0, but I suppose that it also just jumps from 1 to 0? I've seen other discussion threads about windowing, involving for example interpolation between the end of the array and its start when beginning a new ramp with [line~], but I'll probably have to spend quite some time to get this to behave properly.
More questions though:
In your example, you record the ramp up and ramp down into the array itself, right? Is that not audible when looping the same array over and over? Thanks to this ramp in the array, I guess that [tabread4~] may not click even if started without a volume ramp in, would it?
How can I make a [phasor~] myself?
My goal is to make a [phasor~] that does only one oscillation when it receives a message. I can't simply use a [line~] (at least AFAIK) to do this, because I want to do FM on that [phasor~]. Is there a way to see what an object is made of, like opening a patch inside a patch, or is it just C code? I'm not good at C, BTW. Could someone make a patch that does the same as [phasor~], or one that does what I intend? Both would be nice, because if I just know how [phasor~] is made, I can probably transform it into what I need.
If none of the options above are possible: I have tried to use [expr~]/[fexpr~] to work out when to turn off the [phasor~] (or its output). One attempt was [fexpr~ if($x1[-1]==1,0,$x2[0])], where $x1 is the output of [phasor~] and $x2 is the frequency for [phasor~], with the output connected back to [phasor~], but that didn't work. Can somebody make a patch that reacts to [phasor~] reaching its peak of 1 (exactly between 1 and 0, when the 1 is over and the 0 has not yet started) and turns the [phasor~] off?
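What I'm after, per sample, is roughly this (a Lua-style sketch of the behaviour, not a working Pd patch; freq can change on every sample, which is where the FM would come in):
local phase = 0
local running = false

function trigger()                        -- the message: start exactly one cycle
  phase = 0
  running = true
end

function tick(freq, samplerate)           -- one audio sample
  if not running then return 0 end
  local out = phase
  phase = phase + freq / samplerate       -- same increment a [phasor~] would use
  if phase >= 1 then running = false end  -- stop after exactly one oscillation
  return out
end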
help is greatly appreciated here:D
Problems with ableton link sampler
For everyone else....
[phasor~] plays the array "sampler". It makes ramps from 0 to 1 at the speed set by its frequency.
Your patch then multiplies those values by the total number of samples in the array.
When the phase is reset by a zero to its right inlet, the [phasor~] jumps to zero and starts its ramp to 1.
So as the zero arrives the array is played from the beginning...... which is what you need as you stop and restart the sample.
You had let [phasor~] continue to run...... and it was at some point on its ramp..... not at the start.
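In other words, per audio sample the patch is doing roughly this (a Lua-style sketch of the logic, not the patch itself):
local phase = 0                                 -- the [phasor~] state

function tick(freq, samplerate, numsamples, reset)
  if reset then phase = 0 end                   -- the zero sent to [phasor~]'s right inlet
  local index = phase * numsamples              -- the multiply after [phasor~]
  phase = phase + freq / samplerate
  if phase >= 1 then phase = phase - 1 end      -- the sawtooth wraps, so the loop restarts
  return index                                  -- the position read from the "sampler" array
end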
David.
samphold-ing a previous signal value
I would like to create a Pd equivalent of SuperCollider's LFDNoise1 LFO (random line segments).
It's basically like this. In Pd, I can see how to do almost all of it, except for sample/holding the previous random value.
(
a = {
// pd: phasor~
var phasor = LFSaw.ar(1) * 0.5 + 0.5, // 0-1
// pd: [rzero~ 1] --> [<~ 0]
trig = HPZ1.ar(phasor) < 0, // 1 when phasor drops
// pd: [samphold~]
nextEndpoint = Latch.ar(WhiteNoise.ar, trig),
// pd: I don't know how to do this
prevEndpoint = Latch.ar(Delay1.ar(nextEndpoint), trig),
// pd: easy math
line = (nextEndpoint - prevEndpoint) * phasor + prevEndpoint;
// simple test signal: map bipolar LFO exponentially to freq
SinOsc.ar(400 * (2 ** line), 0, 0.1).dup
}.play;
)
a.free;
I made one failed attempt using [phasor~] --> [rzero~ 1] --> [*~ -1] --> [threshold~] --> [random], but if the phasor jumps to zero in the middle of a control block, then the random calculation is out of sync and the output glitches slightly. So I need to keep all of it in the signal domain (no control objects).
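For reference, here is the same logic written out per sample as a Lua-style sketch (not a Pd patch); the prevEnd = nextEnd line is exactly the Delay1 + Latch step I don't know how to do in the signal domain:
local phase, prevphase = 0, 0
local nextEnd, prevEnd = 0, 0                     -- the two latched random endpoints

function tick(freq, samplerate)                   -- one audio sample
  local dropped = phase < prevphase               -- the [rzero~ 1] --> [<~ 0] idea: phasor just wrapped
  if dropped then
    prevEnd = nextEnd                             -- latch the old endpoint (Delay1 + Latch)
    nextEnd = 2 * math.random() - 1               -- latch a new white-noise endpoint in -1..1
  end
  local line = (nextEnd - prevEnd) * phase + prevEnd   -- interpolate over the current segment
  prevphase = phase
  phase = (phase + freq / samplerate) % 1
  return line
end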
Thanks,
hjh
PD's scheduler, timing, control-rate, audio-rate, block-size, (sub)sample accuracy,
Hello, 
this is going to be a long one.
After years of using Pd, I am still confused about its timing and scheduling.
I have collected many snippets about this topic from here and there,
which all together are really confusing to me.
*I think it is very important to understand how timing works in detail for low-level programming … *
(For example, the number of heavily jittering sequencers in hardware and software makes me wonder what sequencers are actually made for? lol)
This is a collection of my findings regarding this topic, a bit messy and with confused questions.
I hope we can shed some light on this.
- a)
The first time I had issues with the Pd scheduler vs. how I thought my patch should work is described here:
https://forum.pdpatchrepo.info/topic/11615/bang-bug-when-block-1-1-1-bang-on-every-sample
The answers were:
„
[...] it's just that messages actually only process every 64 samples at the least. You can get a bang every sample with [metro 1 1 samp] but it should be noted that most pd message objects only interact with each other at 64-sample boundaries, there are some that use the elapsed logical time to get times in between though (like vsnapshot~)
also this seems like a very inefficient way to do per-sample processing..
https://github.com/sebshader/shadylib http://www.openprocessing.org/user/29118
whale-av
@lacuna An excellent simple explanation from @seb-harmonik.ar.
Chapter 2.5 onwards for more info....... http://puredata.info/docs/manuals/pd/x2.htm
David.
“
There it is written: http://puredata.info/docs/manuals/pd/x2.htm
„2.5. scheduling
Pd uses 64-bit floating point numbers to represent time, providing sample accuracy and essentially never overflowing. Time appears to the user in milliseconds.
2.5.1. audio and messages
Audio and message processing are interleaved in Pd. Audio processing is scheduled every 64 samples at Pd's sample rate; at 44100 Hz. this gives a period of 1.45 milliseconds. You may turn DSP computation on and off by sending the "pd" object the messages "dsp 1" and "dsp 0."
In the intervals between, delays might time out or external conditions might arise (incoming MIDI, mouse clicks, or whatnot). These may cause a cascade of depth-first message passing; each such message cascade is completely run out before the next message or DSP tick is computed. Messages are never passed to objects during a DSP tick; the ticks are atomic and parameter changes sent to different objects in any given message cascade take effect simultaneously.
In the middle of a message cascade you may schedule another one at a delay of zero. This delayed cascade happens after the present cascade has finished, but at the same logical time.
2.5.2. computation load
The Pd scheduler maintains a (user-specified) lead on its computations; that is, it tries to keep ahead of real time by a small amount in order to be able to absorb unpredictable, momentary increases in computation time. This is specified using the "audiobuffer" or "frags" command line flags (see getting Pd to run ).
If Pd gets late with respect to real time, gaps (either occasional or frequent) will appear in both the input and output audio streams. On the other hand, disk streaming objects will work correctly, so that you may use Pd as a batch program with soundfile input and/or output. The "-nogui" and "-send" startup flags are provided to aid in doing this.
Pd's "realtime" computations compete for CPU time with its own GUI, which runs as a separate process. A flow control mechanism will be provided someday to prevent this from causing trouble, but it is in any case wise to avoid having too much drawing going on while Pd is trying to make sound. If a subwindow is closed, Pd suspends sending the GUI update messages for it; but not so for miniaturized windows as of version 0.32. You should really close them when you aren't using them.
2.5.3. determinism
All message cascades that are scheduled (via "delay" and its relatives) to happen before a given audio tick will happen as scheduled regardless of whether Pd as a whole is running on time; in other words, calculation is never reordered for any real-time considerations. This is done in order to make Pd's operation deterministic.
If a message cascade is started by an external event, a time tag is given it. These time tags are guaranteed to be consistent with the times at which timeouts are scheduled and DSP ticks are computed; i.e., time never decreases. (However, either Pd or a hardware driver may lie about the physical time an input arrives; this depends on the operating system.) "Timer" objects which measure time intervals measure them in terms of the logical time stamps of the message cascades, so that timing a "delay" object always gives exactly the theoretical value. (There is, however, a "realtime" object that measures real time, with nondeterministic results.)
If two message cascades are scheduled for the same logical time, they are carried out in the order they were scheduled.
“
Does [block~] with a size smaller than 64 not change the interval of the message/control-domain calculation?
Is only the number of audio samples calculated at once decreased?
Is this the reason [block~] sizes should always be … 128, 64, 32, 16, 8, 4, 2, 1, nothing in between, because otherwise it would mess with the calculation every 64 samples?
With block sizes smaller than 64, how do I know which messages are handled in between the blocks and which are not?
How does [vline~] execute?
Does it calculate, between sample 64 and 65, a ramp of samples (with the delay beforehand also calculated in samples), running like a "stupid array" at audio rate?
While samples 1-64 are running, does Pd do audio only?
[metro 1 1 samp]
How could I have known that? The helpfile doesn't mention this. EDIT: yes, it does.
(Off-topic: actually the whole forum is full of Pd vocabulary questions.)
How is this calculation done?
But you can only „use“ the metro counts every 64 samples, can't you?
Is the timing of [metro] exact? Will the milliseconds dialed in be on point, or will they jitter with the 64-sample interval?
Even if it is exact, the subsequent calculation will happen within that 64-sample frame!?
- b )
There are [phasor~], [vphasor~] and [vphasor2~] … and [vsamphold~]
https://forum.pdpatchrepo.info/topic/10192/vphasor-and-vphasor2-subsample-accurate-phasors
“I've been getting back into Pd lately and have been messing around with some granular stuff. A few years ago I posted a [vphasor.mmb~] abstraction that made the phase reset of [phasor~] sample-accurate using vanilla objects. Unfortunately, I'm finding that with pitch-synchronous granular synthesis, sample accuracy isn't accurate enough. There's still a little jitter that causes a little bit of noise. So I went ahead and made an external to fix this issue, and I know a lot of people have wanted this so I thought I'd share.
[vphasor~] acts just like [phasor~], except the phase resets with subsample accuracy at the moment the message is sent. I think it's about as accurate as Pd will allow, though I don't pretend to be an expert C programmer or know Pd's api that well. But it seems to be about as accurate as [vline~]. (Actually, I've found that [vline~] starts its ramp a sample early, which is some unexpected behavior.)
[…]
“
- c)
Later I discovered that Pd has jittery MIDI because it doesn't handle MIDI at a higher priority than everything else (GUI, OSC, message domain, etc.).
EDIT:
Tried round-trip MIDI messages with the -nogui flag:
still some jitter.
Didn't try the -nosleep flag yet (see below).
- d)
So I looked into the sources of Pd:
scheduler with m_mainloop()
https://github.com/pure-data/pure-data/blob/master/src/m_sched.c
And found this paper
Scheduler explained (in German):
https://iaem.at/kurse/ss19/iaa/pdscheduler.pdf/view
which explains the interleaving of the control and audio domains, as in the text from @seb-harmonik.ar, with some drawings,
plus the distinction between the two (control vs. audio / real time vs. logical time / xruns vs. burst batch processing).
And the "timestamping objects" listed below.
And the mainloop:
Loop
- messages (variable duration)
- dsp (relatively constant duration)
- sleep
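My mental model of that loop, as a Lua-style sketch (function names made up; this is just how I picture it after reading m_sched.c and the paper, not the actual code):
function scheduler_pass()
  run_due_message_cascades()      -- variable duration: clocks, delays, incoming MIDI, GUI events;
                                  -- each cascade runs to completion before anything else happens
  compute_one_dsp_tick()          -- roughly constant duration: one block of 64 samples
  sleep_a_little()                -- the "sleepgrain" mentioned further down, if we are ahead of real time
end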
With
[block~ 1 1 1],
are calculations in the control domain done between every sample? But is there still a 64-sample interval somehow?
Why is [block~ 1 1 1] more expensive? The amount of data is the same!? Is it the overhead that makes the difference? Calling up operations, etc.?
Timing-relevant objects
from iemlib:
[...]
iem_blocksize~ blocksize of a window in samples
iem_samplerate~ samplerate of a window in Hertz
------------------ t3~ - time-tagged-trigger --------------------
-- inputmessages allow a sample-accurate access to signalshape --
t3_sig~ time tagged trigger sig~
t3_line~ time tagged trigger line~
--------------- t3 - time-tagged-trigger ---------------------
----------- a time-tag is prepended to each message -----------
----- so these objects allow a sample-accurate access to ------
---------- the signal-objects t3_sig~ and t3_line~ ------------
t3_bpe time tagged trigger break point envelope
t3_delay time tagged trigger delay
t3_metro time tagged trigger metronom
t3_timer time tagged trigger timer
[...]
What are the different use cases of [line~], [vline~] and [t3_line~]?
And of [phasor~], [vphasor~] and [vphasor2~]?
When should I use [block~ 1 1 1] and when shouldn't I?
Does [line~] start at block boundaries defined with [block~] and end with exact timing?
Does [vline~] start the line within the block?
And [t3_line~]???? Are they some kind of interrupt, shortcutting within the scheduling???
- c) again)
https://forum.pdpatchrepo.info/topic/1114/smooth-midi-clock-jitter/2
I read this in the html help for Pd:
„
MIDI and sleepgrain
In Linux, if you ask for "pd -midioutdev 1" for instance, you get /dev/midi0 or /dev/midi00 (or even /dev/midi). "-midioutdev 45" would be /dev/midi44. In NT, device number 0 is the "MIDI mapper", which is the default MIDI device you selected from the control panel; counting from one, the device numbers are card numbers as listed by "pd -listdev."
The "sleepgrain" controls how long (in milliseconds) Pd sleeps between periods of computation. This is normally the audio buffer divided by 4, but no less than 0.1 and no more than 5. On most OSes, ingoing and outgoing MIDI is quantized to this value, so if you care about MIDI timing, reduce this to 1 or less.
„
Why is there this "sleep time" in Pd? For energy saving??????
This seems to slow down the whole process chain?
Can I control this with a startup flag or from within Pd? Or only in the sources?
There is a startup flag for loading a different scheduler, but how to use it is not documented.
- e)
[pd~] helpfile says:
ATTENTION: DSP must be running in this process for the sub-process to run. This is because its clock is slaved to audio I/O it gets from us!
Doesn't [pd~] work within a Camomile plugin!?
How are things scheduled in Camomile? How is the communication with the DAW handled?
- f)
and slightly off-topic:
There is a batch mode:
https://forum.pdpatchrepo.info/topic/11776/sigmund-fiddle-or-helmholtz-faster-than-realtime/9
EDIT:
- g)
I didn't look into it, but there is:
https://grrrr.org/research/software/
clk – Syncable clocking objects for Pure Data and Max
This library implements a number of objects for highly precise and persistently stable timing, e.g. for the control of long-lasting sound installations or other complex time-related processes.
Sorry for the mess!
Could you please help me sort things out a bit? Maybe some real-world examples would help, too.
Possible audio file playback methods
@Transcend Yes, [phasor~] just scans through the indexes at a desired speed and over a desired range, and [tabread4~] returns the sample values of those indexes at the audio rate.
[phasor~] actually sends a ramp..... 0 to 1, falling instantly to 0 and ramping again...... so a sawtooth from 0 to 1 at the frequency sent to its left inlet. So to get it to play all the indexes, its output needs to be multiplied by the total number of samples.
The frequency needs to be set so that the time for each ramp is the same as the normal time length of the sample....... for playback at normal speed. Changing the frequency will speed up or slow down playback, and inverting the ramp (horizontally) will play backwards.
[mess] uses some math which changes as random numbers arrive........
The [*~ 1] will change the start point and the speed at the same time. They could be separated using a [+~] as well, but I kept it reasonably simple. The speed changes the frequency but does no stretching (which would maintain the time length of what is played), so higher pitch will be shorter and lower pitch longer.
The Grain slider changes the rate at which random numbers arrive, and [random] spews any number between zero and the size of the array..... a value that it gets from [soundfiler].
The [phasor~] gets its frequency from dividing the samplerate (I assumed 44100) by the number of samples, so that if nothing was messed up it would simply play the file at the correct speed from beginning to end repeatedly.
The [spigot] allows the same random numbers to change the frequency of [phasor~] as well.
You can reset that by closing the spigot and clicking the [44100( message. The clicking of the message could be automated by a bang when the spigot is turned off like this........ mess.pd
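As a rough Lua-style sketch of the numbers involved (assuming a samplerate of 44100 as in the patch; the array size is only an example):
local samplerate = 44100
local arraysize = 220500                        -- e.g. a 5-second file, as reported by [soundfiler]

local freq = samplerate / arraysize             -- [phasor~] frequency for normal-speed playback (0.2 Hz here)
-- per audio sample: index = phase * arraysize, where phase is the 0..1 ramp from [phasor~]
-- doubling freq plays twice as fast (and an octave higher); halving it plays slower and lower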
David.