Hi folks,
I am trying to accomplish the following:
I get a continuous stream of values out of FaceOSC with tracking data, e.g. the mouth width.
What I want to do is detect whether the mouth is currently opening or closing while the movement is happening.
The output is one value between 1 and 10 per frame, at a frame rate of 60 fps.
I tried taking averages of the values over roughly half a second, feeding them into {pipe}, and using a simple > / < comparison between the real-time data and the delayed pipe data. The problem is that the comparison always fires twice: once when the current value comes in, and once when {pipe} produces its output.
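To make the idea clearer, here is a minimal sketch of the logic I'm aiming for, written in Python rather than as a patch (the function name and the printing are just placeholders, not my actual setup): keep a rolling buffer of the last half second, and compare each incoming value against that baseline only once, at the moment it arrives.

```python
from collections import deque

FPS = 60
WINDOW = FPS // 2  # half a second of samples

history = deque(maxlen=WINDOW)  # rolling buffer of recent mouth-width values


def on_new_value(value):
    """Called once per frame with the current mouth-width value (1-10)."""
    if len(history) == WINDOW:
        baseline = sum(history) / len(history)  # average over the last half second
        if value > baseline:
            print("opening")
        elif value < baseline:
            print("closing")
        # equal -> no change detected this frame
    history.append(value)
```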
Can somebody give me a hint on how to do this more elegantly?
Thank you so much in advance! The moment of frustration is nigh.
Best regards,
Janek