• catfingers

    hi, I have just built GridFlow on my Ubuntu box as part of my efforts to find software or an API where I can manipulate images/movie frames at the pixel level.

    I'm trying to do some proof-of-concept kind of things with gridflow now - the simplest being to sort component pixels according to colour etc.

    to me, #grade looks like it may be the key object (when combined with others). What I really need are some examples of building a sorting patch using #grade - or ways of working iteratively upon the contents of grids.
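
    To be concrete, this is roughly the kind of thing I'm after, written out in numpy terms (assuming #grade is something like an argsort over one dimension of a grid - that's my guess, not a claim about its actual semantics):

        # Rough sketch of the sort I'm after - sort each row's pixels by brightness.
        # The #grade-like step is the argsort; the example data is made up.
        import numpy as np

        # fake "frame": height x width x RGB, values 0-255
        frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)

        # luminance per pixel (height x width)
        luma = (0.299 * frame[..., 0] +
                0.587 * frame[..., 1] +
                0.114 * frame[..., 2])

        # indices that would sort each row by brightness (the "#grade"-like step)
        order = np.argsort(luma, axis=1)

        # apply the permutation to the RGB data
        sorted_frame = np.take_along_axis(frame, order[..., None], axis=1)
        print(sorted_frame.shape)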

    GridFlow is very elegant but unfortunately a little beyond my comprehension at this point. I get the feeling this is down to the design: it allows you to do complex processing & operations with a minimum of objects involved.

    anyone got some pointers/ideas?

    Ash

    posted in pixel#
  • catfingers

    cool mr. God,

    yes, this build seems to work well on an Intel iMac I have here at work - help menu fixed & PDP/PiDiP objects working better than before too! The usual complaints about codecs in the console - I need more time to look at those.

    Is it...just possible that I may be able to move my working platform (which relied heavily on PDP etc) onto a Mac? wow, that would be cool

    --clarification: the codecs issue is about moving QuickTime files made on Linux to a Mac & re-saving them with VLC so that QT 7.x can open them correctly - this is a workaround for the 'bad atom' error (i.e. problems in the file header). More to do with libquicktime than Pd, I imagine.

    Ash

    posted in technical issues
  • catfingers

    cool, you now have 3 options:

    a) my old analogue art-school boy method
    b) triggering from a table of cue-points (rough sketch below)
    c) movement-detection

    This is all kind of like 'good automation' - not where you just press a button & everything assembles itself, but where you get involved in aspects of making your work (where you may least have expected to) that open up holes in the software AND your methodology... blah blah
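
    For (b), here's a rough sketch of what I mean by a cue-point table - in Python rather than a patch, and the frame numbers and actions are made up for illustration:

        # Hypothetical sketch of option (b): trigger events from a table of cue-points.
        cues = {
            100: "fade in layer 2",
            250: "switch movie",
            400: "start feedback",
        }

        fired = set()

        def on_frame(frame_number):
            """Call this once per played/rendered frame."""
            for cue_frame, action in cues.items():
                if frame_number >= cue_frame and cue_frame not in fired:
                    fired.add(cue_frame)
                    print(f"cue at frame {cue_frame}: {action}")

        # simulate playback
        for f in range(500):
            on_frame(f)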

    Ash

    posted in pixel#
  • catfingers

    hi - if you're not using the audio tracks in the movie, then you could record an audible tone on the movie soundtrack at the desired point & get Pd to listen for this as a trigger. This is an old idea that can be taken quite a way (like having a continuous frequency on the film soundtrack & using pitch-following to control stuff too).
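
    In Pd you'd probably reach for a pitch-follower like [fiddle~] or [sigmund~] for this; just to show the idea outside of a patch, here's a minimal Python sketch of picking the dominant frequency from an audio block and firing when it's near the marker tone (the tone frequency, block size and threshold are illustrative only):

        # Minimal sketch: detect a 1 kHz marker tone in blocks of audio.
        import numpy as np

        SAMPLE_RATE = 44100
        BLOCK_SIZE = 2048
        MARKER_HZ = 1000.0
        TOLERANCE_HZ = 30.0

        def dominant_frequency(block):
            """Return the frequency of the strongest bin in this block."""
            spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
            freqs = np.fft.rfftfreq(len(block), d=1.0 / SAMPLE_RATE)
            return freqs[np.argmax(spectrum)]

        def is_marker(block):
            return abs(dominant_frequency(block) - MARKER_HZ) < TOLERANCE_HZ

        # quick self-test with a synthetic tone
        t = np.arange(BLOCK_SIZE) / SAMPLE_RATE
        assert is_marker(np.sin(2 * np.pi * MARKER_HZ * t))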

    Sort of like playing with super-8 film 25 years ago :)

    Ash

    posted in pixel#
  • catfingers

    thanks for sharing. You got 5 x 25 = 125 frames at 25 fps, so you didn't drop any frames over those 5 seconds. Try passing a name consisting of a path to the object & it might save your files there.

    The help file notes that TIFF output is slow - JPEGs at high quality would be fine for re-encoding into a movie, if that's what you want to do with the resulting image sequences...
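
    If it helps, a quick sketch of turning a JPEG sequence back into a movie - this uses OpenCV's Python bindings rather than anything Pd-side, and the filenames, fps and codec are assumptions to adjust to your capture:

        # Rough sketch: re-encode a numbered JPEG sequence into a movie with OpenCV.
        import glob
        import cv2

        frames = sorted(glob.glob("capture/frame_*.jpg"))
        first = cv2.imread(frames[0])
        height, width = first.shape[:2]

        fourcc = cv2.VideoWriter_fourcc(*"MJPG")
        writer = cv2.VideoWriter("out.avi", fourcc, 25.0, (width, height))

        for path in frames:
            writer.write(cv2.imread(path))

        writer.release()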

    Ash

    posted in pixel#
  • catfingers

    SCSI disks for use with pdp_qt? Or perhaps even FireWire drives - that standard is meant to offer similar advantages for high-speed async I/O (depending on how the particular drivers work, I guess).

    With pdp_fqt - does enabling threads help at all? possibly not... unless DSP is already running.

    Maybe stream media from another machine, though that's getting very elaborate & startup could still be slow..

    A

    posted in pixel#
  • catfingers

    Yes, you need to keep going. That's what open source is all about. You could do far worse than writing up whatever you discover, too. I'd stick with GEM & see where that gets you, as it is basically OpenGL repackaged.

    A

    posted in pixel#
  • catfingers

    yep, that's a situation that came to mind: data moving around needlessly, and how to determine which subsystems a particular object or relationship of objects is going to hit. Don't let us stop you though - sounds like good old Pd insane fun.

    Using a profiling tool that will tell you what your graphics card vs. CPU are doing could also be useful. That's not something I know much about sorry.

    A

    posted in pixel#
  • catfingers

    I can't find it now, but I saw a list somewhere of the GEM objects that 'wrap' OpenGL functions. I imagine these are wrappers in the sense that they let you call a given OpenGL function as a Pd object. I know nothing about these - maybe if you were to grab a basic OpenGL sample & try to translate the function calls & program flow from C or C++ using the GEM & other Pd objects... it all might become clearer. Please post back if you do - I'd love to know.

    Determining how much of the processing would get offloaded to the GPU on a video card - I think that would be a bit of a black art. I would just try to check if CPU load changes depending on how you build & rebuild your patch.
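
    For the CPU side, something as blunt as sampling overall load while you toggle parts of the patch would probably do - e.g. (psutil here is just one convenient way to read the load, nothing Pd-specific):

        # Crude sketch: watch overall CPU load while you enable/disable parts of a patch.
        import psutil

        print("toggle parts of the patch and watch the numbers...")
        for _ in range(30):
            # cpu_percent with an interval blocks for that long and averages over it
            load = psutil.cpu_percent(interval=1.0)
            print(f"CPU: {load:5.1f}%")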

    I would expect that the higher-level GEM objects are already doing this (using the GPU).

    A

    posted in pixel#
  • catfingers

    hi,
    well, I can't make it work either. I set up Max/MSP & Pd 0.39-extended rc2 & a 'network' session in Apple Audio MIDI Setup. I configured both apps to use this session for MIDI out (Max) & MIDI in (Pd).

    Pd couldn't see anything arriving at a notein object (in omni mode - no args) - I was definitely sending from Max. Max couldn't communicate with itself either, for that matter, when using the network session.

    When I switched the inputs to IAC bus #1 - the communication all worked fine within the apps & between them.
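
    For what it's worth, a little monitor like this (python-rtmidi, outside of both apps) is how I'd double-check what actually arrives on each bus - the port index is an assumption, so list the ports first:

        # Small MIDI monitor to check what arrives on a given bus (IAC vs. network session).
        import time
        import rtmidi

        midiin = rtmidi.MidiIn()
        for i, name in enumerate(midiin.get_ports()):
            print(i, name)

        midiin.open_port(0)  # pick the IAC or network port from the list above

        try:
            while True:
                event = midiin.get_message()   # returns (message, delta_time) or None
                if event:
                    message, delta = event
                    print(message)
                time.sleep(0.01)
        finally:
            midiin.close_port()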

    Guess that saves me reading the documentation (if there is any).

    Everything is probably handled by IAC inside localhost, since this is presumably what feeds the TCP/IP MIDI routing. I wonder if it would really be any more efficient...

    A

    posted in technical issues
