• 4poksy

    righteous.

    Good Pi advice @old, and yes, this is mostly preemptive. I really want approximation charts so I can design structural patches and know their potential computational expense ahead of time.
    I need to handle all samples with a separate MCU / SD-card storage and get a better understanding of RAM usage... it's coming along. And yes, all external HID is handled by an Arduino.
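
    Roughly what the Arduino-for-HID side could look like on the Pi end, as a sketch: the Arduino prints plain-text lines (e.g. "button 3 1") over USB serial, and a small Python bridge forwards them to a [netreceive 3000] in the patch as FUDI messages. The device path, baud rate, port number and message names are all placeholders here.

        # Bridge: read lines from an Arduino over USB serial and forward them
        # to Pd as FUDI messages (semicolon-terminated) over TCP, where a
        # [netreceive 3000] object in the patch can route them.
        # Assumes pyserial is installed and the board shows up as /dev/ttyACM0.

        import serial   # pip install pyserial
        import socket

        SERIAL_PORT = "/dev/ttyACM0"          # adjust for your board
        BAUD = 115200
        PD_HOST, PD_PORT = "127.0.0.1", 3000  # must match [netreceive 3000]

        with serial.Serial(SERIAL_PORT, BAUD, timeout=1) as ard, \
             socket.create_connection((PD_HOST, PD_PORT)) as pd:
            while True:
                line = ard.readline().decode("ascii", errors="ignore").strip()
                if not line:
                    continue
                # FUDI: space-separated atoms, message terminated with ";"
                pd.sendall((line + ";\n").encode("ascii"))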

    yep, v true @ddw_music many many times (:

    posted in technical issues
  • 4poksy

    @old I agree, and thanks for the great info. That's really the best advice: the architecture of the patching is of utmost importance. I'm working out a scheme for live performance and want a schematic of patch basics. For instance, a dac~ must be included in every patch automatically. And if I wanted to instantiate something super minimal, like an oscillator, a filter and a ramp, how many ways could I do that, and how many oscillators can I add on the fly and have them MIDI-assigned at a button push? Knowing the consumption parameters is helpful for what I'm building, and it will most definitely rely on patch design.
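
    To make the "dac~ in every patch, automatically" idea concrete, here's a rough sketch of generating a minimal voice (osc~ → lop~ → *~ → dac~) as a .pd text file from a script, so the dac~ is appended by the generator rather than by hand. The object chain, coordinates and file name are only examples.

        # Sketch: generate a minimal Pd patch as text; dac~ is always appended
        # so every generated patch ends at the audio output by construction.

        lines = ["#N canvas 0 0 450 400 12;"]
        chain = ["osc~ 440", "lop~ 800", "*~ 0.1"]   # arbitrary minimal voice
        objs = chain + ["dac~"]                      # dac~ added automatically

        for i, obj in enumerate(objs):
            lines.append(f"#X obj 50 {50 + 50 * i} {obj};")

        # connect each object's left outlet to the next object's left inlet
        for i in range(len(objs) - 1):
            lines.append(f"#X connect {i} 0 {i + 1} 0;")
        # feed dac~'s right inlet from the last signal object as well
        lines.append(f"#X connect {len(objs) - 2} 0 {len(objs) - 1} 1;")

        with open("minimal_voice.pd", "w") as f:
            f.write("\n".join(lines) + "\n")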

    But on-the-fly sequencing, for instance sequencing a gated reverb on harmonics, and deciding which parameters get MIDI-assigned automatically... I have a ways to go, for sure, to somewhat cement these as core patches that can be assigned without using a mouse or pad.
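
    One way I could see the automatic MIDI assignment working, as a loose sketch: keep the controller-to-parameter table outside Pd and send plain control changes, so fixed [ctlin] numbers inside the core patches always mean the same thing. The port name, CC numbers and parameter names below are invented.

        # Sketch: a fixed CC-to-parameter map sent as MIDI control changes.
        # Requires mido + python-rtmidi; the output port name depends on how
        # Pd's MIDI input shows up on your system.

        import mido   # pip install mido python-rtmidi

        CC_MAP = {
            "reverb_gate_time": 20,
            "reverb_mix":       21,
            "osc_pitch":        22,
        }

        out = mido.open_output("Pure Data Midi-In 1")   # placeholder name

        def set_param(name, value_0_127):
            """Send one parameter change as a control change message."""
            out.send(mido.Message("control_change",
                                  control=CC_MAP[name],
                                  value=int(value_0_127)))

        set_param("reverb_mix", 64)   # e.g. bound to a hardware button push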

    Could latency be resolved by running, for instance, some of the computational weight on Arduino Nanos or Minis alongside the Pis?

    posted in technical issues
  • 4poksy

    Wow! Thank you @Whale-av! A thorough and prompt reply!

    That's an excellent starting point. That DSP patch is great.

    Yes, a super-patch is what I thought. So I'm thinking, if there were maybe two RPis, with one running a redundant version, there'd be a fail-safe. I've got a bunch of Pis to be repurposed.
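
    For the two-Pi fail-safe, the crude version I'm imagining: the primary patch sends a heartbeat every second ([metro 1000] → [netsend -u]), and a watchdog on the backup Pi unmutes its redundant copy of the patch if the heartbeat stops. The ports, the FUDI message and the UDP assumption below are placeholders.

        # Sketch: fail-over watchdog for a redundant second Pi.
        # Listens for UDP heartbeats from the primary ([netsend -u] -> 9001);
        # on silence, tells the local backup patch to unmute via a FUDI
        # message to its [netreceive 9002].

        import socket
        import time

        HEARTBEAT_PORT = 9001              # primary's [netsend -u] target
        BACKUP_PD = ("127.0.0.1", 9002)    # backup patch's [netreceive 9002]
        TIMEOUT_S = 3.0

        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(("0.0.0.0", HEARTBEAT_PORT))
        srv.settimeout(TIMEOUT_S)

        while True:
            try:
                srv.recv(1024)             # any heartbeat resets the timer
            except socket.timeout:
                with socket.create_connection(BACKUP_PD) as pd:
                    pd.sendall(b"failover 1;\n")   # backup routes this to unmute
                time.sleep(TIMEOUT_S)      # don't spam while primary is down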

    This is what I'm looking at to systematically make a report: I'd like a table of each object, as they're organized in https://archive.flossmanuals.net/pure-data/list-of-objects/introduction.html, with an estimate of CPU and RAM usage in a largest-patch scenario, for live performance switching, etc.
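
    To actually fill in the CPU and RAM columns, the rough method I'm planning: run headless Pd with a test patch containing N copies of a single object class, then sample the process from outside. The pd path, patch name and timing below are just a sketch.

        # Sketch: measure CPU% and resident memory of a running Pd instance
        # while a test patch (e.g. 100 copies of one object class) plays.
        # Requires psutil; paths and patch name are placeholders.

        import subprocess
        import time
        import psutil   # pip install psutil

        proc = subprocess.Popen(["pd", "-nogui", "test_100_osc.pd"])
        p = psutil.Process(proc.pid)

        time.sleep(5)                        # let DSP settle
        p.cpu_percent(interval=None)         # prime the CPU counter
        time.sleep(10)                       # measurement window
        cpu = p.cpu_percent(interval=None)   # average % over the window
        rss = p.memory_info().rss / 1e6      # resident memory in MB

        print(f"cpu: {cpu:.1f}%  ram: {rss:.1f} MB")
        proc.terminate()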

    Onward!

    Thank you!

    posted in technical issues
  • 4poksy

    Howdy! [first post]

    I'm somewhat new to Pd. I've read and patched and developed something I'd like to share. I'm looking for data on how computationally intensive/expensive Pd is. For instance, I understand that the default block size is 64 samples and can be modified, but what are the limits and how are they calculated?
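
    From what I've gathered so far, block sizes are set per subpatch with [block~] / [switch~] and have to be powers of two; one block at the default 64 samples and 44.1 kHz is about 1.45 ms, and actual round-trip latency also depends on Pd's audio buffer setting. A quick sanity check of those numbers:

        # Quick arithmetic: how much time one DSP block represents at various
        # block sizes, assuming a 44.1 kHz sample rate.

        SR = 44100
        for block in [16, 64, 256, 1024, 4096]:   # [block~] sizes: powers of two
            ms = 1000.0 * block / SR
            print(f"block {block:5d} samples -> {ms:6.2f} ms per block")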

    How would I measure the size parameters of each object, both the on-disk storage size and the microprocessor speed needed to run n objects / patches / etc.?

    I may not be asking this question correctly, but for instance, could something as small as an ATtiny run Pd, and what are the limits? How small a processor (speed, RAM, etc.) can it run on, and based on that, what are the limits on object count and on hot-swapping while a patch is playing (e.g. adding feedback or changing bands after an FFT)?
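
    One thing I can already do is take inventory, since .pd files are plain text and every object line starts with "#X obj": a few lines of Python give an object count per class for any patch (no CPU cost attached yet, just what's in there). The file name is a placeholder.

        # Sketch: count the object classes used in a patch by parsing the .pd
        # text format. Only "#X obj" lines are counted (message boxes, number
        # boxes and comments are skipped).

        from collections import Counter

        counts = Counter()
        with open("my_patch.pd") as f:
            for line in f:
                parts = line.strip().rstrip(";").split()
                # object lines: "#X obj <x> <y> <classname> <args...>;"
                if len(parts) >= 5 and parts[0] == "#X" and parts[1] == "obj":
                    counts[parts[4]] += 1

        for name, n in counts.most_common():
            print(f"{n:4d}  {name}")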

    I'm sure there's an easy way to do this within Pd that I haven't found yet.

    Any guidance toward a list or script would be super helpful and appreciated.

    I will post any results and updates to GitHub and here.

    I look forward to meeting the community. Thank you for reading!

    Also: how would I check the buffer size each patch runs at?

    posted in technical issues
