Spectral Cross Over
Unlike Max, which uses arguments to determine the FFT frame size, Pd syncs it to the block (or, as Max calls it, the vector). Because of this, there is no need for an extra outlet to sync the bin number (though, granted, having one might make things slightly easier). Instead, just create a [phasor~] whose frequency is (sample rate / block size) and send [0( to its right inlet once to sync it with the block. Then multiply the output by the block size, and now you have synced bin numbers.
Also, I think you might want to use [rfft~] over [fft~].
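If it helps, here's a minimal sketch of that bin-number ramp as patch text (save it as a .pd file and open it). It assumes a 44100 Hz sample rate and Pd's default block size of 64, so the [phasor~] frequency (44100 / 64 = 689.0625) and the [*~] factor would need adjusting for other settings:
#N canvas 0 0 480 300 10;
#X text 20 10 bin index generator for blocksize 64 at 44100 Hz;
#X msg 240 40 0;
#X text 270 40 click once to sync phase to block boundary;
#X obj 20 70 phasor~ 689.0625;
#X text 140 70 44100 / 64 = 689.0625 Hz so one ramp per block;
#X obj 20 110 *~ 64;
#X text 90 110 scale ramp 0..1 up to bin numbers 0..63;
#X connect 1 0 3 1;
#X connect 3 0 5 0;
Click the [0( once with DSP on so the ramp restarts on a block boundary; from then on the [*~ 64] output counts bins 0 through 63 within each block.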
[zerox~]
If you plan on using [zerox~] to phase-sync two oscillators, it probably won't cut it. Generally, you want those things to be sample-accurate. [zerox~] will give you a click out its right outlet corresponding to zero crossings, but, as far as I know at least, Pd's oscillators can't really use that for phase syncing ([zerox~] is actually based on a Max object, yet strangely Max's oscillators can't use it either). It would require a conversion to message rate to reset the phase, which kills sample accuracy; not to mention that the phase inlet of [phasor~] quantizes to block boundaries (64 samples by default in Pd), which also kills sample accuracy.
However, if you know the ratio between your two oscillators, phase syncing can be achieved by using a master [phasor~] to run both oscillators. Drive the [phasor~] at the master sync frequency, then multiply its output by the ratio between the synced (slave) oscillator's frequency and the master's. In other words, multiply by:
slave frequency / master frequency
Then you just [wrap~] the signal and voilà, you have a new synced phasor signal to drive the slave oscillator. The attached patch should hopefully clarify.
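Something like this minimal sketch shows the idea (save the text as a .pd file); the 110 Hz master and 165 Hz slave frequencies are just example values, giving a [*~] factor of 165 / 110 = 1.5:
#N canvas 0 0 520 300 10;
#X text 20 10 master at 110 Hz drives a slave at 165 Hz (example values);
#X obj 20 50 phasor~ 110;
#X obj 110 90 cos~;
#X obj 20 90 *~ 1.5;
#X text 90 130 165 / 110 = 1.5 (slave freq over master freq);
#X obj 20 130 wrap~;
#X obj 20 170 cos~;
#X text 80 170 slave oscillator driven by the wrapped ramp;
#X connect 1 0 2 0;
#X connect 1 0 3 0;
#X connect 3 0 5 0;
#X connect 5 0 6 0;
The slave [cos~] reads the wrapped ramp as its phase, so it snaps back into step every time the master ramp resets.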
Call for projects - make art 2009 - what the fork?!
MAKE ART 2009 - What The Fork?!
distributed and open practices in FLOSS art
--
CALL FOR PROJECTS
--
make art is an international festival dedicated to the integration of
Free/Libre/Open Source Software (FLOSS) in digital art.
The fourth edition of make art -- What The Fork?! distributed and open practices in FLOSS art -- will take place in Poitiers (FR), from the 7th to the 13th of December 2009.
make art offers performances, presentations, workshops and an
exhibition, focused on the encounter between digital art and free
software.
We're currently seeking new, innovative FLOSS works and projects: music
and audiovisual performances, presentations, software demos, and
installations.
This year make art focuses on distributed and open practices in FLOSS art. 'What the fork?!' is about decentralisation. Forking is the new black. Forking - copying the source code of a project and continuing work on the copy instead of the original - used to have a bad reputation. It would split a project and its developer community into pieces, leading to different, often incompatible, projects. Wasted effort, rivalry and developer fights were all associated concepts. This is history. Forking a project with the intention of competing with it is another story, but the freedom to fork enables quick implementation of features and customisation, bypassing committer status and bugfix or feature-request protocols; working in a distributed way, together with others but not necessarily towards one goal; working from one source; cross-fertilising, inspiring, copying, patching, improving, experimenting, changing direction, and merging. This practice is boosted by decentralised software development tools such as Darcs, Mercurial and Git. It's not about quick hacks, but about creating room to experiment, letting go of the one working copy and creating a multiplicity of ideas.
Deadline : 15th of July 2009.
For more details, please visit http://makeart.goto10.org/call
--
Hard syncing
oh, thanks, that's cool - i'm bad at maths, but if i got it right, the factor in the [*~] object is the result of dividing the frequency of the osc that's getting synced by the sync osc's frequency, right?
what if i want a sine osc as the syncing source? do i always have to build my own with a [phasor~] and [cos~], so that i have a phasor~ to sync the other oscs?
sorry for my poor english, hope you got me......
[loadbang] <-> [initbang]
@hardoff said:
one practical application and example:
you could make a synthesizer voice abstraction and have all the default parameters of the synth loaded using [initbang].
then, you would be able to use that same abstraction as part of a song, and initialize that song's parameter values with [loadbang]. so, first the initbang defaults would be loaded, and then the parameters that need to be changed for the song would overwrite the defaults, because [loadbang] fires after [initbang].
You could use [loadbang] for this too, as loadbangs fire from the most-nested patch to the least-nested after everything is created. [initbang] is mainly useful for doing things before the abstraction is connected into the parent patch - for example, creating dynamic numbers of inlets and outlets.
http://lists.puredata.info/pipermail/pd-dev/2006-08/007346.html
if you dynamically create an abstraction in your patch, then [loadbang] will not be triggered upon the creation of the abstraction.
True, but you can send a "loadbang" message to the containing subpatch to initialize the contained abstractions.
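A minimal sketch of that trick, assuming a subpatch named mysub (the name is just a placeholder) - save the text below as a .pd file. The [loadbang] fires once when the file opens, and clicking the message box makes the canvas fire it again, which is what you'd send after dynamically creating the contents:
#N canvas 0 0 480 260 10;
#X text 20 10 the subpatch name mysub is just a placeholder;
#N canvas 0 0 300 200 mysub 0;
#X obj 20 20 loadbang;
#X obj 20 60 print mysub-initialized;
#X connect 0 0 1 0;
#X restore 20 50 pd mysub;
#X msg 20 100 \; pd-mysub loadbang;
#X text 60 100 click to fire the loadbang inside pd-mysub again;
This works because every subpatch binds a receiver named pd-<name>, and canvases respond to a "loadbang" message.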
Semi-Automatic Video Remixer
Yeah, I'm pretty sure it could be changed to a non-pdp version. If you want any help let me know.
When you say 'when the video ends', do you mean when the song ends? Because the video basically just keeps looping itself anyway. But if you mean that when the song ends it could just grab a new song and new videos, then I think I know what you mean. I have never tried anything like a random file chooser - I usually have problems just getting it to grab the file that I specifically want - but I think it's an awesome idea. I still don't think it could be fully automated, because the tempo for a new song would need to be entered manually. Not too big a deal, though. Do you have any experience with random-file-choosing type patches?
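In case it's useful, the choosing part can be pretty small. Here's a minimal sketch as Pd patch text (save as a .pd file); the three clip filenames are placeholders, and the [open( messages should be changed to whatever your video player object actually expects:
#N canvas 0 0 480 320 10;
#X text 20 10 pick one of three clips at random - filenames are placeholders;
#X msg 20 40 bang;
#X obj 20 70 random 3;
#X obj 20 100 select 0 1 2;
#X msg 20 140 open clip1.mov;
#X msg 140 140 open clip2.mov;
#X msg 260 140 open clip3.mov;
#X text 20 180 route the chosen message to your video player object;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 3 1 5 0;
#X connect 3 2 6 0;
For a bigger set of numbered clips, a [makefilename clip%d.mov] after the [random] would avoid needing one message box per file.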
Sound appears and disappears!!
Hello, I installed Pd, jackd and ALSA via apt-get. I have Debian 5.0 (lenny).
Sometimes the sound works; when it works, this error message appears:
audio I/O error history:
seconds ago error type
0.14 A/D/A sync
1.00 unknown
1.52 A/D/A sync
5.76 unknown
5.76 unknown
tried but couldn't sync A/D/A
audio I/O error history:
seconds ago error type
1.40 A/D/A sync
3.38 A/D/A sync
3.69 A/D/A sync
7.12 A/D/A sync
when it doesn't work, this appears:
couldn't open MIDI input device 0
couldn't open MIDI output device 0
opened 0 MIDI input device(s) and 0 MIDI output device(s).
audio I/O error history:
seconds ago error type
4.69 unknown
4.69 unknown
thnx
Crack
> Andy, *astonishing* sounds.
> 100% Pure Pure Pure PureData? 
> (allowed answers: YES!!!).
Thanks very much Alberto, as you surmise... yes indeed. Not just pure Pd but very efficient Pd. One tries to refactor the equations and models, transforming between methods and looking for shortcuts, boiling each one down to the fewest operations. There are nicer sounds, but these ones are developed to use low CPU and run multiple instances in real-time.
> About EA: which games?
Truth be told, I don't know. If I did, I would probably have to observe an NDA anyway. That's one reason I'm not working on them: I am going to publish all my methods in a coherent and structured thesis - it's the best strategy for pushing procedural audio forwards for everyone. Maybe it will be personally rewarding later down the line. But I do talk to leading developers and R&D people, and am slowly working towards a strategic consensus. All the same, I'd be rather cautious about saying who is doing what; games people like to keep a few surprises back.
> So this means designing an audio engine which is
> both responsive to the soundtrack/score, as well as
> to the actual action and human input of the game?
> Why wouldn't PD be the natural choice?
Pd _would_ be the natural choice. Not least of all, its BSD-type license means developers can just embed it. But it has competitors (far less capable ones, imho) that have established interests in the game audio engine market, including a vast investment of skills (game sound designers are already familiar with them). So rather than let Pd simply blow them out of the water, one needs a more inclusive approach, saying "hey guys... you should be embedding Pd into your engines".
Many hard decisions are not technical, but practical. For example, you can't just replace all sample-based assets, and you need to plan and build toolchains that fit into existing practices. Games development is big-team stuff, so Pd-type procedural audio has to be phased in quite carefully. Also, we want to avoid hype. The media have a talent for seizing on any new technological development and distorting it to raise unrealistic expectations. They call it "marketing", but it's another word for uninformed bullshit. It would be damaging to procedural audio if the marketers hyped up a new title as "revolutionary synthetic sound" and everyone reviewed it as rubbish. So the trick is to sneak it in stealthily under the media radar - the best we can hope for with procedural audio, to begin with, is that nobody really notices.
Then the power will be revealed.
> Obi, I've noticed that a lot of your tutorials and
> patches are based on generative synthesis/modelling,
> rather than samples. Is this the standard in the game world?
No. The standard is still very much sample-based, which is the crux of the whole agenda. Sample-based game audio is extremely limited from an interactive POV, even where you use hybrid granular methods. My inspiration and master, a real Jedi who laid the foundations for this project, is a guy called Perry Cook; he wrote the first book on procedural audio, but it was too far ahead of the curve. Now that we have multi-core CPUs there's actually a glut of cycles, and execs running around saying "What are we going to use all this technology for?". The trick in moving from Perry's models to practical synthetic game audio is all about parameterisation: hooking the equations into the physics of the situation. A chap called Kees van den Doel did quite a lot of the groundwork that inspired me to take a mixed spectral/physical approach to parameterisation. This is how I break down a model and reconstruct it piecewise.
> Is this chiefly to save space on the media?
Not the main reason, but it does offer a space efficiency of many orders of magnitude! Just as a bonus.
I don't think many games developers have realised or understood this profound fact. Procedural methods _have_ been used in gaming: Elite, for example, was made possible by tricks from the demo scene for creating generative worlds, and this has been extended in Spore. But you have to remember that storage is also getting cheaper, so going in the other direction you have titles like Heavenly Sword that use 10GB of raw audio data. The problem with that approach is that it forces the gameplay into a linear narrative; they become pseudo-films, not games.
> Cpu cycles?
No, the opposite. You trade off space for cycles. It is much, much more CPU-intensive than playing back samples.
> Or is it simply easier to create non-linear sound design
> this way?
Yes. In a way, it's the only way to create true non-linear (in the media sense) sound design. Everything else is a script over a matrix of pre-determined possibilities.
oops rambled again... back to it...
a.
MIDI & audio glitching over IAC bus
Does anyone know whether Pd's MIDI runs in the same thread as the GUI?
I have managed to get Plogue Bidule synced to a MIDI clock sent from Pd over the IAC bus (OS X), but when, say, a window is moved, there is loads of glitching in the audio and the jitter seems to increase enormously. I have tried whacking up the buffer sizes, but of course the increased latency means that Bidule is only synced to within the order of the buffer size... or worse if there's jitter involved. It's not really practical to avoid touching the computer whilst it is running!
How can this be remedied? Is some kind of latency compensation required here?
I totally long for the day when Jack OSX gets MIDI - then we can have ReWire-like performance but with all the benefits that Jack currently provides! (Does anyone have a clue when this might be?)
cheers
