Crack
> Andy, *astonishing* sounds.
> 100% Pure Pure Pure PureData?
> (allowed answers: YES!!!).
Thanks very much Alberto, as you surmise... yes indeed. Not just pure Pd but very efficient Pd. One tries to re-factor the equations and models, transforming between methods and looking for shortcuts, boiling each one down to the least number of operations. There are nicer sounds, but these are developed to use little CPU and run as multiple instances in real time.
> About EA: which games?
Truth be told, I don't know. If I did I would probably have to observe an NDA anyway. Which is one reason I'm not working on them: I am going to publish all my methods in a coherent and structured thesis - it's the best strategy to push procedural audio forwards for all. Maybe it will be personally rewarding later down the line. But I do talk to leading developers and R&D people, and am slowly working towards a strategic consensus. All the same, I'd be rather cautious about saying who is doing what; games people like to keep a few surprises back.
> So this means designing an audio engine which is
> both responsive to the soundtrack/score, as well as
> to the actual action and human input of the game?
> Why wouldn't PD be the natural choice?
Pd _would_ be the natural choice. Not least of all, its BSD-type license means developers can just embed it. But it has competitors (far less capable ones imho) that have established interests in the game audio engine market, including a vast investment of skills (game sound designers are already familiar with them). So rather than let Pd simply blow them out of the water, one needs a more inclusive approach by saying "hey guys... you should be embedding Pd into your engines".
Many hard decisions are not technical, but practical. For example you can't just replace all sample based assets, and you need to plan and build toolchains that fit into existing practices. Games development is big team stuff, so Pd type procedural audio has to be phased in quite carefully. Also, we want to avoid hype. The media have a talent for seizing on any new technological development and distorting it to raise unrealistic expectations. They call it "marketing", but it's another word for uninformed bullshit. This would be damaging to procedural audio if the marketers hyped up a new title as "revolutionary synthetic sound" and everyone reviewed it as rubbish. So the trick is to stealthily sneak it in under the media radar - the best we can hope for with procedural audio to begin with is that nobody really notices. Then the power will be revealed.
> Obi, I've noticed that a lot of your tutorials and
> patches are based on generative synthesis/modelling,
> rather than samples. Is this the standard in the game world?
No. The standard is still very much sample based, which is the crux of the whole agenda. Sample based game audio is extremely limited from an interactive POV, even where you use hybrid granular methods. My inspiration and master, a real Jedi who laid the foundations for this project, is a guy called Perry Cook. He's the one who wrote the first book on procedural audio, but it was too far ahead of the curve. Now that we have multi-core CPUs there's actually a glut of cycles, and execs running around saying "What are we going to use all this technology for?". The trick in moving from Perry's models to practical synthetic game audio is all about parameterisation, hooking the equations into the physics of the situation. A chap called Kees van den Doel did quite a lot of the groundwork that inspired me to take a mixed spectral/physical approach to parameterisation. This is how I break down a model and reconstruct it piecewise.
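To give a flavour of what I mean by parameterisation, here is a minimal Python sketch - my own toy illustration, not anyone's shipping engine code. A hypothetical impact event from a physics engine drives the controls of a little damped-sinusoid model; the event fields and the mappings are invented for the sketch.

# Toy illustration of physics-to-synthesis parameterisation. The event
# fields and the mappings below are invented for this sketch.
import numpy as np

SR = 44100  # sample rate in Hz

def impact_to_params(velocity, mass, stiffness):
    """Map physics quantities onto synthesis controls (assumed mappings)."""
    amplitude = min(1.0, 0.1 * mass * velocity)   # heavier, faster hits are louder
    base_freq = 80.0 + 400.0 * stiffness          # stiffer material rings higher
    decay = 0.05 + 0.3 / (1.0 + stiffness)        # and damps faster in this toy model
    return amplitude, base_freq, decay

def modal_impact(velocity, mass, stiffness, dur=0.5):
    """Render an impact as three damped partials (ratios roughly bar-like)."""
    amp, f0, decay = impact_to_params(velocity, mass, stiffness)
    t = np.arange(int(dur * SR)) / SR
    out = np.zeros_like(t)
    for k, (ratio, level) in enumerate([(1.0, 1.0), (2.76, 0.5), (5.40, 0.25)]):
        out += level * np.sin(2 * np.pi * f0 * ratio * t) * np.exp(-t * (k + 1) / decay)
    return amp * out / np.max(np.abs(out))

# e.g. a 2 kg object hitting the ground at 3 m/s:
sound = modal_impact(velocity=3.0, mass=2.0, stiffness=0.6)

The point is simply that the synthesis controls are driven by event data coming from the game world, rather than by triggering a fixed sample.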
> Is this chiefly to save space on the media?
Not the main reason. But it does offer a space efficiency of many orders of magnitude! Just as a bonus.
I don't think many games developers have realised or understood this profound fact. Procedural methods _have_ been used in gaming - for example, Elite was made possible by tricks that came from the demo scene to create generative worlds, and this has been extended in Spore. But you have to remember that storage is also getting cheaper, so going in the other direction you have titles like Heavenly Sword that use 10GB of raw audio data. The problem with this approach is that it forces the gameplay into a linear narrative; they become pseudo-films, not games.
> Cpu cycles?
No, the opposite. You trade off space for cycles. It is much much more CPU intensive than playing back samples.
> Or is it simply easier to create non-linear sound design
> this way?
Yes. In a way, it's the only way to create true non-linear (in the media sense) sound design. Everything else is a script over a matrix of pre-determined possibilities.
oops rambled again... back to it...
a.
Crack
Yep. Things are going that way. Making it a standard is just my wishful vision! Not everyone is going to settle on Pd. There are other dataflow type interfaces to different unit generator sets, like CPS, but there's a distinct movement in the direction of dataflow as a method for building procedural audio code... as it should be.
I started to advocate this years ago, as many here know, but found out only recently that EA have indeed ported Pd into some games, mainly for generative music scoring. Sony have something in R&D for the PS3, and certain game audio engine manufacturers have certainly considered it. I continue to knock on their doors, thump my bible and try to convince them to accept the good news into their hearts.
It would be wonderful to establish Pd as the main audio component in games for runtime production because it's the correct tool to break down the barrier between sound designer and audio programmer, that's the way to push things forwards.
If you want to support this direction, the title to run out and buy is Spore. Brian Eno and others wrote procedural music scores using a cut down version called EAPd, which Mark Danks (GEM author, now at Sony) led the charge to embed as the audio engine.
More than one chapter of the book I'm working on is devoted to designing patches for game applications, how to do dynamic level of detail and interface to event streams from world controllers and physics engines.
I'd say dataflow programmers, whether audio or visual, have a good future ahead for commercial employment (but then I'm (very) biased).
Here are some of the things in development. These are components for planes, the sort of thing you'd use in air combat games or whatever.
One of them is developed as a practical example in the book. (I'm trying to get an accurate Supermarine Spitfire working at the moment...)
http://obiwannabe.co.uk/sounds/effect-jetengine.mp3
http://obiwannabe.co.uk/sounds/effect-three-synthetic-jets-flypast.mp3
http://obiwannabe.co.uk/sounds/effect-singleprop-cockpit.mp3
Synthetic thunder
I don't understand enough about the geometry to visualise the distortions and phase changes that occur off the perpendicular axis. I can see you deal with this in the MATLAB code. Is that a re-implementation of Ribner and Roy's equations? It's hard enough to imagine in a 2D plane, never mind 3D.
All I did with Pd is make a recirculating buffer that has a filter to approximate propagation losses, and randomly add N-waves created with vline~ segments into it. Sometimes the superposition does create thunder-like effects, but that seems to be more luck than judgement.
I reckon to get the right effect you either need several n-wave generators in parallel, or several parallel delays operating on one wave.
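For what it's worth, here's a very loose Python sketch of that recirculating-buffer idea. The buffer length, feedback, filter coefficient and injection rate are guesses for illustration, not values taken from the actual patch.

# Loose sketch: a recirculating (feedback) delay buffer with a one-pole
# lowpass standing in for propagation losses, into which N-wave bursts
# are injected at random times. All constants are invented for the sketch.
import numpy as np

SR = 44100
buf = np.zeros(int(0.8 * SR))   # ~0.8 s recirculating buffer
out = np.zeros(6 * SR)          # render 6 seconds
feedback = 0.6
lp_state, lp_coeff = 0.0, 0.05  # one-pole lowpass in the feedback path

def n_wave(period=0.01):
    """One N-wave: a linear ramp from +1 down to -1 over 'period' seconds."""
    n = int(period * SR)
    return np.linspace(1.0, -1.0, n)

rng = np.random.default_rng(0)
write = 0
for i in range(len(out)):
    # occasionally drop a fresh N-wave into the buffer at the write point
    if rng.random() < 0.0005:
        w = n_wave(rng.uniform(0.005, 0.02))
        idx = (write + np.arange(len(w))) % len(buf)
        buf[idx] += w * rng.uniform(0.3, 1.0)
    y = buf[write]
    lp_state += lp_coeff * (y - lp_state)   # propagation-loss filter
    buf[write] = lp_state * feedback        # recirculate the filtered signal
    out[i] = y
    write = (write + 1) % len(buf)

out /= np.max(np.abs(out)) + 1e-12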
As the text says, it's equivalent to a convolution of the N-wave with a set of points whose positions are the distances to corners in the tortuous line. If we assume the bolt comes straight down, each distance is the hypotenuse of a right-angled triangle: the square root of the horizontal distance squared plus the height squared. With c as a constant, time then correlates directly to distance.
In a way it's granular synthesis, so the density must be calculated. I think you could reduce the whole caper to two variables: density and off-axis phase shift. The density is the propagation time divided by the period of one N-wave, I think. At ground level the observer is perpendicular so it's very dense, but as you move up the lightning bolt the subtended angle increases and so does the propagation time, so the density tails off.
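To make the geometry concrete, here's a rough numpy sketch of that distance-to-delay view. It assumes a perfectly vertical bolt and a ground-level observer, and uses a plain 1/r loss where a proper propagation filter should go - my own simplification, not the MATLAB model under discussion.

# Rough sketch: each point up a vertical lightning channel emits an N-wave,
# delayed by its slant distance to the observer divided by the speed of
# sound; the output is the superposition. Heights, spacing and the N-wave
# period are invented for illustration.
import numpy as np

SR = 44100          # sample rate, Hz
C = 343.0           # speed of sound, m/s
D = 100.0           # horizontal distance to the bolt, m
HEIGHTS = np.arange(0.0, 3000.0, 10.0)   # source points up a 3 km channel

def n_wave(period=0.01):
    """One N-wave: a linear ramp from +1 down to -1 over 'period' seconds."""
    n = int(period * SR)
    return np.linspace(1.0, -1.0, n)

def thunder_signature():
    wave = n_wave()
    dist = np.sqrt(D**2 + HEIGHTS**2)        # hypotenuse: horizontal distance and height
    delay = (dist - dist.min()) / C          # seconds after the first arrival
    out = np.zeros(int(delay.max() * SR) + len(wave))
    for t_arr, r in zip(delay, dist):
        i = int(t_arr * SR)
        out[i:i + len(wave)] += wave / r     # simple 1/r spreading loss
    return out / np.max(np.abs(out))

sig = thunder_signature()

Near the ground the delay increments between successive source points are tiny, so many arrivals pile up (high density); further up the channel they spread out, which is the tailing-off described above.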
I'm really looking forward to hearing where this takes you next...
a.
Synthetic thunder
Thanks again. I PM'd you with an email. This afternoon I had a play with a recursive way of adding N-wave segments. It's very crude and unfortunately the random process still adds too much colour to the result, but it might give you ideas on a possible real-time solution.
http://www.pdpatchrepo.info/hurleur/thunder-nwave-experiment1.pd
Synthetic thunder
Hey all,
I've succeeded in generating thunder using a quasilinear model and MATLAB, and hopefully will soon be able to find an algorithm that will allow me to create entire storms in realtime in Pd (right now it takes a few hours to calculate the thunder signature of a 6-km lightning stroke).
In the meantime, I thought you guys might enjoy the product of today's computations, the thunder signature of a 3km lightning stroke 100m away from the observer. The dry output is very synthetic sounding, but with the simple addition of some reverb it really comes to life!
(I apologize for the quick & dirty CoolEdit 'verb on the example.)
Error: tabsend~: $O-hann: no such array
Aha! This is a very interesting subject. Please forgive my presumptions.
The full FFT vocoder patch is attached. To use it you must experiment with loading different files and cross synthesising them. It is from the help files and should work as is.
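Just to make the idea concrete, here is a rough numpy sketch of one common formulation of block-wise cross synthesis - an illustration of the principle only, not a transcription of the attached patch: the carrier keeps its phases and the modulator imposes its magnitude envelope, frame by frame, with overlap-add.

# Rough sketch of block-wise FFT cross synthesis (one common formulation,
# for illustration only): multiply the carrier's complex spectrum by the
# modulator's magnitude spectrum, frame by frame, then overlap-add.
import numpy as np

def cross_synth(carrier, modulator, n=1024, hop=256):
    win = np.hanning(n)
    length = min(len(carrier), len(modulator))
    out = np.zeros(length)
    for start in range(0, length - n, hop):
        C = np.fft.rfft(win * carrier[start:start + n])
        M = np.fft.rfft(win * modulator[start:start + n])
        frame = C * np.abs(M)            # carrier spectrum shaped by modulator magnitude
        out[start:start + n] += win * np.fft.irfft(frame, n)
    return out / (np.max(np.abs(out)) + 1e-12)

# usage: carrier and modulator are mono float arrays at the same sample rate,
# e.g. a sustained synth tone cross-synthesised with a spoken phrase.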
Some problems will present themselves very quickly...
- To cross-synthesise two voices you must ensure that the two speakers make exactly the same utterances, phonetically aligned. This is hard, as I can tell you from experience of recording many voice artists. Even the same person will not speak a phrase the same way twice.
- The result is not a "timbral morph" between the two speakers. The human voice is very complex. Most likely the experiment will be invalidated by distracting artifacts.
Here are some suggestions.
- Don't "morph" the voices, simply crossfade/mix them.
- For repeatable results (essential to an experiment) a real-time solution is probably no good. Real-time processing is very sensitive to initial conditions. I would prepare all the material beforehand and carefully screen it to make sure each set of subjects hears exactly the same signals.
- If you want a hybrid voice (somewhere between A and B) then vocoding is not the best way. There are many tools that would be better than Puredata, which are open source and possible to integrate into a wider system.
i) Csound has an LPC suite. Linear predictive coding is particularly well suited to speech.
ii) Tapestrea is a wonderful tool that uses wavelet analysis. It also has a graphical front end that makes alignment of phonemes easier.
iii) Praat (Boersma and Weenink, Amsterdam Institute of Phonetic Sciences) is a great voice synthesis system based on articulatory tract models, where you can morph speaker models. You may find that a purely synthetic method yields data more suitable for this experiment.
> It is really hard (impossible or even sin) to convience TEACHERS.
Did you mean "even cos"? sin is an odd function; write it out 50 times!
What are people doing with pd
I'm writing a book about synthetic games sound design which uses Pd to teach it.
I wonder....
I think this is a cultural question, not a technical one, Shankar, and there's a cultural answer. For almost 15 years in "the west" there's been an industry pushing quick and easy solutions to making music. That quick and easy approach is "buy our sample libraries".
I could write you a PhD-thesis-type essay on why this sucks, why the sample peddlers have triumphed over the possibility of human-programmable synthesisers, why the cult of emulation and "hip hop" producers sampling loops of other people's records has brought music down to the lowest common denominator. But I can sum it up in one... creativity is hard. (And nobody wants to pay for it any longer, or invest time in cultivating it.)
I'm trying, in my own gentle way, to spread a little understanding and fresh enthusiasm for what I think has become a hidden art. Really understanding sound and synthesis is orders of magnitude (if there were such a scale) more difficult than grabbing a breakbeat from a record or going out with a microphone to collect material. Music making with preset tools has become so easy, and producers so lazy, that even the top-paid studio producers do little except arrange other people's work, and many lack even the most basic engineering skills to do recording and preproduction work on live material. Everyone wants to be a musician these days and put their "original" creations up on MySpace, and they can be - with Acid, FruityLoops and Reason you can just audition a few loops, press the "good" button and voila! Except your "original creation" is just a permutation on the same sound everybody else is making. That's why much music is so dreary, predictable and stale these days, I think. The mainstream tools have become so rigid that it's impossible to subvert their use, and subversion is the essence of creative art.
Anyway, it would be arrogant to judge other people's approach to music making this way. I myself spent many years hooked into the cult of sampling and making music from other people's work - it just became a boring creative cul-de-sac.
However, I would argue, as a professional producer who has seen the industry go through many changes, that the easy route to music making with sample libraries, combined with the mainstream media's greed for fast and cheap products, has basically killed off a generation of really creative musicians and producers.
I've revised my paraphrasing of Miller about "undoing the sampling revolution".
There never was a sampling revolution. Sampling is the status quo, and the synthetic
revolution is still waiting to happen.
I say, stick with Pd, put in the effort to really understand manipulating and creating sound from first principles and you will harvest the fruits of its power and let your genuine creativity shine through.
Announce: mmm-0.1.0-eden
hi forum.
we proudly announce the first public release of our compact composer
for pd, mmm.
grab it at http://netpd.org/mmm-0.1.0.zip
mmm is best described in its faq, see below. don't expect too much
yet, there is still a lot to be done. comments, bug reports and cash
are welcome.
have fun with it!
christopher charles & enrique erne
faq for mmm-0.1.0 - eden
what is mmm?
mmm is a pd patch collection aimed at providing a studiolike(?),
streamlined, dynamic interface for making synthetic music.
screenshots?
http://www.netpd.org/mmm.png
ymmv depending on your operating system. we put some effort in
detecting the operating system and setting the fontsize according to
it, but quirky xorg or dpi settings might screw things up again.
where can i get it?
we currently host the mmm at http://netpd.org/mmm-0.1.0.zip .
alternatively, you can grab netpd, enter the chat, and if either of
the authors is online, download it directly through netpd and start
rocking.
what does "mmm" stand for?
mmm was originally just the working title, but we came to like it
somehow. the original meaning is "music making machine" but you can
substitute every m for whatever you want. so "massive multiplayer
music" is okay with us, too.
what is the inspiration?
having worked on/with the bagoftricks (lots of inconsistently coloured
gop-patches to be connected freely) and netpd (lots of
inconsistent-looking windows to clutter up the screen), we came to
mock up a clean, dynamic interface in which modules don't bring their
own gop or open their own window, but log onto the interface that's
provided for them by the motherpatch. all modules sharing the same
interface made it easy for them to share the same sequencer and
arranger.
what are the dependencies?
mmm should work with pd-0.39 and zexy installed. iemlib is important
for many synth and effects patches, and there's even a set of gem
modules you can chain if you want.
is it actually usable?
no. this 0.1.0 release is rather a tech demo and a taste of things to
potentially come. you can crunch some acid loops out of it already,
but don't sell your protools studio equipment to start working with
mmm on monday.
how does it work?
mmm's interface (mmmmain.pd) is divided into 3 parts: there is the
module/channel view, where you can chain up synths and effects on 8
different channels. select an empty field on a channel, and then use
the scrollbox on the left to select a patch and open it. when clicking
on a patch you loaded up in the module view, the 2nd view comes into
play: from there you control the patch's sliders on the left, and to
the right of them is the step sequencer for each slider (meaning
everything is sequencable!). yet you won't hear anything until you do
the following 2 things: press play in the uppermost row of mmmmain,
and set up the arranger to play the step sequence. the arranger is not
module-based; instead, all modules of a channel are grouped together
in the arranger. for now, you can only select pattern 01 or nothing to
play in the arranger. so set up a loop for the first pattern
(loopstart: 0, looplength: 1), set the first field of the channel you
got your patch on in the arranger to p01, and start making some noise.
does it work online?
yes. mmm is compatible with netpd and will automatically log on to
netpd's server if you have the netpd chat open. you can also download
the whole mmm package through netpd. feel free to jam around the
world.
what's not working yet / what is planned?
as for now, there is no support for samples whatsoever, and it isn't
planned to support them soon. further, there is no hard disk recorder
available yet, but one is planned. the arranger/sequencer combo is
very crippled at the moment, only supporting 1 16-step pattern to
choose from and 1 page of 16 patterns in the arranger. this will
change rather soon. next there are plans for luxury editing functions,
especially in the sequencer, like copy, paste, random pattern,
interpolation and so on. plans exist for full keyboard control, but
this won't be worked on too soon. the module roster is far from
complete yet; more is to come.
can i save my stuff?
should be possible with the buttons above the channels. don't rely on
the result though, this is still 0.1.0.
can i add my own modules?
modules are not too hard to write, but for now, the list of selectable
modules is hardcoded. look at all the 4m-* patches in the patches
folder to see how they are ticking. contact us to add your patch to
the mmm, or try to figure out for yourself how it works.
what's the license?
mmm is licensed under the gnu lgpl. if you think this is too useful a
product to be free of charge, please consider donating the amount of
money you would've paid for it (or the amount of money you got from
selling your protools equipment on monday) to a trust of your choice.
who are the authors?
mmm is developed by enrique erne (eni, swiss, pd{at}mild.ch) and
christopher charles (syntax_tn, germany, chr.m.charles{at}gmail.com).
we can be contacted via email, irc (#dataflow) or directly in the
netpd chat. several patches within mmm are based upon netpd versions
of them, check netpd for the original authors. mmm shares some of its
netcode with netpd, by roman haefeli.
disclaimer.
we cannot give you any guarantees on using mmm, not even that you
will have fun. it should be relatively harmless, but don't come crying
to us if mmm accidentally hijacks your *mule and downloads david
hasselhoff recordings to your computer.
eofaq
Anyone for a nice cup of tea?
Hi,
I'm a newbie with Pure Data but I found a good treatment: "Practical synthetic sound design" by Andy Farnell.
Thanks Obiwannabe for this! It's not so easy for me to follow the logic of Pure Data's objects, but I do like the philosophy, the general point of view (the ear), with such a clever sense of nonsense and humour, lol!
So... I've got a problem, at least with the "Bubbles" patch in the tutorial:
I can't make it work. I concluded that the object ead~ was missing, yet I already have Pd-extended... Can you help me please? Thank you in advance.
Sorry for my very French English, lol.
JeanMarie