Sympathetic strings, rods, etc. implementation
I've just recently been getting into physical modeling myself, so I only have so much to offer here, but your post also raises some questions for me. Mainly, what is "rule of thumb" modeling? And what makes you think you need an FFT for this? I mean, is there some FFT synthesis method you're thinking of for modeling resonant systems?
If you go the physical modeling route, though, sympathetic vibration or resonance is actually pretty simple, as it is inherently part of the system. For example, to create the sympathetic vibrations of a string (as in a sitar, or the open strings of a guitar) you would simply feed the output of the plucked string into a model of the open strings, possibly attenuated and filtered first. The model will only resonate at frequencies that are harmonic to it, as it should; other frequencies will quickly die out. If I remember right, the ideal place to tap the plucked string would be at the "bridge" of the model, as that is where the coupling would be in a physical instrument, though I'm not certain it is a hard-and-fast rule (you can definitely get strings to resonate from energy propagating through air). A rough sketch of the coupling idea is below.
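To make the coupling idea concrete, here is a minimal Python sketch (not Pd, just to show the signal flow), assuming two basic Karplus-Strong-style strings: the plucked string's bridge output is attenuated and fed into an undamped "open" string an octave below, which then rings only at the partials they share. The class name, damping values, and the 0.01 coupling gain are arbitrary assumptions.

import numpy as np

SR = 44100

class KSString:
    """Very basic Karplus-Strong loop: delay line + averaging lowpass."""
    def __init__(self, freq, damping=0.996):
        self.N = int(SR / freq)   # delay length sets the pitch
        self.buf = np.zeros(self.N)
        self.idx = 0
        self.damping = damping

    def pluck(self):
        self.buf = np.random.uniform(-1, 1, self.N)  # noise-burst excitation

    def tick(self, inp=0.0):
        # read one sample, write back the averaged, damped feedback
        # plus any external drive signal (the sympathetic coupling)
        out = self.buf[self.idx]
        nxt = self.buf[(self.idx + 1) % self.N]
        self.buf[self.idx] = self.damping * 0.5 * (out + nxt) + inp
        self.idx = (self.idx + 1) % self.N
        return out

plucked = KSString(220.0)                  # plucked string (A3)
open_str = KSString(110.0, damping=0.999)  # open string (A2), rings longer
plucked.pluck()

out = np.zeros(SR * 2)
for n in range(len(out)):
    bridge = plucked.tick()                 # tap at the "bridge"
    sym = open_str.tick(inp=0.01 * bridge)  # attenuated coupling
    out[n] = bridge + sym

Since 220 Hz is harmonic to the 110 Hz string, the open string rings along; drive it from a string tuned to, say, 233 Hz instead and the coupled energy dies away quickly, just as described above.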
Lunetta module inspired pd simulation
Hello all. I made this as a learning/exploration tool for a hardware project I've been interested in for a while.
Lunettas are simple CMOS-based synths. Logic chips and a few other components are used, brought out to patch points and used in a modular fashion.
This patch is made in pd-vanilla. No default values are used, so to get it started you may have to tweak it a bit. As it is, it's a nice drone synth. It can easily be expanded to include other modules and complex functions.
The simulation includes 4 LFO/clocks, a 4-bit DAC, and a VCO. All modules have three selectable frequency ranges and manual pulse-width control. Each LFO has a slider for manual pitch control.
I have some ideas for refinement; they mostly involve using the formulas for RC networks to determine frequency (a rough sketch of what I mean is below).
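For reference, the usual back-of-the-envelope formula for a CMOS Schmitt-trigger RC oscillator (the typical Lunetta clock, e.g. a CD40106 stage) is f ≈ 1/(k·R·C). A tiny Python sketch; note that k depends on the supply voltage and the chip's hysteresis thresholds, so k ≈ 1.2 is only a rough assumption to calibrate against real hardware:

def rc_osc_freq(r_ohms, c_farads, k=1.2):
    """Approximate Schmitt-trigger RC oscillator frequency in Hz."""
    return 1.0 / (k * r_ohms * c_farads)

# e.g. 100 kohm with 10 nF -> roughly 833 Hz
print(rc_osc_freq(100e3, 10e-9))

A mapping like this could drive the simulated LFO/clock frequencies so that knob positions in the patch line up with real resistor/capacitor values.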
Hope it's fun to play with!
NVidia puredyne vs vista
OK. So you should try to find the appropriate ALSA option for your soundcard.
sudo gedit /etc/modprobe.d/alsa-base.conf
and add the following line at the end of the file:
"options snd-hda-intel model=MODEL"
where MODEL is the name of your model. See http://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio-Models.txt and search for ALC662 and ALC861. Then save the file, reload ALSA (sudo alsa force-reload), and mute/unmute the soundcard. It's a trial-and-error step.
PD cuts off other sound on the computer
Thanks!
There seems to be a similar thread going on with sonsofsol.
I installed jackd. When I type this command:
pacmd load-module module-jack-source channels=2; pacmd load-module module-jack-sink channels=2;
I get the following error:
Welcome to PulseAudio! Use "help" for usage information.
>>> Module load failed.
>>> Welcome to PulseAudio! Use "help" for usage information.
>>> Module load failed.
When trying to connect Pd with JACK I get:
error: JACK: unable to connect to JACK server
JACK: jack returned status 17
How can I fix this?
Also, in Pd, when I choose ALSA-MIDI rather than JACK, Pd then appears in JACK Control.
Does this mean ALSA-MIDI is the way to connect Pd to JACK, rather than choosing "jack" under "Media"?
M
Pd disables media players on ubuntu 10.10
Thanks for your help, guys.
I installed jackd. When I type this command:
pacmd load-module module-jack-source channels=2; pacmd load-module module-jack-sink channels=2;
I get the following error:
Welcome to PulseAudio! Use "help" for usage information.
>>> Module load failed.
>>> Welcome to PulseAudio! Use "help" for usage information.
>>> Module load failed.
When trying to connect Pd with JACK I get:
error: JACK: unable to connect to JACK server
JACK: jack returned status 17
Does anybody know how to fix this?
DIY2 - Effects, Sample players, Synths and Sound Synthesis.
>> another suggestion: why not make a lib sub-folder where you can store things like modulate, or a scale log which could then work with arguments? <<
That's how I originally did it, but I just wanted the modules to 'work' even if they were removed from the library. I guess I was kind of successful in that regard, because I have spotted some of my drums and stuff inside other people's projects. The only abstraction they need is 808_state for state saving, but they will work fine without that anyway.
>>currently i guess it would be annoying to change all modulates.<<
I sometimes have a lot of free time. The other day I went through the whole library and replaced all the [loadbang] objects with a subpatch that allows loadbang behaviour on dynamic creation. This meant changing around 100 patches and resaving them.
I will do DIY3 soon, then. It is not 'finished', and I have some worries that it is not at all backwards compatible with DIY2, but at the end of the day I am a musician more than a programmer, so the function is what matters.
DIY3 'upgrades' include:
a built-in sound editor, so you can record loops and sounds, edit them within Pd, and then export them to .wav files.
some more 'synth'-like modules and effects units (tape echo, etc.)
an inbuilt diy-clock module that ties everything together for sequencing, and then of course some sequencer modules.
vanilla Pd compatibility (this unfortunately meant removing some really useful things like the 7- and 13-band EQs, but they can always be imported from DIY2)
C::NTR::L 1.0 - live AV improv+physical computing
Hi everybody. I've never written on this forum, but I've always followed the threads, and I must say thanks to all the contributors here for their help; you are all doing amazing work with this forum.
Now I'm here because I would like to introduce you to C::NTR::L 1.0 (beta).
C::NTR::L is free software for real-time human-computer interaction, exploiting the possibilities of physical computing. Developed in Pure Data by Marco Donnarumma, it seeks to be a tool for audiovisual live improvisation. The project started in 2007 and remains a constant work in progress; I'm always interested in new ideas and collaborations (recently, for example, I worked a bit with Servando Barreiro and we included a module to use sensors, exploiting his DIY hardware Minia). This is version 1.0BETA.
I'm planning to publish the patch, but first I want to work more on the interface and enhance some features to offer good usability, including for people who don't work with graphical programming every day.
C::NTR::L turns a standard chordal instrument - electric bass, guitar, violin, etc. - into an audio/video controller without requiring any specific external hardware or MIDI technology.
Once you have connected the instrument to your computer's sound card, C::NTR::L starts to recognize which notes you play. This is possible through a structure of band-pass/low-pass/high-pass filters which automatically isolates the core frequency of the incoming audio signal.
C::NTR::L then analyzes the duration and RMS of each single note, and finally translates this data to control and trigger a set of audio/video effects and modules (a rough sketch of this kind of analysis follows the feature list), which at the moment features:
**
VIDEO
* playlist
* scratch and loop points
* white/black fade
* color matrix
* blur
* delay
* strobe
* 3D efx
* preset saving
AUDIO
* real-time processing of incoming sound
* support for multiple sound inputs (as many as you want and your machine can stand)
* granulator (original module by Matt Davey - THANKS for the great inspiration!! I put your reference in posts and in the patch itself, but please tell me if you want more specific references.)
* bit-crusher (original module by Matt Davey)
* reverb (original module by Matt Davey)
* oscillators
* presets
**
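To give a rough idea of the note-detection approach described above (this is not the actual C::NTR::L patch, just an illustration), here is a minimal Python sketch of a band-pass filter bank followed by per-band RMS tracking; the band centre frequencies, Q, and block size are arbitrary assumptions:

import numpy as np
from scipy.signal import butter, lfilter

SR = 44100
BLOCK = 1024
centres = [82.4, 110.0, 146.8, 196.0]   # e.g. the low strings of a bass/guitar

def bandpass(sig, f0, q=10.0):
    bw = f0 / q
    b, a = butter(2, [(f0 - bw / 2) / (SR / 2), (f0 + bw / 2) / (SR / 2)],
                  btype='band')
    return lfilter(b, a, sig)

def analyse(block):
    # RMS level in each band; the loudest band is a crude estimate
    # of which note is sounding
    return [np.sqrt(np.mean(bandpass(block, f) ** 2)) for f in centres]

# feed successive blocks from the sound card here; tracking each band's
# RMS over time gives the note's level and duration
test = np.sin(2 * np.pi * 110.0 * np.arange(BLOCK) / SR)
print(analyse(test))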
I'm looking for beta testers, so if you're interested, please write to info [at] thesaddj.com. Keep checking www.thesaddj.com for future news.
And www.thesaddj.com/icontrolnature for the live show i perform with C::NTR::L.
An extract of the live project performed with C::NTR::L can be found here (live @ Cinesthesy Festival, France):
Soulful thanks for sharing, supporting and inspiring go to: Rep (PD'r and Multimedia Artist), AssoBeam (PD'r and Multimedia Artists), Husk (PD'r and Multimedia Artist), Sero (Sound Artist), Brendan Byrne (PD'r and teacher), Jorg Koch (MAX'r and sound artist), Servando Barreiro (PD'r and Multimedia Artist), Hardoff (PD'r), G-noma (Multimedia Artist), and the incredible community on the PD Forum.
Marco / The S.A.D
Help with pluck~
Yes, KS is "physical modelling". The term is useful to distinguish it from spectral modelling. With spectral modelling you try to emulate a sound by directly following the spectrum at some points; with physical modelling the implementation tries to copy the physical process going on, to some extent, following the propagation of forces and displacements, boundary reflections, phase inversions, etc. But there isn't one way of doing PM.
KS is an example of a waveguide model, as opposed to a discrete MSD (mass-spring-damper) system. It is really a kind of IIR (recursive) filter.
I have seen two interesting string-pluck implementations in Pd. The proper way (as done by P. Cook and J.O. Smith) uses two delays, one for each direction of propagation, and two filters which represent the end reflections.
Syntax the Nerd made a nice pluck unit using the fast t3_envelope and a small blocksize in the Bot collection.
If the duration of the excitation is greater than or equal to the buffer propagation time, and a copy is passed straight through to the output, then the noisy attack leaves no perceivable delay at the start of each note. Unless the blocksize is 1, it gets hard to tune high notes. A rough sketch of the two-delay approach is below.
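To make the two-delay-line design concrete, here is a minimal Python sketch of such a waveguide string (loosely after the Cook/Smith structure described above, not their actual code): a right-going and a left-going rail, an ideal inverting "nut" reflection, and a lowpassed, inverting "bridge" reflection. The delay lengths, loss values, and pluck shape are illustrative assumptions.

import numpy as np

SR = 44100
freq = 220.0
N = int(SR / freq / 2)     # each rail carries half the round trip;
                           # the integer rounding here is exactly the
                           # tuning problem mentioned above - serious
                           # implementations use fractional delays

right = np.zeros(N)        # wave travelling toward the bridge
left = np.zeros(N)         # wave travelling toward the nut

# pluck: place a triangular displacement on both rails
shape = np.interp(np.arange(N), [0, N * 0.3, N - 1], [0.0, 0.5, 0.0])
right += shape
left += shape[::-1]

lp = 0.0                   # one-pole state for the bridge lowpass
out = np.zeros(SR)
for n in range(len(out)):
    to_bridge = right[-1]  # sample arriving at the bridge
    to_nut = left[-1]      # sample arriving at the nut
    lp = 0.5 * lp + 0.5 * to_bridge      # frequency-dependent loss
    bridge_refl = -0.996 * lp            # inverting, lossy reflection
    nut_refl = -to_nut                   # ideal inverting reflection
    right = np.roll(right, 1); right[0] = nut_refl
    left = np.roll(left, 1); left[0] = bridge_refl
    out[n] = to_bridge     # tap the string at the bridge

The two inversions per round trip give a period of 2N samples, so with N = 100 this rings at 220.5 Hz rather than 220 Hz - a concrete example of the tuning error mentioned above.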
Crack
> Andy, *astonishing* sounds.
> 100% Pure Pure Pure PureData?
> (allowed answers: YES!!!).
Thanks very much Alberto; as you surmise, yes indeed. Not just pure Pd but very efficient Pd. One tries to refactor the equations and models, transforming between methods and looking for shortcuts, boiling each one down to the least number of operations. There are nicer sounds, but these ones are developed to use low CPU and run multiple instances in real time.
> About EA: which games?
Truth be told, I don't know. If I did, I would probably have to observe an NDA anyway. That is one reason I'm not working on them: I am going to publish all my methods in a coherent and structured thesis - it's the best strategy for pushing procedural audio forward for everyone. Maybe it will be personally rewarding later down the line. But I do talk to leading developers and R&D people, slowly working towards a strategic consensus. All the same, I'd be rather cautious about saying who is doing what; games people like to keep a few surprises back.
> So this means designing an audio engine which is
> both responsive to the soundtrack/score, as well as
> to the actual action and human input of the game?
> Why wouldn't PD be the natural choice?
Pd _would_ be the natural choice. Not least of all, its BSD-type license means developers can just embed it. But it has competitors (far less capable ones, imho) that have established interests in the game audio engine market, including a vast investment of skills (game sound designers are already familiar with them). So rather than letting Pd simply blow them out of the water, one needs a more inclusive approach, saying "hey guys, you should be embedding Pd into your engines".
Many hard decisions are not technical but practical. For example, you can't just replace all sample-based assets, and you need to plan and build toolchains that fit into existing practices. Games development is big-team stuff, so Pd-style procedural audio has to be phased in quite carefully. Also, we want to avoid hype. The media have a talent for seizing on any new technological development and distorting it to raise unrealistic expectations. They call it "marketing", but it's another word for uninformed bullshit. It would be damaging to procedural audio if the marketers hyped up a new title as "revolutionary synthetic sound" and everyone reviewed it as rubbish. So the trick is to stealthily sneak it in under the media radar - the best we can hope for with procedural audio, to begin with, is that nobody really notices. Then the power will be revealed.
> Obi, I've noticed that a lot of your tutorials and
> patches are based on generative synthesis/modelling,
> rather than samples. Is this the standard in the game world?
No. The standard is still very much sample-based, which is the crux of the whole agenda. Sample-based game audio is extremely limited from an interactive POV, even where you use hybrid granular methods. My inspiration and master, a real Jedi who laid the foundations for this project, is a guy called Perry Cook; he's the one who wrote the first book on procedural audio, but it was too far ahead of the curve. Now that we have multi-core CPUs there's actually a glut of cycles, and execs running around saying "what are we going to use all this technology for?". The trick in moving from Perry's models to practical synthetic game audio is all about parameterisation: hooking the equations into the physics of the situation. A chap called Kees van den Doel did quite a lot of the groundwork that inspired me to take a mixed spectral/physical approach to parameterisation. This is how I break down a model and reconstruct it piecewise.
> Is this chiefly to save space on the media?
Not the main reason, but it does offer a space efficiency of many orders of magnitude as a bonus!
I don't think many games developers have realised or understood this profound fact. Procedural methods _have_ been used in gaming; for example, Elite was made possible by tricks that came from the demo scene for creating generative worlds, and this has been extended in Spore. But you have to remember that storage is also getting cheaper, so going in the other direction you have titles like Heavenly Sword that use 10GB of raw audio data. The problem with this approach is that it forces the gameplay into a linear narrative; they become pseudo-films, not games.
> Cpu cycles?
No, the opposite. You trade space for cycles. It is much, much more CPU-intensive than playing back samples.
> Or is it simply easier to create non-linear sound design
> this way?
Yes. In a way, it's the only way to create truly non-linear (in the media sense) sound design. Everything else is a script over a matrix of pre-determined possibilities.
oops rambled again... back to it...
a.
Announce: mmm-0.1.0-eden
hi forum.
we proudly announce the first public release of our compact composer
for pd, mmm.
grab it at http://netpd.org/mmm-0.1.0.zip
mmm is best described in its faq, see below. don't expect too much
yet, there is still a lot to be done. comments, bugreports and cash
are welcome.
have fun with it!
christopher charles & enrique erne
faq for mmm-0.1.0 - eden
what is mmm?
mmm is a pd patch collection aimed at providing a studiolike(?),
streamlined, dynamic interface for making synthetic music.
screenshots?
http://www.netpd.org/mmm.png
ymmv depending on your operating system. we put some effort into
detecting the operating system and setting the fontsize accordingly,
but quirky xorg or dpi settings might screw things up again.
where can i get it?
we currently host the mmm at http://netpd.org/mmm-0.1.0.zip ,
alternatively, you can grab netpd, enter the chat, and if either of
the authors is online, download it directly through netpd and start
rocking.
what does "mmm" stand for?
mmm was originally just the working title, but we came to like it
somehow. the original meaning is "music making machine" but you can
substitute every m for whatever you want. so "massive multiplayer
music" is okay with us, too.
what is the inspiration?
having worked on/with the bagoftricks (lots of inconsistently
coloured gop-patches to be connected freely) and netpd (lots of
inconsistent-looking windows to clutter up the screen), we came to
mock up a clean, dynamic interface in which modules don't bring their
own gop or open their own window, but log onto the interface that's
provided for them by the motherpatch. all modules sharing the same
interface made it easy for them to share the same sequencer and
arranger.
what are the dependencies?
mmm should work with pd-0.39 and zexy installed. iemlib is important
for many synth and effects patches, and there's even a set of gem
modules you can chain if you want.
is it actually usable?
no. this 0.1.0 release is rather a tech demo and a taste of things to
potentially come. you can crunch some acid loops out of it already,
but don't sell your protools studio equipment to start working with
mmm on monday.
how does it work?
mmm's interface (mmmmain.pd) is divided into 3 parts: there is the
module/channel view, where you can chain up synths and effects on 8
different channels. select an empty field on a channel, and then use
the scrollbox on the left to select a patch and open it. when clicking
on a patch you loaded up in the module view, the 2nd view comes into
play: from there you control the patch's sliders on the left; right of
it is the stepsequencer for each of the sliders (meaning everything is
sequenceable!). yet you won't hear anything until you've done the
following 2 things: press play in the uppermost row of mmmmain, and
set up the arranger to play the stepsequence. the arranger is not
module-based; instead, all modules of a channel are grouped together
in the arranger. for now, you can only select pattern 01 or nothing
to play in the arranger. so set up a loop for the first pattern
(loopstart: 0, looplength: 1), set the first field of the channel you
got your patch on in the arranger to p01, and start making some noise.
does it work online?
yes. mmm is compatible with netpd and will automatically log on to
netpd's server if you have the netpd chat open. you can also download
the whole mmm package through netpd. feel free to jam around the
world.
what's not working yet / what is planned?
as for now, there is no support for samples whatsoever, and it isn't
planned to support them soon. further, there is no hard disk recorder
available yet, but one is planned. the arranger/sequencer combo is
very crippled at the moment, supporting only one 16-step pattern to
choose from and one page of 16 patterns in the arranger. this will
change rather soon. next, there are plans for luxury editing
functions, especially in the sequencer: copy, paste, random pattern,
interpolation and so on. plans exist for full keyboard control, but
this won't be worked on too soon. the module roster is far from
complete yet; more is to come.
can i save my stuff?
should be possible with the buttons above the channels. don't rely on
the result though; this is still 0.1.0.
can i add my own modules?
modules are not too hard to write, but for now, the list of
selectable modules is hardcoded. look at all the 4m-* patches in the
patches folder to see how they tick. contact us about adding your
patch to the mmm, or try to figure out how it works yourself.
what's the license?
mmm is licensed under the gnu lgpl. if you think this is too useful a
product to be free of charge, please consider donating the amount of
money you would've paid for it (or the amount of money you got from
selling your protools equipment on monday) to a trust of your choice.
who are the authors?
mmm is developed by enrique erne (eni, swiss, pd{at}mild.ch) and
christopher charles (syntax_tn, germany, chr.m.charles{at}gmail.com).
we can be contacted via email, irc (#dataflow) or directly in the
netpd chat. several patches within mmm are based upon netpd versions
of them; check netpd for the original authors. mmm shares some of its
netcode with netpd, by roman haefeli.
disclaimer.
we cannot give you any guarantees on using mmm, not even that you
will have fun. it should be relatively harmless, but don't come
crying to us if mmm accidentally hijacks your *mule and downloads
david hasselhoff recordings to your computer.
eofaq