Synthesis metal bars sound
Hi,
I'm working on an installation based on this application made in Java.
I communicate with Pd via OSC.
For each collision, Pd receives a bang with two parameters:
- tube height
- tube position
I'm looking for a way to synthesize metal bar sounds, to turn this "thing" into a musical instrument.
There are samples here:
http://obiwannabe.co.uk/html/sound-design/sound-design-audio.html
http://obiwannabe.co.uk/sounds/effect-clonk-002-bar.mp3
http://obiwannabe.co.uk/sounds/effect-clonk-004-iron.mp3
http://obiwannabe.co.uk/sounds/effect-clonk-006-bar.mp3
What kind of simple patch should I make to achieve this?
Goodbye,
Denis
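For what it's worth, a very rough sketch of the kind of patch that could get this started (a sketch only, with several assumptions: a recent vanilla Pd with [oscparse], the Java app sending to UDP port 9999 with an address like /collision followed by the two floats, height then position, and a purely arbitrary linear map from tube height to pitch). Each collision fires an 8 ms noise burst into two high-Q band-pass filters tuned roughly like the first two partials of a free bar (ratio about 1 : 2.76); the position value is left unconnected and could drive panning or damping instead.

#N canvas 0 50 620 560 10;
#X obj 30 30 netreceive -u -b 9999;
#X obj 30 60 oscparse;
#X obj 30 90 list trim;
#X obj 30 120 route collision;
#X obj 30 150 unpack f f;
#X obj 30 180 * 100;
#X obj 30 210 + 200;
#X obj 30 240 t b f f;
#X obj 220 270 * 2.76;
#X msg 30 280 1 \, 0 8;
#X obj 30 320 vline~;
#X obj 120 320 noise~;
#X obj 30 360 *~;
#X obj 30 400 bp~ 500 200;
#X obj 180 400 bp~ 1380 200;
#X obj 30 440 +~;
#X obj 30 470 *~ 0.2;
#X obj 30 510 dac~;
#X text 250 30 receives /collision <height> <position> over UDP;
#X text 250 180 arbitrary map from tube height to Hz - tune to taste;
#X text 250 320 8 ms noise burst excites the resonators;
#X text 250 440 two high-Q bands standing in for the bar's partials;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 6 0 7 0;
#X connect 7 0 9 0;
#X connect 7 1 8 0;
#X connect 7 2 13 1;
#X connect 8 0 14 1;
#X connect 9 0 10 0;
#X connect 10 0 12 0;
#X connect 11 0 12 1;
#X connect 12 0 13 0;
#X connect 12 0 14 0;
#X connect 13 0 15 0;
#X connect 14 0 15 1;
#X connect 15 0 16 0;
#X connect 16 0 17 0;
#X connect 16 0 17 1;

For a more convincing bar you would add more partials (free-bar ratios continue roughly 5.40, 8.93, ...) or follow the full modal recipes in the obiwannabe tutorials linked above.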
Crack
> Andy, *astonishing* sounds.
> 100% Pure Pure Pure PureData?
> (allowed answers: YES!!!).
Thanks very much Alberto, as you surmise...yes indeed. Not just pure Pd but very efficient Pd. One tries to re-factor the equations and models, transforming between methods and looking for shortcuts, boiling each one down to the least number of operations. There are nicer sounds, but these ones are developed to use low CPU and run multiple instances in real-time.
> About EA: which games?
Truth be told, I don't know. If I did, I would probably have to observe an NDA anyway. Which is one reason I'm not working on them: I am going to publish all my methods in a coherent and structured thesis - it's the best strategy to push procedural audio forwards for all. Maybe it will be personally rewarding later down the line. But I do talk to leading developers and R&D people, and am slowly working towards a strategic consensus. All the same, I'd be rather cautious about saying who is doing what; games people like to keep a few surprises back.
> So this means designing an audio engine which is
> both responsive to the soundtrack/score, as well as
> to the actual action and human input of the game?
> Why wouldn't PD be the natural choice?
Pd _would_ be the natural choice. Not least of all, its BSD-type license means developers can just embed it. But it has competitors (far less capable ones, imho) that have established interests in the game audio engine market, including a vast investment of skills (game sound designers are already familiar with them). So rather than let Pd simply blow them out of the water, one needs a more inclusive approach, saying "hey guys... you should be embedding Pd into your engines".
Many hard decisions are not technical, but practical. For example, you can't just replace all sample-based assets, and you need to plan and build toolchains that fit into existing practices. Games development is big team stuff, so Pd-type procedural audio has to be phased in quite carefully. Also, we want to avoid hype. The media have a talent for seizing on any new technological development and distorting it to raise unrealistic expectations. They call it "marketing", but it's another word for uninformed bullshit. This would be damaging to procedural audio if the marketers hyped up a new title as "revolutionary synthetic sound" and everyone reviewed it as rubbish. So the trick is to stealthily sneak it in under the media radar - the best we can hope for with procedural audio to begin with is that nobody really notices. Then the power will be revealed.
> Obi, I've noticed that a lot of your tutorials and
> patches are based on generative synthesis/modelling,
> rather than samples. Is this the standard in the game world?
No. The standard is still very much sample-based, which is the crux of the whole agenda. Sample-based game audio is extremely limited from an interactive POV, even where you use hybrid granular methods. My inspiration and master, a real Jedi who laid the foundations for this project, is a guy called Perry Cook; he's the one who wrote the first book on procedural audio, but it was too far ahead of the curve. Now that we have multi-core CPUs there's actually a glut of cycles, and execs running around saying "What are we going to use all this technology for?". The trick in moving from Perry's models to practical synthetic game audio is all about parameterisation, hooking the equations into the physics of the situation. A chap called Kees van den Doel did quite a lot of the groundwork that inspired me to take a mixed spectral/physical approach to parameterisation. This is how I break down a model and reconstruct it piecewise.
> Is this chiefly to save space on the media?
Not the main reason. But it does offer a space efficiency of many orders of magnitude, just as a bonus!
I don't think many games developers have realised or understood this profound fact. Procedural methods _have_ been used in gaming; for example, Elite was made possible by tricks that came from the demo scene for creating generative worlds, and this has been extended in Spore. But you have to remember that storage is also getting cheaper, so going in the other direction you have titles like Heavenly Sword that use 10GB of raw audio data. The problem with this approach is that it forces the gameplay into a linear narrative; they become pseudo-films, not games.
> Cpu cycles?
No, the opposite. You trade off space for cycles. It is much much more CPU intensive than playing back samples.
> Or is it simply easier to create non-linear sound design
> this way?
Yes. In a way, it's the only way to create true non-linear (in the media sense) sound design. Everything else is a script over a matrix of pre-determined possibilities.
oops rambled again... back to it...
a.
ALSA
Below you'll find my lsmod info. echomixer, the alsa-toolkit utility for Echo Audio products, did work after running [ # alsaconf ]. However, I tried to test my config simply by doing this:
# aplay -vv *
ALSA lib confmisc.c:670:(snd_func_card_driver) cannot find card '0'
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_card_driver returned error: No such device
ALSA lib confmisc.c:391:(snd_func_concat) error evaluating strings
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_concat returned error: No such device
ALSA lib confmisc.c:1070:(snd_func_refer) error evaluating name
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_refer returned error: No such device
ALSA lib conf.c:3968:(snd_config_expand) Evaluate error: No such device
ALSA lib pcm.c:2143:(snd_pcm_open_noupdate) Unknown PCM default
aplay: main:550: audio open error: No such device
So there is still a missing piece.
Module Size Used by
snd_layla24 36356 0
snd_seq_oss 40084 0
snd_seq_midi 9792 0
snd_seq_midi_event 8160 2 snd_seq_oss,snd_seq_midi
snd_seq 60456 5 snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_rawmidi 28992 2 snd_layla24,snd_seq_midi
snd_seq_device 9708 4 snd_seq_oss,snd_seq_midi,snd_seq,snd_rawmidi
firmware_class 11744 1 snd_layla24
snd_pcm_oss 52032 0
snd_mixer_oss 20704 1 snd_pcm_oss
snd_pcm 91396 2 snd_layla24,snd_pcm_oss
snd_timer 26500 2 snd_seq,snd_pcm
snd 65908 9 snd_layla24,snd_seq_oss,snd_seq,snd_rawmidi,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer
soundcore 11204 1 snd
snd_page_alloc 11304 2 snd_layla24,snd_pcm
ALSA
Dunno what snd_pcm is reporting there, but you should see a separate driver for the Layla24, like this:
$ lsmod
snd-seq-midi 5152 0 (unused)
snd-virmidi 2080 0
snd-seq-virmidi 5128 0 [snd-virmidi]
snd-seq-midi-event 6240 0 [snd-seq-midi snd-seq-virmidi]
snd-seq 48784 0 [snd-seq-midi snd-seq-virmidi snd-seq-midi-event]
snd-layla24 149732 3 <--------*here*
snd-pcm 85860 2 [snd-layla24] <---and pcm is using it
You don't have to recompile the kernel or anything; find the driver and load it with insmod.
Apparently there's a utils package at the ALSA Project website for the Echo Layla24 that sets everything up. Have you tried that one?
Also, there's an ALSA wiki up now that may help you.
Timbre conversion
@daisy said:
I have read somewhere that "if two voices are at the same pitch and the same loudness and one can still recognize that they are different, it is because of TIMBRE (tone quality)". (I agree there are other features that need to be considered as well.)
Timbre is another word for spectrum. The spectrum of a sound is the combination of basic sine waves that are mixed together to make it. Every sound (except a sine wave) is a mixture of sine waves. You can make any sound by adding the right sine waves together. This is called synthesis.
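As a tiny illustration of that (the three frequencies and amplitudes below are arbitrary, not taken from any analysis), summing a few sine oscillators already gives you a timbre that none of them has alone:

#N canvas 0 50 460 320 10;
#X obj 30 40 osc~ 220;
#X obj 130 40 osc~ 440;
#X obj 230 40 osc~ 663;
#X obj 30 80 *~ 0.5;
#X obj 130 80 *~ 0.3;
#X obj 230 80 *~ 0.2;
#X obj 30 130 +~;
#X obj 30 160 +~;
#X obj 30 200 dac~;
#X text 30 240 three partials at 220 440 and 663 Hz summed into one timbre;
#X connect 0 0 3 0;
#X connect 1 0 4 0;
#X connect 2 0 5 0;
#X connect 3 0 6 0;
#X connect 4 0 6 1;
#X connect 6 0 7 0;
#X connect 5 0 7 1;
#X connect 7 0 8 0;
#X connect 7 0 8 1;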
@daisy said:
First Question:
So how can we calculate the TIMBRE of a voice? Just as the fiddle~ object is used to determine the pitch of a voice, what object is used for TIMBRE calculation?
The [fft~] object splits up the spectrum of a sound. Think of it like a prism acting on a ray of light. Sound, which is a mixture of sines, goes in like white light. A rainbow of different colours comes out. Now you can see how much red, blue, yellow or green light was in the input. That's called analysis.
So the calculation that gives the spectrum doesn't return a single number. Timbre is a vector, or list of numbers which give the frequencies and amplitudes of the sine waves in the mixture. We sometimes call these "partials".
If you use sine wave oscillators to make a bunch of new sine waves and add them together according to this recipe you get the original sound back! That's called resynthesis.
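A minimal sketch of that analysis-plus-resynthesis round trip (assumptions: a 440 Hz test tone stands in for the input, the block size is 512 samples, and the /~ 512 rescaling is there because Pd's FFT objects are unnormalised): the signal goes through [rfft~] and straight back out through [rifft~] unchanged, and anything you did to the real/imaginary pairs in between is where a timbre change would happen. For real work you would add windowing and overlap, as the Pd FFT examples do.

#N canvas 0 50 480 300 10;
#X obj 30 40 osc~ 440;
#N canvas 0 50 420 260 resynth 0;
#X obj 30 30 inlet~;
#X obj 170 30 block~ 512;
#X obj 30 70 rfft~;
#X obj 30 120 rifft~;
#X obj 30 160 /~ 512;
#X obj 30 200 outlet~;
#X text 150 120 modify the spectrum here to change the timbre;
#X connect 0 0 2 0;
#X connect 2 0 3 0;
#X connect 2 1 3 1;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X restore 30 90 pd resynth;
#X obj 30 140 *~ 0.2;
#X obj 30 180 dac~;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 2 0 3 1;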
@daisy said:
Second Question:
And how can one change TIMBRE? Just as pitch-shifting techniques are used for pitch, what about timbre change? Thanks.
Many things change timbre. The simplest is a filter. A high pass filter removes all the low bits of the spectrum, a bandpass only lets through some of the sine waves in the middle, and so on...
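A quick way to hear this (the cutoff and centre frequencies are arbitrary): run the same noise source through a high-pass, a band-pass and a low-pass side by side and compare.

#N canvas 0 50 520 320 10;
#X obj 30 40 noise~;
#X obj 30 90 hip~ 2000;
#X obj 160 90 bp~ 800 4;
#X obj 290 90 lop~ 300;
#X obj 30 150 *~ 0.1;
#X obj 160 150 *~ 0.1;
#X obj 290 150 *~ 0.1;
#X obj 30 220 dac~;
#X text 30 260 all three filtered copies are summed here - disconnect two of them to hear each timbre on its own;
#X connect 0 0 1 0;
#X connect 0 0 2 0;
#X connect 0 0 3 0;
#X connect 1 0 4 0;
#X connect 2 0 5 0;
#X connect 3 0 6 0;
#X connect 4 0 7 0;
#X connect 4 0 7 1;
#X connect 5 0 7 0;
#X connect 5 0 7 1;
#X connect 6 0 7 0;
#X connect 6 0 7 1;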
Another way to change timbre is to do analysis with [fft~] and then shift some of the partials or remove some, and then resynthesise the sound.
@daisy said:
I have a kind of general idea (vocoder), but how do I implement it? And how do I change the formant?
A vocoder is a bank of filters and an analysis unit. Each partial that appears in the analysis affects the amplitude of a filter. The filter itself operates on another sound (often in real time). We can take the timbre of one sound by analysing it and get it to shape another sound that is fed through the filters. The second sound takes on some of the character of the first sound. This is called cross-synthesis.
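Here is a single band of that idea as a sketch (a real vocoder, like the example patches below, uses a whole bank of these; the 1000 Hz band, the [phasor~] carrier and the microphone input are placeholder choices): the modulator's energy inside the band, measured with [env~], is converted to a linear gain and used to scale the same band of the carrier.

#N canvas 0 50 520 420 10;
#X obj 30 40 adc~;
#X obj 30 80 bp~ 1000 6;
#X obj 30 120 env~ 1024;
#X obj 30 160 dbtorms;
#X msg 30 200 \$1 20;
#X obj 30 240 line~;
#X obj 250 40 phasor~ 110;
#X obj 250 80 bp~ 1000 6;
#X obj 250 290 *~;
#X obj 250 330 dac~;
#X text 330 40 harmonically rich carrier;
#X text 120 40 modulator e.g. a voice at the mic;
#X text 120 160 band energy in dB converted to a linear gain and smoothed;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 6 0 7 0;
#X connect 7 0 8 0;
#X connect 5 0 8 1;
#X connect 8 0 9 0;
#X connect 8 0 9 1;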
/doc/4.fft.examples/05.sheepgoat.pd
Help -> 7.Stuff -> Sound file tools -> 6.Vocoder
Fuck i love pd
hi brett, that track is still up on my site. for some reason the link comes out as a .pd file not an .mp3
http://www.m-pi.com/this-is-serious-mum.mp3
just cut and paste that and it will work.
also heaps of stuff here: http://www.m-pi.com/remixes
>It's weird that many ppl seem to be using pd but that the output~ page in the forum still has threads in it from 2004 in the top page!! <
it took me a few months of solid patching (a few hours every day) to get a workable setup for actually making tracks. it's certainly no small undertaking.
>I'm pretty new to pd and just working my way through tutorials at the moment, but do you have any tips with regard to actually going about customising your own setup?<
you are on the right track going through the tutorials. the way i did it was first to build stuff to cut up and effect samples, and then secondly make a system to control those processes live. mine was all based on the [key] command, and i just triggered everything from my laptop's qwerty keyboard. this was nice when i was travelling as it meant i didn't need to cart any gear around. also good for playing live cos i could pick my computer up and jam on the dancefloor. there are a few options though, especially triggering stuff with sensors and such. but i'm sticking with the bare bones keyboard approach cos it works well enough for me.
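as a concrete (and entirely placeholder) version of that keyboard idea, where the array name, the sample file name and the key code 97 (what the letter a sends on most systems, but it varies) are all just examples, pressing the key simply bangs [tabplay~] at a pre-loaded table:

#N canvas 0 50 560 420 10;
#N canvas 0 0 450 300 (subpatch) 0;
#X array loop1 44100 float 2;
#X coords 0 1 44100 -1 200 140 1;
#X restore 330 60 graph;
#X obj 30 60 key;
#X obj 30 100 sel 97;
#X obj 30 140 tabplay~ loop1;
#X obj 30 180 *~ 0.8;
#X obj 30 220 dac~;
#X msg 30 290 read -resize yoursample.wav loop1;
#X obj 30 330 soundfiler;
#X text 120 100 97 is the key code for the letter a on most systems;
#X text 30 260 click once to load a sample into the table - the file name is a placeholder;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 4 0 5 1;
#X connect 6 0 7 0;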
> like whether to keep lots of separate instruments or try to keep everything under one roof...<
i try to keep my stuff in one patch as much as possible. a couple of reasons for that, but the main one for me was that i kept modifying abstractions and then other patches that relied on those abstractions would stop working. generally much easier just to have one or two or a few patches to do everything you need. even if you incorporate everything you make into one patch it doesn't get too big. usually well under 1 meg.
>I think I will tend to mainly use samplers and control structures for controlling my external Midi gear, but in a live setup, not sure how to integrate it into Logic Pro?<
my thinking on this is that if you have a guitar it has 4 or 5 strings, and you manipulate those strings in a variety of ways to make most of the sounds you need. if you listen to my audio.. all of that is just 2 or at most 3 channels! so i always have only 2 or 3 samples playing at once. my stuff from back then was a bit light.. not really hard hitting on a dancefloor (which is what i'm interested in).. but i think you do want to keep everything as minimal as possible. as far as live performance goes, i wouldn't go anywhere near something like logic audio.
if you have midi gear, then def work on triggering that with pd. i'm working on synthesis within pd now, rather than the sample based stuff...but it's a constant battle to keep cpu usage to a minimum. triggering external devices will be no problem for pd and will leave you heaps of cpu for doing sample mashing.
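just to illustrate, triggering an external synth from pd can be as small as this sketch (the channel, note range and tempo are arbitrary, and it assumes pd's midi output is pointed at your gear in the settings): a metro fires random notes through [makenote] into [noteout].

#N canvas 0 50 420 340 10;
#X msg 30 40 1;
#X msg 70 40 0;
#X obj 30 80 metro 250;
#X obj 30 120 random 24;
#X obj 30 160 + 48;
#X obj 30 200 makenote 100 200;
#X obj 30 240 noteout 1;
#X text 120 40 1 starts the metro and 0 stops it;
#X text 120 200 velocity 100 and note length 200 ms on MIDI channel 1;
#X connect 0 0 2 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 5 1 6 1;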
can't stress enough though. KEEP IT AS SIMPLE AS POSSIBLE. for live music, traditional musicians only play one instrument at once. if you want to make whole songs live, then you are going to have to do the beats and bass and interesting stuff all at one time, so you want to keep it as simple as possible so that you can inject a lot of liveness into it. generally, the more channels of audio you have going at once, the less room there is for jamming out in an impromptu fashion....unless you have magic fingers.
>Look forward to hearing your stuff if possible.<
cool, thanks. quick background on my stuff... "this_is_serious_mum" is a live jam recorded in one take. just 2 channels of audio driving all the sounds from small sample loops being cut up in realtime by me pressing keys on the keyboard. it's a super simple setup, but i think the reason why it works ok is that i spent more time actually playing and practicing than i spent on coding the bastard. i toured across europe and japan and australia playing this stuff and it was generally well received. at really good gigs it was the biggest rush ever.
so yeah. good luck. grab the bull by the horns and just go for it.
Cheers,
matt
Midi in on linux
@Gimmeapill said:
Do you have the ALSA module snd-seq loaded?
lsmod|grep snd_seq
snd_seq_dummy 4996 2
snd_seq_oss 36480 5
snd_seq_midi 9984 2
snd_rawmidi 27264 3 snd_usb_lib,snd_mpu401_uart,snd_seq_midi
snd_seq_midi_event 8960 2 snd_seq_oss,snd_seq_midi
snd_seq 59120 6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_timer 25348 3 snd_rtctimer,snd_pcm,snd_seq
snd_seq_device 9868 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
snd 58372 16 snd_usb_audio,snd_hwdep,snd_mpu401,snd_mpu401_uart,snd_seq_oss,snd_intel8x0,snd_ac97_codec,snd_rawmidi,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq,snd_timer,snd_seq_device
Thanks, Gimmeapill, but it has been loaded all along, and I am still not getting MIDI.
A quite basic question....(?)
Hi,
I am new to Pure Data, and I don't know if it can do what I want.
So I am trying to find a simple example of this:
In a real-time way:
- get audio input from my sound card (audio in)
- redirect this input to my sound card (audio out)
The goal is, still in a real-time way:
- get audio input from my sound card (audio in)
- apply some frequency treatment
- redirect the treated input to my sound card (audio out)
I can't manage to find this on the web...
please help.
Ta.
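For reference, Pd does this in a handful of objects. A minimal sketch (assuming your sound card is selected in Pd's audio settings, DSP is switched on, and only the left input channel is used; the band-pass at 800 Hz is just a stand-in for whatever frequency treatment you have in mind):

#N canvas 0 50 450 320 10;
#X obj 30 40 adc~;
#X obj 30 90 bp~ 800 3;
#X obj 30 130 *~ 0.5;
#X obj 30 180 dac~;
#X text 140 40 live input from the sound card;
#X text 140 90 replace this with any filter or effect you like;
#X text 30 230 mind the feedback if a microphone and speakers are both open;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 2 0 3 1;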
Frozen reverb
"Frozen reverb" is a misnomer. It belongs in the Chindogu section along with real-time timestretching, inflatable dartboards, waterproof sponges and ashtrays for motorbikes. Why? Because reverb is by definition a time variant process, or a convolution of two signals one of which is the impulse response and one is the signal. Both change in time. What you kind of want is a spectral snapshot.
- Claude's suggestion above: a large recirculating delay network running at 99.99999999% feedback.
Advantages: Sounds really good; it's a real reverb with a complex evolution that's just very long.
Problems: It can go unstable and melt down the warp core. Claude's trick of zeroing the feedback is foolproof, but it does require you to have an appropriate control-level signal. Not good if you're feeding it from an audio-only source.
Note: the final spectrum is the sum of all the spectra the sound passes through, which might be a bit too heavy. The more sound you add to it, and the longer and more varied that sound is, the closer it eventually gets to noise.
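A much-simplified sketch of that first option (a single delay line rather than a proper reverb network; the one-second loop and 0.9999 feedback are arbitrary): whatever comes in keeps recirculating almost indefinitely, and sending 0 to the feedback multiplier is the zero-the-feedback trick mentioned above.

#N canvas 0 50 560 340 10;
#X obj 30 40 adc~;
#X obj 30 90 +~;
#X obj 30 130 delwrite~ frz 2000;
#X obj 180 90 delread~ frz 1000;
#X obj 180 130 *~ 0.9999;
#X obj 180 200 *~ 0.5;
#X obj 180 240 dac~;
#X msg 330 90 0.9999;
#X msg 400 90 0;
#X text 330 130 set the feedback - 0 empties the loop;
#X connect 0 0 1 0;
#X connect 3 0 4 0;
#X connect 4 0 1 1;
#X connect 1 0 2 0;
#X connect 3 0 5 0;
#X connect 5 0 6 0;
#X connect 5 0 6 1;
#X connect 7 0 4 1;
#X connect 8 0 4 1;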
- A circular scanning window of the kind used in a timestretch algorithm.
Advantages: It's indefinitely stable, and you can slowly wobble the window to get a "frozen but still moving" sound
Problems: Sounds crap because some periodicity from the windowing is always there.
Note: The Eventide has this in its infiniverb patch. The final spectrum is controllable; it's just some point in the input sound "frozen" by stopping the window from scanning forwards (usually when the input decays below a threshold). Take the B.14 Rockafella sampler and write your input to the table. Use an [env~]-[delta] pair to detect when the input starts to decay, then set the "precession percent" value to zero; the sound will freeze at that point.
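Not the Rockafella patch itself, just the bare mechanism as a sketch (assuming a 44.1 kHz sample rate, a two-second capture table called buf and a 100 ms window; a real version would crossfade two overlapping windows and advance the offset automatically): bang the message to capture some input, let the [phasor~] loop a short chunk of the table, and as long as the offset number stays put the sound is "frozen". The clicking at the loop point is exactly the periodicity problem noted above.

#N canvas 0 50 620 500 10;
#N canvas 0 0 450 300 (subpatch) 0;
#X array buf 88200 float 2;
#X coords 0 1 88200 -1 200 140 1;
#X restore 380 40 graph;
#X obj 30 60 adc~;
#X obj 30 140 tabwrite~ buf;
#X msg 150 60 bang;
#X obj 30 220 phasor~ 10;
#X obj 30 260 *~ 4410;
#X obj 30 300 +~;
#X floatatom 160 260 8 0 0 0 - - -;
#X obj 30 340 tabread4~ buf;
#X obj 30 380 *~ 0.5;
#X obj 30 420 dac~;
#X text 150 100 bang captures two seconds of input into the table;
#X text 250 260 window start in samples - stop moving it and the sound freezes;
#X text 150 220 a 10 Hz phasor scanning a 4410-sample window reads at original pitch at 44.1 kHz;
#X connect 1 0 2 0;
#X connect 3 0 2 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;
#X connect 7 0 6 1;
#X connect 6 0 8 0;
#X connect 8 0 9 0;
#X connect 9 0 10 0;
#X connect 9 0 10 1;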
- Resynthesised spectral snapshot.
Advantages: Best technical solution, it sounds good and is indefinitely stable.
Problems: It's a monster that will eat your CPU's liver with some fava beans and a nice Chianti.
Note: The 11.PianoReverb patch is included in the FFT examples; the description is something like "it punches in new partials when there's a peak that masks what's already there". You can only do this in the frequency domain. The final spectrum will be the maxima of the unique components in the last input sound that weren't in the previous sound. Just take that patch and turn the reverb time up to lots.