MS decoding
Hey guys,
Wikipedia these, and do a net search - I had to research this for my mémoire a few years ago...
Walt Disney's Fantasia: they toured with live mixers in surround for that film. They used the pan pots developed by Blumlein.
Blumlein (British) was the guy who showed that a mono source can imitate space by being panned between two distribution sources (speakers).
Then there was Fletcher (American) - Mr WALL OF SOUND -
one speaker for each source... Too expensive, and impossible for mass distribution. Blumlein wins...
Nowadays, with PD...
Surround
[dac~ 1 2 3 4 5 6 7 8], then link with ADAT for 9 10 11 12, etc....
(Soundcards: a Pro Tools Digi 001 or MOTU 828 now sells for under 300 used!! MOTU drivers are current, but not for Linux. RME is Linux/Win/Mac ready.)
(Sound on Sound has great articles on the following!!!)
if you follow dolby...
1 2 3: 1 and 3 ambient front, 2 center voice.
4 5: left and right behind - a mix of the front channels with 5-20 ms delay, depending on the image, with high and low cuts, so it imitates how we hear sound when it is behind us.
6: bass - dumped to the woofer below 80-110 Hz.
Haven't read up on DTS (check Sound on Sound).
All the numbers are different depending on the setup of your program of choice and your sound card... but the distribution is L/R front, L/R rear with delay and filter cut, center front, and bass.
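(A rough illustration of that rear/bass treatment in Python with numpy/scipy - the delay and cutoff values are illustrative guesses within the ranges above, not any official Dolby spec:)

import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def rear_from_front(front, delay_ms=15, lo_cut=200, hi_cut=7000):
    # Rear = delayed, band-limited copy of the front mix: the short
    # delay plus high/low cuts mimic hearing a sound from behind.
    d = int(SR * delay_ms / 1000)
    delayed = np.concatenate([np.zeros(d), front])[:len(front)]
    b, a = butter(2, [lo_cut / (SR / 2), hi_cut / (SR / 2)], btype="bandpass")
    return lfilter(b, a, delayed)

def lfe_from_mix(mix, cutoff=100):
    # Bass channel: everything below ~80-110 Hz goes to the woofer.
    b, a = butter(4, cutoff / (SR / 2), btype="lowpass")
    return lfilter(b, a, mix)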
This creates a uniform distribution method for cinema - good to know if you have mixed a film in stereo and do not want the Dolby effect in place!!!
Or, when you get a Dolby mixer to work on your film, she or he is preparing your film for this in the cinema...
test it:
just do a surround mix, saving your work. play it back through a dolby amp from a surround dvd over a single coaxial connection...
then connect a dvd reader to the amp with each track connected by its own coaxial cable...
then connect a multitrack sound card directly to your amp, with each track linked to an out.....
you should hear some differences...
But the man at IEM, Mr PD and GEM, has been working on their surround for years - and has distributed the patch recently. Plus ambisonics is around for Max... and I thought there was a Pd port....
But it begs the question of mass distribution versus unique design and experiences of sound....
ALSA issues
So Pd works great so far, except that it wants to hog all audio output on my system. If any other application is using audio (ALSA), then I see "snd_pcm_open (output): Device or resource busy" from Pd on the command line when trying to select the ALSA output device in the GUI.
Any other apps gladly mix together (e.g. I can run as many mplayers or whatever as I want and they all have sound): it appears the ALSA dmix plugin is enabled by default in the latest versions of ALSA, and furthermore I think my card has a hardware mixer too, but I'm not actually sure which method (dmix or the hardware mixer) is being used by my other applications. Still, Pd is the only one that doesn't want to play nice. If all other sound-using applications are closed, Pd works fine.
I have preemptively posted some info below, in case it helps with a diagnosis:
$ cat /dev/sndstat
Sound Driver:3.8.1a-980706 (ALSA v1.0.14 emulation code)
Kernel: Linux pompeii 2.6.22-14-generic #1 SMP Tue Dec 18 08:02:57 UTC 2007 i686
Config options: 0
Installed drivers:
Type 10: ALSA emulation
Card config:
HDA Intel at 0xee400000 irq 21
Audio devices:
0: AD198x Analog (DUPLEX)
Synth devices: NOT ENABLED IN CONFIG
Midi devices: NOT ENABLED IN CONFIG
Timers:
31: system timer
Mixers:
0: Analog Devices AD1981
$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog]
Subdevices: 0/1
Subdevice #0: subdevice #0
card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
It should be noted that my distro/alsa install did not create an /etc/asound.conf by default, and everything seems to work without it.
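(Not a definitive fix, but worth trying: if Pd opens hw:0 directly it bypasses dmix and locks the card for everyone else. A minimal ~/.asoundrc sketch that routes a shared PCM through dmix - the card/device numbers here are assumptions based on the aplay listing above:)

pcm.!default {
    type plug
    slave.pcm "dmixed"
}

pcm.dmixed {
    type dmix
    ipc_key 1024
    slave.pcm "hw:0,0"
}

(Then point Pd at that shared PCM rather than the raw hw device - e.g. with Pd's -alsaadd flag, which adds a named ALSA device to Pd's device list.)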
ALSA
Below you'll find my lsmod info. echomixer, the alsa-tools utility for Echo Audio products, did work after running [ # alsaconf ]. However, I tried to test my config simply by doing this:
# aplay -vv *
ALSA lib confmisc.c:670:(snd_func_card_driver) cannot find card '0'
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_card_driver returned error: No such device
ALSA lib confmisc.c:391:(snd_func_concat) error evaluating strings
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_concat returned error: No such device
ALSA lib confmisc.c:1070:(snd_func_refer) error evaluating name
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_refer returned error: No such device
ALSA lib conf.c:3968:(snd_config_expand) Evaluate error: No such device
ALSA lib pcm.c:2143:(snd_pcm_open_noupdate) Unknown PCM default
aplay: main:550: audio open error: No such device
So there is still a missing piece.
Module Size Used by
snd_layla24 36356 0
snd_seq_oss 40084 0
snd_seq_midi 9792 0
snd_seq_midi_event 8160 2 snd_seq_oss,snd_seq_midi
snd_seq 60456 5 snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_rawmidi 28992 2 snd_layla24,snd_seq_midi
snd_seq_device 9708 4 snd_seq_oss,snd_seq_midi,snd_seq,snd_rawmidi
firmware_class 11744 1 snd_layla24
snd_pcm_oss 52032 0
snd_mixer_oss 20704 1 snd_pcm_oss
snd_pcm 91396 2 snd_layla24,snd_pcm_oss
snd_timer 26500 2 snd_seq,snd_pcm
snd 65908 9 snd_layla24,snd_seq_oss,snd_seq,snd_rawmidi,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer
soundcore 11204 1 snd
snd_page_alloc 11304 2 snd_layla24,snd_pcm
ALSA
Dunno what snd_pcm is returning there, but you should see the Layla24 driver actually being used, like this:
$ lsmod
snd-seq-midi 5152 0 (unused)
snd-virmidi 2080 0
snd-seq-virmidi 5128 0 [snd-virmidi]
snd-seq-midi-event 6240 0 [snd-seq-midi snd-seq-virmidi]
snd-seq 48784 0 [snd-seq-midi snd-seq-virmidi snd-seq-midi-event]
snd-layla24 149732 3 <--------*here*
snd-pcm 85860 2 [snd-layla24] <---and pcm is using it
You don't have to recompile the kernel or anything - just find the driver and load it with insmod.
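(For example - module name taken from the lsmod listing above; modprobe is usually easier than insmod since it resolves paths and dependencies:)

# modprobe snd-layla24
$ aplay -l   # the Layla24 should now show up as a card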
Apparently there's a utils package at the ALSA Project website for the Echo Layla24 that sets up everything. Have you tried that one?
Also, there's an ALSA Wiki up now that may help you.
Timbre conversion
@daisy said:
I have read somewhere that "if a voice is at the same pitch and the same loudness and still one can recognize that two voices are different, it is because of TIMBRE (tone quality)". (I agree there are other features to consider as well.)
Timbre is another word for spectrum. The spectrum of a sound is the combination of basic sine waves that are mixed together to make it. Every sound (except a sine wave) is a mixture of sine waves. You can make any sound by adding the right sine waves together. This is called synthesis.
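(A tiny additive-synthesis sketch in Python/numpy - the partial recipe here is made up, not taken from any particular instrument:)

import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second of time axis

# Mix a few sine waves (partials) together to build a timbre.
partials = [(220, 1.0), (440, 0.5), (660, 0.33), (880, 0.25)]  # (Hz, amp)
sound = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)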
@daisy said:
First Question:
So how can we calculate the TIMBRE of a voice? Just as the fiddle~ object is used to determine the pitch of a voice, what object is used for TIMBRE calculation?
The [fft~] object splits up the spectrum of a sound. Think of it like a prism acting on a ray of light. Sound, which is a mixture of sines, goes in like white light. A rainbow of different colours comes out. Now you can see how much red, blue, yellow or green light was in the input. That's called analysis.
So the calculation that gives the spectrum doesn't return a single number. Timbre is a vector, or list of numbers which give the frequencies and amplitudes of the sine waves in the mixture. We sometimes call these "partials".
If you use sine wave oscillators to make a bunch of new sine waves and add them together according to this recipe you get the original sound back! That's called resynthesis.
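(The same round trip as a minimal numpy sketch - one frame at a time, whereas [fft~] runs block by block:)

import numpy as np

def analyse(frame):
    # Analysis: split one frame into partial amplitudes and phases.
    spectrum = np.fft.rfft(frame)
    return np.abs(spectrum), np.angle(spectrum)

def resynthesise(amps, phases):
    # Resynthesis: rebuild the frame from those same partials.
    return np.fft.irfft(amps * np.exp(1j * phases))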
@daisy said:
Second Question:
And how can one change TIMBRE? As pitch-shifting techniques are used for pitch, what about changing timbre? Thanks.
Many things change timbre. The simplest is a filter. A high pass filter removes all the low bits of the spectrum, a bandpass only lets through some of the sine waves in the middle, and so on...
Another way to change timbre is to do analysis with [fft~] and then shift some of the partials or remove some, and then resynthesise the sound.
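(A one-frame numpy sketch of that second approach - the shift amount is arbitrary:)

import numpy as np

def shift_partials(frame, bins=10):
    # Analyse, shift every partial up by `bins` FFT bins,
    # zero the vacated low bins, and resynthesise.
    spectrum = np.fft.rfft(frame)
    shifted = np.roll(spectrum, bins)
    shifted[:bins] = 0
    return np.fft.irfft(shifted)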
@daisy said:
I have a kind of general idea (vocoder), but how to implement it? And how to change the formant?
A vocoder is a bank of filters and an analysis unit. Each partial that appears in the analysis affects the amplitude of a filter. The filter itself operates on another sound (often in real time). We can take the timbre of one sound by analysing it and get it to shape another sound that is fed through the filters. The second sound takes on some of the character of the first sound. This is called cross-synthesis.
/doc/4.fft.examples/05.sheepgoat.pd
Help -> 7.Stuff -> Sound file tools -> 6.Vocoder
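(And a toy channel vocoder along those lines in Python/scipy - the band count, edges and envelope smoothing are arbitrary choices, not how either of those Pd patches does it:)

import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100

def vocode(modulator, carrier, n_bands=16, lo=80, hi=8000):
    # Split both signals into the same filter bands; the energy of the
    # modulator in each band scales that band of the carrier.
    edges = np.geomspace(lo, hi, n_bands + 1)
    out = np.zeros(len(carrier))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(2, [f1 / (SR / 2), f2 / (SR / 2)],
                     btype="bandpass", output="sos")
        mod_band = sosfilt(sos, modulator)
        car_band = sosfilt(sos, carrier)
        env = np.convolve(np.abs(mod_band), np.ones(512) / 512, mode="same")
        out += car_band * env  # crude envelope follower as band gain
    return out

(Speech as the modulator and a bright sawtooth as the carrier gives the classic talking-synth effect.)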
Fuck i love pd
hi brett, that track is still up on my site. for some reason the link comes out as a .pd file not an .mp3
http://www.m-pi.com/this-is-serious-mum.mp3
just cut and paste that and it will work.
also heaps of stuff here: http://www.m-pi.com/remixes
>It's weird that many ppl seem to be using pd but that the output~ page in the forum still has threads in it from 2004 in the top page!! <
it took me a few months of solid patching (a few hours every day) to get a workable setup for actually making tracks. it's certainly no small undertaking.
>I'm pretty new to pd and just working my way through tutorials at the moment, but do you have any tips with regard to actually going about customising your own setup?<
you are on the right track going through the tutorials. the way i did it was first to build stuff to cut up and effect samples, and then secondly make a system to control those processes live. mine was all based on the [key] command, and i just triggered everything from my laptop's qwerty keyboard. this was nice when i was travelling as it meant i didn't need to cart any gear around. also good for playing live cos i could pick my computer up and jam on the dancefloor. there are a few options though, especially triggering stuff with sensors and such. but i'm sticking with the bare bones keyboard approach cos it works for me well enough.
> like whether to keep lots of separate instruments or try to keep everything under one roof...<
i try to keep my stuff in one patch as much as possible. a couple of reasons for that, but the main one for me was that i kept modifying abstractions and then other patches that relied on those abstractions would stop working. generally much easier just to have one or two or a few patches to do everything you need. even if you incorporate everything you make into one patch it doesn't get too big. usually well under 1 meg.
>I think I will tend to mainly use samplers and control structures for controlling my external Midi gear, but in a live setup, not sure how to integrate it into Logic Pro?<
my thinking on this is that if you have a guitar it has 4 or 5 strings, and you manipulate those strings in a variety of ways to make most of the sounds you need. if you listen to my audio..all of that is just 2 or at most 3 channels! so i always have only 2 or 3 samples playing at once. my stuff from back then was a bit light..not really hard hitting on a dancefloor (which is what i'm interested in) ..but i think you do want to keep everything as minimal as possible. as far as live performance goes, i wouldn't go anywhere near something like logic audio.
if you have midi gear, then def work on triggering that with pd. i'm working on synthesis within pd now, rather than the sample based stuff...but it's a constant battle to keep cpu usage to a minimum. triggering external devices will be no problem for pd and will leave you heaps of cpu for doing sample mashing.
can't stress enough though. KEEP IT AS SIMPLE AS POSSIBLE. for live music, traditional musicians only play one instrument at once. if you want to make whole songs live, then you are going to have to do the beats and bass and interesting stuff all at one time, so you want to keep it as simple as possible so that you can inject a lot of liveness into it. generally, the more channels of audio you have going at once, the less room there is for jamming out in an impromptu fashion....unless you have magic fingers.
>Look forward to hearing your stuff if possible.<
cool, thanks. quick background on my stuff..."this_is_serious_mum" is a live jam recorded in one take. just 2 channels of audio driving all the sounds from small sample loops being cut up in realtime by me pressing keys on the keyboard. it's a super simple setup, but i think the reason why it works ok is that i spent more time actually playing and practicing than i spent on coding the bastard. i toured across europe and japan and australia playing this stuff and it was generally well received. at really good gigs it was the biggest rush ever.
so yeah. good luck. grab the bull by the horns and just go for it.
Cheers,
matt
MIDI in on Linux
@Gimmeapill said:
do you have the alsa module snd-seq loaded ?
lsmod|grep snd_seq
snd_seq_dummy 4996 2
snd_seq_oss 36480 5
snd_seq_midi 9984 2
snd_rawmidi 27264 3 snd_usb_lib,snd_mpu401_uart,snd_seq_midi
snd_seq_midi_event 8960 2 snd_seq_oss,snd_seq_midi
snd_seq 59120 6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_timer 25348 3 snd_rtctimer,snd_pcm,snd_seq
snd_seq_device 9868 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
snd 58372 16 snd_usb_audio,snd_hwdep,snd_mpu401,snd_mpu401_uart,snd_seq_oss,snd_intel8x0,snd_ac97_codec,snd_rawmidi,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq,snd_timer,snd_seq_device
Thanks, Gimmeapill, but it has been loaded all along, and of course I am still not getting MIDI.
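(For anyone else stuck here: with snd_seq loaded, the next things to check are whether Pd was started with ALSA MIDI at all, and whether the sequencer ports are actually wired together - a sketch using Pd's -alsamidi flag and aconnect from alsa-utils; your client numbers will differ:)

$ pd -alsamidi &       # start Pd using the ALSA sequencer for MIDI
$ aconnect -i          # list readable MIDI ports (your controller)
$ aconnect -o          # list writable MIDI ports (Pd should appear here)
$ aconnect 24:0 128:0  # e.g. wire controller client 24 into Pd client 128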
Frozen reverb
"Frozen reverb" is a misnomer. It belongs in the Chindogu section along with real-time timestretching, inflatable dartboards, waterproof sponges and ashtrays for motorbikes. Why? Because reverb is by definition a time variant process, or a convolution of two signals one of which is the impulse response and one is the signal. Both change in time. What you kind of want is a spectral snapshot.
Option 1: Claude's suggestion above - a large recirculating delay network running at 99.99999999% feedback.
Advantages: Sounds really good; it's a real reverb with a complex evolution that's just very long.
Problems: It can go unstable and melt down the warp core. Claude's trick of zeroing the feedback is foolproof, but it does require you to have an appropriate control-level signal. Not good if you're feeding it from an audio-only source.
Note: the final spectrum is the sum of all spectra the sound passes through, which might be a bit too heavy. The more sound you add to it, with a longer, more changing sound, the closer it eventually gets to noise.
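(A minimal one-line version of that idea in Python/numpy - a real network would run several delay lines of unequal lengths, and the feedback value is exactly as reckless as it looks:)

import numpy as np

SR = 44100

def freeze_delay(signal, delay_ms=80, feedback=0.9999999999):
    # One recirculating delay line: with feedback this close to 1 the
    # tail effectively never decays, and new input keeps piling in.
    d = int(SR * delay_ms / 1000)
    out = np.zeros(len(signal))
    buf = np.zeros(d)  # the delay line
    idx = 0
    for i, x in enumerate(signal):
        y = buf[idx]
        buf[idx] = x + y * feedback  # recirculate input plus tail
        out[i] = y
        idx = (idx + 1) % d
    return out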
Option 2: A circular scanning window of the kind used in a timestretch algorithm.
Advantages: It's indefinitely stable, and you can slowly wobble the window to get a "frozen but still moving" sound.
Problems: Sounds crap, because some periodicity from the windowing is always there.
Note: The Eventide has this in its infiniverb patch. The final spectrum is controllable - it's just some point in the input sound, "frozen" by stopping the window from scanning forwards (usually when the input decays below a threshold). Take the B.14 Rockafella sampler and write your input to the table. Use an [env~]-[delta] pair to find when the input starts to decay, then set the "precession percent" value to zero; the sound will freeze at that point.
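(The same trick outside Pd, as a numpy sketch - grain size and overlap are arbitrary, and the windowing periodicity complained about above is plainly audible:)

import numpy as np

SR = 44100

def window_freeze(table, position, grain_ms=50, seconds=2.0):
    # Keep reading the same windowed grain from one point in the table
    # instead of scanning forward: an indefinitely stable "freeze".
    g = int(SR * grain_ms / 1000)
    hop = g // 2
    win = np.hanning(g)
    grain = table[position:position + g] * win
    out = np.zeros(int(SR * seconds) + g)
    for start in range(0, len(out) - g, hop):
        out[start:start + g] += grain  # same grain, overlap-added
    return out

(Wobbling the position slowly between calls gives the "frozen but still moving" variant.)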
Option 3: Resynthesised spectral snapshot.
Advantages: Best technical solution; it sounds good and is indefinitely stable.
Problems: It's a monster that will eat your CPU's liver with some fava beans and a nice Chianti.
Note: the 11.PianoReverb patch is included in the FFT examples. The description is something like "it punches in new partials when there's a peak that masks what's already there". You can only do this in the frequency domain. The final spectrum will be the maxima of the unique components in the last input sound that weren't in the previous sound. Just take the 11.PianoReverb patch in the FFT examples and turn the reverb time up to lots.
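(A crude spectral-snapshot freeze in numpy: take one frame's magnitude spectrum, then resynthesise it indefinitely with randomised phases. Frame size and overlap are arbitrary, and nothing here is as clever as PianoReverb's peak-masking logic:)

import numpy as np

SR = 44100

def spectral_freeze(sound, n=4096, seconds=2.0):
    # Analysis: the magnitude spectrum of a single windowed frame.
    mags = np.abs(np.fft.rfft(sound[:n] * np.hanning(n)))
    hop = n // 4
    win = np.hanning(n)
    out = np.zeros(int(SR * seconds) + n)
    # Resynthesis: overlap-add the same magnitudes with random phases,
    # so the snapshot sustains without sounding like a looped buffer.
    for start in range(0, len(out) - n, hop):
        phases = np.random.uniform(0, 2 * np.pi, len(mags))
        frame = np.fft.irfft(mags * np.exp(1j * phases))
        out[start:start + n] += frame * win
    return out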