MIDI in on Linux
@Gimmeapill said:
Do you have the ALSA module snd-seq loaded?
lsmod|grep snd_seq
snd_seq_dummy 4996 2
snd_seq_oss 36480 5
snd_seq_midi 9984 2
snd_rawmidi 27264 3 snd_usb_lib,snd_mpu401_uart,snd_seq_midi
snd_seq_midi_event 8960 2 snd_seq_oss,snd_seq_midi
snd_seq 59120 6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_timer 25348 3 snd_rtctimer,snd_pcm,snd_seq
snd_seq_device 9868 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
snd 58372 16 snd_usb_audio,snd_hwdep,snd_mpu401,snd_mpu401_uart,snd_seq_oss,snd_intel8x0,snd_ac97_codec,snd_rawmidi,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq,snd_timer,snd_seq_device
Thanks, Gimmeapill, but it has been loaded all along, and of course I am still not getting MIDI.
Problem with sound / sound card
I hope someone can help me out here. I am, by the way, a newbie in Pd, but have used Max/MSP for some time.
I have a couple of problems with Pd when it comes to the sound part. I am running OS X on a G5, with a Digidesign Digi 002 Rack as the main sound card. The thing is, I can't get sound even when I run the "Test Audio and MIDI" patch from the "Media" menu, nor can I see anything happening in the boxes under "pd ------ audio-----"; it's 0 all the way. This is with the Digi 002 sound card. The same thing happens when I run the Digi 002 driver through JackOSX (so the sound card I select, in other words, is Jack). On the other hand, MIDI in and out seem to work with the Digi 002. Of course, I have also tried the built-in sound card, as well as another sound card I have (an M-Audio Ozone), and I get sound with both! Thanks a lot if anyone can give a hand at working out what's wrong and how to fix it!
Also, I have another problem: when I open a new patcher and want to create a signal object, it won't let me type ~! I have tried typing the same thing, the same way, in TextEdit and Max/MSP, and it works fine there, but not in Pd. Also, it won't let me copy and paste! If anyone knows anything about these issues, thanks!
Hanstein
Frozen reverb
"Frozen reverb" is a misnomer. It belongs in the Chindogu section along with real-time timestretching, inflatable dartboards, waterproof sponges and ashtrays for motorbikes. Why? Because reverb is by definition a time-variant process, or a convolution of two signals, one of which is the impulse response and the other the signal. Both change in time. What you kind of want is a spectral snapshot.
-
Claude's suggestion above: a large recirculating delay network running at 99.99999999% feedback.
Advantages: Sounds really good; it's a real reverb with a complex evolution that's just very long.
Problems: It can go unstable and melt down the warp core. Claude's trick of zeroing the feedback is foolproof, but it does require you to have an appropriate control-level signal. Not good if you're feeding it from an audio-only source.
Note: the final spectrum is the sum of all spectra the sound passes through, which might be a bit too heavy. The more sound you add to it, with a longer more changing sound, the closer it eventually gets to noise. -
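The recirculating-delay approach above can be sketched outside Pd. This is a toy Python/NumPy version (not Claude's actual patch): one delay line with near-unity feedback, where dropping the feedback to zero flushes the loop, as described. The delay length and feedback values are illustrative.

```python
import numpy as np

def freeze_reverb(x, delay_samples=2205, feedback=0.9999, tail=44100):
    """Recirculating delay line with near-unity feedback.

    After the input ends, the buffer keeps recirculating, giving a
    quasi-infinite tail; setting `feedback` to 0 empties the loop.
    """
    n = len(x) + tail
    y = np.zeros(n)
    buf = np.zeros(delay_samples)   # circular delay buffer
    w = 0                           # read/write index
    for i in range(n):
        dry = x[i] if i < len(x) else 0.0
        delayed = buf[w]            # oldest sample in the loop
        out = dry + feedback * delayed
        buf[w] = out                # recirculate
        y[i] = out
        w = (w + 1) % delay_samples
    return y
```

At 0.9999 the loop loses only 0.01% of its energy per pass, so the tail rings for a very long time; any feedback value of 1 or more is where the warp core melts down.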
A circular scanning window of the kind used in a timestretch algorithm
Advantages: It's indefinitely stable, and you can slowly wobble the window to get a "frozen but still moving" sound
Problems: Sounds crap because some periodicity from the windowing is always there.
Note: The Eventide has this in its infiniverb patch. The final spectrum is controllable; it's just some point in the input sound, "frozen" by stopping the window from scanning forwards (usually when the input decays below a threshold). Take the B.14 Rockafella sampler and write your input to the table. Use an [env~]-[delta] pair to detect when the input starts to decay, then set the "precession percent" value to zero; the sound will freeze at that point. -
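The scanning-window freeze described in the note above can be sketched like this in Python/NumPy (a toy stand-in for the table-reading patch, not the Rockafella sampler itself): one windowed grain is looped by overlap-add from the position where scanning stopped. Grain and hop sizes are illustrative.

```python
import numpy as np

def window_freeze(x, pos, grain=1024, hop=256, out_len=44100):
    """Freeze by looping a single windowed grain via overlap-add.

    `pos` is the sample index where scanning stopped, e.g. where an
    [env~]-[delta] pair detected the input starting to decay.
    """
    win = np.hanning(grain)
    g = x[pos:pos + grain] * win          # the frozen grain
    y = np.zeros(out_len + grain)
    for start in range(0, out_len, hop):  # overlap-add the same grain
        y[start:start + grain] += g
    return y[:out_len]
```

Because the identical grain repeats at the hop rate, the output is exactly periodic, which is the audible windowing artefact listed under "Problems"; jittering `pos` slightly per grain gives the "frozen but still moving" variant.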
Resynthesised spectral snapshot
Advantages: Best technical solution, it sounds good and is indefinitely stable.
Problems: It's a monster that will eat your CPU's liver with some fava beans and a nice Chianti.
Note: The 11.PianoReverb patch is included in the FFT examples. The description is something like: it punches in new partials when there's a peak that masks what's already there. You can only do this in the frequency domain. The final spectrum will be the maxima of the unique components in the last input sound that weren't in the previous sound. Just take that patch and turn the reverb time up to lots.
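A bare-bones spectral snapshot, in the same Python/NumPy toy style (much cruder than the PianoReverb patch, which tracks masking peaks frame by frame): grab one FFT frame's magnitudes, then resynthesise frame after frame with freshly randomised phases, so the output is stationary rather than a loop. FFT size and hop are illustrative.

```python
import numpy as np

def spectral_freeze(x, pos, n_fft=2048, hop=512, out_len=44100):
    """Freeze one spectral frame: keep its magnitudes, resynthesise
    with random phases so the result is stationary, not periodic."""
    rng = np.random.default_rng(0)
    win = np.hanning(n_fft)
    mag = np.abs(np.fft.rfft(x[pos:pos + n_fft] * win))  # the snapshot
    y = np.zeros(out_len + n_fft)
    for start in range(0, out_len, hop):
        phase = rng.uniform(-np.pi, np.pi, len(mag))     # new phases per frame
        frame = np.fft.irfft(mag * np.exp(1j * phase), n_fft)
        y[start:start + n_fft] += frame * win            # overlap-add
    return y[:out_len]
```

Randomising the phases each frame is what avoids the windowing periodicity of the scanning approach, at the cost of one forward and many inverse FFTs: the liver-eating part.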
PD audio recognition
[fiddle~] gives you the main pitch of the incoming sound.
I am doing a project with sound analysis and sound production driven by that analysis. So far, I have been inspired by the book "Interactive Music Systems" by Robert Rowe.
I split the patch into two parts: LISTENER (analysis) and PLAYERS (sound production).
For now I have identified eight different styles of incoming sound, very primitive, from three pairs of parameters:
CHAOS / REGULARITY
LONG / SHORT
STRONG / LOW
These have to be evaluated in different situations, but I think it's a good way to start. Now I'm a little unsure about the actions of the PLAYERS. I'm torn between two positions: use a lot of different sounds (audio files of crowds, weather, voices, drums, plus synthesis and live recording and playback), or focus on a limited range of sounds and use them to death.
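One way such a LISTENER could place a sound on those three axes is sketched below in Python/NumPy (a hypothetical illustration, not the author's patch; the feature choices and thresholds are arbitrary assumptions): spectral flatness for CHAOS/REGULARITY, active duration for LONG/SHORT, RMS level for STRONG/LOW.

```python
import numpy as np

def classify(x, sr=44100, thresh_db=-40.0):
    """Toy LISTENER: place a sound on the axes CHAOS/REGULARITY,
    LONG/SHORT, STRONG/LOW. All thresholds are illustrative."""
    rms = np.sqrt(np.mean(x ** 2) + 1e-12)
    level_db = 20 * np.log10(rms)
    # Frame-wise envelope above threshold -> active duration in seconds.
    frame = 512
    n = len(x) // frame
    env = np.sqrt(np.mean(x[:n * frame].reshape(n, frame) ** 2, axis=1) + 1e-12)
    duration = (env > 10 ** (thresh_db / 20)).sum() * frame / sr
    # Spectral flatness (geometric / arithmetic mean) as a crude chaos measure:
    # near 1 for noise, near 0 for tonal sounds.
    spec = np.abs(np.fft.rfft(x)) + 1e-12
    flatness = np.exp(np.mean(np.log(spec))) / np.mean(spec)
    return {
        "chaotic": bool(flatness > 0.3),
        "long": bool(duration > 1.0),
        "strong": bool(level_db > -20.0),
    }
```

In a live Pd setting the same three decisions would come from objects like [env~] and [fiddle~] rather than whole-file analysis.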
If you would like to see the project, it's in French, sorry:
http://impala.utopia.free.fr/projets/index.php?mode=plus&id=1
Ambisonics? and Matrix?
Sorry for the missing explanation ...
Ambisonics is a spatial sound reproduction technology introduced in the 1970s by the mathematician Michael Gerzon. With Ambisonics the loudspeaker layout may be anything from a few to many loudspeakers, and it is intended to reproduce the direction of arrival of the sound.
You first encode your target sound field ... it's like a Fourier transform, but in space! You can do it with a virtually defined sound field in your computer, or you can encode a real sound field using a set of microphones (omni + figure-8) correctly placed in space. Once you have the transformed version of your desired sound field ... you decode it for your loudspeaker layout. Decoding means matching the encoded sound field to your sound reproduction system.
For basic ambisonic encoding/decoding (there are some more advanced encoding/decoding rules) you compute everything with matrices. That's much faster and more efficient than computing every element individually.
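As an illustration of the matrix view (a Python/NumPy sketch, not the patch under discussion; it assumes the traditional first-order B-format with W scaled by 1/sqrt(2) and a basic pseudoinverse decoder, which is only one of several conventions):

```python
import numpy as np

def encode_bformat(azimuths):
    """First-order B-format encoding matrix (W, X, Y) for horizontal
    sources at the given azimuths in radians; one column per source."""
    a = np.asarray(azimuths, dtype=float)
    return np.vstack([np.full(a.shape, 1 / np.sqrt(2)),  # W (omni)
                      np.cos(a),                         # X (front-back)
                      np.sin(a)])                        # Y (left-right)

def decode_matrix(speaker_azimuths):
    """Basic decoder for a horizontal loudspeaker ring: the
    pseudoinverse of the speakers' own encoding matrix."""
    return np.linalg.pinv(encode_bformat(speaker_azimuths))

# Encode one unit-amplitude source at 45 degrees, decode over a square
# layout of four speakers at 45, 135, 225 and 315 degrees.
src = encode_bformat([np.pi / 4]) @ np.array([[1.0]])  # B-format signals
D = decode_matrix(np.radians([45, 135, 225, 315]))
gains = D @ src                                        # per-speaker feeds
```

The whole decode is one matrix multiply per sample block, which is exactly why it beats computing every speaker feed individually; the speaker nearest the source (here the one at 45 degrees) receives the largest gain.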
My ambisonic patch (which is not finished) uses such matrix computation, although it is not really Pure Data-clean (it relies on a sort of very fast "for" loop made with metro objects!) ...
Anyway, I still have some doubts about my equations ...
For this reason, I would like to compare my patch with some other ambisonic work for Pd. There is a complete paper on Ambisonics with Csound in the Computer Music Journal, Winter 2001 ...
Bye!