Meditation background generator
Thank you all for your replies, I didn't expect such feedback. Special thanks to jamesmcn: everything was described well, I just have a few things to add:
1 and 2. The top section performs two functions. The leftmost part is something like a "probability generator": in accordance with an LFO, it defines how often a droplet sound is generated at any given moment. The rightmost part is an array containing the tones at which the [vcf~] of each droplet resonates (0, 2, 3, 5, 7, 8, 10 spell out the C minor scale). Every 5 seconds, [route] picks these tones (notes) from the array at random and sends them to the [vcf~] objects that follow the [stream~] abstractions. The tones are converted to frequencies in Hz to control the bandpass filters (there's a small sketch of this conversion at the end of this post).
3. Yes, the [stream~] is the meat of the synthesis, but it is not "very carefully" filtered noise, although it probably should be. I just tried to make it sound like water droplets, and varied the cutoff frequency (the first creation argument) and the stream density (the second creation argument) to make the droplets sound more diverse. stream~.pd itself is actually a very simple noise generator, and it is the oldest part of the patch. I wanted to make something like rain noise, built this abstraction, and put it aside until it came into play.
4 and 5. You know, if you delete the mixer/processor section (except the reverb), you may not notice much difference: it just makes the left and right channels slightly different from time to time, and the merciless [freezeverb] mixes the left and right channels into one stream anyway. By the way, [freezeverb] is just an enhanced version of the one you can see in Help -> Browser -> G08.reverb.pd. The only serious difference is that it uses delay_time_counter.pd, which calculates the times for the delay lines according to the formula t = t1/2^(n/numlines) - t1/2, where t1 is the largest early-reflection delay time, numlines is the total number of delay lines (28 here), and n is the current delay number (starting from 0). I found this algorithm at http://musicdsp.org/archive.php?classid=4#44 but changed it a bit (actually, I added the "-t1/2" term to make the echoes appear earlier); the second sketch at the end of this post works through the resulting times. I still don't completely understand how [freezeverb] works. To be more precise, I don't understand what [early_reflection_delay_line] actually does, but Miller Puckette applied a similar [reverb-echo-del] abstraction in his example, and it works well! It makes a "power-preserving" mix, a very useful thing in recirculating reverbs.
6. The two sine wave oscillators take their frequencies from the first two randomly picked notes from the array (see item 1 and 2). There are also two frequency modulators, 1846 Hz and 4 Hz sine waves, to saturate their spectrum, so the result sounds a bit like noise. Mainly, there are already so many sine waves coming from the [stream~] abstractions that I thought it was worth adding something at higher frequencies, and the reverb smooths these oversaturated sine waves out, making them sound noisy.
7. How a reverb similar to [freezeverb] works is described in the help browser; I just can't understand why the power-preserving mix works (the third sketch at the end of this post shows the arithmetic I mean). I also tried to make a single stereo reverb based on Miller Puckette's model, but a couple of experimental ones failed. This one is my best reverb ever.
8. Yes, and the [master] abstraction is just a place from which it is handy to control the volume or do spectrum analysis: somewhere to run all the wires to, and to listen to the output from.
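P.S. For anyone curious about item 1 and 2, here is the note-to-frequency step written out as a rough Python sketch (not part of the patch). I'm assuming the scale degrees are offsets from middle C (MIDI note 60) and the standard [mtof] formula; the patch itself may do it slightly differently.

# Sketch only: turning the C minor scale degrees from the array into Hz.
# Assumes the degrees are offsets from MIDI note 60 (middle C) and the
# standard A4 = 440 Hz / MIDI 69 tuning; the actual patch may differ.
SCALE = [0, 2, 3, 5, 7, 8, 10]   # C natural minor, as stored in the array
BASE_MIDI = 60                   # assumed root (middle C)

def mtof(m):
    """MIDI note number to frequency in Hz (what Pd's [mtof] computes)."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

for degree in SCALE:
    print(degree, round(mtof(BASE_MIDI + degree), 2), "Hz")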
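And here is the delay_time_counter.pd formula from item 4 and 5, worked through in Python just to show the numbers it produces. The value t1 = 300 ms is only a guess here (jamesmcn mentions 300 ms delays in his post); substitute whatever the abstraction is actually given.

# Sketch of delay_time_counter.pd's formula: t = t1 / 2**(n/numlines) - t1/2
# t1 (the largest early-reflection delay) is assumed to be 300 ms here.
T1 = 300.0       # ms, assumed
NUMLINES = 28    # total number of delay lines, as in the patch

def delay_time(n, t1=T1, numlines=NUMLINES):
    return t1 / 2.0 ** (n / numlines) - t1 / 2.0

times = [round(delay_time(n), 2) for n in range(NUMLINES)]
print(times)   # n = 0 gives t1/2, and the times shrink from there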
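Finally, on the "power-preserving" mix from item 7: as far as I understand G08.reverb.pd, the echo abstractions combine two signals as a sum and a difference, both scaled by 1/sqrt(2), so the total power stays the same however often the signals recirculate. A tiny Python check of that idea (my reading of it, not a dump of the actual abstraction):

# Sketch: a "power-preserving" (rotation) mix of two signals a and b.
# out1 and out2 are mixtures, but out1**2 + out2**2 equals a**2 + b**2,
# so energy is neither gained nor lost as it recirculates.
import math

def power_preserving_mix(a, b):
    g = 1.0 / math.sqrt(2.0)
    return g * (a + b), g * (a - b)

a, b = 0.8, -0.3
o1, o2 = power_preserving_mix(a, b)
print(a * a + b * b, o1 * o1 + o2 * o2)   # both are 0.73 (up to rounding)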
Meditation background generator
Great work, hamster!
Like lead, I've been playing this patch for over an hour and continue to be impressed. Along the way, I printed everything out and have been analyzing your meditation generator.
I'm not done yet, but the following may help others get started with their own analysis:
Starting with main.pd and working your way down:
1. main.pd can be broken down into three major sections. The top section consists of everything between [metro 5000] and [snapshot~] / [route ...]. The middle section is the row of ten identical structures that each start with a [stream~ ...] and end with a pair of [throw~ ...] objects. Finally, there is everything below those ten structures.
2. The top section produces LFO-modulated envelopes on the left and ten-note sets on the right. This data is passed down to the middle and bottom sections.
3. The middle section is a row of ten similar synthesizers. Each synth gets its initialization from the top section. The meat of the synthesis is in [stream~]. As far as I can tell, it is very carefully filtered noise. The output of [stream~] is sent to a tuned bandpass filter, and then panned slowly from left to right.
4. The bottom section has four significant subsections: The mixer/processor is at the top left and the pad generator is on the right. Below these two sections is a pair of custom reverb units. At the very bottom is the [master] output and recording section.
5. I don't fully understand what is going on in the mixer/processor, though it appears that the left and right channels each get bandpass filters (by stacking a pair of [hip] on top of a pair of [lop]; a rough sketch of that filter stack is at the end of this post) and are randomly assigned independent control envelopes.
6. The pad synth on the right sounds like white noise, but is built around a trio of [osc~] units. Ironically, the ten identical synths in the middle section are built around [noise~] but don't sound like noise!
7. The custom reverbs are defined in [freezeverb]. It seems to be built on a total of twenty-eight 300 ms delays. I don't understand how it works, but the layout is very nice to look at!
8. Finally, the output and record sections at the bottom seem fairly obvious.
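Regarding item 5, here is a rough sketch (in Python, not Pd) of why stacking a highpass on top of a lowpass acts like a wide bandpass. The one-pole filters below are only loose stand-ins for [hip] and [lop], and the 200 Hz / 2000 Hz corner frequencies are made up for illustration:

import numpy as np

SR = 44100.0

def lop(x, fc):
    """Very rough one-pole lowpass, in the spirit of Pd's [lop]."""
    a = 1.0 - np.exp(-2.0 * np.pi * fc / SR)
    y = np.empty_like(x)
    state = 0.0
    for i, s in enumerate(x):
        state += a * (s - state)
        y[i] = state
    return y

def hip(x, fc):
    """Rough one-pole highpass: the input minus its lowpassed copy."""
    return x - lop(x, fc)

# White noise in, a broad band (roughly 200 Hz to 2 kHz) out.
noise = np.random.randn(int(SR))
band = lop(hip(noise, 200.0), 2000.0)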
Problem streaming with pdp_theonice~
I'm attempting to use pdp_theonice~ to stream Theora video to an Icecast server. I'm having trouble getting this working: it seems to connect OK as long as there is audio and video being sent to pdp_theonice~, but I can't view the stream with VLC or an embedded HTML5 player (which I've been able to do when streaming video from other sources to this same Icecast server). oggcast~ works just fine.
Does anyone have a simple patch that they know works for streaming Theora video to Icecast with pdp_theonice~?
I've checked out the GISS examples, but these are way more complicated than I need, and hard to debug. They also crash when I try to connect (probably because no audio or video is making it to pdp_theonice~; my tests have shown that Pd crashes when you try to connect without audio and video).
I am on Ubuntu 10.04 with Pd-extended 0.42.5.
Thanks!
p.s. Alternatively, if there's another way of streaming Theora without pdp_theonice~ (using third-party software or otherwise), I'd be interested to know that as well.
p.p.s. I already posted this to the list, so I'm sorry if you're now reading this twice.
What's faster, you think? "Audio" or data?!
I think the only difference is the rate at which the messages are processed.
At control rate (data), I think the refresh rate is somewhere around 70 Hz, but at audio rate it's whatever you set your sample rate to (so well into the tens of thousands). Audio rate will therefore be more CPU-intensive, but also more accurate.
If you can't hear a difference, then control rate is going to save you those CPU cycles.
Array issue, please help
@Nk said:
So am I right in thinking the first 'filename' is left channel/stereo and the 2nd 'filename' is right channel?
No, but holy shit you seem to have stumbled on how to isolate a single channel of a multichannel file. So, thanks for that!
Well, you're sort of right, but they're not filenames; they're array names. What I was trying to say with my earlier example (and sorry for not being clear, I was rushing a bit) is that you would have two arrays, in this case named "rightarray" and "leftarray". These two arrays would hold the right and left channels of a stereo file "foo.wav", respectively.
Now, when [soundfiler] loads a multichannel file, it puts each channel in a different array depending on the number of array names you give it. If you only give it one, it loads the first channel of the file. If you give it two, it loads the first two, and so on. With stereo files, the first channel is the left one and the second is the right one. So if you only give it one array, it will only load the left channel. In order to load the right channel, you have to give it a second array name: for example, sending [soundfiler] the message "read foo.wav leftarray rightarray" puts the left channel in "leftarray" and the right channel in "rightarray". But it seems that if you make both array names the same, it will load the left channel into the array and then overwrite it with the right channel. Brilliant!
Perform decision making
Got an even better solution using route lists. [route list] will tear off the "list" prefix, and you can then feed the output into another [route].
For example, if the options were "list A blah blah" or "list B blah blah",
then feeding them into [route list] would output "A blah blah" or "B blah blah".
You could then feed that into [route A B], which will give you what you want. Remember that [route] strips away the matching symbol.
More nickels & dimes....
03.connection.pd: is the author of the tutorials around?
No, that's not a miracle:
The right inlet of the [+] object stores the incoming value until the operation gets started by an incoming event at the left inlet. If you connect the number box first to the right inlet and then to the left inlet, the following happens:
The number box shows 10. The second (right) inlet of the [+] object receives the value first and stores it, but this does not trigger an operation. Immediately afterwards the left inlet receives the value (10); this event triggers the operation (in this case an addition) and results in an output value of 20 (10 on the right plus 10 on the left).
If you first connect to the left inlet of [+] and then to the right inlet, the left inlet receives the data and immediately triggers the operation: it receives a 10 and triggers an addition with the last value stored at the right inlet.
If you dragged the number box up from 0 to 10, the last value stored at the right inlet must have been a 9, so the result is 19. If you dragged it down from a higher number, the last stored value must have been an 11, so the result is 21.
Edit: I'm sorry, I must have been too tired when I typed that yesterday... I've changed some things now! Hope you understand what I mean.
How to populate 1 array with 4 incoming number streams
Hi all,
This should be the easiest thing in the world, but I can't for the life of me figure it out.
I need to populate an array with input from four different number streams, where the order of appearance of numbers in the streams puts them into a queue to bang messages from 0, 1, 2, etc.
I presume this is pretty easy when you know how, but a brief explanation of the project might be in order.
The idea is to back-project onto a series of screens and give people IR LED "paintbrushes" so they can paint with procedural graphics and sound.
We're using the "touchlib" blob-tracking software (and webcams) to differentiate between the blobs. The software assigns each blob a numbered ID for the length of its lifetime, based on the order in which they come into existence: so the first blob in existence is "ID 0" (until it dies, when it takes its place in the queue), the second is "ID 1", and so on.
These IDs allow us to assign specific graphics to different blobs in Processing, and also to give each one an individual piece of audio.
It's easy with just one machine sending these messages, as each ID corresponds nicely to the order of tracks to be triggered in the sequencer, but we're using four separate modular machines, each running touchlib, and we want the sound to be global.
We have networked the machines, and each of the four graphics modules can talk to the machine running the sound. The sound module is running Pd, which receives messages from the other machines and then sends MIDI messages to the sequencer. So Pd is getting four streams of numbers (say from zero to three) which correspond to the order in which touchlib blob IDs pop into existence, each stream local to its own machine. These numbers trigger a fade in/out of a mixer track (in, say, Reason). Ideally the first person who enters the space will trigger some pad sounds (fader 1 in Reason, say) regardless of which screen they paint on.
That way it will work even if there is only one person in the space. The next person would trigger some percussion, and the full track would build naturally. The alternative is to have every ID locked to a sound, meaning it would really only work for four people in the space.
So, to the question. There are four data streams coming into Pd, literally the numbers 0 to 3 in each number box, as you can see in the "four_machine_dilemma" patch attached.
What I need to do is fix it so that if (and only if) computer A has sent a message triggering track 1, then when computer B (or whichever stream is next) sends its own "ID number 1", that message is converted to ID number 2; that is, it occupies the next position in the global array and triggers track 2 (because track 1 is occupied) even though it thinks it is "ID number 1", and so on down the chain.
Is there some way to store a boolean for each track's on state and use it to reassign a value to the next incoming message?
Or just to fill positions in an array with the incoming messages in the order they are received? It seems like it should be straightforward, but I'll be buggered blind if I can figure it out.
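To spell out the logic I'm after, here it is as a little Python sketch (the names and the fixed four tracks are just placeholders; I know this isn't a Pd answer, it's only to make the behaviour concrete):

# Sketch of the "global queue" logic described above (not a Pd patch):
# each (machine, local blob ID) pair gets the lowest free global track
# in the order it first appears, and frees that track when the blob dies.
NUM_TRACKS = 4

free_tracks = list(range(NUM_TRACKS))    # global tracks still available
assignments = {}                         # (machine, local_id) -> track

def blob_appeared(machine, local_id):
    """Called when a machine reports a new blob ID; returns its track."""
    key = (machine, local_id)
    if key not in assignments and free_tracks:
        assignments[key] = free_tracks.pop(0)
    return assignments.get(key)

def blob_died(machine, local_id):
    """Called when the blob disappears; its track goes back in the queue."""
    track = assignments.pop((machine, local_id), None)
    if track is not None:
        free_tracks.append(track)
        free_tracks.sort()

# Example: machine B's "ID 1" arrives after machine A's "ID 1",
# so it lands on the next free track instead of colliding.
print(blob_appeared("A", 1))   # -> 0
print(blob_appeared("B", 1))   # -> 1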
Hope this is not too long-winded for a simple question.
Thanks in advance,
wadeorz
Automatically saving an array as a wave file.
Hey,
I don't know if I'm understanding you correctly, but it seems like you think that the [savepanel] object has something to do with the actual saving of the soundfile.
It doesn't; it just generates a filename. It's the 'write' message being sent to [soundfiler] that causes the object to write to disk.
My suggestion is this:
[counter] (assuming this outputs a float when you bang the top)
|
[makesymbol sound%s.wav]
|
|write $1(
|
[soundfiler]
[makesymbol] will take whatever float you send it and output a symbol soundXXX.wav (the %s in the argument is replaced by whatever float you input to the object). The "write $1" message then prepends "write" to the filename and sends the instruction to [soundfiler] to write the wav file.
Theo
Appending samples to the end of a wave file?
Hey all-
Is there a way to append samples to the end of a wave file in Pd? I'm working on some voice activity detection stuff, and I basically want to read in a multi-channel wave file, determine whether each block of 80 samples is speech or non-speech, and add another channel that contains the speech/non-speech information.
I would like to write all of the original channels, plus the new channel, to a new wave file. My concern is that the channels will be too large to simply load them all into arrays, and then write out all the arrays (maybe I'm wrong and maybe I should just do that). I would like to basically...
read in 80 samples
determine speech
write out 80 samples (on all 5 channels)
repeat until the end of the file
Is there any way to do this? Or should I just try to load them all into arrays and write the file at the end? (The files are about 20 minutes each, 4 channels, 8 kHz.)
I've looked at [soundfiler] (it will let you skip parts of the array, but not parts of the file) and [writesf~] (it seems to only write in real time, and stopping must be followed by an 'open' before more writing, which overwrites the file)...
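For what it's worth, outside of Pd the loop I have in mind would look something like this in Python, using the standard wave module (the file names and the is_speech() test are placeholders, and I'm assuming 16-bit PCM input):

# A rough sketch of the read-80-samples / label / write loop described
# above. Not a Pd solution; 2-byte (16-bit) samples are assumed.
import wave
import struct

HOP = 80  # samples per analysis block

def is_speech(samples):
    # Placeholder detector: a crude energy threshold (assumption).
    return sum(s * s for s in samples) / max(len(samples), 1) > 1.0e6

inp = wave.open("input_4ch.wav", "rb")
out = wave.open("output_5ch.wav", "wb")
nch = inp.getnchannels()
out.setnchannels(nch + 1)              # original channels + 1 label channel
out.setsampwidth(inp.getsampwidth())   # assumed to be 2 bytes per sample
out.setframerate(inp.getframerate())

while True:
    raw = inp.readframes(HOP)
    if not raw:
        break
    frames = len(raw) // (2 * nch)
    samples = struct.unpack("<%dh" % (frames * nch), raw)
    label = 32767 if is_speech(samples) else 0
    block = b"".join(
        struct.pack("<%dh" % (nch + 1),
                    *(samples[i * nch:(i + 1) * nch] + (label,)))
        for i in range(frames))
    out.writeframes(block)

inp.close()
out.close()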
Any help would be appreciated!
All the best,
-Zach