Timebase for audio looping
thank you for the help.
maybe ClaudiusMaximus could explain the theory of how this part works:
[mod 1] [mod 2] [mod 4] [mod 8]
[sel 0] [sel 0] [sel 0] [sel 0]
"0, 1 500" "0, 1 1000" "0, 1 2000" "0, 1 4000"
[vline~] [vline~] [vline~] [vline~]
[*~ len1] [*~ len2] [*~ len4] [*~ len8]
Timebase for audio looping
something like this is what i'd use (at 120bpm; it would take a little more work to make it work for any bpm):
[metro 500]
|
[f]X[+ 1] // count beats
|
[t f f f f ]
| \ \ \
[mod 1] [mod 2] [mod 4] [mod 8]
[sel 0] [sel 0] [sel 0] [sel 0]
"0, 1 500" "0, 1 1000" "0, 1 2000" "0, 1 4000"
[vline~] [vline~] [vline~] [vline~]
[*~ len1] [*~ len2] [*~ len4] [*~ len8]
[tabread4~ smp1] [tabread4~ smp2] [tabread4~ smp4] [tabread4~ smp8]
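for the any-bpm version, the only real change is deriving the metro period from the tempo: one beat is 60000/bpm ms, so 120bpm gives 500ms, and the 500 / 1000 / 2000 / 4000 ramp times above are just 1, 2, 4 and 8 beats. a minimal sketch of that bit (my addition, assuming the tempo arrives on a [r bpm]):
[r bpm]
|
[expr 60000/$f1]   <- ms per beat, e.g. 60000/120 = 500
|
(into [metro]'s right inlet, and multiplied by 1, 2, 4 and 8 for the vline~ ramp times)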
How to read a soundfile at different speeds or backwards
first approach, phasor~:
advantages: "it can easily speed up/down, reverse, slice all by just altering the frequency/phase"
disadvantage: the phase will only be reset at the start of the next block (64 samples by default). so, for slicing, phasor~ will cut off between 1 and 64 samples from the start of the next slice. for percussion sounds with a sharp attack, this can really dull the sound.
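a minimal sketch of the phasor~ version (my example values, assuming a one-second sample sits in a table called smp at 44100Hz):
[phasor~ 1]     <- 1Hz = one pass per second, 2 doubles the speed, -1 plays it backwards
|
[*~ 44100]      <- scale the 0..1 phase to a sample index
|
[tabread4~ smp]
|
[dac~]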
second approach: traditional vline~:
advantage: instant triggering allows for precise control of the audio stream.
disadvantage: you cannot change the rate of a vline~ once it is set.
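a minimal sketch of that version (my example: play a one-second slice of a table called smp at normal speed):
[bang(
|
[0, 44100 1000(   <- jump to index 0, then ramp to 44100 over 1000ms
|
[vline~]
|
[tabread4~ smp]
|
[dac~]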
my favourite approach: counter-driven vline~
make a very fast metro ([metro 1] is even ok), then connect that to a [f p]x[+ n] counter.
this counter is in turn connected to vline~ driven by [pack p p+n]---[$1, $2 1(, where n is the speed and p is the position.
to reset the sound, or jump to a different slice, just send the new value to the right inlet of the [f p], and then bang the metro to start straight away. by changing the value of n, speed / direction changes will take place within 1 ms, or instantly if a new slice is triggered.
probably didn't explain that too well, patch attached.
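a rough ascii sketch of the counter-driven idea as i read it (not the attached patch; n is the speed in samples per ms, p the current position, smp the table):
[metro 1]
|
[f ]X[+ n]     <- p, counting up by n every ms
|
[t f f]
|     \
|      [+ n]   <- p+n (the right outlet of [t f f] fires first)
|      |
[pack f f]
|
[$1, $2 1(     <- jump to p, then ramp to p+n over 1ms
|
[vline~]
|
[tabread4~ smp]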
[vline~] object
hi, advanced users!
I couldn't get much help from the vline~ help file.
What I want to do is make long, dynamic amplitude control over a repeating osc~ tone,
so that the cycle of the osc~ output becomes a "musical" period.
But each time I try to send a list to vline~,
it just doesn't work.
Can I make, at least, some kind of continuous vibrato effect with vline~?
or is there a better object for that?
any examples?
As you can tell, I'm a complete newbie in both the DSP and Pd worlds.
By the way, what do vline~'s "time interval" and "initial delay" mean??
I'm sorry if I'm being stupid, but it's very hard to figure out alone....
please help me!!
here's my poor patch trying to make musical amplitude control....
#N canvas 748 414 454 304 10;
#X obj 64 175 osc~ 440;
#X obj 75 240 dac~;
#X obj 297 242 ezdac~;
#X obj 130 141 vline~;
#X obj 67 209 *~;
#X obj 140 45 metro 2000;
#X obj 142 11 tgl 15 0 empty empty empty 0 -6 0 10 -262144 -1 -1 0
1;
#X msg 137 74 1 1200 \, 0 230 800;
#X connect 0 0 4 0;
#X connect 3 0 4 1;
#X connect 4 0 1 0;
#X connect 4 0 1 1;
#X connect 5 0 7 0;
#X connect 6 0 5 0;
#X connect 7 0 3 0;
thank you very very much for any help!!!!
Fuck i love pd
hi brett, that track is still up on my site. for some reason the link comes out as a .pd file not an .mp3
http://www.m-pi.com/this-is-serious-mum.mp3
just cut and paste that and it will work.
also heaps of stuff here: http://www.m-pi.com/remixes
>It's weird that many ppl seem to be using pd but that the output~ page in the forum still has threads in it from 2004 in the top page!! <
it took me a few months of solid patching (a few hours every day) to get a workable setup for actually making tracks. it's certainly no small undertaking.
>I'm pretty new to pd and just working my way through tutorials at the moment, but do you have any tips with regard to actually going about customising your own setup?<
you are on the right track going through the tutorials. the way i did it was first to build stuff to cut up and effect samples, and then secondly make a system to control those processes live. mine was all based on the [key] command, and i just triggered everything from my laptop's qwerty keyboard. this was nice when i was travelling as it meant i didn't need to cart any gear around. also good for playing live cos i could pick my computer up and jam on the dancefloor. there are a few options though, especially triggering stuff with sensors and such. but i'm sticking with the bare bones keyboard approach cos it works for me well enough.
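a minimal sketch of the [key] approach (my example key codes; [key] reports a number for every key pressed, which for plain letters is normally the ascii code):
[key]
|
[sel 97 115 100]   <- 'a', 's', 'd'
|    |    |
(bang whatever each key should trigger: start a loop, switch a slice, kick an effect...)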
> like whether to keep lots of separate instruments or try to keep everything under one roof...<
i try to keep my stuff in one patch as much as possible. a couple of reasons for that, but the main one for me was that i kept modifying abstractions and then other patches that relied on those abstractions would stop working. generally much easier just to have one or two or a few patches to do everything you need. even if you incorporate everything you make into one patch it doesn't get too big. usually well under 1 meg.
>I think I will tend to mainly use samplers and control structures for controlling my external Midi gear, but in a live setup, not sure how to integrate it into Logic Pro?<
my thinking on this is that if you have a guitar it has 4 or 5 strings, and you manipulate those strings in a variety of ways to make most of the sounds you need. if you listen to my audio..all of that is just 2 or at most 3 channels! so i always have only 2 or 3 samples playing at once. my stuff from back then was a bit light..not really hard hitting on a dancefloor (which is what i'm interested in) ..but i think you do want to keep everything as minimal as possible. as far as live performance goes, i wouldn't go anywhere near something like logic audio.
if you have midi gear, then def work on triggering that with pd. i'm working on synthesis within pd now, rather than the sample based stuff...but it's a constant battle to keep cpu usage to a minimum. triggering external devices will be no problem for pd and will leave you heaps of cpu for doing sample mashing.
can't stress enough though. KEEP IT AS SIMPLE AS POSSIBLE. for live music, traditional musicians only play one instrument at once. if you want to make whole songs live, then you are going to have to do the beats and bass and interesting stuff all at one time, so you want to keep it as simple as possible so that you can inject a lot of liveness into it. generally, the more channels of audio you have going at once, the less room there is for jamming out in an impromptu fashion....unless you have magic fingers.
>Look forward to hearing your stuff if possible.<
cool, thanks. quick background on my stuff..."this_is_serious_mum" is a live jam recorded in one take. just 2 channels of audio driving all the sounds from small sample loops being cut up in realtime by me pressing keys on the keyboard. it's a super simple setup, but i think the reason why it works ok is that i spent more time actually playing and practicing than i spent on coding the bastard. i toured across europe and japan and australia playing this stuff and it was generally well received. at really good gigs it was the biggest rush ever.
so yeah. good luck. grab the bull by the horns and just go for it.
Cheers,
matt
[xgroove~] - vline or phasor based
roman just posted this on the mailing list:
theoretically, this approach should also work well with changing the
pitch while playing, if i am not totally mistaken. i haven't tried to
implement it myself yet, but since you send a message to [vline~] you
also know at any time where [vline~] actually is. the idea is to
measure the time between the initial message to [vline~] and the moment
where you want to change the pitch. with that time value and the values
from the initial message you can calculate [vline~]'s actual position.
taking into account [vline~]'s actual position and the new pitch,
you can generate a new message for [vline~]. like that, it should be
possible to change the pitch at any time with (sub-?)sample accuracy and
without jumps in the playback.
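a worked sketch of the calculation (my formulation, not roman's patch; positions in samples, times in ms, elapsed time measured with [timer]):
at time 0 you send:  p0, p1 T(                        <- play from p0 to p1 over T ms
after t ms:          pos = p0 + (p1 - p0) * t / T     <- where [vline~] is right now
to switch to a new speed n (samples per ms):
   send  pos, pos + n*T2 T2(                          <- carry on from pos at the new rate for another T2 ms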
Pd+icube
yeah it would definitely work with pd.
and i think there are a few pd users who might have an icube system.
the main problem is that the icube stuff is really overpriced.
i think most of the sensor - pd users are on the mailing list, so you can direct your questions there, but don't be surprised when they tell you to steer clear of the icube stuff. really it is a ripoff. the sensors they sell you, you can buy very cheaply at electronics shops. all you have to do is learn how to solder some wires to them and stuff.
i hear lots about arduino, so maybe that's a good way to go. http://www.arduino.cc/
Modifying sounds (how do people work)
look in the help patches, i think it's the 3rd folder that has a few pitch shift / flanger / filter type things.
i've never used the loop object, so i can't say what it's gonna be like....but if you use something like tabread4~ and learn how it works by feeding it a line input, then you can modify that line input to make the sound slow down, speed up, reverse and so on in realtime.
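for example, reverse playback is just the same line run backwards (a minimal sketch, my example; assumes a 44100-sample table called smp):
[44100, 0 1000(   <- start the index at the end and ramp it back to 0 over 1000ms
|
[vline~]
|
[tabread4~ smp]
|
[dac~]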
also, if you can get your head round that, and you've got a fast enough processor...there's heaps of awesome stuff you can do by modifying the help patches for the fft~ stuff.
How hard is pd to learn? and is it worth it?
>the possibilities of pd seem huge and exciting.
this is true..in answer to your question "is it worth it?" ...well of course
>i spent a couple of hours messing about with pd the other night, just with the examples in the help files, seeing what it can do to sound and it is very exciting.
ok...good start ali. the way i learnt was to make a few projects for myself, for audio devices which i wanted to make. ...then just went through the help files and whatnot to find bits which i needed to make these devices.
my first project was a sequencer that would sync up 8 different soundfiles and play bits of each sample in order...ie 1 beat from the 1st sample, followed by 1 beat from the 2nd sample, then a beat from the 3rd...etc
it took about 1 month to learn enough pd to get that far. maybe 100 hours.
as far as the coolest things i have done...pd lets me actually play live now...live improvised jams based on sample loops which i can manipulate in realtime. i use inputs from my computer's keyboard to trigger effects and change sequences and pitch and whatever. even the guys using ableton "live" or whatever aren't really doing much live improvisation like that, so that's pretty exciting i guess. there are some examples of my tracks in older threads on this board...might not be your cup of tea exactly, but they're all recorded directly out of pd, with no further editing or sequencing or anything.
:::
about the maths: it depends on what you want to do really....with the stuff i do, sample sequencing and resequencing and stuff, the maths is pretty simple...dividing a loop into 32 parts, multiplying sample frequencies by time...etc. it's pretty basic primary school maths for that, which is the way i like it, cos i'd rather be making rock n roll than sitting down and doing my maths homework.
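to make that concrete (my numbers, not matt's): a 2-bar loop in 4/4 at 120bpm is 8 beats x 500ms = 4000ms; cut into 32 parts that's 125ms per slice, and at a 44100Hz sample rate each slice is 0.125 x 44100 = 5512.5 samples.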
however, i still haven't got far with doing actual synthesis from sine waves and stuff, cos that does involve some higher maths. i'll leave that stuff to the people who are interested in it i reckon. there's also lots of amazing stuff with fourier analysis that i'd love to understand, but it's a bit hard for me.
so basically, there are lots of things you can do without too much math, but i'm sure that the people who understand it are gonna have an advantage.
anyway nice to meet you...any more questions, this is your place. good to see a few more people here lately too, it was a bit quiet for a while.