How to make an incoming signal play within a selected key and scale?
Hi all, I'm kind of new to Pd and this is my first post here, so bear with me.
I'm trying to make a synth-like patch that takes a signal in (from a microphone) and controls an oscillator ([osc~]) so that the notes coming out are always within a certain (selectable) scale and key. I've already made a small patch that processes the signal from the microphone, so my incoming signal is auto-calibrated to lie between 0 (no signal/activity) and 1 (max signal/activity).
I now want to split my interval into, say, 8 steps (I select the major/Ionian scale and, for instance, the key of C), so an incoming signal of 0 plays C3. If the signal becomes larger, a D3 will be heard, and so on. At max signal I will hear C4. If, for example, an F3 is playing and the signal gets weaker, it should play E3. (I hope you get the idea.)
I know how to transpose by just adding to a MIDI note number, so I reckon it would be easier to operate with MIDI note numbers rather than frequencies...
I'm unsure how to implement this. Maybe I should define all the scales I want to use in tables and pick notes from them by moving back and forth within a table (if that's possible), or maybe there's a smarter solution.
I hope some of you have ideas. I know my explanation was not the best, but I want the end result to be a patch that lets me play it like an instrument with the input from a microphone, always in tune with the selected key/scale.
[adc~]
|
[module with noise filtering and auto-calibration (already made this)]
|
[magic module I need help with ]
| \
[osc~] or: [mtof]
| |
[dac~] [osc~]
|
[dac~]
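A possible sketch of the "magic module" above, in Python rather than Pd. The table-of-scale-degrees idea is the original poster's; the function names, the scale list, and the C3 = MIDI 48 convention are my assumptions:

```python
# Map a calibrated 0..1 control value onto a scale table, then to a
# MIDI note number and a frequency (like Pd's [mtof]).
MAJOR = [0, 2, 4, 5, 7, 9, 11, 12]  # major/Ionian scale as semitone offsets

def signal_to_midi(value, root=48, scale=MAJOR):
    """value in 0..1 -> one of len(scale) notes; root=48 is C3."""
    idx = min(int(value * len(scale)), len(scale) - 1)
    return root + scale[idx]

def mtof(note):
    """MIDI note number to frequency."""
    return 440.0 * 2 ** ((note - 69) / 12)
```

Swapping in a different list (say, minor or pentatonic) changes the scale without touching the mapping logic, which is the appeal of the table approach.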
Any ideas or reflections will be greatly appreciated,
best regards,
ZnakeByte
My live patch
this is old... BUT
when I try to open it I get these errors:
spigot~
... couldn't create
(the above two lines repeat five more times)
wahwah~: an audio wahwah, version 0.1 (ydegoyon@free.fr)
spigot~
... couldn't create
(the above two lines repeat three more times)
expr, expr~, fexpr~ version 0.4 under GNU General Public License
[makesymbol] part of zexy-2.2.1 (compiled: Jul 21 2008)
Copyright (l) 1999-2007 IOhannes m zmölnig, forum::für::umläute & IEM
[msgfile] part of zexy-2.2.1 (compiled: Jul 21 2008)
Copyright (l) 1999-2007 IOhannes m zmölnig, forum::für::umläute & IEM
[list2symbol] part of zexy-2.2.1 (compiled: Jul 21 2008)
Copyright (l) 1999-2007 IOhannes m zmölnig, forum::für::umläute & IEM
[folder_list] $Revision: 1.12 $
written by Hans-Christoph Steiner <hans@at.or.at>
compiled on Jul 21 2008 at 06:08:28
setting pattern to default: C:/Users/Cody/Desktop/ma4u/*
error: signal outlet connect to nonsignal inlet (ignored)
... you might be able to track this down from the Find menu.
(the same error repeated 29 more times)
expr divide by zero detected
What's up?
Spectral synthesis
I'm trying to play with the help patches. For instance, with morphine the errors are these:
morphine~
... couldn't create
error: inlet: expected '' but got 'transition'
... you might be able to track this down from the Find menu.
morphine~
... couldn't create
error: inlet: expected '' but got 'transition'
... you might be able to track this down from the Find menu.
error: signal outlet connect to nonsignal inlet (ignored)
(the same errors repeat several more times, with their lines interleaved in the console)
and then
error: inlet: expected '' but got 'float'
Thanks for the help
Zero pad an array
It's n+m-1 because when I convolve a signal (length m) with a filter (length n, e.g. a long impulse response of a room), the result will be length m+n-1.
And if it's not zero-padded, the excess samples will wrap around.
I got this idea from http://www.dspguide.com/CH9.PDF, where it is stated:
"Now consider the more general case in Fig. 9-9. The input signal, (a), is 256
points long, while the impulse response, (b), contains 51 nonzero points. This
makes the convolution of the two signals 306 samples long, as shown in (c).
The problem is, if we use frequency domain multiplication to perform the
convolution, there are only 256 samples allowed in the output signal. In other
words, 256 point DFTs are used to move (a) and (b) into the frequency domain. After the multiplication, a 256 point Inverse DFT is used to find the
output signal. How do you squeeze 306 values of the correct signal into the
256 points provided by the frequency domain algorithm? The answer is, you
can't! The 256 points end up being a distorted version of the correct signal.
This process is called circular convolution. It is important because you want
to avoid it."
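The quoted example can be checked numerically. This sketch (my own, using NumPy) reproduces the 256/51-point case and shows that zero-padding both signals to at least 306 points before the FFT avoids the wrap-around, while 256-point FFTs give the distorted circular convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)   # input signal, m = 256 points
h = rng.standard_normal(51)    # impulse response, n = 51 points

linear = np.convolve(x, h)     # true result, length m + n - 1 = 306
N = len(x) + len(h) - 1        # zero-pad target: 306
# rfft's second argument zero-pads before transforming
padded = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(h, N), N)
wrapped = np.fft.irfft(np.fft.rfft(x, 256) * np.fft.rfft(h, 256), 256)
```

`padded` matches the linear convolution exactly; `wrapped` has the excess 50 samples folded back onto the start, which is the circular convolution the book warns about.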
Block-length comb filter introduced when trying to reroute audio
I have a system set up where any number of sound sources can be routed into a few different buses, i.e. throw~ 0-bus, throw~ 1-bus, throw~ 2-bus, throw~ 3-bus.
I also have an effects chain that can pull audio from any of the buses (except for the master bus, which could cause serious feedback loop problems) and then output to any bus.
Because send~ is one-to-many and throw~ is many-to-one, and I want many-to-many, I have a subpatch that does this for each bus N:
[catch~ N-bus]
|
[send~ N-bus-s]
So, that way, I can have anything write to any bus as well as read from any bus.
The 3-bus acts as the master bus - anything thrown to it will be output via a subpatch that contains a [r~ 3-bus-s] connected to a [dac~].
There's a problem: if I have sound source S connected to the master bus, and the effects chain receiving a signal from bus B, and I then have source S throw to bus B while remaining connected to the master bus as well, I get a comb filter: the signal going from S through an empty effects chain to the master bus is added 64 samples later to the signal going straight from S to the master bus.
I want a solution that retains all the flexibility in routing of signals. Am I asking the impossible? I think the [throw~ 3-bus] from the end of the effects chain is the problem, because it then has to go through the [catch~ 3-bus]-[send~ 3-bus-s] before it is finally picked up by the [r~ 3-bus-s]-[dac~].
I guess I could just try to add an artificial latency to the master bus... I thought I tried that, though, and it didn't work.
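For reference, here is why one extra 64-sample block of delay sounds like a comb filter rather than an innocent latency. Mixing a signal with a copy of itself delayed by d samples has magnitude response |1 + e^(-2πifd/fs)|, which has notches at odd multiples of fs/(2d). A small numeric sketch (mine, not from the thread):

```python
import numpy as np

fs, d = 44100.0, 64                 # sample rate, Pd's default block size
first_notch = fs / (2 * d)          # 344.53125 Hz, then every ~689 Hz after

def comb_gain(f):
    """|1 + exp(-2j*pi*f*d/fs)|: gain of signal + 64-sample-delayed copy."""
    return abs(1 + np.exp(-2j * np.pi * f * d / fs))
```

With notches starting around 345 Hz and repeating across the spectrum, the coloration is very audible, which matches the description above.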
Thanks a million!
Distortion/compression thing I'm trying to do.
Hi. I'm still kind of a novice at Pd, so what I'm trying to do might not be a good idea. It doesn't seem to be working anyhow.
Anyway, I was trying to make a distortion-type effect where you select a window in the signal, say .5 to .75 of the absolute value of the signal (or whatever you like), such that the signal below that window (0 to .5) isn't reduced in amplitude, the signal above that window (.75 to 1) is reduced by a factor of your choice (call it a, a value between 0 and 1), and the signal within the window ramps steadily from no reduction to reduction by a factor of a, so it's smooth.
What I've been doing is multiplying the whole signal by a clipped version of its absolute value, scaled and offset so that instead of going from .5 to .75 it goes from 1 to a. So anything below .5 is multiplied by 1, anything above .75 is multiplied by a, and everything in between is multiplied by something between 1 and a. I thought this would produce a smooth sort of distortion, depending on how much the amplitude is reduced and the width of the window, but no matter what I do I get a harsher distortion, and there are little rectangular cuts in the peaks that shouldn't be there, unless my logic about all of this is wrong.
This is roughly what's going on.
Signal
|
abs~
|
clip~ .5 .75
|
-~ .5 ( 0 to .25)
|
/~ .25 (0 to 1)
|
*~ -(1 - a) (given a reduction factor of .6 this would be 0 to -.4)
|
+~ 1
And that would result in a range of 1 to .6, which I multiply by the original signal. I'm just kind of messing around with Pd trying to make stuff, so maybe this is just a bad way to do this. Is it just too many operations on the clipped signal to multiply it at the right time?
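The chain above can be written out as a sketch (the function name and defaults are mine, the math is the poster's). Note that because the gain follows the instantaneous sample value, this is per-sample waveshaping, which does add harmonics; that is consistent with the result sounding harsher than expected:

```python
import numpy as np

def soft_clip(x, lo=0.5, hi=0.75, a=0.6):
    """Gain is 1 below lo, a above hi, and ramps linearly across [lo, hi]."""
    g = np.clip(np.abs(x), lo, hi)   # [abs~] then [clip~ .5 .75]
    g = (g - lo) / (hi - lo)         # [-~ .5] then [/~ .25]   -> 0..1
    g = 1.0 + g * (a - 1.0)          # [*~ -(1-a)] then [+~ 1] -> 1..a
    return x * g
```

For example, a sample at 0.4 passes unchanged, a sample at 1.0 comes out at 0.6, and a sample at 0.625 (the middle of the window) gets gain 0.8.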
I can put a patch on this later if you need.
Best EQ to save CPU load
Just a question to clarify:
You have 120 input signals.
How many output signals do you need?
What kind of EQ do you need?
Just low/mid/high levels with fixed frequencies? (simple EQ)
Or parametric EQ with everything changeable for each input signal?
If you just need simple EQ, you can get away with this I think, as hardoff suggested:
[inlet~]
|
+----+-----+
| | |
[*~] [*~] [*~] these adjust low/mid/high levels for this input signal
| | |
above part for each input signal
below part for each output signal
| | |
[lop~] [bp~] [hip~] do the EQ for this output signal
| | |
+-----+-------+
|
[outlet~]
If you need parametric EQ for each channel, there's no getting away from needing 120x EQs, as far as I can tell...
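The reason the per-input gains can be shared into one filter bank is that filters are linear: scaling each of the 120 inputs (cheap) and summing them into a bus, then filtering the bus once, gives the same result as filtering every input separately. A toy FIR stands in for [lop~] in this sketch of mine:

```python
import numpy as np

fir = np.array([0.25, 0.5, 0.25])            # stand-in lowpass kernel
rng = np.random.default_rng(0)
chans = rng.standard_normal((120, 64))       # 120 input signals
gains = rng.uniform(0, 1, size=120)          # per-channel "low" level

bus = (gains[:, None] * chans).sum(axis=0)   # mix first...
one_filter = np.convolve(bus, fir)           # ...then filter ONCE

# versus filtering all 120 channels individually
per_chan = sum(np.convolve(g * c, fir) for g, c in zip(gains, chans))
```

Both paths produce identical output, so the shared-filter layout saves 119 filters per band.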
Best EQ to save CPU load
[*~ ] just multiplies the signal by a number or fraction. so, if you multiply by 0.5, the volume of that signal will be half.
then, a basic eq would have stuff like this:
a) lowpass
[lop~ 500]
b) midpass
[hip~ 500]
|
[lop~ 2000]
c) highpass
[hip~ 2000]
for ease of drawing here, let's use a send/receive pair to show what happens to the signal in a simple one channel eq:
[inlet~]
|
[s~ my-signal]
[r~ my-signal]
|
| [0 ....-> 1]
| |
[*~ ]
|
[lop~ 500]
|
[throw~ eqd-signal]
there's the lowpass section. we just multiply the signal by a fraction between 0 and 1 to decide how much of the signal goes through the lowpass section.
here's the mid eq:
[r~ my-signal]
|
| [0 ....-> 1]
| |
[*~ ]
|
[hip~ 500]
|
[lop~ 2000]
|
[throw~ eqd-signal]
again, we just multiply the signal's amplitude to decide how much will go through the mid-eq section.
and finally the high end:
[r~ my-signal]
|
| [0 ....-> 1]
| |
[*~ ]
|
[hip~ 2000]
|
[throw~ eqd-signal]
then...we just catch~ all the signals at the end, and output to our speakers, or headphones or whatever:
[catch~ eqd-signal]
|
[dac~]
so, if you were to duplicate that into 120 subpatches, every subpatch would contain 3 amplitude multiplications [*~ ] and then 3 sets of filters.
however, because the 3 filter sections will be the same in every subpatch, you can just multiply the amplitudes (very cheap on CPU) and then output all 120 signals to one group of filters before going to the output.
if you look at the EQ I posted as a patch, the amplitude multiplication occurs AFTER the filters. so it would be like:
[inlet~ ]
|
[lop~]
|
[*~ ]
|
[outlet~]
but, you can switch the order and have the same result:
[inlet~ ]
|
[*~ ]
|
[lop~ ]
|
[outlet~]
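The order can be switched because [lop~] is linear: scaling before or after the filter gives the same output. A quick check (my sketch; the one-pole coefficient formula is a common approximation, not taken from the thread):

```python
import numpy as np

def lop(x, fc=500.0, fs=44100.0):
    """One-pole lowpass, roughly like Pd's [lop~ 500]."""
    k = 1.0 - np.exp(-2 * np.pi * fc / fs)  # smoothing coefficient
    y, acc = np.empty_like(x), 0.0
    for i, v in enumerate(x):
        acc += k * (v - acc)                # move toward the input
        y[i] = acc
    return y

x = np.random.default_rng(0).standard_normal(256)
# gain-then-filter equals filter-then-gain for a linear filter
same = np.allclose(lop(0.5 * x), 0.5 * lop(x))
```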
Changing phasor freq at phase wraparound
Yeah, it's possible to do exactly as you require for slow-moving signals, by sending a [0( to the right [phasor~] inlet and forcing its phase to 0 on a transition. The problem is that this operation is block-accurate, not sample-accurate, so for fast signals comparable with the audio block size some dirty clicking will happen. [vline~], on the other hand, can construct nice accurate time-domain lines that aren't block-quantised.
The lowpass used with [sig~] is a little trick to slew the control signal and make it more like an analog synthesiser. You could also use [line~] here with a 5-20ms follow time.
A signal that moves fast contains higher frequencies than a signal that moves slowly.
Let's say we have a signal that moves from 0 to 3 in 4 samples.
A perfectly linear function like a wire outputs the same number that
goes in, so for an input that goes {0, 1, 2, 3 } the output
would also go {0, 1, 2, 3}. Easy.
Now if we replace the wire with a function that makes the output be the average of the current value and the last two values, what happens? A low pass filter works by averaging together previous inputs, so it has to have some kind of memory. If the signal was at 0 to begin with then let's assume previous memory locations will contain 0.
out[0] = (0 + 0 + 0 )/3 = 0 same as before
out[1] = (1 + 0 + 0 )/3 = 0.33 now our output moves more slowly
out[2] = (2 + 1 + 0 )/3 = 1 trying to catch up
out[3] = (3 + 2 + 1 )/3 = 2 the averaging acts like an 'inertia'
out[4] = (3 + 3 + 2 )/3 = 2.66
out[5] = (3 + 3 + 3 )/3 = 3 finally we get there
It took us 6 steps to get to where we would have got in 4 steps.
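The worked steps above can be run as code (a direct transcription of the 3-sample average, with the memory starting at zero):

```python
def moving_average3(xs):
    """Each output is the average of the current input and the previous two."""
    padded = [0.0, 0.0] + list(xs)  # zeros stand in for the empty memory
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(xs))]

# the ramp {0, 1, 2, 3}, then the input held at 3
out = moving_average3([0, 1, 2, 3, 3, 3])
# out -> [0.0, 0.333..., 1.0, 2.0, 2.666..., 3.0], matching the table
```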
This way of thinking about filters makes intuitive sense, but to really understand you can read about how filters work below; you'll need to sit down and scratch your chin over the maths. Miller's explanation is in Argand/pole form, which I find more difficult; Smith's examples include more block diagrams and C code, which are more instructive to programmers imo.
http://crca.ucsd.edu/~msp/techniques/latest/book-html/node140.html
http://ccrma.stanford.edu/~jos/filters/Simplest_Lowpass_Filter_I.html
I wonder....
Hello shankar,
A piece of knowledge that might be necessary for your further experiments -
Pd has two kinds of signals. It is just like Csound or Max in this respect. There are audio signals which shift blocks of sound around the program and there are control signals which are messages giving instructions (usually to things which operate on audio signals). These two rarely mix, they are actually computed at different rates. But they work together in a patch.
In your question you ask why the [delay] object will not delay the right-hand signal. The [delay] is a unit that operates on control messages, and you are trying to operate on an audio signal. If you look carefully in the console you will see a warning saying "audio signal connected to control inlet" or something. The object you need (actually two) is the [delwrite~] and [delread~] pair.
So...
[adc~]
| \
| [delwrite~ name 500]
|
| [delread~ name 500]
\ /
[dac~]
Notice that they aren't connected, they are bound by the "name" which
means many reads and writes can happen on the same delay buffer.
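Conceptually the named delay buffer is a ring buffer: [delwrite~] appends new samples and [delread~] reads the sample written a fixed number of samples ago. A minimal sketch (mine; real Pd delays are specified in milliseconds, and multiple readers can share one buffer):

```python
from collections import deque

class DelayLine:
    """Fixed-length sample delay, sketched as a ring buffer."""
    def __init__(self, delay_samples):
        self.buf = deque([0.0] * delay_samples, maxlen=delay_samples)
    def process(self, x):
        y = self.buf[0]      # oldest sample = the delayed output
        self.buf.append(x)   # write the new input (oldest falls off)
        return y

dl = DelayLine(3)
outs = [dl.process(v) for v in [1, 2, 3, 4, 5]]
# outs -> [0.0, 0.0, 0.0, 1, 2]: silence for 3 samples, then the input
```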
" Meanwhile, some others now offer external professionals as little as $50-100 for original jingles and programme ids."
LOL! I've had worse here in England. The BBC always paid me well. I know the idea of a national station is a very socialist idea, especially one funded by mandatory taxes, but I can't say they didn't maintain high standards and pay for it. I think I did some of my best work then and I really enjoyed the process. Work for the independents just got thinner and thinner through 1998-2002. Corporations like Endymol were tight-fisted misers, to put it nicely. The payment my colleagues and I got for Big Brother was insulting considering the level of expertise we brought to the gig. Now that I realise how much money they made from that awful programme, I feel quite sick.
But it's all a learning experience. A trick they do over here in the UK is called the pitching system, that's where they get 8 or 10 groups to do the work for one slot and pick the best one. While being the epitome of modern capitalism it is also an unworkable economic bonfire of effort and talent. So fuck them, let them scrape the barrel for 10 year old library music and standard midi files, and let their audiences judge them on it. Anyway, I'm sounding old now, but I don't feel it, I'm very bullish about the future with tools like Pd in my hands.
"So, it was a reeeeeaaal pleasure for me to begin to learn, earlier this year (mainly from Lawrence Casserley), that what had actually happened to the cutting-edge of music and creative-audio was that it was entirely elsewhere!"
Yep, right now all the really cool stuff is right on the fringes. I think these things go through cycles. We are at the nadir of the MIDI-sampling phase, on the tail end of the old, and new tools and new processes are bubbling up from the underground again, and I feel happy to be on that wave. It takes so long for the mainstream to tune in that the party is always over by the time they show up in their suits, chequebook in hand. But I think this time something quite different is going on: because the current media giants are such dinosaurs who cannot integrate the synaesthetic and convergent work of the new progressives, perhaps an entire new movement will have time to grow before it is seized.
"To cap that, I also do believe that the old need not always give way entirely to the new ~ by which I mean that surely good evolution ideally carries goodies forward"
Absolutely, it would be foolish to reject well-established techniques out of hand. That's what the box shifters in Japan would like for us: to forget our technical and artistic history and be wholly dependent on mysterious hidden technology that writes and produces sanitised music. If anything I strive to preserve old wisdom in a new context, for the reason I said: many vital skills like recording, compression, and EQ are becoming lost arts because nobody cares much about quality now, and these skills are no longer taught properly on music technology courses. Sampling and MIDI will always have their place somewhere. Also, composition and music are not taught in British schools any longer, so there's an "artistic recession" emerging here, with a whole generation of kids who think all music is just made by stitching prefab loops together, and a growing disconnect between composition, production and performance. But history shows that the new wave always subverts something quite unexpectedly. Now that the media nazis have all our history owned and locked down for the next 100 years, I expect it will come from somewhere very surprising... who knows, a classical renaissance? Whatever, the next "punk" is not going to come out of proprietary systems; it is going to be an open source thing, I think.
btw, I have a half-built waveshaping sitar for you here on my desktop, but if you want to play with a similar thing while I finish it, have a look at
http://www.obiwannabe.co.uk/html/music/6SS/six-waveshaper.html
best regards
andy