The problem with that is that one pitch contains many frequencies that are harmonically related (i.e. integer multiples of the fundamental frequency). So filtering out the fundamental won't eliminate the pitch. You could use a comb filter to take out the harmonics, but if other pitches share those frequencies (which happens with consonant intervals) it could make detection of those other pitches more difficult. This is one of the major problems with polyphonic pitch detection. It's common for chords to contain notes with overlapping frequency content, so it can be difficult to determine whether certain frequencies belong to one note or several.
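To make the overlap concrete, here's a quick Python sketch (Python rather than a Pd patch, and C4/G4, six harmonics and the half-semitone tolerance are just example choices of mine) that lists which harmonics of a perfect fifth land on practically the same frequencies:

```python
import numpy as np

C4, G4 = 261.63, 392.00                   # fundamentals in Hz (perfect fifth, roughly a 3:2 ratio)
harmonics = lambda f0, n=6: f0 * np.arange(1, n + 1)

hc, hg = harmonics(C4), harmonics(G4)
# flag harmonic pairs that fall within half a semitone of each other
for fc in hc:
    for fg in hg:
        if abs(np.log2(fc / fg)) < 0.5 / 12:
            print(f"C4 harmonic {fc:7.1f} Hz ~ G4 harmonic {fg:7.1f} Hz")
```

It prints the pairs around 784 Hz and 1568 Hz, which is exactly the problem: a comb filter tuned to C4's harmonics would likely take a bite out of those parts of the G as well.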
-
Detecting Chords with adc~
-
Hi Deepness, nice to see a new user among us, welcome.
As Maelstorm says, it's a pretty difficult task. I (as you'll notice) posted at the beginning of the conversation. I got pretty good results using [fiddle~] for notes that trigger sounds etc., although obviously not for chords. There's also [sigmund~], which is a little simpler in its output possibilities.
I'm working on vocal recognition, and as Maelstorm and others point out, 'the' big problem in all these situations is filtering out/recognising certain frequencies .... and then some. Funnily enough it makes me think of a Markov chain algorithm (a hidden Markov model, as described by Rabiner?) which is used in voice recognition. It's got me wondering whether this could be implemented in PD!
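For anyone curious what the HMM machinery actually boils down to, here's a minimal Viterbi sketch in Python, with toy transition/emission numbers I made up rather than anything from a real recogniser, that picks the most likely hidden-state sequence for a short observation sequence:

```python
import numpy as np

# Toy model: 2 hidden states, 3 observation symbols (made-up probabilities)
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])          # P(state_t | state_{t-1})
emit  = np.array([[0.5, 0.4, 0.1],
                  [0.1, 0.3, 0.6]])     # P(obs | state)
start = np.array([0.6, 0.4])

def viterbi(obs):
    """Return the most likely hidden state path for a list of observation indices."""
    delta = start * emit[:, obs[0]]                       # best path probability so far
    back = []
    for o in obs[1:]:
        scores = delta[:, None] * trans * emit[None, :, o]
        back.append(scores.argmax(axis=0))                # best previous state per current state
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for b in reversed(back):                              # backtrack through the pointers
        path.append(int(b[path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 2, 2]))   # -> [0, 0, 1, 1] with these toy numbers
```

In a recogniser the hidden states would be phonemes (or pitches), and the observations would be spectral frames rather than these toy symbols.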
-
Look at it as a physiology problem. The ear can identify the notes that make up the chord because critical parts of the brain have been taught that the combination of the fundamentals present in the chord, their several harmonics (overtones or partials), and the 'beating' interactions between them gives you the sound of the chord. There are also 'amplitude' issues: the higher-pitched overtones are generally much lower in amplitude than the fundamental that generated them. The key point: the brain, in its own wonderful way, was taught to recognize that chord. BTW, how the brain does that is a whole other discussion, very interesting, but very complicated. So that is what a piece of 'machinery' would have to imitate.
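Just to make the 'fundamentals plus weaker overtones plus beating' picture concrete, here's a small Python sketch that builds a C major triad that way (the 1/n amplitude roll-off and six partials per note are assumptions for illustration, not a model of any real instrument):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                     # one second of samples
fundamentals = [261.63, 329.63, 392.00]    # C4, E4, G4 (example chord)

chord = np.zeros_like(t)
for f0 in fundamentals:
    for n in range(1, 7):                  # first six partials per note
        chord += (1.0 / n) * np.sin(2 * np.pi * n * f0 * t)   # assumed 1/n roll-off

chord /= np.abs(chord).max()               # normalise to [-1, 1]
# 'chord' now holds 18 partials; slightly mistuned shared partials
# (e.g. around 784 Hz) are what produce the audible beating.
```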
However, when you try to imitate that with 'hardware' you are stuck with the problem that you will have three or possibly more fundamental notes and then all of their harmonics interacting.
Possibly you could create the spectrum of the chord which, theoretically, will show you all the notes: the fundamentals and the several (or many) harmonics, including the amplitude interactions. Then possibly you could pump that data into a good, high-performance processor and use some very elegant AI-type software. The software would analyze the list of individual frequencies, put each fundamental together with its harmonics, isolate the beat-generated components, account for amplitude differences and then present the result to you. The most interesting result would be the list of the three or four notes that went into the original chord. The program you'd need is really the 'alter ego', as it were, of what the brain was taught in order to recognize the chord.
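As a very rough sketch of that spectrum-then-grouping idea (the FFT size, peak threshold and grouping tolerance below are arbitrary choices of mine), here's some Python that picks spectral peaks and lumps together the ones sitting near integer multiples of the lowest unexplained peak:

```python
import numpy as np

def group_partials(signal, sr, n_fft=8192, thresh=0.05, tol=0.05):
    """Crude sketch: FFT -> pick spectral peaks -> group peaks that sit near
    integer multiples of the lowest unexplained peak into one 'note'."""
    frame = signal[:n_fft] * np.hanning(len(signal[:n_fft]))
    spec = np.abs(np.fft.rfft(frame, n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)

    # keep local maxima above a fraction of the strongest bin
    peaks = [freqs[i] for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] >= spec[i + 1]
             and spec[i] > thresh * spec.max()]

    notes = []
    while peaks:
        f0 = min(peaks)                    # lowest unexplained peak = candidate fundamental
        members = [p for p in peaks if abs(p / f0 - round(p / f0)) < tol]
        notes.append((f0, members))
        peaks = [p for p in peaks if p not in members]
    return notes
```

It will happily hand a shared partial to whichever fundamental it reaches first, which is exactly the ambiguity described earlier in the thread, and why the dictionary approach in the next reply tends to do better.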
Hope this helps.
-
There was one particular technique that I came across a while back that does something I think is similar to what you are talking about. It first did a spectral analysis of the notes that an instrument could play and made a dictionary database of them. So, for example, the dictionary might contain a bunch of magnitude spectra of piano notes. It could then take a recording of a chord and find the combination of notes in the dictionary that best matches the spectrum of the chord. It worked very well for pre-recorded material, but obviously it required too much processing to be practical in real time.
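If anyone wants to play with that idea, here's a hedged Python sketch of it. I'm using non-negative least squares over synthetic note spectra as a stand-in for whatever the original technique actually did, and the toy 'recording' is built straight from the dictionary, so recovery is deliberately easy:

```python
import numpy as np
from scipy.optimize import nnls

sr, n_fft = 44100, 8192

def note_spectrum(f0, n_partials=6):
    """Synthetic magnitude spectrum of one note (assumed 1/n partial roll-off)."""
    t = np.arange(n_fft) / sr
    tone = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, n_partials + 1))
    return np.abs(np.fft.rfft(tone * np.hanning(n_fft)))

# Dictionary: one column per candidate note (an equal-tempered octave up from C4)
candidates = 261.63 * 2 ** (np.arange(13) / 12.0)
dictionary = np.column_stack([note_spectrum(f) for f in candidates])

# Toy 'recording' of a chord: C4 + E4 + G4 (dictionary columns 0, 4, 7)
chord = dictionary[:, 0] + dictionary[:, 4] + dictionary[:, 7]

weights, _ = nnls(dictionary, chord)       # best non-negative combination of dictionary notes
for f, w in zip(candidates, weights):
    if w > 0.1:
        print(f"{f:7.2f} Hz  weight {w:.2f}")
```

With this toy setup it should report weights near 1 for C4, E4 and G4 and nothing else; a real system would of course build the dictionary from recorded instrument spectra, as described above, and analyse actual recordings.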