Hello Forum,
I've been working on a patch meant to translate real-time sound spectral data into color. Please excuse the verbosity, but I want to introduce some background first so my question is clear:
In sound, frequency is perceived as pitch and amplitude as loudness, while the number of waves sounding simultaneously and their relationships to each other are perceived as timbre: the more simultaneous frequencies, the more 'chaotic' the timbre (e.g. white noise); the fewer, the purer the timbre (e.g. a sine tone).
In color, oversimplifying, we could say frequencies are perceived as spectral hues and amplitudes as brightness, while the number of waves vibrating simultaneously and their relationships to each other mostly determine saturation: the more simultaneous frequencies, the less saturated the color (e.g. white), and the fewer, the purer the color (e.g. a spectral color). BUT simultaneity can also be perceived as a mixing of different colors; it seems to depend on some relation between amplitude and, for lack of a better word, 'harmony'.
And this is where my questions begin. I would like to read a sound spectrum as if it were an electromagnetic spectrum (for now, please forget about the octaves issue), mapping frequency to spectral hue, amplitude to brightness, and using the number of individual waves and their relationships to each other to determine saturation. But to do this, I need to know how color is synthesized from a spectrum; a rough sketch of what I mean is below.
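To make the idea concrete, here is the kind of pipeline I am imagining, written as a hedged Python sketch rather than a working patch: weight each spectral peak by color-matching curves, sum into CIE-style XYZ tristimulus values, and convert to sRGB. The audio-frequency-to-wavelength mapping, the Gaussian stand-ins for the real CIE 1931 color matching functions, and the normalization step are all my own assumptions for illustration, not an established method.

```python
import math

# Hedged sketch: turn a list of spectral peaks (frequency_hz, amplitude) into
# one RGB color by treating the audio spectrum as if it were a light spectrum.
# The frequency-to-wavelength mapping, the Gaussian stand-ins for the CIE 1931
# color matching functions, and the normalization are assumptions.

AUDIO_LO, AUDIO_HI = 20.0, 20000.0    # assumed audible band in Hz
LAMBDA_LO, LAMBDA_HI = 380.0, 700.0   # visible band in nm

def freq_to_wavelength(f_hz):
    """Map audio frequency to a visible wavelength (low pitch -> red, high pitch -> violet).
    Logarithmic here, but that choice is arbitrary; linear would also work."""
    t = (math.log(f_hz) - math.log(AUDIO_LO)) / (math.log(AUDIO_HI) - math.log(AUDIO_LO))
    return LAMBDA_HI - t * (LAMBDA_HI - LAMBDA_LO)

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def matching_functions(lam):
    """Very rough Gaussian stand-ins for the CIE 1931 x-bar, y-bar, z-bar curves;
    for real work you would use the tabulated CIE data instead."""
    x = 1.06 * gaussian(lam, 600.0, 38.0) + 0.36 * gaussian(lam, 442.0, 21.0)
    y = 1.01 * gaussian(lam, 555.0, 45.0)
    z = 1.78 * gaussian(lam, 447.0, 23.0)
    return x, y, z

def spectrum_to_rgb(peaks):
    """peaks: list of (frequency_hz, amplitude). Returns gamma-corrected sRGB in 0..1."""
    X = Y = Z = 0.0
    for f, a in peaks:
        xb, yb, zb = matching_functions(freq_to_wavelength(f))
        X += a * xb
        Y += a * yb
        Z += a * zb
    # XYZ -> linear sRGB (standard D65 matrix), then normalize so the largest
    # channel is 1.0. This throws away absolute brightness and keeps only
    # hue/saturation; Y could be kept separately as the brightness value.
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    m = max(r, g, b, 1e-9)
    rgb = [max(c / m, 0.0) for c in (r, g, b)]
    return [c ** (1.0 / 2.2) for c in rgb]  # simple gamma

# A near-sine tone should give a fairly saturated hue; many spread-out peaks
# should wash out toward white.
print(spectrum_to_rgb([(440.0, 1.0)]))
print(spectrum_to_rgb([(f, 1.0) for f in (110.0, 440.0, 1760.0, 7040.0)]))
```

If nothing else, this makes the vocabulary of my questions concrete: by 'peaks' I mean the (frequency, amplitude) pairs that get summed into the color.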
So:
- How do I know when a frequency with a certain amplitude should be considered part of the synthesis or disregarded?
- How many peaks do I take into consideration?
- Are there any methods around to do this?
Any pointers would be most appreciated.
Thanks a lot in advance!