I've got an audio signal which varies very slowly, and I want to measure its mean level over one second. I've had the perhaps rather crackpot idea of sampling the signal 1000 times per second and then taking the mean of those samples.
What I have in mind is to use a metro and snapshot, but am I going down the wrong road with that?
The reason for this idea is that there are a few sharp spikes in the audio signal, and I'd like to smooth them out so the measurement follows the general trend of the signal rather than religiously following each peak and trough.
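To make the idea concrete, here's roughly the averaging logic I'm picturing, written out as a little Python sketch rather than a patch. `read_level()` and `on_tick()` are just stand-ins for whatever a snapshot reading and a metro bang would give me every millisecond; they're not real objects or functions anywhere.

```python
from collections import deque

def read_level():
    """Placeholder: one instantaneous reading of the audio signal
    (what I imagine a snapshot would hand me each time)."""
    return 0.0

# One second's worth of readings at 1000 samples per second.
window = deque(maxlen=1000)

def on_tick():
    """Meant to be called once per millisecond, like a 1 ms metro bang."""
    window.append(read_level())
    # Smoothed level: mean of everything seen over the last second.
    return sum(window) / len(window)
```

So each tick just pushes the latest reading into a one-second buffer and spits out the mean, which should iron out those short spikes while still tracking the slow trend. Is that a sensible way to go about it, or is there a more idiomatic way to do this kind of smoothing?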