Great work Maximus, this filter sounds great! It even works at high Q values (q = 10) the way it should. Good job implementing Huovilainen's model.
I've noticed in the SVN that you've made a revision after this post. So you've optimized the tanh function and it no longer needs to be upsampled 4x, correct?
I've done an experiment to calculate the error of the tanh table using different table sizes (all multiples of ...), and found that the ttanh function introduces an error roughly six orders of magnitude larger than it has to be, because it doesn't round to the nearest integer when calculating the index t (from the sample x). I then tried linear interpolation and got slightly better results. The difference between truncation and rounding is significant; the difference between rounding and interpolation probably isn't. On my machine, the muug~ help patch ran at about 6.5% CPU using rounding, and at about 8% with interpolation.
Another thing: the table size should really be SIZE+1, but the error caused by the missing last entry is unmeasurable.
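To make the truncation / rounding / interpolation comparison concrete, here's a minimal sketch of the three index strategies for a tanh table covering [-4, 4] with SIZE+1 entries. The names (tbl, ttanh_trunc, etc.) are just illustrative, not the actual identifiers in muug~, and I'm assuming x has already been clamped to the table range:

#include <math.h>

#define TBL_SIZE 512                      /* stand-in for MUUG_TILDE_TABLE_SIZE */
static double tbl[TBL_SIZE + 1];          /* SIZE+1 entries so x = 4 lands on a valid slot */

static void tbl_init(void)
{
    for (int i = 0; i <= TBL_SIZE; i++)
        tbl[i] = tanh(-4.0 + 8.0 * i / TBL_SIZE);
}

/* truncation: the current behaviour (no rounding) */
static double ttanh_trunc(double x)
{
    int i = (int)((x + 4.0) * (TBL_SIZE / 8.0));
    return tbl[i];
}

/* rounding to the nearest entry: one extra addition, much smaller error */
static double ttanh_round(double x)
{
    int i = (int)((x + 4.0) * (TBL_SIZE / 8.0) + 0.5);
    return tbl[i];
}

/* linear interpolation between the two surrounding entries */
static double ttanh_lerp(double x)
{
    double pos = (x + 4.0) * (TBL_SIZE / 8.0);
    int i = (int)pos;
    if (i >= TBL_SIZE)                    /* x == 4.0 exactly */
        return tbl[TBL_SIZE];
    return tbl[i] + (pos - i) * (tbl[i + 1] - tbl[i]);
}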
I'm uploading a modified version that lets you change the default table size of 512 to some other size with a -DMUUG_TILDE_TABLE_SIZE=262144 compiler option. It also lets you switch from rounding to linear interpolation with -DMUUG_INTERPOLATE=1 (the default is 0, rounding). This version also reads the current sampling rate instead of assuming 44100; I've tested it at 96000 and 48000 and it works perfectly. Pi is now a literal constant (MUUG_PI) instead of a variable. And finally, I changed the calculations to double precision with no discernible performance loss.
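For reference, the compile-time options boil down to something like this (the macro names and defaults are the ones described above, the rest is only a sketch):

#ifndef MUUG_TILDE_TABLE_SIZE
#define MUUG_TILDE_TABLE_SIZE 512      /* override with -DMUUG_TILDE_TABLE_SIZE=262144 */
#endif

#ifndef MUUG_INTERPOLATE
#define MUUG_INTERPOLATE 0             /* 0 = rounding (default), 1 = linear interpolation */
#endif

#define MUUG_PI 3.14159265358979323846 /* pi as a literal constant */

As for the sampling rate, the usual Pd approach (and presumably what the modified version does) is to read the rate from sp[0]->s_sr in the dsp method instead of hard-coding 44100.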
I'm also including the results of my experiment. It basically ran a large quantity of numbers between -4 and 4 through the table-lookup tanh and compared the output with the actual tanh.
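In case it's useful, the test was roughly of this shape (a sketch only: the exact number of test points and the error metric in the attached results may differ; it reuses tbl_init and ttanh_round from the sketch above):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;               /* how many test points between -4 and 4 */
    double max_err = 0.0;

    tbl_init();                          /* fill the lookup table first */
    for (int k = 0; k <= n; k++) {
        double x = -4.0 + 8.0 * k / n;
        double err = fabs(ttanh_round(x) - tanh(x));
        if (err > max_err)
            max_err = err;
    }
    printf("max abs error (rounding): %g\n", max_err);
    return 0;
}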
cheers.
http://www.pdpatchrepo.info/hurleur/muug~-2010.zip