I've tested the Zack Settel and Cort Lippe timbre-stamping algorithm, which allows a timbral mix between two samples (audio example I06 in the Pd documentation), but found it somewhat limited. A few years ago I was much more impressed by a demo at IRCAM, where the roar of the MGM lion and a human voice were morphed seamlessly.
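For anyone unfamiliar with the technique, here is a minimal sketch of what that kind of cross-synthesis does: frame by frame, keep the phase of a "carrier" signal but replace each FFT bin's magnitude with that of a second "stamp" signal. This is only a rough approximation of the Pd patch (which works on windowed FFT frames in much the same way); the function name, window choice, and toy signals below are my own illustration, not the patch itself.

```python
import numpy as np

def timbre_stamp(carrier, stamp, n_fft=1024, hop=256, eps=1e-9):
    """Impose the per-bin magnitude spectrum of `stamp` onto `carrier`.

    Per overlapping frame: keep the carrier's phase, take the stamp's
    magnitude, then overlap-add the resynthesized frames.
    """
    n = min(len(carrier), len(stamp))
    win = np.hanning(n_fft)
    out = np.zeros(n)
    norm = np.zeros(n)
    for start in range(0, n - n_fft, hop):
        c = np.fft.rfft(carrier[start:start + n_fft] * win)
        s = np.fft.rfft(stamp[start:start + n_fft] * win)
        # stamp magnitude, carrier phase (eps avoids division by zero)
        frame = np.abs(s) * c / (np.abs(c) + eps)
        out[start:start + n_fft] += np.fft.irfft(frame) * win
        norm[start:start + n_fft] += win ** 2
    return out / (norm + eps)

# toy demo: stamp noise "timbre" onto a harmonically rich square wave
sr = 8000
t = np.arange(sr) / sr
carrier = np.sign(np.sin(2 * np.pi * 110 * t))
stamp = np.random.default_rng(0).standard_normal(sr)
y = timbre_stamp(carrier, stamp)
```

As the limitation I ran into suggests, this kind of bin-by-bin magnitude substitution gives a "stamp" rather than a true morph: there is no interpolation between the two spectra, which is presumably where the more recent work comes in.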
I imagine that algorithms for mixing timbres have progressed since Settel and Lippe's work. I would be interested in publications on this topic, as well as patches if any exist.
Also, speech synthesis has been around for a long time now. Beyond the obvious vocoder application, is other work being done to apply these techniques to music? I would appreciate pointers in this area as well.