• daisy

    Question 1: Why is it necessary to multiply the signal by a Hann window function before the Fourier transform?

    Question 2: How can I get Hann window values using a Pd object?

    posted in technical issues
  • daisy

    The objective of this program is to convert the timbre of a voice (voice A's timbre would be converted into voice B's timbre).

    I adapted this code from the book "The Theory and Technique of Electronic Music", available at http://crca.ucsd.edu/%7Emsp/techniques/latest/book-html/.

    I do not have a good understanding of the code, and it is not working. Can anybody tell me where the problem is?

    One major problem:
    This code expects data through tabreceive~ $0-hann from some other abstraction, and then multiplies that data using the *~ object. I have read something about the Hann formula being used for the analysis of non-periodic signals, so I think the other abstraction is supposed to provide this Hann array?
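    As a sketch of what a $0-hann table typically holds and why the multiplication works: the table contains 0.5 - 0.5*cos(2*pi*n/N), and with 50% block overlap these windows sum to a constant, so windowed analysis/resynthesis does not amplitude-modulate the signal. A NumPy illustration (the window size and overlap are assumptions matching the book's usual settings, not read from this patch):

```python
import numpy as np

N = 512              # window size (one Pd block); an assumption here
hop = N // 2         # 50% overlap, as in the book's analysis patches
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # contents of $0-hann

# Overlap-add the window at every hop: in the interior the copies
# sum to exactly 1, so windowing does not amplitude-modulate audio.
total = np.zeros(N * 4)
for start in range(0, total.size - N + 1, hop):
    total[start:start + N] += hann

mid = total[N:-N]    # ignore the ramp-up/ramp-down at the edges
print(mid.min(), mid.max())
```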

    You can find a screenshot of this code in the book above, under the article "Timbre stamp".

    In this article the author says this is the code for a timbre stamp, a technique used to convert timbre. Mr obiwannabe told me there are other patches that go with this code, but I don't know where to find them.

    Thanks.

    http://www.pdpatchrepo.info/hurleur/changetimbre1.pd

    posted in technical issues
  • daisy

    When I run this program, it gives the error "no such array". What is wrong? This Pd file is an implementation of a vocoder.

    2nd Question:
    What is the difference between (adc~ and inlet~) and (dac~ and outlet~)? Both pairs are used for audio input and output and seem the same, at least to me.

    http://www.pdpatchrepo.info/hurleur/timbre.pd

    posted in technical issues
  • daisy

    There is an image file attached to this message. It is a screenshot of a Pd file. I just want to make sure it is the right way to change timbre. Please see the attached file.

    http://www.pdpatchrepo.info/hurleur/timbreChange.png

    posted in technical issues
  • daisy

    I have read somewhere that "if two voices are at the same pitch and the same loudness and a listener can still tell them apart, it is because of TIMBRE (tone quality)". (I agree there are other features to consider as well.)

    First Question:
    How can we calculate the TIMBRE of a voice? Just as the fiddle~ object is used to determine the pitch of a voice, what object is used for timbre calculation?

    Second Question:
    And how can one change TIMBRE? Just as pitch-shifting techniques are used for pitch, what is used to change timbre?
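    One simple, standard timbre descriptor (not a complete answer, and not a specific Pd object) is the spectral centroid: the amplitude-weighted mean frequency of a frame, often heard as "brightness". A NumPy sketch with two synthetic tones at the same pitch but different harmonic content (sample rate and frame length are arbitrary choices):

```python
import numpy as np

fs = 44100                      # sample rate: an arbitrary choice

def spectral_centroid(frame, fs):
    # Amplitude-weighted mean frequency of one windowed frame:
    # a crude single-number "brightness" descriptor of timbre.
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    return float((freqs * mag).sum() / mag.sum())

t = np.arange(2048) / fs
# Two synthetic tones at the same pitch (220 Hz), different timbre:
dull   = np.sin(2 * np.pi * 220 * t)
bright = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 9))

print(spectral_centroid(dull, fs))    # close to the fundamental
print(spectral_centroid(bright, fs))  # pulled up by the harmonics
```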

    Thanks.

    posted in technical issues
  • daisy

    I want to change the pitch of a recorded sound (say the recorded pitch is 50 and I want to change it to 60+). There is a sample patch, G09.pitchshift.pd; I studied it but could not absorb it fully.
    Is there a simpler example for understanding pitch shifting? Any help would be highly appreciated.

    I have only a little understanding of what pitch shifting is.
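    For intuition only: the crudest pitch shift is plain resampling, which raises the pitch but also shortens the sound by the same ratio; patches like G09.pitchshift.pd exist precisely to avoid that duration change by reading overlapping, windowed chunks. A NumPy sketch (the test tone and rates are arbitrary, and the helper names are made up for illustration):

```python
import numpy as np

def pitch_shift_resample(x, semitones):
    # Naive pitch shift by resampling: the pitch goes up, but the
    # duration shrinks by the same ratio (unlike a windowed pitch
    # shifter, which keeps the duration constant).
    ratio = 2.0 ** (semitones / 12.0)
    idx = np.arange(0.0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 200 * t)      # 1 s of a 200 Hz test tone
up = pitch_shift_resample(tone, 12)     # one octave up

def freq_estimate(sig, fs):
    # Crude frequency estimate: zero crossings per second, halved
    crossings = int(np.sum(np.diff(np.sign(sig)) != 0))
    return crossings * fs / (2.0 * len(sig))

print(freq_estimate(tone, fs), freq_estimate(up, fs))
```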

    posted in technical issues
  • daisy

    1. To cross-synthesise two voices you must ensure that the two speakers make exactly the same utterances, phonetically aligned. This is hard, as I can tell you from experience of recording many voice artists. Even the same person will not speak a phrase the same way twice.

    << This is not possible in my experiment, as I am supposed to morph an actual conversation, so it is up to the subjects what they want to say. There is some work by Oytun Türk and Levent M. Arslan, who conducted their experiments in a passive environment (not in real time). >>

    2. The result is not a "timbral morph" between the two speakers. The human voice is very complex. Most likely the experiment will be invalidated by distracting artifacts.

    Here are some suggestions.

    1. Don't "morph" the voices, simply crossfade/mix them.

    << Yes, I also want to do this, crossfade and mix, as I just want to create an illusion so that the listener starts wondering whether it is B's voice or A's voice. >>

    2. For repeatable results (essential to an experiment), a real-time solution is probably no good. Real-time processing is very sensitive to initial conditions. I would prepare all the material beforehand and carefully screen it to make sure each set of subjects hears exactly the same signals.

    << Yes, I agree, but it is a demand of the experiment; I cannot control the environment. To create a good illusion (or to distract the listener), I may add noise to the signal in real time; it would challenge the listener's brain during identification, so I may use such tricks to improve the chances of success. >>

    3. If you want a hybrid voice (somewhere between A and B), then vocoding is not the best way. There are many tools better suited than Pure Data which are open source and possible to integrate into a wider system.

    << Actually, I now have a little familiarity with Pure Data, so it is better for me to stick with it for a while (due to the short time available). If I continue my PhD in this domain, I will explore other tools as well; for the time being this is a kind of pilot study. >>
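    On the crossfade idea mentioned above: an equal-power crossfade (cosine/sine gain curves) keeps the total power roughly constant while one voice is mixed into the other, which avoids the loudness dip of a plain linear fade. A NumPy sketch with synthetic tones standing in for the two voices (the function name and the fade-over-the-whole-length choice are assumptions for illustration):

```python
import numpy as np

def equal_power_crossfade(a, b):
    # Fade from a to b with cos/sin gains: cos^2 + sin^2 = 1, so
    # the summed gain power stays constant through the transition.
    n = min(len(a), len(b))
    theta = np.linspace(0.0, np.pi / 2.0, n)
    return np.cos(theta) * a[:n] + np.sin(theta) * b[:n]

fs = 8000
t = np.arange(fs) / fs
voice_a = np.sin(2 * np.pi * 220 * t)   # stand-ins for the two voices
voice_b = np.sin(2 * np.pi * 330 * t)
mix = equal_power_crossfade(voice_a, voice_b)
# mix starts as pure voice_a and ends as pure voice_b
```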

    Question 1:
    Is a vocoder alone sufficient to morph/mix/crossfade between two voices, or should I also add a pitch-shifting module to the vocoder to get more convincing results?

    Question 2:
    I have already tried this vocoder example but could not change it according to my requirements. In my setup there is a target voice (phonetically rich), and the source speaker says whatever he wants while his voice is changed into the target voice (illusion/crossfade/mix).

    The changetimbre1.pd file that I attached earlier gives an idea of the kind of operational interface I am looking for.

    Question 3:
    What should be the ideal length of the target wave file?

    Before the start of the experiment, I will collect voice samples from all participants.

    I am highly obliged for your earlier help and am (greedily) looking for more. Meanwhile I will study this vocoder example once again and try to change it to my requirements (though I doubt I can).

    Thanks.

    posted in technical issues
  • daisy

    Yes, you are right, but I am also not wrong :) Actually, I am neither a signal-processing student nor do I have any ambition in this field. My actual task is something different.

    I have to evaluate (only EVALUATE) voice-morphing technology. There are other voice-morphing tools available (such as AVS, Vodi, and many others), but they do not serve my purpose, as they are hard to integrate with my other application.

    So if I do not get my hands dirty with DSP, it will not hurt my master's :) My supervisor's goals and intentions and mine are different. My supervisor is not even from an engineering field; he is a sociologist :) at least his most recent degree is in sociology.

    I am also not from an electrical-engineering background, and I know that at a high level I may come to understand the meaning of

    filters, FFT, pitch, timbre, formants, pitch shifting, vocoders, phase alignment, etc.,

    but I will never understand this domain in depth (not due to incompetence, but because my own domain is different).

    So, as you said:

    "It will not help you if I just change it to work. Neither will you learn much if somebody just completes your assignment for you :)".

    So it would work for me, as my final goal is something different (that goal is experimentation/evaluation, human psychology, etc.).

    So, Guru, I assure you that if you do this work for me, you will not be penalized under any plagiarism act :) Rather, I will fully credit your name in my final thesis. :)

    Let me give you some idea of what I want to do:

    A says: let's go to the cinema (centre)
    B says: no, busy with course work (to the left of A)
    C says: me too (to the right of A)
    B says: hmm... we would go on TUESDAY. (morphed into C's voice)

    (B's voice is morphed into C's voice and sent to A's right channel.)

    Now, throughout this conversation, A's brain is tuned so that whatever comes from the left is B's voice and whatever comes from the right is C's voice.

    So here I would challenge the human brain's voice-recognition process and try to identify how much location matters in voice recognition.

    So you can guess that I have nothing to do with DSP :) Sorry, I am trying to convince you, though I know it is really hard (impossible, or even a sin) to convince TEACHERS.

    posted in technical issues
  • daisy

    I have written this piece of code from the book "The Theory and Technique of Electronic Music". The book discusses it under the article "Timbre stamp (``vocoder'')", which I am assuming will change the timbre of a voice.

    I do not have detailed knowledge of DSP, as I am not a DSP student.

    I am just using the "built-in component" approach from software engineering. So can you add something to this code so that it works and changes the timbre?

    I am thankful to you for your earlier assistance with the pitch-shifting module.

    I have to implement two things.
    1- pitch shifting
    2- timbre shifting

    Thanks.

    posted in technical issues
  • daisy

    Now it works; there was a problem with the unzip utility. So please don't upload it now.
    Thanks.

    posted in technical issues
