Running your patches on Android using PdDroidParty in 10 Steps
I can't understand some steps:
-Place your patch and files with a droidparty_main.pd in the "patch" subfolder
Which droidparty_main.pd? Do I have to rename my patch to "droidparty_main.pd", or do I have to put my patch and files in the same folder as droidparty_main.pd?
Which "patch" folder, and which subfolder? Where?
-Pack the patch subfolder into an Android zip resource: `./pack-patch`
What does `./pack-patch` mean? Is this a folder path?
I have downloaded RAR for Android and I can make a .rar file, but I don't know where to put it.
-Follow the instructions in the README.txt file for building.
Where is the README.txt file ?
Thanks a lot and best regards.
Pure Data generative music radio stream up and running
A better bet is to download the entire repo of patches from GitHub as a zip file here.
When you want to play a patch as a stand-alone version, copy the patchComs.pd abstraction into the patch folder, then start the patch as usual. patchComs mimics the patch startup as if it were played on the radio, and handles the audio output.
On pd-list, m.e.grimm linked to this external [autotuned~]:
[autotuned~] is a Pd port (by maxus germanus) of the autotalent LADSPA plugin written by Thomas Baran.
I've tried [autotuned~] with all sorts of test tones and voice. Latency is fixed (at 2048 samples with a 44.1 kHz sample rate). The output does show artifacts, in the form of alias-like frequencies and slow amplitude and phase modulation. The artifacts differ from those of phase vocoders and naive time-domain pitch shifters. The sound is not as clean as from [soundtouch~]. However, in contrast with [soundtouch~], [autotuned~] can freely modulate the pitch factor without producing crackling noises.
[autotuned~]'s C code sheds a good light on the topic of pitch shifting and its inherent problems. Pitch detection is done by windowed / unwindowed autocorrelation. Pitch is registered (and implemented) as a function of the analyzed period length in an integer number of samples. Of course, this is not very accurate, but on the other hand it wouldn't be possible to cut & paste signal segments with fractional period lengths anyway.
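For readers who want the gist without digging through the C: here is a rough Python sketch of autocorrelation-based period estimation as described above. This is my own illustration, not [autotuned~]'s actual code; the sample rate and pitch bounds are arbitrary placeholders.

```python
import numpy as np

def estimate_period(block, sr=44100, fmin=60.0, fmax=1000.0):
    """Rough 'pitch indicator' via autocorrelation: returns the period
    as a whole number of samples (hence the limited accuracy), plus the
    frequency that period implies."""
    block = block - np.mean(block)
    # full autocorrelation; keep only non-negative lags
    ac = np.correlate(block, block, mode="full")[len(block) - 1:]
    lo = int(sr / fmax)                  # shortest plausible period
    hi = min(int(sr / fmin), len(ac) - 1)  # longest plausible period
    period = lo + int(np.argmax(ac[lo:hi]))  # integer number of samples
    return period, sr / period
```

Note how the returned frequency is quantized by the integer period: for a 220 Hz input at 44.1 kHz the true period is about 200.45 samples, so the estimate snaps to 200 or 201 samples, giving roughly 220.5 or 219.4 Hz.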
[soundtouch~] (my Pd port of Olli Parviainen's SoundTouch library) works differently. It does not try to find pitch, but finds the ideal stitch point by correlation of the signal tails intended for overlap. The best match is always selected, making the cut smaller or bigger depending on the actual phase of the signal. When the user sets the conditions correctly, according to the (monophonic, periodic) input material, no audible artifacts are produced.
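A minimal sketch of that stitch-point search (again my own illustration, not SoundTouch's actual code): slide the tail over candidate start positions and keep the best normalized correlation.

```python
import numpy as np

def best_stitch_offset(tail, candidate, search=100):
    """Find the offset within `candidate` whose start best matches `tail`,
    by normalized cross-correlation: the 'ideal stitch point' idea."""
    n = len(tail)
    best_k, best_score = 0, -np.inf
    for k in range(search):
        seg = candidate[k:k + n]
        # normalize so louder segments don't win by amplitude alone
        score = np.dot(tail, seg) / (np.linalg.norm(seg) + 1e-12)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```

The cut then lands wherever the phases line up best, which is exactly why the splice size varies from block to block.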
I am now thinking that 'pitch detection' could better be interpreted as 'pitch indication': a provisional period length indicator which could be used to guide a fine-tuned correlation process like the one used in [soundtouch~]. It may then be possible to 'look ahead' for good stitching regions, and decrease latency. Or is this a naive idea?
Pure Data controls huge RGB Led Panels
I just released my LED panel controlling software, called PixelController. The frontend of my controlling software is created as a PureData patch. I also built the LED panels myself; they are called PixelInvaders.
As a video says more than 1000 words, check out: http://vimeo.com/27453711.
The PureData patch is available on GitHub:
https://github.com/neophob/PixelController/tree/master/data File: ledgui4.pd
The patch works fine on Windows, Linux and OSX, so no fancy plugins are needed. The patch uses OSC and MIDI signals.
If you're interested in getting such a panel, check out: http://www.indiegogo.com/pixelinvadersDIY?a=167555&i=addr
Better sounding guitar distortion ... beyond [clip~] and [tanh~]
Thank you for your feedback!
wow, nice to finally see this!
Why are the highs contrary to what you'd expect? At a low sample rate, the highs are folded back into low frequencies, but when you upsample, the highs are preserved as true highs. I think it works just how it should. The upsampled version is certainly much clearer and brighter to my ears, particularly with a high distortion level.
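The fold-back arithmetic is easy to check. A partial at frequency f, sampled at rate sr, reappears folded around multiples of Nyquist; the example numbers below are hypothetical and not taken from the patch.

```python
def alias_freq(f, sr=44100.0):
    """Frequency at which a partial of frequency f actually appears
    after sampling at rate sr (folding around Nyquist)."""
    f = f % sr
    return min(f, sr - f)

# e.g. the 9th harmonic of a 5 kHz tone sits at 45 kHz;
# at 44.1 kHz it folds down to 900 Hz, well below the note itself
```

That is exactly the 'mud' in the non-upsampled version: harmonics the clipper adds above Nyquist land back down among the low frequencies.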
That makes sense. The upsampling workaround officially wins (I'll try x16)! I think I focus too much on the highs, as I tend to find this disto patch rather 'acid'; the original sample sounds much darker than the distorted sound. I don't own the actual pedal, so I only rely on my 'feeling', which is far from reliable. Of course it's logical to add highs with such a nonlinearity, and the lows are filtered several times.
Anyway, I still find the heavily distorted sounds have a strong 'schh schh schh' component in the highs, and I can't remember having heard that as strongly in actual analog effects. Would you agree with that? Of course I know this is a rather basic 'physically-informed' design, and that analog will always sound better.
Actually, this patch really demonstrates the effect of aliasing. If you turn the tone knob down as far as it will go, turn the distortion down to zero, and turn aliasing off, you can clearly hear the rustling noise caused by those wrapping frequencies.
turn aliasing back on again, and the noise is gone.
Very strange: when I do what you say, the upsampled version sounds like the 'not upsampled' one, with a 'sch sch' noise added!
The BJT gains are bound to my 'signal amplitude policy': the input file or audio source and the output should never clip. These gains can be seen as follows: the first one (before the clipper) adjusts 'how early' distortion occurs, and the second one gives the distorted signal a boost in order to reach a subjective level similar to the dry signal.
The values were found empirically.
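As a sketch of that two-gain staging (assuming [tanh~] as the clipper, consistent with the thread; the gain values here are placeholders, since the real ones were found empirically):

```python
import numpy as np

def bjt_stage(x, pre_gain=6.0, post_gain=0.9):
    """Two-gain clipper staging: pre_gain sets how early the signal hits
    the clipper, post_gain restores the subjective level of the dry signal.
    Output magnitude can never exceed post_gain, so the 'never clip'
    policy holds as long as post_gain <= 1."""
    return post_gain * np.tanh(pre_gain * x)
```

For small signals the stage behaves linearly with gain pre_gain * post_gain; as the input grows, tanh takes over and the output saturates at post_gain.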
This might be where I have the biggest issue, though the article doesn't make it so clear, either. [...] the DS-1 isn't a baby's distortion pedal.
[...] Also, you don't need to calculate the boost into the filter coefficients. That's only useful for plotting. You can just use [*~] before or after the filter to accomplish it.
I understand all these arguments. I'll modify the patch. At the beginning I was using this kind of reasoning, but as the 'nominal input level' is -20 dBu (http://www.bossus.com/gear/productdetails.php?ProductId=127&ParentId=254#), and the dBu definition I found seemed difficult to relate to 'our' dB, I just dared to make the basic operation (36 dB - 20 dB = 16 dB, about 6.3 times in amplitude)... not far from the 6 in my patch. Not very scientific, though.
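The dB-to-amplitude arithmetic, for the record:

```python
def db_to_amp(db):
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

# db_to_amp(16) is about 6.31, close to the factor 6 used in the patch
```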
Anyway, I understood another reason why my 'subjective hearing' failed: feeding my patch with a 0 dB normalized sample maximizes the input level, and the result will always be 'over the top' compared to non-active guitar pickups with a volume knob not always pushed to max. In other words, if I look at demos on YouTube, the result heard will be less distorted than my patch's. Anyway, this can always be seen as an additional parameter for a 'parametric DS1 deluxe edition' patch.
As mod said, it's not so much more highs as less lows, and those lows are a result of aliasing. To my ears, the upsampled version sounds less muddy. (By the way, in your upsampled portion, you have a different argument for the second [DS1-bjt_stage~]. Making them equal makes the difference even less noticeable, and draws more attention to the mud than the highs.)
OK, upsampling wins!
Yes, there is a difference, but it's not a Matlab thing. It's the choice of the logarithmic scaling of the x-axis. The article uses powers of ten as equal distances. Mine uses [mtof] for the scaling, so that a semitone, octave, or whatever musical interval is the same distance. Also, I made an adjustment so that everything between 0 and about 20 Hz (at 44.1k) gets squashed into the leftmost 10% of the graph. If I didn't do that, then about half the plot would be taken up by frequencies below the audible range.
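The mapping behind that scaling is simple to reproduce: plotting ftom(f) on a linear axis gives the equal-semitone spacing described. These formulas match Pd's [mtof]/[ftom] (440 Hz = MIDI note 69).

```python
import math

def mtof(m):
    """MIDI note number to frequency, as Pd's [mtof] computes it."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def ftom(f):
    """Inverse of [mtof]: frequency to (fractional) MIDI note, so that
    each semitone occupies the same width on a linear x-axis."""
    return 69.0 + 12.0 * math.log2(f / 440.0)
```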
OK, perfect! Everything's cool now.
(commenting peaking after ToneStage)
This has nothing to do with your tone stage. It's because of the passband ripple in the Chebyshev filters. The IEM Chebyshev filters have a 1 dB ripple, though I don't actually know if that means +/- 1 dB or +/- 0.5 dB. Either way, it's creating a boost at some frequencies, and pushing the output down by 1 dB should keep it within [-1, 1]. This could also be contributing to the highs, as the ripple is typically more pronounced near the cutoff frequency.
Mmh, I thought the same, but then I decided to check the plots taken from the 'not upsampling' part, and the peaking is still there! So the Chebyshev filters can't be the only source, then ...
The output from [tanh~] will never clip, so as long as you make up for the ripple and don't boost the second BJT stage, you should be fine.
Just one more thing to add for now: you're doing too much in the upsampled portion. The only things that need to be in there are the non-linear function ([tanh~]) and the anti-aliasing filters. Everything else is linear and doesn't benefit from upsampling, so it just creates more computational load.
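To make that structure concrete, here's a hedged Python sketch of the minimal oversampled section: zero-stuff, filter out the images, apply the nonlinearity, filter again, decimate. The windowed-sinc FIRs here stand in for the patch's Chebyshev filters; the oversampling factor, drive, and tap count are arbitrary.

```python
import numpy as np

def lowpass_fir(cutoff, ntaps=63):
    """Windowed-sinc lowpass; cutoff is a fraction of the sample rate."""
    n = np.arange(ntaps) - (ntaps - 1) / 2.0
    h = 2.0 * cutoff * np.sinc(2.0 * cutoff * n) * np.hamming(ntaps)
    return h / h.sum()

def distort_oversampled(x, factor=8, drive=6.0):
    """Only the nonlinearity runs at the high rate, as suggested above."""
    h = lowpass_fir(0.5 / factor)          # original Nyquist, at the new rate
    up = np.zeros(len(x) * factor)
    up[::factor] = x * factor              # zero-stuff, compensate energy
    up = np.convolve(up, h, mode="same")   # remove spectral images
    up = np.tanh(drive * up)               # the only stage needing the high rate
    up = np.convolve(up, h, mode="same")   # anti-alias before decimating
    return up[::factor]
```

Everything linear (tone stages, output trims) can stay outside this block at the base rate, exactly as argued above.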
Of course, I precisely want to get rid of this peaking to be able to fully rely on [tanh~] to 'master' my output gain. I'll try the FIR instead of the Chebyshev (as proposed by acreil) some day. (I don't even know yet how they work or how I'll have to implement them.)
I left the 'full upsampled chain' in this patch only to see if someone would comment on it, and we totally agree. But this 'playing the noob' attitude of mine is a rather raw 'fishing method' for getting stimulating information... sorry this led you to lay down the whole picture. But nothing is wasted: your final 'delicate' -1 dB cut surprises me, and I'll use it once I get rid of the peaking. Moreover, as you remind me that you use an 18000 Hz cutoff frequency, I'll take another look at it and try to find a good criterion for picking one.
Thank you everybody,
P.S. : sorry for those very long sentences ... not very clear
As my last message states, I was able to open most of your patches without any errors after loading iemgui and pdmtl. I also tried to load them from your SVN account, but I am getting errors like "file does not seem to be a URL".
What I am trying to do is rebuild a sampler/looper/collage-r that I built a while ago, but using your patches. I am hung up on where a new .wav file resides after you hit the toggle on your [rc-record], because I want to call up the file through your [rc-sfplay~]. I tried to do some digging, used the [open] message with a different path, and could not find the newly recorded sound file with the names test.wav and hello.wav, as you have under the "set" message going into your [file.pat.current].
The idea here is that I would like to use your stereo mixer to take in a signal, have it connect to your [rc-record], and then play back that newly recorded sound file through your [rc-sfplay]. Theoretically this should be pretty straightforward. But since this project will be in a new folder, I figure I need to know how to access that folder (or some other folder) to call up the newly created .wav file.
Sorry this is so long. And, again, thank you for the help.
Well, that version of rc-record does not handle folders correctly, and I believe the files end up being saved in its root folder, i.e. rc-patches. Get the latest version, which saves in the folder of the parent patch that is using it.
The link works fine for me, check your svn command. I can see the files in my browser by just opening:
Also, if you want to do a looper or short sampler, it'd make more sense to record into a table and play back from there. The method you describe is much slower, since you're loading the sample from disk every time it's played. This is fine for a long track etc., but if you're just doing smaller snippets, playing from RAM via a table is much faster. I've been meaning to add this functionality to rc-sample~ or create an rc-looper~ etc ...
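A sketch of the table-based approach (illustrative Python, not the rc- patches' actual code): preallocate a buffer, record into it, and play back by indexing RAM instead of hitting the disk.

```python
import numpy as np

class TableLooper:
    """RAM-based looper: record into a preallocated table, then play
    back by indexing, with no disk round-trip."""
    def __init__(self, seconds=4.0, sr=44100):
        self.buf = np.zeros(int(seconds * sr))
        self.write = 0       # next write position
        self.length = 0      # recorded length so far

    def record(self, block):
        n = min(len(block), len(self.buf) - self.write)
        self.buf[self.write:self.write + n] = block[:n]
        self.write += n
        self.length = self.write

    def play(self, start, n):
        # wrap around the recorded region, looper-style
        idx = (start + np.arange(n)) % max(self.length, 1)
        return self.buf[idx]
```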
Kinect and pure data
thanks. here's the patch.
So it's a demo patch, simplified to one unit of each device. It's not the exact patch I used for the videos, but a later, more complete version. It's not debugged, because I'm working on new chains of command and gesture recognition, and there's a readme only for the patch which might interest you: there's one patch ("mud") which is a granular sampler, and another one ("metakin") which receives, translates, and dispatches the OSC data from OSCeleton.
Please let me know if you use this patch or part of it, or if you modify, debug, or enhance it.
How to create a patch which disorts recorded audio signal?
First of all, I am very new to this interactive programming thing so bear with me x)
I wanted to create a patch which takes a recording from the microphone, reads its pitch/frequency, then, if its frequency is above a certain amount, decreases it by X, and if it is below a certain amount, increases it by X. The result I'm hoping to achieve is to make people who speak in high-pitched voices sound low-pitched, and people who speak in low-pitched voices sound high-pitched.
Firstly, is this possible at all? And if it is, is it possible in real time?
I thought of taking the input, putting it through [fiddle~], then somehow taking this number and using it in an if > / if < test. After that, I would somehow manipulate that same input (of course the input then wouldn't go through [fiddle~], as [fiddle~] simply analyzes it and outputs numbers, no?) by sending it through a certain object which changes the pitch based on these "if > <" messages, and finally outputs it.
So, how do I do this? What objects do I use? Is there something better than [fiddle~]? I've heard of an addon for Max/MSP called analyzer~, and I actually tried using it, but I got confused and anyway had similar results. I know I'm going to need headphones so that I don't get an infinite speaker/mic loop.
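The control logic described in the question is simple in any language; here's a Python sketch with hypothetical thresholds and shift amount (in Pd, this decision would sit between [fiddle~]'s pitch output and whatever pitch shifter you choose):

```python
def shift_semitones(freq, low=150.0, high=300.0, amount=12.0):
    """Decide how much to transpose, in semitones, from a detected pitch.
    Voices above `high` Hz get shifted down, voices below `low` Hz get
    shifted up, and anything in between is left alone. All three
    parameters are placeholders to be tuned by ear."""
    if freq > high:
        return -amount
    if freq < low:
        return amount
    return 0.0
```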
Thanks in advance. I need to have this object completed by Sunday, hopefully.
Export to exe or dmg format
Pd and Pd-extended is free software and you may redistribute it, even in modified form, provided you follow the license terms (BSD and GPL).
If you are on OSX, it is very easy to create your own application with Pd-extended under the hood. Select menu item 'File >> Make app from patch' or 'File >> Make app from folder'. What you get is a Pd-extended package with your patch as the startup patch. The Pd window is minimized directly after startup, so at first glance you don't notice it is Pd. But it is still fully functional, with all Pd-extended libraries in it (120 MB!), and the possibility to edit patches and create new ones. With some extra tweaking, you can replace the Pd icon with an icon of your own making. The preferences file of this 'app' is within the app folder. If you included all necessary abstractions, and possibly your homebrew externals, you can distribute it as a stand-alone app.
But probably you are not on OSX, otherwise you would have already seen this option. For Linux or Windows you could do something similar to the 'Make app' described above, but by hand. You could write an executable Tcl script to start Pd with your patch as the startup patch, and possibly include other options in the script. (Pd uses Tcl/Tk for graphics and other purposes, so it is included in every binary distribution of Pd-extended.) The user can click that script to open your 'application'.
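For illustration, the same launcher idea in Python rather than Tcl. This assumes the `pd` binary is on the user's PATH; `-open` and `-path` are standard Pd startup flags, but check your distribution's docs.

```python
import os
import subprocess
import sys

def pd_command(patch, extra_paths=()):
    """Build the command line that starts Pd with `patch` as the startup
    patch; -path entries let Pd find your bundled abstractions."""
    cmd = ["pd", "-open", patch]
    for p in extra_paths:
        cmd += ["-path", p]
    return cmd

if __name__ == "__main__":
    # 'main.pd' and 'abs' are placeholder names for your patch and
    # abstractions folder, assumed to sit next to this script
    here = os.path.dirname(os.path.abspath(__file__))
    sys.exit(subprocess.call(pd_command(os.path.join(here, "main.pd"),
                                        [os.path.join(here, "abs")])))
```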
Disadvantages of distributing apps instead of patches are:
- you need to make separate distributions for every platform
- applications are large so you need ample download bandwidth on your server or host
- if Pd is obscured, you can't refer to Pd pages for support
All taken together, I see little advantage in distributing stand-alone apps rather than Pd patches. If you want to make user-friendly distributions of your patches, you could organise them in a decent directory structure, where abstractions and other essential files are included in the search path by the [declare] object. For the user it is then a matter of installing a recent Pd(-extended), if they do not have it yet, and opening the main patch in your patches package. If all goes well, this is a piece of cake; and on the other hand, if they have trouble with soundcards etcetera, this is not something you could have prevented by supplying an app instead of a patch.
here's a copy of a mail i sent a friend, and the corresponding patches.
you can see these patches being used here:
OK, here's a simplified version of the patch I use. I've just modified the "mud" patch and haven't checked it all, so there are bugs and errors everywhere, but I guess you're just interested in the abstractions which receive and dispatch the data from the Kinect.
The Kinect data is received by OSCeleton, and what I get in Pd is OSC messages: basically x, y, and z coordinates for each point of the body. So you'll be interested in the patches "kinector" and "shooter".
- it translates the OSC into data that the granular sampler "mud" can understand (0 to 1, linear).
- move the horizontal sliders to choose a user and a joint.
- toggle from "value" to "CC". In X, Y and Z, type a sending channel number. In the granular patch, toggle from value to CC, so you can assign a receiving channel number to each automatable parameter.
- hit the "learn" button, then cover with your body the area you wish to use. This sets the minimum and maximum for each axis. If you want to calibrate the whole body at once, first select "all_joints". Hit the "learn" button again to end calibration. Body motion is now active.
- the toggle on the top right activates remote sound control for the "learn" function, in case you work alone. Enable it, use the vertical slider to choose the gate for incoming volume, stand at your starting point, and clap or scream. Calibrate, and clap again.
- if you toggle from "abs" to "rltv", instead of calibrating the movement of each joint in absolute space, it will consider its relative distance to the torso joint. The advantage of this is that one movement will have the same effect wherever you are positioned in the space.
- you save, open, and load presets as text files on your drive. You can save presets for the whole patch at the top right of the master patch.
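The "learn" calibration above boils down to per-axis min/max tracking plus clamped normalization into the 0-to-1 range "mud" expects. A hedged sketch (my illustration, not the actual patch logic):

```python
def calibrate_axis(samples):
    """'learn' step: record the min and max seen on one axis while the
    performer covers the working area."""
    return min(samples), max(samples)

def normalize(value, lo, hi):
    """Map a joint coordinate into the 0-to-1 linear range, clamped so
    out-of-range motion stays usable instead of going haywire."""
    if hi == lo:
        return 0.0
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))
```

The "rltv" mode would simply run the same normalization on the joint position minus the torso position.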
- basically the same as kinector, but used for one-shots instead of continuous changes.
- choose a user, a joint, an axis, and a direction
- type a channel number where it says CC
- in "time", type a time in milliseconds. Every time a joint passes a chosen point in space in a chosen direction, it will output a line from 0 to 1 in the chosen time.
- calibrate in the same way. You can use "all_joints" too, but there's a huge error somewhere, so if you do, first toggle to "value".
- same as kinector for the rest.
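The one-shot behaviour of "shooter" amounts to detecting a directional threshold crossing, then firing the 0-to-1 line. Here's a sketch of the crossing detector only (my illustration; threshold and direction are placeholders):

```python
class CrossingTrigger:
    """Fire a one-shot when a value crosses `threshold` while moving in
    the given direction (+1 = upward, -1 = downward). In the patch, each
    trigger would start a 0-to-1 ramp over the chosen time."""
    def __init__(self, threshold, direction=1):
        self.threshold = threshold
        self.direction = direction
        self.prev = None

    def update(self, value):
        # fire only on the transition: previous sample on or below the
        # threshold (in the chosen direction), current sample beyond it
        fired = (self.prev is not None and
                 (value - self.threshold) * self.direction > 0 >=
                 (self.prev - self.threshold) * self.direction)
        self.prev = value
        return fired
```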
OK, here you go. I don't know how much you know Pd, so that's why I explained as much as I could. These patches are absolutely not clean; they're my first ideas since I got the Kinect, and I'm working on more to have one tight patch in the end (including speed detection, movement prediction to compensate latency, etc.).
OK, hope this helps.
If you have trouble using the "mud" patch, let me know. If you are going to use the patch, please let me know, and make sure you mention it's mine.