Running your patches on Android using PdDroidParty in 10 Steps
I can't understand some of the steps:
-Place your patch and files with a droidparty_main.pd in the "patch" subfolder
Which droidparty_main.pd? Do I have to rename my patch to "droidparty_main.pd", or do I have to put my patch and its files in the same folder as droidparty_main.pd?
Which "patch" folder, and which subfolder? Where?
-Pack the patch subfolder into an Android zip resource: `./pack-patch`
What does `./pack-patch` mean? Is this a folder path?
I have downloaded RAR for Android and I can make a .rar file, but I don't know where to put it.
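For what it's worth, `./pack-patch` is a shell script run from a terminal inside the PdDroidParty source folder, and it produces a .zip archive, not a .rar. A rough Python sketch of what such a packing step amounts to (the folder and archive names here are assumptions taken from the instructions quoted above, not the actual script):

```python
# Sketch of a pack-patch-style step: zip the "patch" folder (which should
# contain droidparty_main.pd) so it can be bundled as an Android resource.
import os
import zipfile

def pack_patch(patch_dir, out_zip):
    """Recursively zip patch_dir, storing paths relative to the folder."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(patch_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, patch_dir))
    return out_zip
```

The point is only that the result must be a zip; a RAR archive made on the phone will not work here.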
-Follow the instructions in the README.txt file for building.
Where is the README.txt file?
Thanks a lot and best regards.
Pure Data generative music radio stream up and running
a better bet is to download the entire repo of patches from github as a zip file here
when you want to play a patch as a stand-alone version, copy the patchComs.pd abstraction into the patch folder, then start the patch as usual. patchComs mimics the patch startup as if it were played on the radio, and handles the audio output
On pd-list, m.e.grimm linked to this external [autotuned~]:
[autotuned~] is a Pd port (by maxus germanus) of the autotalent LADSPA plugin written by Thomas Baran.
I've tried [autotuned~] with all sorts of test tones and voice. Latency is fixed (at 2048 samples with a 44.1 kHz sample rate). The output does show artifacts, in the form of alias-like frequencies and slow amplitude and phase modulation. The artifacts differ from those of phase vocoders and naive time-domain pitch shifters. The sound is not as clean as from [soundtouch~]. However, in contrast with [soundtouch~], [autotuned~] can freely modulate the pitch factor without producing crackling noises.
[autotuned~]'s C code shines a good light on the topic of pitch shifting and its inherent problems. Pitch detection is done by windowed/unwindowed autocorrelation. Pitch is registered (and implemented) as a function of the analyzed period length in an integer number of samples. Of course, this is not very accurate, but on the other hand it wouldn't be possible to cut & paste signal segments with fractional period lengths anyway.
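To make the "integer period by autocorrelation" idea concrete, here is a toy pure-Python sketch (not [autotuned~]'s actual code): the detected period is simply the lag with the strongest self-similarity, which is why fractional period lengths can't be represented.

```python
# Toy pitch detection by autocorrelation: return the lag (in samples)
# with the highest correlation between the signal and a shifted copy.
import math

def detect_period(signal, min_lag, max_lag):
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        corr = sum(signal[i] * signal[i + lag]
                   for i in range(len(signal) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# A sine with an exact 100-sample period: the detector reports 100.
tone = [math.sin(2 * math.pi * i / 100.0) for i in range(1000)]
```

A tone whose true period is, say, 100.4 samples would still be reported as 100, which is the inaccuracy described above.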
[soundtouch~] (my Pd port of Olli Parviainen's SoundTouch library) works differently. It does not try to find the pitch, but finds the ideal stitch point by correlation of the signal tails intended for overlap. The best match is always selected, making the cut smaller or bigger depending on the actual phase of the signal. When the user sets the conditions correctly, according to the (monophonic, periodic) input material, no audible artifacts are produced.
I am now thinking that 'pitch detection' could better be interpreted as 'pitch indication'. A provisional period length indicator which could be used to guide a fine-tuned correlation process as is used in [soundtouch~]. It may then be possible to 'look ahead' for good stitching regions, and decrease latency time. Or is this a naive idea?
Better sounding guitar distortion ... beyond [clip~] and [tanh~]
thank you for your feedback !
wow, nice to finally see this!
why are highs contrary to what you'd expect? at a low sample rate, the highs are folded back into low frequencies, but when you upsample, the highs are preserved as true highs. i think it works just how it should. The upsampled version is certainly much clearer and brighter to my ears. Particularly with a high distortion level.
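The fold-back mentioned above can be sketched with a little arithmetic: a partial that a nonlinearity generates above Nyquist (sr/2) reappears mirrored at a low frequency, and oversampling raises Nyquist so it stays put. The sample rates below are just the obvious 44.1 kHz and its 4x multiple.

```python
# Frequency actually heard when a partial at f Hz is sampled at sr Hz:
# sampling wraps at the sample rate and mirrors around Nyquist.
def alias_frequency(f, sr=44100):
    f = f % sr
    return min(f, sr - f)

# e.g. the 7th harmonic of a 5 kHz tone (35 kHz) folds down to 9.1 kHz
# at 44.1 kHz, but survives as a true high at 4x oversampling.
```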
That makes sense. The upsampling workaround officially wins (I'll try x16)! I think I focus too much on highs, as I tend to find this distortion patch rather 'acid'; the original sample sounds much darker than the distorted sound. I don't own the actual pedal, so I only rely on my 'feeling', which is far from reliable. Of course it's logical to add highs with such a nonlinearity, and lows are filtered several times.
Anyway, I still find the heavily distorted sounds have a strong 'schh schh schh' component in the highs, and I can't remember having heard it that strongly in actual analog effects. Would you agree with that? Of course I know this is a rather basic 'physically-informed' design, and that analog will always sound better.
actually, this patch really demonstrates the effect of aliasing. if you turn the tone knob down as far as it will go, and also turn the distortion down to zero, and turn aliasing off, you can clearly hear the rustling noise caused by those wrapping frequencies.
turn aliasing back on again, and the noise is gone.
Very strange: when I do what you say, the upsampled version sounds like the 'not upsampled' one, with a 'sch sch' noise added!
The BJT gains are bound to my 'signal amplitude policy': the input file or audio source and the output should never clip. These gains can be seen as follows: the first one (before the clipper) adjusts 'how early' distortion occurs, and the second one gives the distorted signal a boost in order to reach a subjective level similar to the dry signal.
The values were found empirically.
This might be where I have the biggest issue, though the article doesn't make it so clear, either. [...] the DS-1 isn't a baby's distortion pedal.
[...] Also, you don't need to calculate the boost into the filter coefficients. That's only useful for plotting. You can just use [*~] before or after the filter to accomplish it.
I understand all these arguments. I'll modify the patch. At the beginning I was using this kind of reasoning, but as the 'nominal input level' is -20 dBu (http://www.bossus.com/gear/productdetails.php?ProductId=127&ParentId=254#), and the dBu definition I found seemed difficult to relate to 'our' dB, I just dared to do the basic operation (36 dB - 20 dB = 16 dB ≈ 6.3 times in amplitude)... not far from the 6 in my patch. Not very scientific, though.
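The back-of-envelope conversion above checks out: a dB difference maps to an amplitude ratio by 10 to the power of (dB / 20).

```python
# dB difference -> amplitude ratio, as used in the 16 dB estimate above.
def db_to_amp(db):
    return 10 ** (db / 20.0)

# 36 dB - 20 dB = 16 dB gives roughly 6.3x in amplitude, close to the
# gain of 6 used in the patch.
```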
Anyway, I understood another reason why my 'subjective hearing' failed: feeding my patch with a 0 dB normalised sample maximizes the input level, and the result will always be 'over the top' compared to non-active guitar pickups with a volume knob not always pushed to max. In other words, if I look at demos on YouTube, the result heard will be less distorted than my patch's. Anyway, this can always be seen as an additional parameter for a 'parametric DS1 deluxe edition patch'.
As mod said, it's not so much more highs as less lows, and those lows are a result of aliasing. To my ears, the upsampled version sounds less muddy. (By the way, in your upsampled portion, you have a different argument for the second [DS1-bjt_stage~]. Making them equal makes the difference even less noticeable, and draws more attention to the mud than the highs.)
Ok, upsampling wins !
Yes, there is a difference, but it's not a Matlab thing. It's the choice of the logarithmic scaling on the x-axis. The article uses powers of ten as equal distances. Mine uses [mtof] for the scaling, so that a semitone, octave, or whatever musical interval is the same distance. Also, I made an adjustment so that everything between 0 and about 20 Hz (at 44.1k) gets squashed into the leftmost 10% of the graph. If I didn't do that, then about half the plot would be taken up by frequencies below the audible range.
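For readers who don't know it, the [mtof] mapping mentioned above is the standard MIDI-note-to-frequency formula, which is what makes equal x-axis distances correspond to equal musical intervals:

```python
# Pd's [mtof]: MIDI note number -> frequency in Hz. One octave (12 notes)
# doubles the frequency, so octaves are equally spaced on this axis.
def mtof(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12.0)
```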
Ok, perfect ! Everything's cool now.
(commenting peaking after ToneStage)
This has nothing to do with your tone stage. It's because of the passband ripple in the Chebyshev filters. The IEM Chebyshev filters have a 1 dB ripple, though I don't actually know if that means +/- 1 dB or +/- .5 dB. Either way, it's creating a boost at some frequencies, and pushing the output down by 1 dB should keep it below [-1, 1]. This could also be contributing to the highs, as the ripple is typically more pronounced near the cutoff frequency.
Mmh, I thought the same, but then I decided to check the plots taken out of the 'not upsampling' part, and the peaking is still there ! The Chebyshev filters can be the source, then ...
The output from [tanh~] will never clip, so as long as you make up for the ripple and don't boost the second BJT stage, you should be fine.
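A quick numeric sketch of that headroom argument (the 1 dB ripple figure is the assumption stated above): tanh output is bounded by 1 in magnitude, a +1 dB ripple can push the worst case to about 1.122, and a -1 dB trim brings it back to at most 1.

```python
import math

def db_to_amp(db):
    return 10 ** (db / 20.0)

RIPPLE_DB = 1.0   # assumed worst-case passband boost from the Chebyshev ripple
TRIM_DB = -1.0    # the compensating cut suggested above

# tanh never exceeds 1 in magnitude, so the worst case after the rippled
# filter is 1 * 10**(1/20) ~= 1.122; the trim cancels it exactly.
peak_after_ripple = math.tanh(1e6) * db_to_amp(RIPPLE_DB)
peak_after_trim = peak_after_ripple * db_to_amp(TRIM_DB)
```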
Just one more thing to add for now, and that is you're doing too much in the upsampled portion. The only thing that needs to be in there is the non-linear function ([tanh~]) and the anti-aliasing filters. Everything else is linear and doesn't benefit from upsampling, so it's just creating more computational load.
Of course I precisely want to get rid of this peaking, to be able to fully rely on [tanh~] to 'master' my output gain. I'll try the FIR instead of Chebyshev (as proposed by acreil) some day. (I don't even know yet how they work and how I'll have to implement them.)
I left the 'full upsampled chain' in this patch only to see if someone would comment on it, and we totally agree. But this 'playing the noob' attitude of mine is a rather raw 'fishing method' for getting stimulating information... sorry this led you to lay down the whole picture. BUT nothing wasted: your final 'delicate' -1 dB cut surprises me and I'll use it (once I get rid of the peaking), and moreover, since you remind me that you use an 18000 Hz cutoff frequency, I'll take another look at it and try to find a good criterion for picking one.
Thank you everybody,
P.S. : sorry for those very long sentences ... not very clear
How to create a patch which distorts recorded audio signal?
First of all, I am very new to this interactive programming thing so bear with me x)
I wanted to create a patch which takes a recording from the microphone, reads its pitch/frequency, then if its frequency is above a certain amount it will decrease it by X, and if it is below a certain amount it will increase it by X. The result I'm hoping to achieve is to make people who speak in high-pitched voices sound low-pitched, and people who speak in low-pitched voices sound high-pitched.
Firstly, is this possible at all? And if it is, is it possible in real time?
I thought of taking the input, putting it through fiddle~, then somehow taking this number and using it in an 'if >' / 'if <' test. Then after that, I would somehow manipulate that same input (of course the input then wouldn't go through fiddle, as fiddle simply analyzes it and outputs numbers, no?) by running it through a certain object which changes the pitch based on these 'if > <' messages, and finally outputs it.
So, how do I do this? What objects do I use? Is there something better than fiddle? I've heard of an addon for Max/MSP called analyzer~, and I actually tried using it, but I got confused and anyway had similar results. I know I'm going to need headphones so that I don't get an infinite speaker/mic loop.
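The control logic being described is simple enough to sketch outside Pd. The threshold and shift amount below are made-up example values; in the patch, the detected pitch would come from [fiddle~] and the returned shift would drive a pitch shifter:

```python
# Decision logic for the voice-inverting effect described above:
# high voices get shifted down by X semitones, low voices up by X.
def target_shift(detected_midi, threshold_midi=60.0, x_semitones=12.0):
    """Return the transposition in semitones to apply to the input."""
    if detected_midi > threshold_midi:
        return -x_semitones   # high-pitched voice -> shift down
    elif detected_midi < threshold_midi:
        return x_semitones    # low-pitched voice -> shift up
    return 0.0
```

In Pd this maps to [fiddle~] feeding a [moses] (or comparison objects) that selects the transposition sent to the shifter.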
Thanks in advance! I need to have this completed by Sunday, hopefully.
Export to exe or dmg format
Pd and Pd-extended are free software and you may redistribute them, even in modified form, provided you follow the license terms (BSD and GPL).
If you are on OSX, it is very easy to create your own application with Pd-extended under the hood. Select the menu item 'File >> Make app from patch' or 'File >> Make app from folder'. What you get is a Pd-extended package with your patch as the startup patch. The Pd window is minimized directly after startup, so at first glance you don't notice it is Pd. But it is still fully functional, with all the Pd-extended libraries in it (120 MB!) and the possibility to edit patches and create new ones. With some extra tweaking, you can replace the Pd icon with an icon of your own making. The preferences file of this 'app' is within the app folder. If you have included all necessary abstractions, and possibly your homebrew externals, you can distribute it as a stand-alone app.
But probably you are not on OSX, otherwise you would already have seen this option. For Linux or Windows you could do something similar to the 'Make app' described above, but by hand. You could write an executable Tcl script to start Pd with your patch as the startup patch, and optionally include other options in the script. (Pd uses Tcl/Tk for graphics and other purposes, so it is included in every binary distribution of Pd-extended.) The user can click that script to open your 'application'.
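The same launcher idea works in any scripting language. A hedged Python sketch (the `-open` and `-nogui` flags are standard Pd command-line options; the binary name and patch path are placeholders for your own setup):

```python
# Build and launch a Pd invocation that opens your main patch, in the
# spirit of the tcl launcher script described above.
import subprocess

def build_pd_command(main_patch, extra_flags=()):
    return ["pd", "-open", main_patch, *extra_flags]

def launch(main_patch):
    # starts Pd as a separate process
    return subprocess.Popen(build_pd_command(main_patch))
```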
Disadvantages of distributing apps instead of patches are:
- you need to make separate distributions for every platform
- applications are large so you need ample download bandwidth on your server or host
- if Pd is obscured, you can't refer to Pd pages for support
All taken together, I see little advantage in distributing stand-alone apps rather than Pd patches. If you want to make user-friendly distributions of your patches, you could organise them in a decent directory structure, where abstractions and other essential files are included in the search path by the [declare] object. For the user, it is then a matter of installing a recent Pd(-extended) if they do not have it yet, and opening the main patch in your patch package. If all goes well, this is a piece of cake; on the other hand, if they have trouble with soundcards et cetera, that is not something you could have prevented by supplying an app instead of a patch.
here's a copy of a mail i sent a friend, and the corresponding patches.
you can see these patches being used here:
ok here's a simplified version of the patch i use. i've just modified the "mud" patch and haven't checked it all, so there are bugs and errors everywhere, but i guess you're just interested in the abs which receive and dispatch the data from kinect.
so the kinect is received by osceleton and what i get in pd is osc messages. basically it's x, y, and z coordinates for each point of the body. so you'll be interested in the patches "kinector" and "shooter".
- it translates the osc into data that the granular sampler "mud" can understand (0 to 1 linear).
- move the horizontal sliders to choose a user and a joint.
- toggle from "value" to "CC". in X, Y and Z type a sending channel number. in the granular patch, toggle from value to CC, so you can assign a receiving channel number to each automatable parameter.
- hit the "learn" button and then cover with your body the area you wish to use. this sets the minimum and maximum for each axis. if you want to calibrate the whole body at once, first select "all_joints". hit the "learn" button again to end calibration. body motion is now active.
- the toggle on the top right activates remote sound control for the "learn" function, for if you work alone. enable it, use the vertical slider to choose the gate for incoming volume. stand at your starting point, and clap or scream. calibrate, and clap again.
- if you toggle from "abs" to "rltv", instead of calibrating the movement of each joint in absolute space it will consider their relative distance to the torso joint. the advantage of this is one movement will have the same effect wherever you are positioned in the space.
- you save, open, and load presets as textfiles on your drive. you can save presets for the whole patch on the top right of the master patch.
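The "learn" calibration described above boils down to tracking a min/max per axis and rescaling incoming coordinates into the 0-to-1 linear range the "mud" sampler expects. A sketch (my own reconstruction, not the patch's code):

```python
# Per-axis calibration: record min/max while the user covers the area,
# then map incoming joint coordinates to 0..1 (clamped).
class AxisCalibration:
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def learn(self, value):
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def normalize(self, value):
        if self.hi <= self.lo:
            return 0.0
        value = max(self.lo, min(self.hi, value))
        return (value - self.lo) / (self.hi - self.lo)
```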
- basically the same as kinector, but used for one-shots instead of continuous changes.
- choose a user, a joint, an axis, and a direction
- type a channel number where it says CC
- in "time", type a time in milliseconds. every time a joint passes a chosen point in space in a chosen direction, it will output a line from 0 to 1 in the chosen time.
- calibrate in the same way. you can use "all_joints" too, but there's a huge error somewhere, so if you do, first toggle to "value".
- same as kinector for the rest.
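The one-shot trigger in "shooter" amounts to detecting a directional crossing; the 0-to-1 ramp itself would be a [line] in Pd. A sketch of just the crossing test (again my reconstruction, not the patch's code):

```python
# Fire when a joint coordinate crosses a chosen point in a chosen direction.
def crossed(prev, curr, point, direction):
    """direction: +1 = crossing upward through point, -1 = downward."""
    if direction > 0:
        return prev < point <= curr
    return prev > point >= curr
```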
ok here you go. i don't know how well you know pd, so that's why i explained as much as i could. these patches are absolutely not clean, they're my first ideas since i got the kinect, and i'm working on more to have one tight patch in the end (including speed detection, movement prediction to compensate latency, etc ...).
ok hope this helps.
if you have trouble using the "mud" patch let me know. if you are going to use the patch, please let me know and make sure you mention it's mine.
DJ/VJ scratching system
First my story: (you can skip down to END OF STORY if you want)
Ever since I saw Mike Relm go to town with a DVDJ, I've wanted a system where I could scratch and cue video. However, I haven't wanted to spend the $2500 for a DVDJ. As I was researching, I found a number of different systems. I am not a DJ by trade, so to get a system like Traktor or Serato with their video modules plus turntables plus hardware plus a DJ mixer, soon everything gets really expensive. But in looking around, I found the Ms.Pinky system and after a little bit, I found a USB turntable on Woot for $60. So I bought it. It was marketed as a DJ turntable, but I knew that it wasn't really serious since it had a belt drive, but it came with a slip-pad and the USB connection meant that I wouldn't need a preamp. And so I spent the $100 on the Ms.Pinky vinyl plus software license (now only $80). This worked decently, but I had a lot of trouble really getting it totally on point. The relative mode worked well, but sometimes would skip if I scratched too vigorously. The absolute mode I couldn't get to work at all. After reading a little more, I came to the conclusion that my signal from vinyl to computer just wasn't strong enough, so I would need maybe a new needle or maybe a different turntable, and I didn't really want to spend the money experimenting. I think that the Ms.Pinky system is probably a very good system with the right equipment, but I don't do this professionally, so I don't want to spend the loot on a system.
Earlier, before I bought Ms.Pinky (about two years ago), I had also looked around for a cheap MIDI USB DJ controller and not found one. Well, about a month ago, I saw the ION Discover DJ controller was on sale at Bed, Bath & Beyond for $50. They sold out before I could get one, but Vann's was selling it for $70, so I decided that that was good enough and bought one. I had planned to try to use it with Ms. Pinky since you can hook up MIDI controllers to it. But it turns out that you can hook up MIDI controllers to every control except the turntable, so that was a no go. If I had Max/MSP/Jitter, I could have changed that, but that's also way expensive. So, how should I scratch? My controller came with DJ'ing software and there's also some freeware, like Mixxx, but none of this has video support. So I look around and find Pure Data and GEM.
And I see lots of questions about scratching, how to do it. And there are even some tutorials and small patches out there, but as I look at them, none of them are quite what I'm looking for. The YouTube tutorial is really problematic because it's no good at all for scratching a song. It can create a scratching sound for a small sample, but it's taking the turntable's speed and using that as the position in the sample. If you did that with a longer song, it wouldn't even sound like a scratch. And then there are some which do work right, but none of them keep track of where you are in the playback. So, whenever you start scratching, you're starting from the beginning of the song or the middle.
So, I looked at all this and I said, "Hey, I can do this. I've got my spring break coming up. Looking at how easy PD looks and how much other good (if imperfect) work other people have done, I bet that I could build a good system for audio and video scratching within a week." And, I have.
END OF STORY
So that's what I'm presenting to you, my free audio and video scratching system in Pure Data (Pd-extended, really). I use the name DJ Lease Def, so it's the Lease Def DJ system. It's not quite perfect because it loads its samples into tables using soundfiler which means that it has a huge delay when you load a new file during which the whole thing goes silent. I am unhappy about this, but unsure how to fix it. Otherwise, it's pretty nifty. Anyway, rather than be one big patch, it relies on a system of patches which work with each other. Each of the different parts will come in several versions and you can choose which one you want to use and load up the different parts and they should work together correctly. Right now, for most of the parts there's only one version, but I'll be adding others later.
There's a more detailed instruction manual in the .zip file, but the summary is that you load:
the engine (only one version right now): loads the files, does the actual signal processing and playback
one control patch (three versions to choose from currently, two GUI versions and a MIDI version specific to the Ion Discover DJ): is used to do most of the controlling of the engine other than loading files such as scratching, fading, adjusting volume, etc.
zero or one cueing patch (one version, optional): manages the controls for jumping around to different points in songs
zero or one net patch (one version: video playback): does some sort of add-on. Will probably most commonly be used for video. The net patches have to run in a separate instance of Pd-extended, and they listen for signals from the engine via local UDP packets. It is set up this way because when the audio and video tried to run in the same instance, I would get periodic little pops, clicks, and other glitches. The audio part renders 1000 times per second for maximum fidelity, but the video part only renders like 30 or 60 times per second. Pure Data is not quite smooth enough to handle this in a clever real-time multithreading manner to ensure that they both always get their time slices. But if you put them in separate processes, it all works fine.
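The engine-to-net-patch link described above is ordinary local UDP, which in Pd would be [netsend]/[netreceive]-style objects. A minimal sketch of the same idea in Python (the port number and message format here are placeholders, not the system's actual protocol):

```python
# Two local processes talking over UDP: one binds and listens, the other
# fires position updates at it.
import socket

def make_listener(port, host="127.0.0.1"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock

def send_position(port, position, host="127.0.0.1"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(str(position).encode(), (host, port))
    sock.close()
```

Because UDP is connectionless and one-way here, the video process can drop or lag behind messages without ever blocking the audio process, which is exactly the decoupling the separate-instance design relies on.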
So, anyway, it's real scratching beginning exactly where you were in playing the song, and when you stop scratching it picks up just where you left off; you can set and jump to cue points, and it does video which will follow right along with both the scratching and cueing. So I'm pretty proud of it. The downsides are that you have to separate the audio and video files, that the audio has to be uncompressed AIFF or WAV (and that loading a new file pauses everything for like 10 seconds), and that for really smooth video when you're scratching or playing backwards you have to encode it with a codec with no inter-frame encoding, such as MJPEG, which results in bigger video files (but the playback scratches perfectly as a result).
So anyway, check it out, let me know what you think. If you have any questions or feedback please share. If anyone wants to build control patches for other MIDI hardware, please do and share them with me. I'd be glad to include them in the download. The different patches communicate using send and receive with a standard set of symbols. I've included documentation about what the expected symbols and values are. Also, if anyone wants me to write patches for some piece of hardware that you have, if you can give me one, I'll be glad to do it.
Keith Irwin (DJ Lease Def)
SENDING AND RECEIVING BY OSC
Hi guys, sorry if this question has been asked before, but we haven't been able to find any help online for our problem.
I am currently trying to send information to my friend's laptop using OSC.
We have been using these patches : http://www.sendspace.com/file/4uhnc4
We created a network in network preferences, which we then both connected to. We then opened the send patch on my computer (Macbook Pro, i7, OSX 10.6.6, 8GB RAM) and the receive patch on my mate's computer (Macbook White, 1.83GHz, OSX 10.4.11, 2GB RAM).
I then changed the IP address in the send patch to my friend's IP address. We then changed the port number to 9000 in all boxes on the send and receive patches.
I got this process to work internally by opening both patches on my computer (Macbook Pro) and it worked fine.
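For anyone debugging a setup like this, it can help to know what an OSC packet actually looks like on the wire (the patches linked above build this inside Pd): a null-padded address, a type-tag string, then big-endian arguments, all aligned to 4 bytes. A sketch with a placeholder address:

```python
# Encode a minimal OSC message carrying a single float32.
import struct

def osc_pad(b):
    """Null-terminate and pad bytes to a multiple of 4, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, value):
    return (osc_pad(address.encode())
            + osc_pad(b",f")            # type tags: one float argument
            + struct.pack(">f", value)) # big-endian float32
```

Sending such bytes over UDP to the right IP and port is all the transport does, so if the internal test works and the networked one doesn't, the problem is almost certainly the network or firewall, not the OSC encoding.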
My friend's Pd is showing an error saying OSC is deprecated.
If anyone can shed any light on this, or help us out, it would be greatly appreciated!
Pure Data generative music radio stream up and running
hey mnb, i just downloaded the patches and gave them a listen, really really like them, nice work. I'll have a quick check through them just to make sure that everything that needs it has the $0 arguments and then I'll get them up and running.
if it will run on the atom then it should be fine to run on the server. I'll test it out first like I have been with lead's patches and let you know if there are any issues, thanks for helping out. I'll ping you an email so you have my address, i realise that it isn't so obvious on my website heh.
Maelstrom, I was wondering if there might be something like that but hadn't found it for some reason. thank you for pointing it out to me, I'm testing it out now. will take a bit of rejigging as i realise the namecanvas would have to be in the top layer and can't be in a subpatch or abstraction. pretty sure i've got it tho.
currently the $0 argument of the patchComs abstraction is already being sent to the main patch, as this is how the audio is routed between the patches. there is a set of send objects in patchComs called $0-l and $0-r; python receives the $0 value and then sends messages to the master patch, which sets a pair of receive objects to get their audio from those sends. this means i can do the crossfading in the main patch and not have to send too many control messages to the loaded patches.
hopefully I can have this working in the next couple of days.
audiolemon, i figured it was a picture of an old timey radio. i quite like it tbh heh.