Artists using Pure Data
@anechoic said:
I'm not famous but my new CD will contain many uses of Pd
also check out the article I wrote on my switch from OS X -> Linux
Great! I have Linux/Windows/OSX installed. I discovered GNU/Linux 8 years ago. If your favorite software runs on Linux, use Linux!
Kim, the other day I discovered your work while searching for Max/MSP music. Great to see you here. I'm new to Max/Pd; as a Max user, what do you think of Pd?
Artists using Pure Data
I'm not really sure what you mean by "less professional." If you're saying it might be less suited for professional use, I would have to disagree. I have actually found Pd-extended to be more stable than Max 5. The differences in sound quality are generally negligible, if at all noticeable. And there are few things that one can do that the other can't (and they *both* have their advantages over the other). They are both great pieces of software, and you can probably get them both to do what you need them to do. Max is a commercial product, but that doesn't necessarily make it better suited for professional use. It just makes it commercial software.
The only thing that I've found that Max hands-down does better than Pd is the gui. But, really, it's not that big of a deal, because the gui offerings in Pd give you what you need to function. Most of the stuff in Max is eye candy that you end up wasting a lot of time working on. There are very few instances where the gui is actually very important to the functionality of the patch.
Max also has ReWire capabilities (one of the advantages of being commercial), but they are so buggy and the implementation is so crappy that it is hardly usable. David Zicarelli himself even said, in so many words, that ReWire sucks.
As far as famous artists go, keep in mind that Max has a longer history than Pd and so has a bit of a foothold in this area. And, more importantly, it is a commercial product. You're likely to find a list of famous people using just about any commercially successful product because the advertising department knows it will get people to buy it. They find out if an artist has used it, even if only for a small part of one track, and then they go around saying, "Hey, Aphex Twin uses Max. You should buy it if you want to be like him."
As someone who has both programs, I can say this: Max/MSP/Jitter is not $800 better than Pd-extended. It's not even $250 better (which is the student discount price). And if you really must go for Max, the transition from Pd is not hard at all. I would at least say play with Pd for a while and decide if it's a paradigm that you really enjoy. If you find that you need the extra goodies that Max has and Pd doesn't, then download the demo and see if it's worth it.
Artists using Pure Data
Hi there. My little problem is that I can't choose between buying Max/MSP or diving into Pd for free, so I started looking for comparisons between the two.
As I see on the Wikipedia page for Max http://en.wikipedia.org/wiki/Max_(software) there is a long list of well-known artists (like Aphex Twin and Tim Hecker) who use it. As for Pd, there is hardly any information on the web about famous musicians who work with this software..
So my question is: do you know any electronic/electroacoustic artists who use Pd?
Or is Pd in general much less professional than Max?
From what I have tried I can say that the sound quality is identical; it's just that Max's interface is far more advanced..
Maybe Pd is better suited for game music and for creating things like the Reactable?
Thanks for your answers.
Handy little oxygen8 midi middleman patch
hi guys! so i've had an oxygen8 for a few years now and i see them everywhere, so i'm sharing this handy little patch i made.
basically you're stuck with 8 knobs and two sliders (modwheel and data entry). all this patch really does is take the keyboard's input, number those 8 knobs and 2 sliders in sets of 10, and let you switch between those sets with the hradio, for up to 120 different controller values. the key input is passed straight through, so even when you switch between control sets you can play the keyboard consistently.
even if you don't have an oxygen8, this patch will give you a little self-contained set of sliders that you can use as a midi controller... so it's still useful when you're not at home with your keyboard, or if you don't have one at all.
basically the patch takes those 10 controls and lets you switch between 12 sets of them. it's useful for me in ableton: when i need to map more parameters than i have knobs for, i can assign more, and the numbering system is much easier to stay on top of than the keyboard's default control values for those knobs (it's like 17, 80, 74... no consistency, it seems).
on linux you should be able to jack the keyboard to pd's midi in, then jack the output to wherever you want. i'm currently on windows and i select usb keyboard in for input, and loopbe for output.
the numbers do nothing but change when you switch the hradio - the sliders are the corresponding controls (with the mod wheel as slider 9 and the data entry knob as slider 10).
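if it helps to see the numbering outside pd, here's the scheme in python (this is my reconstruction of the mapping, so the exact offsets may differ from what the patch actually sends):

```python
def controller_number(bank, slider):
    """Map a bank (0-11, picked on the hradio) and a slider (0-9)
    to one of 120 consecutive MIDI CC numbers (all within the
    valid CC range of 0-127).

    bank n exposes CCs n*10 .. n*10+9, so the numbering stays
    predictable instead of the keyboard's scattered defaults
    (17, 80, 74, ...).
    """
    if not (0 <= bank < 12 and 0 <= slider < 10):
        raise ValueError("bank must be 0-11, slider 0-9")
    return bank * 10 + slider

print(controller_number(0, 0))   # 0: first slider of the first set
print(controller_number(11, 9))  # 119: last of the 120 controls
```

in pd terms this is nothing more than the hradio value times 10 plus the slider index, fed to [ctlout].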
come to think of it i don't think i tested the pitch bend wheel, i've been using this patch almost entirely for parameter controlling and not playing the oxygen8 notes at all. [notein] is patched directly into [noteout]
any questions/comments/ideas, please post them. this is a real quick patch i put together that worked almost better than i wanted it to, but it could be expanded upon quite a bit. i was going to add symbols so you could tag/name all 120 controls, but i was having trouble figuring out a way to store them, recall them, and send/receive to the symbols... so i just scrapped that.
basically all i do is make a tiny pd window and make [SCET], and just have that sitting at the bottom of the screen under my DAW (in this case ableton).
i haven't run into any conflicts yet for the most part but it's possible the controller numbering system might conflict with certain apps/synths/etc.
cheers guys!
Pd/rjdj skillshare @ Eyebeam, NYC, Dec 5th
http://eyebeam.org/events/rjdj-skillshare
December 5, 2009
12:00 -- 1:30 PM : Introductory workshop on Pd with Hans-Christoph Steiner
2:00 -- 6:00 PM : SkillShare w/Steiner and members of RjDj programming team
Free, capacity for up to 30 participants
RSVP HERE: http://tinyurl.com/ykaq3l3
Hans-Christoph Steiner returns to Eyebeam with members of the RjDj programming team from Europe to help turn your iPhone or iPod Touch into a programmable, generative, and interactive sound processor! Create a variable echo whose timing varies according to the phone's tilt sensor, or an audio synthesizer that responds to your gestures, accelerations, and touches. Abuse the extensive sound capabilities of the Pure Data programming language to blend generative music, audio analysis, and synthy goodness. If you're familiar with the awesome RjDj, then you already know the possibilities of Pure Data on the iPhone or iPod Touch (2nd and 3rd generation Touch only).
Creating and uploading your own sound-processing and sound-generating patches can be as easy as copying a text file to your device! In this 4-hour hands-on SkillShare, interactive sound whiz and Pure Data developer Hans-Christoph Steiner and several of the original RjDj programmers will lead you through all the steps necessary to turn your phone into a pocket synth.
How Eyebeam SkillShares work
Eyebeam's SkillShares are Peer-to-Peer working/learning sessions that provide an informal context to develop new skills alongside leading developers and artists. They are for all levels and start with an introduction and overview of the topic, after which participants with similar projects or skill levels break off into small groups to work on their project while getting feedback and additional instruction and ideas from their group. It's a great way to level-up your skills and meet like-minded people. This SkillShare is especially well-suited for electronic musicians and other people who have experience programming sound. Some knowledge of sound analysis and synthesis techniques will go a long way.
We'll also take a lunch break in the afternoon, including a special informal session on how to jailbreak your iPhone!
Your Skill Level
All levels of skill are OK as long as you have done something with Pd or Max/MSP before. If you consider yourself a beginner, it would help a lot to run through the Pd audio tutorials before attending.
NOTE: On the day of the SkillShare we will hold an introductory workshop from 12:00 until 1:30 PM, led by Steiner, for those who want to make sure they're up to speed before the actual SkillShare starts at 2:00. The introductory workshop is for people who have done something in Pd or Max/MSP before but are still relative beginners in the area of electronic sound programming.
What You Should Bring
You'll need to bring your iPhone or iPod Touch (2nd or 3rd generation Touch only), your own laptop, a headset with a built-in mic (especially if using an iPod Touch) and the data cable you use to connect your device to your laptop. Owing to a terrific hack, you won't even need an Apple Developer License for your device!
More Information
RjDj is an augmented reality app that uses the power of new-generation personal music players like the iPhone and iPod Touch to create mind-blowing hearing sensations. The RjDj app offers a number of downloadable scenes from different artists, as well as the opportunity to make your own and share them with other users. RjDj.me
Pd (aka Pure Data) is a real-time graphical programming environment for audio, video, and graphical processing. Pd is free software, and works on multiple platforms, and therefore is quite portable; versions exist for Win32, IRIX, GNU/Linux, BSD, and MacOS X running on anything from a PocketPC to an old Mac to a brand new PC. Recent developments include a system of abstractions for building performance environments, and a library of objects for physical modeling for sound synthesis.
kill your television
GEM on Linux Laptop
I currently use a Macbook (OS 10.5) for all my PD patching. However, as I am primarily focused on using GEM for live video performance alongside musical groups, I am thinking about getting a second laptop for performances (to keep my Macbook safe).
I'm thinking of getting an Acer Aspire One netbook running linux.
I'd like to know the pros and cons of this.
-Will swapping patches between Mac OS and Linux be a problem (I'm guessing no, but I figured I'd ask)?
-I've heard of some problems with VGA out on Linux laptops, is this going to be an issue?
-Does the netbook have enough processing power for general GEM applications? I'm generally not dealing with video files, but rather particle generation, shape manipulation, GIF texturing, and audio-response.
-Are there any other issues that you think of given this scenario? (and if so, what other affordable/really-cheap laptops are there out there that I can run linux on)
Windows or Mac?
If your intention is to use Windows to develop and Linux to perform, your patches should be portable between the two. In fact, going from Windows to Linux should be easier, since the latter has certain Pd features the former doesn't.
But watch out for case-insensitive filesystems. On Windows and Mac the filesystem is case-preserving but not case-sensitive, while Linux filesystems are mostly case-sensitive. For Pd this means that under Linux you can have two distinct patches, e.g. Not.pd and not.pd, while on Windows this would not be allowed. Porting from Windows to Linux, this particular example shouldn't be a problem, but you might have an abstraction saved as not.pd and used as "NOT" in another patch. Under Windows this will work, since "NOT" will be matched to not.pd, but under Linux it won't, since Pd will be looking for NOT.pd.
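If you want to check a patch folder before porting, a quick sketch like this (plain Python, nothing Pd-specific; the demo directory and filenames are made up) will flag filenames that differ only in case:

```python
import os
import tempfile

def case_collisions(patch_dir):
    """Return groups of filenames in patch_dir that differ only in case.

    On a case-insensitive filesystem these all resolve to one file;
    on Linux they are distinct, so a ported patch may load the
    "wrong" abstraction or none at all.
    """
    groups = {}
    for name in os.listdir(patch_dir):
        groups.setdefault(name.lower(), []).append(name)
    return [g for g in groups.values() if len(g) > 1]

# throwaway demo (needs a case-sensitive filesystem to show anything)
d = tempfile.mkdtemp()
for name in ("not.pd", "Not.pd", "delay.pd"):
    open(os.path.join(d, name), "w").close()
print(case_collisions(d))
```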
College Project, I was recommended to use Pd to Achieve this...
Hmm, the project sounds interesting, a melody for each personality.
You need to take your time over this one. If your tutor thinks this is something that can be done in a few days I'd say that's a terrible underestimate. Did you leave it to the deadline... tsk!
What you need is a melody generator with a number of parameters. You need fewer parameters than there are variables in your test results (to do it simply) and some way of reducing the test results to a smaller set of melodies.
First convert the test results to numeric values.
Then apply these to a mapping that converts the scores to parameters making music you think suits each personality.
Create a few formulas, one for each generator parameter that maps the test results onto a factor for each generation control.
For example;
Do you wear black and think the Cure are the best band eva (Y/N)? Y
We will
a) Rock you [0]
b) Overcome [0]
c) Stay in tonight, because I haven't got a stitch to wear [X]
The Police are
a) Doing their best to balance law and order with the liberal values of a post-industrial society [0]
b) Fascist tools of an oppressive regime [0]
c) The best rock/pop trio of the 1980s [X]
maps onto
Introvert-extrovert : Shoegazing emo depressiveness 6
Motivation: Rocking Godhead attitude 1
Individuality- conformity : Sheep factor 5
Humour/lightness : Slack factor 6
which maps onto
Tempo 110
Scale - 70% minor 30% major
Change factor 8
Liveliness factor 4
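To make the mapping step concrete, here's a toy sketch in Python. The score names and formulas are invented for illustration, tuned only so that the example scores above land on roughly the example parameters:

```python
def map_scores_to_music(scores):
    """Map personality scores (0-10) onto melody-generator parameters.

    The formulas are arbitrary examples: each parameter is a simple
    linear function of one or two scores.
    """
    tempo = int(140 - 5 * scores["depressiveness"])         # gloomier -> slower
    minor_weight = round(0.1 + 0.1 * scores["depressiveness"], 2)
    change = scores["conformity"] + scores["attitude"] + 2  # variation amount
    liveliness = max(1, scores["attitude"] + scores["slack"] // 2)
    return {"tempo": tempo, "minor_weight": minor_weight,
            "change": change, "liveliness": liveliness}

# the worked example above: depressiveness 6, attitude 1, conformity 5, slack 6
print(map_scores_to_music({"depressiveness": 6, "attitude": 1,
                           "conformity": 5, "slack": 6}))
# {'tempo': 110, 'minor_weight': 0.7, 'change': 8, 'liveliness': 4}
```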
Because the problem is complex and involves data collection, scaling, mapping, and generation, my advice would be to keep everything VERY simple. Use melodies with a choice of four or five note parameters, in simple patterns, and keep the number of questions to 8 or fewer.
Start by building a melody generator that has something like the following properties
- tempo
- scale division
- liveliness (change magnitude)
- Density (rests vs notes)
Then come back and show us that, along with a list of your questions and an explanation of how you interpret the scores.
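A minimal sketch of a generator with those four properties, in Python rather than Pd (every musical choice here, scale, ranges, note lengths, is a placeholder you'd replace):

```python
import random

def generate_melody(tempo=110, scale=(0, 2, 3, 5, 7, 8, 10),  # natural minor
                    liveliness=4, density=0.7, length=16, seed=0):
    """Generate a list of (midi_note_or_None, duration_ms) steps.

    - tempo: beats per minute; each step lasts one beat
    - scale: semitone offsets from the root (MIDI 60)
    - liveliness: maximum melodic step, in scale degrees
    - density: probability a step is a note rather than a rest
    """
    rng = random.Random(seed)  # fixed seed -> repeatable melody
    beat_ms = 60000 / tempo
    degree, melody = 0, []
    for _ in range(length):
        if rng.random() < density:
            degree += rng.randint(-liveliness, liveliness)
            degree = max(0, min(2 * len(scale) - 1, degree))  # clamp to 2 octaves
            octave, idx = divmod(degree, len(scale))
            melody.append((60 + 12 * octave + scale[idx], beat_ms))
        else:
            melody.append((None, beat_ms))  # rest
    return melody

for note, dur in generate_melody(seed=1)[:8]:
    print(note, round(dur))
```

Fixing the seed matters for an experiment: the same personality parameters always produce the same melody.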
Errors using linux event objects
Hi all. I'm off and running with a Pd project on my Mac, but apparently using my wacom intuos 3 tablet to its full extent is a lost cause on Mac, so I've decided to hook up a separate Linux machine to gather and send tablet data.
I've converted my evil Dell XP laptop to a sexy dual-boot Ubuntu Gutsy machine, my tablet is all installed and working with the latest driver from linuxwacom.sf.net, and Pd-extended looks and works great. I'm using the 0.40.3 Pd-extended package (I know it's unsupported, but all the issues below are identical with the 0.39.3 release, which I tried first, so hopefully you'll indulge me anyway).
Problem is that the tablet and Pd still won't play nice together. When I use any of the linux event objects (hid, hidio, linuxevent) to open my tablet device (/dev/input/wacom), it gets most of the way there, which I can see because lots of good info shows up in the Pd console, but no events are actually generated, and I see errors in the terminal window from which I launched Pd:
evdev EVIOCGABS ioctl: Invalid argument
Can anyone help with this? I realize that this really concerns the event externals, but I'm flexible about using anything which will get the job done. If you let me know what info will help, I'll provide, and if there's any other way I can be generally helpful, let me know and I'll get right on it.
Thanks in advance,
Alex
Info which may be of use...
Here's the first line from my dmesg:
dmesg
[ 0.000000] Linux version 2.6.22-14-generic (buildd@terranova) (gcc version 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)) #1 SMP Tue Dec 18 08:02:57 UTC 2007 (Ubuntu 2.6.22-14.47-generic)
Here's what shows up in the terminal when I launch Pd via "sudo pd":
priority 8 scheduling enabled.
priority 6 scheduling enabled.
tk scaling is 1.33483483483
<init> : Avifile RELEASE-0.7.47-070916-12:47-4.1.3
<init> : Available CPU flags: fpu vme de pse tsc msr mce cx8 sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 tm pbe up est tm2
<init> : 1200.00 MHz Intel(R) Pentium(R) M processor 1600MHz detected
Here are the errors that show up in the terminal when attempting to open /dev/input/wacom with any of the event objects:
evdev EVIOCGABS ioctl: Invalid argument
evdev EVIOCGABS ioctl: Invalid argument
evdev EVIOCGABS ioctl: Invalid argument
evdev EVIOCGABS ioctl: Invalid argument
(...lots more of this...)
Here's what shows up in the Pd console (looks good):
info: open 1
info: device 6
info: total 0
info: poll 25
info: range key btn_0 0 0
info: range key btn_1 0 0
info: range key btn_2 0 0
info: range key btn_3 0 0
info: range key btn_4 0 0
info: range key btn_5 0 0
info: range key btn_6 0 0
info: range key btn_7 0 0
info: range key btn_0 0 0
info: range key btn_1 0 0
info: range key btn_2 0 0
info: range key btn_3 0 0
info: range key btn_4 0 0
info: range key btn_0 0 0
info: range key btn_1 0 0
info: range key btn_2 0 0
info: range key btn_3 0 0
info: range key btn_4 0 0
info: range key btn_5 0 0
info: range key btn_6 0 0
info: range key btn_7 0 0
info: range key btn_10 0 0
info: range key btn_11 0 0
info: range key btn_12 0 0
info: range rel rel_wheel 0 0
info: range abs abs_x 0 54204
info: range abs abs_y 0 31750
info: range abs abs_rx 0 4096
info: range abs abs_ry 0 4096
info: range abs abs_rz -900 899
info: range abs abs_throttle -1023 1023
info: range abs abs_wheel 0 0
info: range abs abs_pressure 0 0
info: range abs abs_distance 0 0
info: range abs abs_tilt_x 0 0
info: range abs abs_tilt_y 0 0
info: range abs abs_misc 0 0
info: range msc msc_serial 0 0
info: vendorID 0x0026
info: productID 0x01e9
info: name Wacom Intuos3 6x11
eom
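For what it's worth, EVIOCGABS is the ioctl the hid/linuxevent externals use to read each absolute axis's range, and the request number it sends encodes the size of struct input_absinfo. One speculative cause of the "Invalid argument" (EINVAL) above is a mismatch between the struct size the external was built against and the one the running kernel expects; the axis may also simply be unsupported on the device. A sketch of how the request number is composed, following the Linux _IOR encoding:

```python
# Linux ioctl request layout: direction (2 bits) | size (14) | type (8) | nr (8)
_IOC_READ = 2

def _IOR(type_char, nr, size):
    """Build a read-direction ioctl request number (Linux encoding)."""
    return (_IOC_READ << 30) | (size << 16) | (ord(type_char) << 8) | nr

def EVIOCGABS(abs_code, absinfo_size=24):
    """EVIOCGABS(abs) = _IOR('E', 0x40 + abs, struct input_absinfo).

    struct input_absinfo is 24 bytes on recent kernels (six __s32
    fields); before the 'resolution' field was added it was 20 bytes,
    so a binary built against one header version can send a request
    number that a different kernel does not recognise.
    """
    return _IOR('E', 0x40 + abs_code, absinfo_size)

ABS_X = 0x00
print(hex(EVIOCGABS(ABS_X)))      # 0x80184540 with the 24-byte struct
print(hex(EVIOCGABS(ABS_X, 20)))  # 0x80144540 with the 20-byte struct
```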
Error: tabsend~: $O-hann: no such array
- To cross-synthesise two voices you must ensure that the two speakers make exactly the same utterances, phonetically aligned. This is hard, as I can tell you from experience of recording many voice artists. Even the same person will not speak a phrase the same way twice.
<< This is not possible in my experiment, as I am supposed to morph the actual conversation, so it is up to the subjects what they want to say. There is some work by Oytun Türk and Levent M. Arslan, who conducted an experiment in a passive environment (not in real time). >>
- The result is not a "timbral morph" between the two speakers. The human voice is very complex. Most likely the experiment will be invalidated by distracting artifacts.
Here's some suggestions.
- Don't "morph" the voices, simply crossfade/mix them.
<< yes I also want to do this, crossfading and mix, as I just want to create illusion, so that listner start thinking whether it is B's voice or A's voice>>
- For repeatable results (essential to an experiment) a real-time solution is probably no good. Real time processing is very sensitive to initial conditions. I would prepare all the material beforehand and carefully screen it to make sure each set of subjects hears exactly the same signals.
<< Yes, I agree, but it is a demand of the experiment; I cannot control the environment. To create a good illusion (or to distract the listener), I may add noise to the signal in real time, which would challenge the listener's brain in the identification task, so I may use tricks of that kind for the success factor. >>
- If you want a hybrid voice (somewhere between A and B) then vocoding is not the best way. There are many tools that would be better than Pure Data, which are open source and possible to integrate into a wider system.
<< Actually, I now have a little familiarity with Pure Data, so it is better for me to stick with it for a while (due to the short time). If I continue my PhD in this domain, I will explore other tools as well; for the time being this is a kind of pilot study. >>
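On the crossfade/mix suggestion above: the core of it is just two gains. Here's a sketch in plain Python (in Pd you would do the same with two [*~] objects driven by a control ramp), using an equal-power curve so loudness stays roughly constant through the fade:

```python
import math

def equal_power_crossfade(voice_a, voice_b, fade_pos):
    """Mix two equal-length sample lists.

    fade_pos 0.0 = all voice A, 1.0 = all voice B. Equal-power
    gains (cos/sin) avoid the loudness dip of a plain linear
    crossfade.
    """
    theta = fade_pos * math.pi / 2
    gain_a, gain_b = math.cos(theta), math.sin(theta)
    return [gain_a * a + gain_b * b for a, b in zip(voice_a, voice_b)]

# toy "voices": two short sine tones at 44.1 kHz
sr = 44100
a = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr // 10)]
b = [math.sin(2 * math.pi * 330 * n / sr) for n in range(sr // 10)]
halfway = equal_power_crossfade(a, b, 0.5)  # both voices at ~0.707 gain
```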
Question 1:
Is a vocoder alone sufficient to morph/mix/crossfade between two voices?
Or should I also add a pitch-shifting module to the vocoder to get better-quality results?
Question 2:
I already tried this vocoder example, but could not change it according to my requirements. In my setup, I have a target voice (which is phonetically rich), and the source speaker speaks whatever he wants while his voice is changed into the target voice (illusion/crossfade/mix).
The changetimbre1.pd file that I attached first gives you an idea of the kind of operational interface I am looking for.
Question 3:
What should be the ideal length of the target wave file?
Before the start of the experiment, I will collect voice samples from all the participants.
I am highly obliged for your earlier help and am looking for more (greedy). Meanwhile I will once again study this vocoder example to change it according to my requirements (though I doubt I will manage it).
Thanks.