MIDI to Hz, and Hz to MIDI formulas
I've recently taken "the tour" of Mathematics Stack Exchange. Initially, I thought it was going to be some light-hearted pictures plus a few words about the site. Fortunately, it was much better than that.
"The tour" is an interesting way of inviting you to read the rules. I think these rules also act as a reminder, a reminder of the important stuff.
You can, of course, take the "tour" by yourself, but these are some of the things you can read there.
- Ask questions, get answers, no distractions.
- This site is all about getting answers. It's not a discussion forum. There's no chit-chat.
- Get answers to practical, detailed questions.
- Focus on questions about an actual problem you have faced.
- Not all questions work well in our format. Avoid questions that are primarily opinion-based, or that are likely to generate discussion rather than answers. Questions that need improvement may be closed until someone fixes them.
As @dangrondang (F.) said, hope this helps.
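Since the thread title asks for them, here are the standard conversions; this is what Pd's [mtof] and [ftom] compute, assuming A440 equal temperament:

    #include <math.h>

    /* MIDI note number -> frequency in Hz (A440 equal temperament) */
    double mtof(double midi) { return 440.0 * pow(2.0, (midi - 69.0) / 12.0); }

    /* frequency in Hz -> MIDI note number (inverse of the above) */
    double ftom(double hz) { return 69.0 + 12.0 * log2(hz / 440.0); }

Here 69 is the MIDI number of A440, and each semitone step multiplies the frequency by 2^(1/12).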
New to the forum and new to PD
Hi, I am new to Pd and have been playing with Max/MSP for six months, on and off.
I am interested in sound modelling. I am a guitar player, and I'm also very interested in live interactive jamming over the internet and in live broadcast possibilities with interaction.
Sadly, two of the best showcases for this sort of thing, Deep Rock Drive and Live at the Paradiso, have not continued, but I think they will come back in some shape or form; they were both really rather good. I also started to develop my own concept in the UK before moving to Sweden, where I am continuing to develop the idea.
I am a fan of Kompoz and Ninjam and have discovered some other interesting initiatives in these directions also.
I have also been working with Linux this summer, converting my netbook, trying to get Reaper to run in Linux, and playing with Ardour and Audacity; I have been testing various of the music-based and multimedia-based distros. Mostly I work on an Intel Mac and use Reaper and Logic. As my interest in plugins, and in porting Windows-based plugins for use on Mac and Linux, has increased, I have become more curious about, and frustrated by, how many plugins cater solely to Windows-based systems.
I am very interested in the open-source ethos, and particularly passionate about music education and its wider application to encourage learning across other academic and vocational disciplines: leveraging the cool, so to speak.
All of these are relatively new interests. I have spent the past 10 years recovering from a successful career in property development in London; I retired to the country and pursued artistic and holistic goals, finding I had sold my soul at the wrong crossroads, to pursue the wrong muse, i.e. money and not art. I am now blissfully happy and, it must be said, considerably less endowed with the devil's gold.
I have landed at Pd following a complete conversion, inspired by one of your members' inventions, Slice Jockey and Live Decomposer. The modelling possibilities of DSP convolution have occupied my studies almost exclusively in recent months, and Slice Jockey really was akin to Moses meeting the burning bush for me: a truly amazing piece of work.
So here I am with my first technical question: how does one load a patch that exists as code into Pd? I have downloaded this link http://www.toxonic.de/#guitar_rack and cannot work out what to do with the page of code that results from right-clicking the link. With the age-old admonition RTFM ringing in my ears, I have searched and looked and failed to find anything to give me a clue. (If you don't know, and I don't, this stuff can seem wholly baffling.) Any suggestions gratefully received.
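My best guess so far: a Pd patch is apparently just plain text (the file seems to start with a line like "#N canvas ...;"), so perhaps saving that page of code as guitar_rack.pd and then opening it from Pd is all it takes?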
Compiled external saying "couldn't create" when added to Pd
I'm having trouble getting my external to work. It compiles with five warnings:
sineq.c:48: warning: unused variable ‘x’
sineq.c:49: warning: unused variable ‘in1’
sineq.c:50: warning: unused variable ‘in2’
sineq.c:51: warning: unused variable ‘in3’
sineq.c:52: warning: unused variable ‘in4’
The "make" completes successfully, but I get this warning message:
/usr/bin/ld: warning: cannot find entry symbol xport_dynamic; defaulting to 00000000000007f0
But when I try to add it in Pd it says "couldn't create". I've looked at the pan~ tutorial and the d_osc.c file, as recommended, which did help. I tried to take the pieces from the two that I thought were applicable to my situation, but I'm still having some issues.
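One lead I've turned up while searching: GNU ld apparently parses a bare "-export_dynamic" as "-e xport_dynamic", i.e. "set the entry symbol to xport_dynamic", which is exactly the symbol the warning names. If the Makefile passes that flag, the spelling ld actually expects is "-Wl,--export-dynamic", and the warning should be harmless either way. Also, as far as I understand the loader, Pd only creates the object if the file name, setup function, and class name all agree (sineq.pd_linux must export a sineq_setup() that registers the class as "sineq"), so a mismatch there could explain "couldn't create" even when the build succeeds.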
Here's a link to the workflow (dropbox)
Here's a link to the C code online (pastebin)
My external is a reproduction of the sine wave equation, with four inputs and one output. The idea is to have four inlets, one each for the frequency, amplitude, phase, and vertical offset, and one outlet for the generated signal. Granted, this isn't the final equation, but it will help me understand how to create the full one once done. If you want to see the full equation I'll be using, there's a link to it below. Basically it's a 1-second periodic signal at a 44100 Hz sample rate, and the equation gives me control over the frequency, amplitude, phase, and vertical offset.
Another question I have: what do I use for the t (time) in my final equation? Is that the t_sample type in Pd, or do I need to create a for loop counting from 1 to 44100 for a 1-second, 44100-sample equation?
PS: I'm compiling on Ubuntu 10.04 using gcc.
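Here is roughly what I have in mind for the perform routine, a sketch with made-up names rather than my actual code, keeping a running phase in the object instead of looping from 1 to 44100, since Pd calls the routine once per signal block anyway:

    #include <math.h>
    #include "m_pd.h"

    typedef struct _sineq {
        t_object x_obj;
        double x_phase;   /* running phase in radians; this plays the role of t */
        t_float x_f;      /* dummy float for the main signal inlet */
    } t_sineq;

    static t_int *sineq_perform(t_int *w)
    {
        t_sineq *x     = (t_sineq *)(w[1]);
        t_sample *freq = (t_sample *)(w[2]);   /* inlet 1: frequency in Hz */
        t_sample *amp  = (t_sample *)(w[3]);   /* inlet 2: amplitude */
        t_sample *phs  = (t_sample *)(w[4]);   /* inlet 3: phase offset, radians */
        t_sample *off  = (t_sample *)(w[5]);   /* inlet 4: vertical offset */
        t_sample *out  = (t_sample *)(w[6]);
        int n = (int)(w[7]);
        double incr = 2.0 * M_PI / sys_getsr();  /* radians per sample per Hz */

        while (n--) {
            *out++ = *off++ + *amp++ * sin(x->x_phase + *phs++);
            x->x_phase += incr * *freq++;        /* advance t by one sample */
            if (x->x_phase > 2.0 * M_PI) x->x_phase -= 2.0 * M_PI;
        }
        return (w + 8);
    }

Does keeping a running phase like this look like the right approach for t?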
PD from Max, missing Umenu
Lurked for a while, first post now.
I've been using Max/MSP pretty heavily since last December, and I've fallen in love with the visual coding paradigm. For a variety of reasons PD appeals to me more than Max (though I did get used to the bells and whistles). I've replaced all my hardware effects (loopers, stompboxes, synths, sequencers) with max patches, and I'd be comfortable enough making the same over again in PD, but I'm having trouble translating a few objects that I relied on heavily in Max.
Umenu is a big one: the ability to receive numbers and output corresponding text comes in handy for scripting lots of objects and for routing sends and receives. Its ease of populating, rewriting, and deleting entries makes many complex tasks easily manageable. It also functions as a great label when I hide things away. I suspect there's an equivalent method, but nothing leaps out. Should I be looking at some sort of coll-type object? Struct?
In the DSP realm, I'm a little worried not to see rate~ or some analog of it. I use phasor~ to control my loops, and rate~ works magic on polymetric sequences. I could probably rig up a similar system by scaling and wrapping a signal, but a straight-up rate~ would be best.
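For what it's worth, the closest vanilla idiom I've found so far is sending [phasor~] through [*~ N] into [wrap~], which yields N phase-locked ramps per master cycle: fine for integer speed-ups, messier for fractional ratios. The cyclone library of Max clones might have something closer, though I haven't checked.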
In order to be able to slot effects in and out of different parts of the signal chain, I've been using combinations of bpatchers and polys, with scripting to switch between different stompbox abstractions. I found the graph on parent option, but it doesn't seem (at least not obviously) that I can use scripting on it to call up different subpatches. Please correct me if I'm wrong.
I think those three might just cover it (at least for the time being). I've found the externals ported from the Max library to be an amazing resource, and between overlapping objects, the manuals, and the answers on this forum, most of what I need to know is readily available. If those objects lack analogs in Pd I may just bite the bullet and get into Java/C/Python... at the moment I have no experience with text-based programming, but I'm just looking for an excuse, really.
I'll be hitting the manuals cover to cover as best I can in the morning, but I'd be glad to have my patches running sooner all the same.
All help is much appreciated.
Foreground/background management in a patch with [knob] and [image]
I'm trying to make a nice GUI using knobs superimposed on an image loaded by [image]. I noticed long ago that a knob created before the [image] sits in a layer under it and becomes invisible if placed where the image stands.
My workaround is to create the knobs after the image. But if I want a "two-state" image, and thus refresh it, the knobs disappear for good.
Is there a way to handle background/foreground plane numbers, or something to set my knobs (and other elements) "always on top"?
Workshop: Xth Sense - Biophysical generation and control of music
April 6, 7, 8 2011
Xth Sense – biophysical generation and control of music
2.Hinterhaus Etage 2
12059 Berlin Neukölln
The workshop offers a hands-on experience and both theoretical and practical training in the gestural control of music and bodily musical performance, deploying the brand-new biosensing technology Xth Sense.
Developed by the workshop teacher Marco Donnarumma within a research project at The University of Edinburgh, the Xth Sense is a framework for the application of muscle sounds to the biophysical generation and control of music. It consists of a low-cost, DIY biosensing wearable device and Open Source, Pure Data-based software for the capture, analysis, and audio processing of the biological sounds of the body.
Muscle sounds are captured in real time and used both as sonic source material and as control values for sound effects, enabling performers to control music simply with their bodies and kinetic energy. Forget your mouse and MIDI controllers; you will not even need to look at your laptop anymore.
The Xth Sense biosensor was designed to be easily implemented by anyone; no previous experience in electronics is required.
The applications of the Xth Sense technology are manifold: from complex gestural control of samples and audio synthesis, through biophysical generation of music and sounds, to kinetic control of real time digital processing of traditional musical instruments, and more.
First, participants will be introduced to the Xth Sense technology by its author and led through assembling their own biosensing wearable hardware using the materials provided.
Next, they will become proficient with the Xth Sense software framework: all the features of the framework will be unleashed through practical exercises.
A theoretical background on the state of the art of gestural control of music and new musical instruments will be developed by means of an audiovisual review and participatory critical analysis of relevant projects selected by the instructor.
Finally, participants will combine hardware and software to implement a solo or group performance to be presented during the closing event. At the end of the workshop, participants are free to keep the Xth Sense biosensors they built and the related software for their own use.
~ Prospective participants
The workshop is open to anyone passionate about sound and music. Musical background and education do not matter as long as you are ready to challenge your usual perspective on musical performance. Composers, producers, sound designers, musicians, and field recordists are all welcome to join our team for an innovative and highly creative experience. No previous experience in electronics or programming is required; however, participants should be familiar with digital music creation.
Participation is limited to 10 candidates.
Preregistration is required and can be done by sending an email to email@example.com
Requirements and further info
Participants need to provide their own headphones, soundcards and laptops with Pd-extended already installed.
Musicians interested in augmenting their favourite musical instrument by means of body gestures are encouraged to bring their instrument along. More information about the Xth Sense and a video of a live performance can be viewed on-line at
6-7-8 April, 11.00-19.00 daily (6-hour sessions + 1-hour break)
EUR 90, including materials (EUR 15).
A collection of GLSL effects?
I get these errors... it doesn't want to link:
error: [glsl_fragment]: shader not loaded
linking: link 9.80909e-45 7.00649e-45
linking: link 9.80909e-45 1.12104e-44
[glsl_program]: linking with uncompiled shader
[glsl_program]: Link failed!
[glsl_vertex]: Vertex_shader Hardware Info
[glsl_vertex]: MAX_VERTEX_ATTRIBS: 16
[glsl_vertex]: MAX_VERTEX_UNIFORM_COMPONENTS_ARB: 4096
[glsl_vertex]: MAX_VARYING_FLOATS: 64
[glsl_vertex]: MAX_COMBINED_TEXTURE_IMAGE_UNITS: 32
[glsl_vertex]: MAX_VERTEX_TEXTURE_IMAGE_UNITS: 16
[glsl_vertex]: MAX_TEXTURE_IMAGE_UNITS: 16
[glsl_vertex]: MAX_TEXTURE_COORDS: 8
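Reading that log back, the first line, "[glsl_fragment]: shader not loaded", looks like the real failure: the fragment shader never compiled, so [glsl_program] refuses to link, and the odd numbers after "linking: link" are just shader IDs printed as raw floats. I'd double-check the path in the message that opens the .frag file, and watch the Pd console for GLSL compile errors; the MAX_* lines are only hardware diagnostics, not errors.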
Meditation background generator
Thank you all for your replies; I didn't expect such feedback. Special thanks to jamesmcn: everything was described well, I just have a few things to add:
1 and 2. The top section performs two functions: the leftmost part is something like a "probability generator" which, driven by an LFO, defines how often a droplet sound is generated at the moment, and the rightmost part is an array containing the tones at which the [vcf~]s of the individual droplets resonate (the values 0, 2, 3, 5, 7, 8, 10 spell a C minor scale). Every 5 seconds [route] picks these tones (notes) from the array at random and sends them to the [vcf~] objects after the [stream~] abstractions. The tones are converted into frequencies in Hz to control the bandpass filters; a sketch of this follows below.
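In plain C the tone-picking boils down to something like this (the function name and the root note are illustrative, not from the patch):

    #include <stdlib.h>
    #include <math.h>

    /* the array contents: C minor scale degrees above the root */
    static const int scale[7] = {0, 2, 3, 5, 7, 8, 10};

    /* every 5 seconds: pick a random degree and turn it into a
       bandpass centre frequency in Hz */
    double random_tone_hz(int root)          /* e.g. root = 60, middle C */
    {
        int midi = root + scale[rand() % 7];
        return 440.0 * pow(2.0, (midi - 69.0) / 12.0);   /* same as [mtof] */
    }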
3. Yes, [stream~] is the meat of the synthesis, but it is not "very carefully" filtered noise, although it should be. I just tried to make it sound like water droplets, and made some variations in cutoff frequency (first creation argument) and stream density (second creation argument) to make the droplets sound more diverse. stream~.pd itself is actually a very simple noise generator, and it is the oldest part of the patch. I wanted to make something like rain noise, made this abstraction, and put it aside until it came into play.
4 and 5. You know, if you delete the mixer/processor section (except the reverb), you may not notice much difference: it just makes the left and right channels slightly different from time to time, and the merciless freezeverb mixes the left and right channels into one stream anyway. By the way, [freezeverb] is just an enhanced version of the one you can see in help -> browser -> G08.reverb.pd. The only serious difference is that it uses delay_time_counter.pd, which calculates the times for the delay lines according to this formula: t = t1 / 2^(n/numlines) - t1/2, where t1 is the largest early-reflection delay time, numlines is the total number of delay lines (28 here), and n is the current delay number (starting from 0); a code sketch follows below. I found this algorithm here: http://musicdsp.org/archive.php?classid=4#44 but changed it a bit (actually, I added the "- t1/2" term to make the echoes appear earlier). I still don't completely understand how [freezeverb~] works. To be more precise, I don't understand what [early_reflection_delay_line] actually does, but Miller Puckette used a similar [reverb-echo-del] abstraction in his example, and it works well! It makes a "power-preserving" mix, a very useful thing in recirculating reverbs.
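In code form, delay_time_counter computes roughly this (units are whatever the delay lines expect):

    #include <math.h>

    /* delay time for line n of numlines, where t1 is the largest
       early-reflection delay; the "- t1/2" term shifts all echoes earlier */
    double delay_time(int n, int numlines, double t1)
    {
        return t1 / pow(2.0, (double)n / numlines) - t1 / 2.0;
    }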
6. The two sine wave oscillators take their frequencies from the first two randomly picked notes from the array (see items 1 and 2). There are also two frequency modulators, 1846 Hz and 4 Hz sine waves, to saturate their spectrum. So it sounds a bit like noise, mainly because there are already so many sine waves coming from the [stream~] abstraction, and I thought it worth adding something at higher frequencies. The reverb smooths these oversaturated sine waves, making them sound noisy.
7. How a reverb similar to [freezeverb] works is described in the help browser; I just can't understand why the power-preserving mix works. I also tried to make a stereo reverb based on Miller Puckette's model, but a couple of experimental ones failed. This one is my best reverb ever.
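(The algebra behind the power preservation is short, at least: if two signals a and b are mixed as (a + b)/sqrt(2) and (a - b)/sqrt(2), the summed power is ((a+b)^2 + (a-b)^2)/2 = a^2 + b^2, unchanged however the energy is traded between the two lines. I believe Miller's rotation with cos and sin works for the same reason, since cos^2 + sin^2 = 1; the math checking out is not the same as it feeling intuitive, though.)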
8. Yes, and the [master] abstraction is just a place from which volume control or spectrum analysis is handy to perform. Somewhere to put all the wires, and to listen to the output.
A collection of GLSL effects?
It's been a long time since I started wondering about getting some advanced visual effects out of Pd. I know "advanced visuals" could mean a lot of different things, but let's say I am thinking of pixel stuff like depth of field, bloom, glow, and blur. I have tried pretty much everything, from basic pix effects to FreeFrame FX and GridFlow convolutions, but no matter what I do, since these effects are CPU-based, the resulting patch is always dead slow.
My first question: as far as I know, Pd was born as audio software, so does it make sense to keep pushing it into the domain of visuals?
Don't get me wrong, I love Pd and I know the amazing stuff you can get out of Gem and GridFlow. Think of all the 3D manipulation, sound visualization, video mixing, OpenCV stuff, and pmpd physics simulation, just to name a few. You can get wonderful visuals using only geos and simple texturing. But sometimes I run up against limitations, like the pixel effects mentioned above, and I wonder whether I should just leave Pd to what it's good at and move to video-driven software like vvvv or a "classic" programming environment like Processing.
I know a lot of the stuff I've been talking about could be achieved at negligible CPU cost by handing the calculations to the GPU. I think GLSL's potential is huge, and I did get some basic blur, glow, and bloom effects I found on the web working, but it still feels a little workaroundy to me (especially the multipass rendering).
Here is the second question: could OpenGL and GLSL scripting be the answer to my first question? And what do you think about having a place where we can host a (hopefully growing) collection of ready-to-use GLSL effects along with example patches? Maybe with a standard framework of objects for multi-texture effects and general GLSL handling?
Ok, that's all. Any feedback will be extremely appreciated.
Here follows a simple GLSL bloom effect applied to Gem particles (works on Mac OS X 10.5, Pd-extended, Gem 0.92.3).
Artists using Pure Data
I'm not famous but my new CD will contain many uses of Pd
Also check out the article I wrote on my switch from OS X to Linux.
Great! I have Linux/Windows/OS X installed. I discovered GNU/Linux 8 years ago. If your favorite software runs on Linux, use Linux!
Kim, the other day I discovered your work while searching for Max/MSP music. Great to see you here. I'm new to Max/Pd; as a Max user, what do you think of Pd?