MIDI to Hz, and Hz to MIDI formulas
The circles we calculated the orbits to be each proved slightly short of a true, pure circle, thus returning the satellites to the ground. Pi can more precisely be evaluated as 3.1446055, and not the perpetuation of imperfection and the maintenance of ignorance still taught today.
Sorry to be blunt, but what you are stating is awfully incorrect, and the way you proclaim to know this "hidden truth" is really awfully ignorant. So you are stating that there is a mistake in the 3rd decimal digit of pi... It so happens that this decimal place was precisely calculated centuries ago, and the result still (obviously) stands today. There are tons of different approaches to getting correct digits of pi, such as using certain convergent series as mentioned by @seb-harmonik.ar. One can manually calculate the 3rd digit of pi using this type of series, and this result has been known for more than a millennium (in the year 480, the value known was 3.1415926, which is way more precise than what you state; by the early 18th century, we knew 100 digits of pi, none of which have been "corrected" later on. See: https://en.wikipedia.org/wiki/Chronology_of_computation_of_π). There is no way that in the 1960s pi had to be defined as 3.1446... instead of 3.1415..., that is just pure ignorance and witchery.
But reading about your number on the internet, I found that this 3.1446055 appears in several articles about satellites, pyramids and all that "dark hidden world that nobody tells you about, open your eyes, sheep people controlled by the illuminati" kind of talk, but not a single time on a serious mathematical or physics website/journal/wiki. Sorry, but the world is not a dark place controlled by people with magic powers trying to keep you in the shadows... simply do not use random blogs as a source of information.
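If anyone wants to check those digits for themselves, here is a quick sketch (in Python, nothing to do with Pd) using Machin's 1706 arctangent formula, the same family of series that produced the 100 digits mentioned above:

```python
import math

# Machin's formula (1706): pi = 16*atan(1/5) - 4*atan(1/239),
# evaluated with the plain Taylor series for atan(x) = x - x^3/3 + x^5/5 - ...
def atan_series(x, terms=20):
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

pi_estimate = 16 * atan_series(1 / 5) - 4 * atan_series(1 / 239)
print(pi_estimate)  # 3.141592653589793
print(math.pi)      # agrees to full double precision; nothing close to 3.1446
```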
The assumption of equal temperament proves a repetitious vexation; it denotes ignorance, apathy, laziness, and/or unfamiliarity with the math of music. Worse, this assumption proves far too common among engineers / mathematicians / scientists / software programmers, which then deprives us musicians of usable tools; the stupidity of needing the MIDI tuning standard / Scala / etc. is the vacuum such an assumption engenders.
As for the "stupidity" of using a tempered system: that is just as stupid as using a non-tempered one, since it is simply a convention, upon which we built centuries of music. Western music has been based on it for a long time (although contemporary composers, myself included, often choose to use micro-intervals), and the decision to build MIDI around it is as logical as it gets when you think about what they were aiming at. Or should we have gone through all sorts of trouble to incorporate all kinds of stuff into the MIDI protocol (which is a Western creation), such as the possibility of having Indian micro-tonal scales? But wait, then what about gamelan scales? No wait, what about <insert ethnomusic genre here> scales?
From a practical point of view, you can still use MIDI cents, and you can also directly use frequencies if you want to precisely define the pitch of a sound. You can compose music in 12 tones, 27 tones or 193 tones if you wish to. The tools are here, and Pd allows you to do whatever you want with them (I myself have composed works using Pd and MIDI that deal with microtones and microtonal glissandi in real time).
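And since the thread is titled "MIDI to Hz": the standard equal-tempered conversion (what Pd's [mtof] and [ftom] compute, with MIDI note 69 = A4 = 440 Hz) is f = 440 * 2^((m - 69) / 12), and fractional MIDI values already give you any microtone you want. A minimal sketch in Python:

```python
import math

# MIDI note number -> frequency and back, as in Pd's [mtof]/[ftom]
# (equal temperament, A4 = MIDI 69 = 440 Hz).
def mtof(m):
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def ftom(f):
    return 69 + 12 * math.log2(f / 440.0)

print(mtof(69))       # 440.0
print(mtof(69.5))     # ~452.9 Hz, a quarter-tone above A4
print(ftom(261.626))  # ~60.0 (middle C)
```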
I hope you won't be personally offended by my message, but I can't really read this type of statement, which tries to propagate pseudo-scientific stuff, without writing a strong reply.
DIY2 - Effects, Sample players, Synths and Sound Synthesis.
Hey, mod... I posted a thread on pd-list because I want to use your compressor... and I want your opinion.
For my live looping system, I do beatbox using a Shure 58. The mic has very good quality, but I realized that the kicks and snares are recorded very loud (max signal = 4). So, I want to use a compressor (I thought of using tanh(), but some people told me that it distorts a lot and adds extra harmonics).
I want to use your compressor (almost all my FX use DIY2 effects). After several tests, I concluded that it is very difficult to compress the first peak of a kick, for example. The rest of the audio gets compressed... but the first peak is still there.
When I try to save the original sound to disk, Pd has to normalize it from 4 down to 1. And the same goes for the sound compressed with your compressor: from 4 to 1 (more or less).
So, I thought about using the compressor with attack and release at their lowest and then, for those peaks, using something like [expr~ (tanh($v1/1.5))*1.5], so it doesn't cut at 1 but at 1.5 and doesn't distort the sound so much.
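In case it helps to see what that curve does, here is roughly the shape of the [expr~] line above, sketched in Python (same 1.5 ceiling, values only illustrative):

```python
import numpy as np

def soft_clip(x, ceiling=1.5):
    # roughly what [expr~ (tanh($v1/1.5))*1.5] computes:
    # near-linear for small signals, flattening out towards +/- ceiling
    return np.tanh(x / ceiling) * ceiling

x = np.linspace(-4, 4, 9)
print(np.round(soft_clip(x), 3))
# values below ~0.5 pass nearly unchanged; a peak of 4 comes out around 1.49
```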
What do you think?
Multiple mice and keyboards as [hid] not for X input
I know I can target individual mice and keyboards with [hid], but is there a way to keep the Linux X server from using them as input?
Old thread, I know, but for anyone else who stumbles upon it...
I think what you want is the `xinput` command. First, find out what devices you have:
$ xinput list
⎡ Virtual core pointer                    id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer          id=4    [slave  pointer  (2)]
⎜   ↳ Logitech USB Trackball              id=8    [slave  pointer  (2)]
⎜   ↳ Wacom Intuos3 6x8 eraser            id=9    [slave  pointer  (2)]
⎜   ↳ Wacom Intuos3 6x8 cursor            id=10   [slave  pointer  (2)]
⎜   ↳ Wacom Intuos3 6x8                   id=11   [slave  pointer  (2)]
⎜   ↳ Logitech USB-PS/2 Optical Mouse     id=12   [slave  pointer  (2)]
⎣ Virtual core keyboard                   id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard         id=5    [slave  keyboard (3)]
    ↳ Power Button                        id=6    [slave  keyboard (3)]
    ↳ Power Button                        id=7    [slave  keyboard (3)]
    ↳ Unicomp Endura Keyboard             id=13   [slave  keyboard (3)]
You can detach the device, preventing it from controlling the X pointer/keyboard, by using `xinput float`, e.g.
$ xinput float "Logitech USB Trackball"
(or use its numeric ID.) The control stream will still come through via HID. In fact, you can use `xinput` to access the control stream as formatted text as well:
$ xinput test "Logitech USB Trackball"
motion a=1176 a=607
motion a=1177 a=608
motion a=1177 a=609
motion a=1177 a=610
button press 1
button release 1
The `xidump` command from the Linux Wacom Project can be used in a similar way.
To reattach the floated device and regain control of the X pointer:
$ xinput reattach "Logitech USB Trackball" "Virtual core pointer"
You can detach keyboard devices as well, but watch for the Enter key getting (virtually) stuck!
Pd on win7 64bit
I don't use Windows, but from what I know Pd only uses one processor core. However, there are a couple of ways around this.
One way is to split up the patch, run each part of the patch with a separate instance of Pd and use netsend or OSC to communicate between the instances. I've not tried this myself, but check out this posting on the forum: http://puredata.hurleur.com/sujet-2957-cpu-load-only-use-single-core
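As a side note on the netsend idea: Pd's netsend/netreceive protocol (FUDI) is just ASCII atoms terminated by a semicolon, so any process can talk to a patch that has, say, a [netreceive 3000] in it (the port number here is just an example). A minimal sketch in Python:

```python
import socket

# Send one control message to a Pd instance listening with [netreceive 3000]
# (TCP, the default). The message arrives in Pd as "tempo 120", which you
# could then pick apart with [route tempo].
with socket.create_connection(("127.0.0.1", 3000)) as s:
    s.sendall(b"tempo 120;\n")
```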
The other way is to use the [pd~] object. This object was added to Pd 'to allow users to embed separate Pd processes inside each other, so that the OS can schedule the processes on separate CPUs' – check out Miller Puckette's paper, Multiprocessing in pd for more information. With the [pd~] object you are able to run different instances of pd through a master patch, but without the hassle of using OSC or netsend to communicate between them.
PD sending to two devices?
You can assign discrete inputs and outputs, but not multiple outputs across different audio devices. Sound software has to access sound hardware through its drivers, and it can't handle two sets of calls to two different drivers (if someone has the proper tech jargon, please step in).
There are ways of cheating but they all stink in some way:
On Windows: VAC/audio repeater combo
You can use 'Virtual Audio Cable' (VAC) to create virtual devices that you can use for input and output between software. You can piggy-back that audio to different hardware devices by using the audiorepeater utility. So you can set up VAC to make 2 cables (VAC1 and VAC2). PD will use VAC1 and VAC2 as outputs. (Since neither VAC1 nor VAC2 is hooked up to real hardware, you don't hear anything.) Then you run 2 instances of audiorepeater: one instance routes VAC1 to 'Onboard speaker out' and the other routes VAC2 to 'USB audio device out'.
Downside: Latency, Latency, Latency
The sound is delayed through each step in the chain, and the audio repeater latency can only go so low before you get scratchy static and dropouts.
On Linux: ALSA virtual device
You can make a virtual soundcard using ALSA that can combine the outputs of two audio cards into one
'virtual' one and then set JACK to output to that device (the setup of which is too complex to explain here).
Each soundcard has its own timing, and there's no way to keep them in sync, so you can't control the latency and you'll have clicks, pops, dropouts, and tons of underruns.
On Mac OS X:
I haven't tried it, but I believe you can also make an aggregate device that combines multiple soundcards.
There is also a VAC/audiorepeater type program you can use called Soundflower, but like VAC it's really only for routing sound between apps, not between hardware.
Long story short, between the latency introduced by the software and the clock timing differences between soundcards, you can't really get anything usable.
FFT freeze help
Brace for wall of text:
My patch is still a little messy, and I think I'm still pretty naive about this frequency domain stuff. I'd like to get it cleaned up more (i.e. less incompetent and embarrassing) before sharing. I'm not actually doing the time stretch/freeze here since I was going for a real time effect (albeit with latency), but I think what I did includes everything from Paulstretch that differs from the previously described phase vocoder stuff.
I actually got there from a slightly different angle: I was looking at decorrelation and reverberation after reading some stuff by Gary S. Kendall and David Griesinger. Basically, you can improve the spatial impression and apparent source width of a signal if you spread it over a ~50 ms window (the integration time of the ear). You can convolve it with some sort of FIR filter that has allpass frequency response and random phase response, something like a short burst of white noise. With several of these, you can get multiple decorrelated channels from a single source; it's sort of an ideal mono-to-surround effect. There are some finer points here, too. You'd typically want low frequencies to stay more correlated since the wavelengths are longer. This also gives a very natural sounding bass boost when multiple channels are mixed.
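To make that a bit more concrete, here is a rough sketch (Python/numpy, not Pd, and the numbers are only illustrative) of building one such random-phase, flat-magnitude FIR:

```python
import numpy as np

def decorrelation_fir(length=2205, seed=None):
    # unit magnitude at every frequency, random phase, back to the time
    # domain; 2205 taps is about 50 ms at 44.1 kHz
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, length // 2 + 1)
    phase[0] = 0.0                    # keep DC real
    spectrum = np.exp(1j * phase)     # |H(f)| = 1 everywhere
    return np.fft.irfft(spectrum, length)

# two different filters give two decorrelated channels from one mono source:
# np.convolve(mono, decorrelation_fir(seed=1)) and
# np.convolve(mono, decorrelation_fir(seed=2))
```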
Of course you can do this in the frequency domain if you just add some offset signal to the phase. The resulting output signal is smeared in time over the duration of the FFT frame, and enveloped by the window function. Conveniently, 50 ms corresponds to a frame size of 2048 at 44.1 kHz. The advantage of the frequency domain approach here is that the phase offset can be arbitrarily varied over time. You can get a time variant phase offset signal with a delay/wrap and some small amount of added noise: not "running phase" as in the phase vocoder but "running phase offset". It's also sensible here to scale the amount of added noise with frequency.
Say that you add a maximum amount of noise to the running phase offset: now the delay/wrap part is irrelevant and the phase is completely randomized for each frame. This is what Paulstretch does (though it just throws out the original phase data and replaces it with noise). This completely destroys the sub-bin frequency resolution, so small FFT sizes will sound "whispery". You need a quite large FFT of 2^16 or 2^17 for adequate "brute force" frequency resolution.
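Here's a rough sketch of that per-frame phase randomization (Python/numpy; the framing and overlap-add bookkeeping are only hinted at in the comment):

```python
import numpy as np

def randomize_frame(frame):
    # keep each bin's magnitude, throw away its phase, resynthesize
    n = len(frame)
    window = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)  # periodic Hann
    spectrum = np.fft.rfft(frame * window)
    magnitude = np.abs(spectrum)
    phase = np.random.uniform(-np.pi, np.pi, len(spectrum))
    return np.fft.irfft(magnitude * np.exp(1j * phase), n) * window

# chop the input into frames of e.g. 2**16 samples at 75% overlap,
# run each one through randomize_frame(), and overlap-add the results
```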
You can add some feedback here for a reverberation effect. You'll want to fully randomize everything here, and apply some filtering to the feedback path. The frequency resolution corresponds to the reverb's modal density, so again it's advantageous to use quite large FFTs. Nonlinearities and pitch shift can be nice here as well, for non-linear decays and other interesting effects, but this is going into a different topic entirely.
With such large FFTs you will notice a quite long Hann window shaped "attack" (again 2^16 or 2^17 represents a "sweet spot" since the time domain smearing is way too long above that). I find the Hann window is best here since it's both constant voltage and constant power for an overlap factor of 4. So the output signal level shouldn't fluctuate, regardless of how much successive frames are correlated or decorrelated (I'm not really 100% confident of my assessment here...). But the long attack isn't exactly natural sounding. I've been looking for an asymmetric window shape that has a shorter attack and more natural sounding "envelope", while maintaining the constant power/voltage constraint (with overlap factors of 8 or more). I've tried various types of flattened windows (these do have a shorter attack), but I'd prefer to use something with at least a loose resemblance to an exponential decay. But I may be going off into the Twilight Zone here...
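For what it's worth, the constant-voltage / constant-power claim is easy to check numerically (Python/numpy sketch, periodic Hann, overlap factor 4):

```python
import numpy as np

n, hop = 2048, 2048 // 4
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n) / n)  # periodic Hann

total = np.zeros(3 * n)   # sum of overlapped windows (voltage)
power = np.zeros(3 * n)   # sum of overlapped squared windows (power)
for start in range(0, len(total) - n + 1, hop):
    total[start:start + n] += w
    power[start:start + n] += w * w

middle = slice(n, 2 * n)  # ignore the ramp-up/ramp-down at the ends
print(total[middle].min(), total[middle].max())  # both ~2.0: constant voltage
print(power[middle].min(), power[middle].max())  # both ~1.5: constant power
```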
Anyway I have a theory that much of what people do to make a sound "larger", i.e. an ensemble of instruments in a concert hall, multitracking, chorus, reverb, etc. can be generalized as a time variant decorrelation effect. And if an idealized sort of effect can be made that's based on the way sound is actually perceived, maybe it's possible to make an algorithm that does this (or some variant) optimally.
Pitch (Attack) Tracker - Bonk~ vs. Sigmund~
I'm not sure if you're replying to something I've said but I'll assume you are.
By 'musical transients' I meant pitched input sources (and their respective onsets). Instruments, voice, etc... I agree that [bonk~] is better suited for percussive sounds.
Maybe an example will help explain what I intended to say:
In some cases I want to, in realtime, fill several arrays with content from a microphone. Now I have no idea of what the said content is going to be beforehand, but I know it has to be useful (i.e. intelligible and sound good when processed). From experimentation it seems that [bonk~], while useful for triggering buffer recordings, will also trigger erroneously (for this case) on other sounds such as coughs, clicks, and general short sounds. This means that a buffer of maybe 1 or 2 seconds has a few ms of a transient sound and the rest noise, which isn't great.
Using [fiddle~] and [sigmund~] on the other hand means that more likely than not the buffer is filled with 'useful' material. Such as someone speaking, a motorbike or an instrument. I find these objects are good at giving me onset points for longer, interesting sounds from a microphone.
Btw, I think you're right that [bonk~] is based on calculating the distribution of energy in the frequency spectrum. It's tricky, however, to adjust the frequency bins for the right frequency range, say 300-3000 Hz for human voices. IIRC the default is 11? You need to do quite a lot of adjustment to get the right number of bins, make sure they start from the correct position, and set the bandwidth accordingly. Even after doing that, [bonk~] would still go crazy on sounds we didn't want. In the end, [fiddle~] was a cleaner solution.
p.s. I may be talking out my arse as well
How do I make a patch more CPU efficient?
well, the main thing to do is just really look at your patch and see if you can find ways to simplify it. That will get you a long way there.
here are some other things i have noticed while optimizing patches:
* avoid using GUI objects unless really necessary
* avoid audio rate signal objects when it is possible to do the same thing with control objects. Only use audio rate signals when it is absolutely necessary to calculate every sample.
* if you have a patch that uses [noise~], then just use one [noise~] object for the whole patch, and something like [s~ $0-noise] and then multiple copies of [r~ $0-noise]. Generally this sort of logic should be applied wherever possible in your patch. For example, if you need a note, and then also another note one octave higher, just use a [phasor~] for the original note, and then use [*~ 2] and [wrap~] to double the pitch for the next octave. This will be cheaper than two [phasor~] objects.
* [tabread4~] is pretty hungry because of its 4-point interpolation. I found that making a simple 2-point interpolation for sample playback worked ok (use 2 [tabread~] objects, and a [+~ 1] before the 2nd one, and then send the output of both [tabread~] objects into the left inlet of [*~ 0.5] to average them; see the sketch after this list).
* [*~ 0.5] is more efficient than [/~ 2] (no idea why... but...), and also, if I remember correctly, [+~ -2] may have been more efficient than [-~ 2]
* keep your delay lines as short as possible
* for stereo, you will often have things that are common to both the left and right channel. Only calculate these once, and then send to both channels, rather than calculating individually twice.
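Re: the 2-point interpolation point above, here is the general idea in a quick Python sketch (not Pd; averaging the two neighbouring samples, as described in the list, is just the frac = 0.5 case of this):

```python
import numpy as np

def read_linear(table, position):
    # 2-point (linear) interpolation: only two table reads and one
    # multiply-add per sample, versus four reads and a cubic polynomial
    # for [tabread4~]-style interpolation
    i = int(np.floor(position))
    frac = position - i
    return table[i] + frac * (table[i + 1] - table[i])

table = np.sin(np.linspace(0, 2 * np.pi, 64))
print(read_linear(table, 10.25))  # a quarter of the way from table[10] to table[11]
```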
"audio i/o stuck"
For each program that is using the ASIO drivers, an ASIO control panel will pop up in the system tray (you might need to adjust your system tray preferences). Each panel controls an instance of the ASIO drivers for that program. You can open that panel and set the inputs and outputs you want to use for that instance.
I said in the above post that ASIO can connect to one program at a time... but that isn't the full explanation. Each ASIO instance connects one program to a given set of hardware inputs and outputs at a time.
So if you had a soundcard with 4 outputs, you can set up 2 programs to use the ASIO drivers: one instance goes to output1 and output2, and the other instance can go to output3 and output4.
Getting your sound program running through ASIO (which by default is connected to all inputs and outputs) while also getting sound from the browser (Firefox? Chrome? IE?) is tricky. The browser doesn't use ASIO; it uses the default Windows sound driver to access the soundcard. Try loading the browser first and then loading Pd with ASIO, or vice versa.
I've only been able to get a consistently working setup by having multiple (2 or more) soundcards. Windows controls one (browser sound, programs that don't use ASIO) and the ASIO drivers control the other (Pd, Traktor, Cubase, FL Studio, etc.). Route the sound output from both cards into an external mixer, and the mixer controls the main speakers (or monitors).
External Audio Interface mic problem
I am looking for help to solve my problem with Pure Data and my audio interface. I am using Pd as live performance software, and I need to use my external audio interface for better quality and to avoid sound delay.
The interface I am using is a Zoom R24 recorder with an audio interface and some other features. Pd can find it easily and uses it as a sound output device with no problem, but once I set the input to use the mic, the sound cuts off and the mic shows no sign of life... I have tried different drivers and different ways of turning the mic on, but no luck...
Is there any way I could use it as both audio input and output for live performing... Maybe I need some special drivers, or special commands to turn it on?
My internal audio card works fine, and the setup with the internal audio card as input and output to the external interface also works... But once I switch, no signal! What should I do?!