How do I pack multiple arguments into one sendOSC object, and unpack them on the other side?
OSC messages can contain (and usually do contain) routing information as well. The slashes denote the parts of the route in the same way as for a folder structure in your operating system, and you can prepend First_Message/First_Part/ to a message as you have done already with the "send".
For incoming messages you can then route the data you are expecting, maybe using [OSCroute] (I am not sure which object it is for vanilla versus extended).
I am sure that you can also route OSC messages in a similar way in the other program, as routing is part of the OSC format specification.
"First_Message" etc. can be any word you wish, containing letters and numbers, but it is best to start the message routes with a symbol (a letter), so woof23 for example.
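For anyone who wants to see the idea in code form, the address matching described above can be sketched in Python. This is a hypothetical dispatcher of my own, not Pd or any OSC library:

```python
# Minimal sketch of OSC-style address routing: one nested-dict level
# per path segment. Handler names here are made up for illustration.

def route(address, handlers):
    """Split an address like /First_Message/First_Part and walk a
    nested dict of handlers, one level per path segment."""
    node = handlers
    for part in address.strip("/").split("/"):
        if part not in node:
            return None              # no matching route
        node = node[part]
    return node

handlers = {"First_Message": {"First_Part": "volume-handler"}}
print(route("/First_Message/First_Part", handlers))  # volume-handler
print(route("/no/such/route", handlers))             # None
```

In a patch, [OSCroute] (or the receiving program) does this same walk one slash-separated level at a time.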
Playing back a sequence of text in Pure Data.
@LiamG, the first is not hard to pull off, but I would want to gather the keystrokes in the format that qlist reads (time - send - value), even when Pd is not the active application. Is there an application for OSX that allows two programs to be 'write-active' at the same time, so that the writer can type in his preferred word processor while the keystrokes are simultaneously written into a Pd patch running in the background?
@whale-av, I didn't know of the [realtime] object; that would certainly make creating the relevant qlist 'score' inside a Pd patch doable.
Given my own programming limitations, I think I'll try to make a patch that writes the time-marked qlist score while running in the background, if I can manage to have both a word processor and Pd active at the same time.
Thank you for your help so far.
Newbie stuck on canvas colour control.
I guess you need [trigger]. Check its help patch. The right slider should connect to a
[t b f] ("t" is short for trigger, "b" for bang, and "f" for float). Then
[t b f]'s left outlet will go to the left inlet of the multiplication object, and its right outlet will go to the right inlet of the multiplication object.
[t b f] has a right-to-left execution order, so the float will go out first to the right inlet of
[* ] and then the bang to its left inlet, triggering the output.
BTW, it's better if the title of your thread describes your problem better, "Newbie here stuck on a simple problem" doesn't say much, and many people might not even bother to take a look.
Also, once it's solved, it's better to edit the title and put "[SOLVED]" at the beginning...
Use of threads for an i²c I/O external: looking for a good strategy
I'm developing an audio device based on a Cubieboard2 (armhf). I use potentiometers read by an ADC that communicates with my Cubie over the i²c protocol. An i²c LCD display shows the desired parameter values.
I wrote externals in order to get data from the potentiometers (and other switches, and a rotary encoder) and to send data to the display. Everything was working, but the i²c I/O function calls were leading to clicks and pops in the audio, which was unacceptable. I use an RT-patched Debian with selected RT priorities for IRQs and audio, as I have always done with success, and I am looking for a smart way to make my Pd patches communicate with my physical interface transparently at the audio level.
Then I opened this topic: http://forum.pdpatchrepo.info/topic/9489/external-i2c-data-reader-leads-to-clicks-standalone-version-works-better , and @Eeight proposed implementing threads, kindly giving a template to show me the way.
And now I'd need some advice. I'm not a real C programmer, just learning the empirical way, and I'm reaching my limits... I made a threaded version of the external that reads the potentiometers every x milliseconds, and it works perfectly, no audio pollution anymore. But when I made a threaded version of the external that handles the i²c display (thus receiving incoming messages such as "position x y" or "write message"), it led to timing problems.
Some of the messages go uninterpreted because my writing function is threaded and takes the form of an infinite while(1) loop with a usleep(10000) instruction at the end to limit CPU load. The problem is that this leads to the loss of most of the incoming messages...
When reading potentiometer values it's fine to get them only once every 10 ms, but when the messages you send can only be interpreted once every 10 ms, you have to send them to the external with delays. That works (it's my present situation), but it is tedious and inelegant.
Would someone have an idea of how this problem could be addressed in a more elegant manner?
Here is how I implemented it :
- I create a thread running an infinite while(1) loop that calls a writing function whenever a flag equals 1, followed by a usleep(10000) instruction.
This thread runs from the beginning and waits for a nonzero flag value before going into action. It reads the string to be displayed from the object's data structure, but only when the "clocking" allows it and the flag is set.
- A "write" method receives a t_symbol: the string to display. When a "write blahblah" message is received by the object, the string "blahblah" is stored in the object's data structure, and the flag is set to 1.
- The next time the threaded loop evaluates the flag, it displays "blahblah", resets the flag to 0, and sleeps for 10 ms.
But when incoming messages arrive as bursts of, say, ten lines (for cursor positioning and writing orders), they of course flow through the code in far less than 10 ms, hence my problems... In other words, the string to be displayed can be changed several times in the object's data structure without ever being displayed, because of the relatively slow "clocking".
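The usual fix for a lossy flag like this is a thread-safe FIFO queue: the Pd-side method pushes every incoming message, and the display thread pops them one at a time, so nothing is overwritten while the thread sleeps. Here is the pattern sketched in Python for clarity (the external itself is in C, where the equivalent would be a ring buffer guarded by a pthread mutex and condition variable; all names below are hypothetical):

```python
# Sketch of replacing the single flag with a FIFO between the Pd-facing
# "write" method and the display thread. The display itself is faked.
import queue
import threading

msg_queue = queue.Queue()          # thread-safe FIFO, replaces the flag
displayed = []                     # stands in for the real i2c display

def display_thread():
    while True:
        msg = msg_queue.get()      # blocks until a message arrives
        if msg is None:            # sentinel used to stop the thread
            break
        displayed.append(msg)      # real code would write over i2c here
        # a rate limit (e.g. time.sleep(0.01)) can go here: queued
        # messages simply wait instead of being lost

def write_method(text):
    """Called from Pd's side; never blocks and never drops a message."""
    msg_queue.put(text)

t = threading.Thread(target=display_thread)
t.start()
for i in range(10):                # a burst far faster than 10 ms apart
    write_method(f"position 0 {i}")
msg_queue.put(None)
t.join()
print(len(displayed))              # 10 -- every message survived the burst
```

The key difference from the flag version: a burst of ten messages enqueues ten entries, and the slow consumer drains them in order, so rate limiting no longer implies message loss.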
Please forgive the incredible length of this message, together with the fact that I'm not posting the source yet out of basic shame over my inelegant coding style. Maybe I will finally clean it up and post it later.
I conceptualized this the other day. The main reason I wanted to make this is that I'm a little tired of how complicated Ableton Live is. I wanted to just be able to right-click parameters and tell them to follow MIDI tracks.
The big feature in this abstract is a "Midi CC Module Window" that contains an unlimited (or potentially very large) number of Midi CC Envelope Modules. In each Midi CC Envelope Module are Midi CC Envelope Clips. These clips hold a waveform that is plotted on a tempo-divided graph. The waveform is played in a loop and synced to the tempo according to how long the loop is. Only one clip can be playing per module. If a parameter is right-clicked, you can choose "Follow Midi CC Envelope Module 1" and the parameter will then follow the envelope that is looping in "Midi CC Envelope Module 1".
Midi note clips function in the same way. Every instrument will be able to select one Midi Notes Module. If you right-clicked "Instrument Module 2" in the "Instrument Module Window" and selected "Midi input from Midi Notes Module 1", then the notes coming out of "Midi Notes Module 1" would be played through the single virtual instrument you placed in "Instrument Module 2".
If you want the sound to come out of your speakers, then navigate to the "Bus" window. Select "Instrument Module 2" with a drop-down check off menu by right-clicking "Inputs". While still in the "Bus" window look at the "Output" window and check the box that says "Audio Output". Now the sound is coming through your speakers. Check off more Instrument Modules or Audio Track Modules to get more sound coming through the same bus.
Turn the "Aux" on to put all audio through effects.
Work in "Bounce" by selecting inputs like "Input Module 3" by right clicking and checking off Input Modules. Then press record and stop. Copy and paste your clip to an Audio Track Module, the "Sampler" or a Side Chain Audio Track Module.
Work in "Master Bounce" to produce audio clips by recording whatever is coming through the system for everyone to hear.
Chop and screw your audio in the sampler with highlight and right click processing effects. Glue your sample together and put it in an Audio Track Module or a Side Chain Audio Track Module.
Use the "Threshold Setter" to perform long linear modulation. Right click any parameter and select "Adjust to Threshold". The parameter will then adjust its minimum and maximum values over the length of time described in the "Threshold Setter".
The "Execution Engine" is used to make sure all changes happen in sync with the music.
IE> If you selected a subdivision of 2 and a length of 2, then it would take four quarter beats (starting from the next quarter beat) for the change to take place. So if you're somewhere in the first beat (1 e + a), you will have to wait for 2, 3, 4, and 5 to pass, and your change would happen on 6.
IE> If you selected a subdivision of 1 and a length of 3, you would have to wait 12 quarter beats starting on the next quarter beat.
IE> If you selected a subdivision of 8 and a length of 3, you would have to wait one and a half quarter beats starting on the next 8th note.
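All three examples follow the same arithmetic: a subdivision of n means units of 4/n quarter beats, and you wait `length` of those units starting from the next unit boundary. A quick sketch (this is my reading of the spec above; the function name is made up):

```python
def wait_quarter_beats(subdivision, length):
    """Quarter beats to wait: `length` notes, each 4/subdivision
    quarter beats long, counted from the next boundary."""
    return length * 4 / subdivision

print(wait_quarter_beats(2, 2))  # 4.0  (four quarter beats)
print(wait_quarter_beats(1, 3))  # 12.0 (twelve quarter beats)
print(wait_quarter_beats(8, 3))  # 1.5  (one and a half quarter beats)
```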
Pduino-based multi-arduino wireless personal midi controller network
Saw your TED video, so maybe you've already solved this problem.
In my limited work with getting Arduino and Pd to play nicely, I've found that things like Pduino and Firmata work great but can be restrictive. I had to multiplex inputs on my Arduino, which doesn't play nicely with something like Firmata that automatically reads all the pin values.
It might be better to have each Arduino on its own [comport] and differentiate the Arduinos that way. Dump pin values over each comport and keep reading them.
Here's the thread explaining what I did:
I'm a fan of your work.
Polyphonic voice management using [poly]
Keeping track of note-ons and note-offs for a polyphonic synth can be a pain. Luckily, the [poly] object can be used to take care of that for you. However, the nuts and bolts of how to use it may not be immediately obvious, particularly given its sparse help patch. Hopefully this tutorial will clarify its usefulness. It will probably be easier to follow along with this explanation if you open the attached patch. I'll try to be thorough, which hopefully won't actually make it more confusing!
To start, [poly] accepts a MIDI-style message of note number and velocity in its left and right inlets, respectively...
...or as a list in its left inlet.
The first argument is the maximum number of voices (or note-ons) that [poly] will keep track of. When [poly] receives a new note-on, it will assign it a voice number and output the voice number, note number, and velocity out its outlets. When [poly] gets a note-off, it will automatically match it with its corresponding note-on and pass it out with the same voice number.
By [pack]ing the outputs, you can use [route] to send the note number and velocity to the specified voice. For those of you not familiar, [route] will take a list, match the first element of the list to one of its arguments, and send the rest of the list through the outlet that goes with that argument. So, if you have [route 1 2 3], and you send it a list where the first element is 2, then it will pass the rest of the list to the second outlet because 2 is the second argument here. It's basically a way of assigning "tags" to messages and making sure they go where they are assigned. If there is no match, it sends the whole list out the last outlet (which we won't be using here).
[poly 4 1]
| \ \
[pack f f f] <-- create list of voice number, note, and velocity
[route 1 2 3 4] <-- send note and velocity to the outlet corresponding to voice number
At each outlet of [route] (except the last) there should be a voice subpatch or abstraction that can be triggered on and off using note-on and note-off messages, respectively. In most cases, you'll want each voice to be an exact copy of the others. (See the attached for this. It's not very ASCII art friendly.)
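For anyone who prefers code to prose, the matching rule [route] applies can be sketched in Python. This is a hypothetical model of the behavior described above, not Pd's source:

```python
def route(msg, args):
    """Mimic Pd's [route]: match msg[0] against the arguments and return
    (outlet index, remainder of list); unmatched lists go out the last
    (rejection) outlet unchanged."""
    head, rest = msg[0], msg[1:]
    if head in args:
        return args.index(head), rest
    return len(args), msg          # rejection outlet passes the whole message

# [route 1 2 3 4] receiving a (voice, note, velocity) list:
print(route([2, 60, 127], [1, 2, 3, 4]))  # (1, [60, 127])  -> outlet 2
print(route([9, 60, 127], [1, 2, 3, 4]))  # (4, [9, 60, 127]) -> last outlet
```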
The last thing I'll mention is the second argument to [poly]. This argument is to activate voice-stealing: 1 turns voice-stealing on, 0 or no argument turns it off. This determines how [poly] behaves when the maximum number of voices has been exceeded. With voice-stealing activated, once [poly] goes over its voice limit, it will first send a note-off for the oldest voice it has stored, thus freeing up a voice, then it will pass the new note-on. If it is off, new note-ons are simply ignored and don't get passed through.
And that's it. It's really just a few objects, and it's all you need to get polyphony going.
[poly 4 1]
| \ \
[pack f f f]
[route 1 2 3 4]
| | | |
( voices )
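The bookkeeping [poly] does internally can be modeled roughly like this. It's a sketch under my own assumptions (class and method names are made up), not Pd's actual implementation, but it reproduces the voice numbering, note-off matching, and voice stealing described above:

```python
class Poly:
    """Rough model of [poly n steal]: assigns voice numbers to note-ons,
    matches note-offs by pitch, optionally steals the oldest voice."""
    def __init__(self, n_voices, steal=False):
        self.n, self.steal = n_voices, steal
        self.active = {}                       # voice number -> pitch
        self.order = []                        # voice numbers, oldest first

    def note(self, pitch, velocity):
        out = []                               # (voice, pitch, velocity) tuples
        if velocity > 0:                       # note-on
            if len(self.active) >= self.n:
                if not self.steal:
                    return out                 # over the limit: ignored
                oldest = self.order.pop(0)     # send a note-off first...
                out.append((oldest, self.active.pop(oldest), 0))
            voice = next(v for v in range(1, self.n + 1) if v not in self.active)
            self.active[voice] = pitch         # ...then pass the new note-on
            self.order.append(voice)
            out.append((voice, pitch, velocity))
        else:                                  # note-off: match by pitch
            for voice, p in list(self.active.items()):
                if p == pitch:
                    self.active.pop(voice)
                    self.order.remove(voice)
                    out.append((voice, pitch, 0))
                    break
        return out

p = Poly(2, steal=True)
print(p.note(60, 100))   # [(1, 60, 100)]
print(p.note(64, 100))   # [(2, 64, 100)]
print(p.note(67, 100))   # [(1, 60, 0), (1, 67, 100)]  voice 1 is stolen
```

The last line shows voice stealing in action: the third note-on exceeds the two-voice limit, so the oldest voice gets a note-off before the new note-on is passed through on the freed voice number.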
Better sounding guitar distortion ... beyond [clip~] and [tanh~]
You actually should upsample, lowpass, distort, and lowpass again. The spectra of digital signals are periodic; it's technically not limited to the sample rate. The sample rate determines the size of the period. For real signals, you have frequencies from 0 to the Nyquist frequency, and then everything between Nyquist and the sample rate is a mirror image of the spectrum below Nyquist (you could think of them as the aliased frequencies). That defines one period, and it gets repeated further up the frequency range. In other words, you have the spectrum from 0 to SR, and that gets repeated at SR to 2*SR, and again at 2*SR to 3*SR, and so on.
Now, when you upsample, the parts of the spectrum above the original Nyquist will fall below the new Nyquist. And when you send it through [tanh~], those frequencies will produce new frequencies, some of which will alias in the new sample rate, and some of which will fall below Nyquist when you downsample back to the original sample rate. You probably don't want that. So, you'll need to filter after you upsample to remove the repeated spectrum. And the [tanh~] will produce so many partials that you'll need to filter again before you downsample.
I would recommend upsampling by a factor of at least 8. You'll still get some aliasing, but what gets aliased will probably be masked. I think Pd only lets you upsample by powers of two, so the next one would be a factor of 16. That should be high enough, though it could be CPU intensive. As for the filters, they should be just far enough below the original Nyquist that most or all of what is above it is filtered out. I would recommend using [lp10_cheb~] for this as it has a very steep roll-off. You could probably set it to about 18kHz without aliasing.
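The chain described above (upsample, lowpass, distort, lowpass, downsample) can be sketched numerically. This is a rough Python illustration with a crude windowed-sinc FIR standing in for [lp10_cheb~]; the function names, drive amount, and filter settings are my own choices, not anything from Pd:

```python
import math

def fir_lowpass(x, cutoff, taps=63):
    """Crude windowed-sinc FIR lowpass; cutoff is a fraction of the
    sample rate (0.5 = Nyquist). Stand-in for a steep Chebyshev filter."""
    m = taps // 2
    h = []
    for i in range(taps):
        n = i - m
        s = 2 * cutoff if n == 0 else math.sin(2 * math.pi * cutoff * n) / (math.pi * n)
        w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (taps - 1))  # Hann window
        h.append(s * w)
    return [sum(h[k] * x[i - k] for k in range(taps) if 0 <= i - k < len(x))
            for i in range(len(x))]

def oversampled_tanh(x, factor=8):
    """Upsample -> lowpass -> distort -> lowpass -> downsample."""
    up = [0.0] * (len(x) * factor)
    for i, v in enumerate(x):
        up[i * factor] = v * factor           # zero-stuff and restore gain
    up = fir_lowpass(up, 0.45 / factor)       # remove the repeated spectrum
    up = [math.tanh(3.0 * v) for v in up]     # the nonlinearity
    up = fir_lowpass(up, 0.45 / factor)       # remove partials above old Nyquist
    return up[::factor]                       # back to the original rate

sig = [math.sin(2 * math.pi * 0.1 * n) for n in range(256)]
out = oversampled_tanh(sig)                   # distorted, mostly alias-free
```

The cutoff of 0.45/factor sits just below the original Nyquist in the upsampled domain, matching the advice above: one pass to kill the spectral images after zero-stuffing, one pass to kill the new partials [tanh~] generates before returning to the original rate.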
Route: using variable parameters to route data. HOW TO???
Hi all pd lovers....
So I'm trying to route some messages (lists of numbers, for example) depending on the first item. But this first item can change.
Say for example i have a variable called "var".
Now var = 22, and I have these messages:
| 11 34 43 23 111 (
| 22 32 1 453 234 (
and I make a route:
[route 22] to take the messages beginning with 22.
But 22 is var, and var can change. So if for example var = 10, I now want the messages beginning with 10.
In summary, I want to do this: [route var], where var is a variable. How can I do that?
thank you !!
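To be explicit about what a [route var] would have to do: compare the first element of each incoming list against the current value of var and pass the tail through on a match. (If I remember right, a vanilla [route] with a single argument gets a right inlet that sets the match value, which is exactly this behavior; check the [route] help patch.) A hypothetical Python sketch of the rule:

```python
# Hypothetical sketch of [route var]: the match value is a variable.
def route_var(msg, var):
    """Return the tail of msg when its first element equals var,
    else None (the 'rejected' case)."""
    if msg and msg[0] == var:
        return msg[1:]
    return None

var = 22
print(route_var([22, 32, 1, 453, 234], var))  # [32, 1, 453, 234]
var = 10
print(route_var([22, 32, 1, 453, 234], var))  # None (no match now)
```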
Controlling from the keyboard
[route] can be used to route pairs of data to specific places.
From [keyname] you get two pieces of information: the key label and an up/down action, e.g.:
If you think of the first value as the receiver name and the second as the variable you want to pass with it, the [route] object will then output the variable associated with a receiver that matches one of its arguments. [route] also strips off the receiver name, so you are left with just the variable.
// create a list from keyname e.g. 'Up 1'
// route doesn't like lists so we strip off the list prefix
[route Up] // looks for a message starting with 'Up' and passes the next value out its outlet