Multiple mice and keyboards as [hid] not for X input
@atarikai said:
I know I can target individual mice and keyboards with [hid], but is there a way to keep the Linux X server from using them as input?
Old thread, I know, but for anyone else who stumbles upon it...
I think what you want is the `xinput` command. First, find out what devices you have:
$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Logitech USB Trackball id=8 [slave pointer (2)]
⎜ ↳ Wacom Intuos3 6x8 eraser id=9 [slave pointer (2)]
⎜ ↳ Wacom Intuos3 6x8 cursor id=10 [slave pointer (2)]
⎜ ↳ Wacom Intuos3 6x8 id=11 [slave pointer (2)]
⎜ ↳ Logitech USB-PS/2 Optical Mouse id=12 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Power Button id=7 [slave keyboard (3)]
↳ Unicomp Endura Keyboard id=13 [slave keyboard (3)]
You can detach the device, preventing it from controlling the X pointer/keyboard, by using `xinput float`, e.g.
$ xinput float "Logitech USB Trackball"
(or use its numeric ID.) The control stream will still come through via HID. In fact, you can use `xinput` to access the control stream as formatted text as well:
$ xinput test "Logitech USB Trackball"
motion a[0]=1176 a[1]=607
motion a[0]=1177 a[1]=608
motion a[0]=1177 a[1]=609
motion a[0]=1177 a[1]=610
button press 1
button release 1
The `xidump` command from the Linux Wacom Project can be used in a similar way.
To reattach the floated device and regain control of the X pointer:
$ xinput reattach "Logitech USB Trackball" "Virtual core pointer"
You can detach keyboard devices as well, but watch for the Enter key getting (virtually) stuck!
Set up the path for abstractions
hey,
I don't know, but I wonder if maybe there isn't a more general problem going on right now (though it seems simple and weird, so I don't get it).
I am also having trouble loading abstractions, even though I have done it quite a lot before. (I posted this problem a few days ago, but I'm reposting it in response to your problem in case it is a more global issue; my other abstractions, apart from the ones mentioned below, load fine, so I don't understand it.)
specifically, was the problem with the path not being recognised?
anyway -
Hello everyone,
This is a strange problem because I have loaded libraries and things with the 'Paths' dialog under the file menu before and had no problems.
I am trying to get Chris McCormick's s-abstractions to load. The folder is in the same folder as my other libraries and it is listed in both the "paths" and "startup" areas. It wasn't before but I added it.
I am running Windows 7 64-bit. Another oddity I noticed: in my Program Files (x86) folder (which is where the 32-bit programs live) I have Pd installed in a folder simply called pd, while in the plain Program Files folder (without the x86, where the 64-bit programs live) there is a folder called "Pd-0.42.5-extended".
I wonder if that couldn't be the problem. The s-abstractions folder is included in both though...
Hopefully somebody has some idea about this...
(Anyway, sorry to post twice, but I hoped maybe there was a common problem there.)
BECAUSE you guys are MIDI experts, you could well help on this...
Dear Anyone who understands virtual MIDI circuitry
I'm a disabled wannabe composer who has to use a notation package and mouse, because I can't physically play a keyboard. I use Quick Score Elite Level 2 (it doesn't have its own forum) and I'm having one HUGE problem with it that's stopping me from mixing, literally! I can see it IS possible to do what I want with it; I just can't get my outputs and virtual circuitry right.
I've got 2 main multi-sound plug-ins I use with QSE: Sampletank 2.5 with Miroslav Orchestra, and Proteus VX. Now if I choose a bunch of sounds from one of them, each sound comes up on its own little stave and slider, complete with places to insert plug-in effects (like EQ and stuff). So far, so pretty.
So you've got, say, 5 sounds. Each one is on its own stave, so any notes you put on that stave get played by that sound. The staves have controllers so you can control the individual sound's velocity/volume/pan/aftertouch etc. They all work fine. There are also a bunch of spare controller numbers; the documentation with QSE doesn't really go into how you use those. It's a great program but its customer relations need sorting: no forum, and the Canadian guys who wrote it very rarely answer e-mails in a meaningful way, hence me having to ask this here.
Except the sliders don't DO anything! The only one that does anything is the one the main synth is on. That's the only one that takes any notice of the effects you use, which means you're putting the SAME effect on the WHOLE SYNTH, not just on one instrument sound you've chosen from it. Yet the slider the main synth is on looks exactly the same as all the other sliders. The other sliders just slide up and down without changing the output sounds in any way. Neither do any effect plug-ins you put on the individual sliders change any of the sounds. The only time they work is if you put them on the main slider that the whole synth is sitting on, and then, of course, the effect is applied to ALL the sounds coming out of that synth, not just the single sound you want to alter.
I DO understand that MIDI isn't sounds, it's instructions to make sounds, but if the slider the whole synth is on works, how do you route the instructions to the other sliders so they accept them, too?
Anyone got any idea WHY the sounds aren't obeying the sliders they're sitting on? Oddly enough, single-shot plug-ins DO obey the sliders perfectly. It's just the multi-sound VSTs whose sounds don't individually want to play ball.
Now when you select a VSTi, you get 2 choices: assign it to a track or use All Channels. If you assign it to a track, of course only instructions routed to that track will be picked up by the VSTi. BUT they only go to the one instrument on that VST channel. So you can then happily apply effects to the sound on Channel One. I can't work out how to route the effects for the instrument on Channel 2 to Channel 2 in the VSTi, and so on. Someone told me on another forum that because I've got everything on All Channels, the effects signals are cancelling each other out, but I can't find out anything more about this at the moment.
I know that, theoretically, if I loaded a separate instance of the whole synth for each part and just used one instrument from each instance, that would work. It does. Thing is, with Sampletank I got Miroslav Orchestra, and you can't load PART of Miroslav; it's all or nothing. So if I wanted 12 instruments that way, I'd have to have 12 copies of Miroslav in memory, and you just don't get enough memory in a 32-bit PC for that.
To round up: what I'm trying to do is set things up so I can send separate effects (EQ etc.) to separate virtual instruments from ONE instance of a multi-sound sampler (Proteus VX or Sampletank). I know it must be possible because the main synth takes the effects OK; it's just routing them to the individual sounds that's thrown me. I know you get one-shot sound VSTis, but, no offence to any creators here, the sounds usually ain't that good from them. Besides, all my best sounds are in Miroslav/Proteus VX and I just wanted to be able to create/mix pieces using those.
I'm a REAL NOOOB with all this so if anyone answers - keep it simple. Please! If anyone needs more info to answer this, just ask me what info you need and I'll look it up on the program.
Yours respectfully
ulrichburke
Reactable-like: physical-virtual angle problem
Hi,
I'm building a Reactable-like table and have a "virtual-physical" problem.
I created different "cubes" to put on the table, recognized by reacTIVision, each one associated with an object in Pd.
One of them can control 4 parameters with a single cube: when I put it on the table, 4 buttons are drawn near the cube (in GEM), and when I touch one of these buttons it activates control of the corresponding parameter; when I rotate the cube, it changes the selected parameter's value.
Example: touch the button for param 1 and rotate the cube to 120 deg, then touch the button for param 2 and rotate to 90 deg (the value for param 1 will be 120 and for param 2 will be 90). When I touch the button for param 1 again, it recalls the value of 120 deg.
But my problem is that the stored (virtual) value is reset to 120 degrees while physically the cube sits at 90 degrees, so when I start to rotate it again it takes the physical value and starts from 90 deg (because the cube is physically at that angle).
My idea to solve this was to add or subtract only the variation of the angle to the stored one, so if I rotate the cube 30 deg more it adds that to the initial 120 = 150.
I don't know exactly how to do this (newbie on Pd), because, for example, if I add 40 deg to 350 the result should wrap to 30 deg, not 390.
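For the arithmetic itself, the wrap-around is just a modulo after the addition. Here is a minimal sketch in plain Python (the function names are invented for illustration; in Pd something similar could be built with [+ ] and [mod 360], or an [expr] expression). The second helper shows one way to get a wrap-safe difference between two successive fiducial readings:

```python
def apply_rotation(stored_angle, delta):
    """Add only the change in physical angle to the stored value and
    wrap the result into the 0-359 degree range."""
    return (stored_angle + delta) % 360

def signed_delta(prev_reading, new_reading):
    """Smallest signed difference between two fiducial angle readings,
    so a jump from 350 to 10 counts as +20 degrees rather than -340."""
    return (new_reading - prev_reading + 180) % 360 - 180

print(apply_rotation(350, 40))   # -> 30, not 390
print(apply_rotation(120, 30))   # -> 150, as in the example above
print(signed_delta(350, 10))     # -> 20  (crossing 0 clockwise)
print(signed_delta(10, 350))     # -> -20 (crossing 0 the other way)
```

Python's % already wraps negative results back into 0-359; if you build this in Pd, check how your modulo object treats negative inputs.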
I'm sorry for my bad English; it is quite difficult to explain.
If someone knows how to calculate this in degrees or radians I would be happy.
Many thanks in advance
Nico
Trigger / gate / pass-through issue
Hello,
I have a big problem.
Situation:
reacTIVision gives the following data:
I have 3 data sources: X, Y, Angle
I have a Job ID
I have an Item ID
Item IDs are 1, 2, 3, 4
I have 4 toggles
I have 4 GEM quads
Every GEM quad gets its X, Y, Angle from the 3 data sources
If I put one fiducial over the camera I get a new Job ID, a fixed Item ID, and X, Y, Angle
If Item ID = 1 then toggle one = true
If Item ID = 2 then toggle two = true
If Item ID = 3 then toggle three = true
If Item ID = 4 then toggle four = true
If toggle one is true, then GEM quad one is on, i.e. visible
If toggle two is true, then GEM quad two is on, i.e. visible
If toggle three is true, then GEM quad three is on, i.e. visible
If toggle four is true, then GEM quad four is on, i.e. visible
Now comes the problem:
If I move Item ID 1, I get its X, Y, and Angle and send them to GEM quad one.
If I move Item ID 2, I get its X, Y, and Angle; how can I send them to GEM quad two and not to GEM quads one, three, and four?
I need some kind of gate with a trigger inlet and some data lines that are only passed through if the trigger inlet is 1/true (see the sketch below).
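In Pd this kind of dispatch is usually done with [route 1 2 3 4] on the Item ID, or with a [spigot] per quad that its toggle opens. As a language-neutral illustration, here is a minimal Python sketch of the same logic; the names and data layout are invented for the example:

```python
# Route X/Y/Angle to the right quad by Item ID; unknown IDs are ignored,
# like messages falling out of [route]'s rightmost (reject) outlet.
quads = {
    1: {"visible": False, "x": 0.0, "y": 0.0, "angle": 0.0},
    2: {"visible": False, "x": 0.0, "y": 0.0, "angle": 0.0},
    3: {"visible": False, "x": 0.0, "y": 0.0, "angle": 0.0},
    4: {"visible": False, "x": 0.0, "y": 0.0, "angle": 0.0},
}

def on_fiducial(item_id, x, y, angle):
    """One incoming reacTIVision update: mark the matching quad visible
    and hand the data only to that quad, leaving the other three alone."""
    quad = quads.get(item_id)
    if quad is None:
        return
    quad["visible"] = True          # the toggle for this quad
    quad["x"], quad["y"], quad["angle"] = x, y, angle

# Example: moving Item ID 2 only updates quad 2.
on_fiducial(2, 0.4, 0.7, 90.0)
print(quads[2])   # visible, with the new x/y/angle
print(quads[1])   # untouched
```

The point is simply that the Item ID selects which of the four destinations receives the X/Y/Angle data, which is what [route] does with the first element of an incoming list.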
Can someone help me?
Greetz Dirk
Latency with MIDI
I'm looking for some help with a latency problem in MIDI processing using Pd under Windows XP.
Here is the situation: I'm using Pd to process MIDI data coming from electronic drums. This data is then sent on to another application (e.g. Nuendo) through virtual MIDI ports (MIDI Yoke). With '<->' standing for a MIDI connection (virtual or physical), it looks like:
drums module <-> Sound Card (Midi Ports) <-> Pure Data <-> Midi Yoke <-> Nuendo
The problem is that Pd introduces some latency... and Pd is definitely responsible for it, because the following setup does not introduce any latency:
drums module <-> Sound Card (Midi Ports) <-> Midi CC <-> Midi Yoke <-> Nuendo
(Midi CC allows virtual ports to be connected to one another.)
The last point is that the latency does not seem to depend on the patch: I tried a simple MIDI-thru patch and another patch with complex processing, and nothing changed...
Help!
Loopback devices, virtual audio devices?
I'm looking for a free solution too. I don't think my original idea is going to work out because I don't have the time to implement it. Maybe some of you have ideas for the problem I am trying to solve.
The problem is this:
I am helping a professor with some research. For his research he is doing a case study on 3 composers. He is asking them to record a narrative of their thoughts on the composition process as they compose. For this, the composers will be working on a Mac studio workstation, putting the composition together in Logic. A second computer, a PC running Audacity, will be used to record their narration. When a composer reaches what they consider a significant change in the composition, we are asking them to save their project to a new file (so we end up with a series of files showing the various stages of the composition). We would like a way, however, to map the timestamps of those files to the 'timeline' of their narrative.
Here are a few solutions that are not exactly desirable:
a. Do not stop the recording at all and make a note of what time the recording started. This means you can work out when a given bit of speech took place by adding its offset in hours, minutes, and seconds to the time at which the recording started. The problem is that this yields very large files, which are not very practical, especially considering that we have to transcribe them.
b. Have the composers start each segment of narration with a spoken timestamp: "it is now 9:15 on Tuesday...". As part of the research methodology, this creates problems with the flow of a more natural narrative of the compositional process.
c. Have the composers save each segment of narration as a separate time-stamped file. The problem here is that this takes more time and could create a lot of files that would be very annoying to work with when it comes to transcribing.
d. My idea was to have, instead of just the input from the microphone, 2 streams of audio input: one on the left channel and one on the right channel. On the left would be the recorded narrative; on the right would be an audio signal that encodes a timestamp. I was thinking of simply converting a number such as DDMMHHMM (day, month, hour, minute) into DTMF tones, which could then be translated back into a timestamp. An 8-tone DTMF sequence would be generated every 10 seconds or so; this way, as long as the narrative segment was longer than 10 seconds, it would contain a timestamp. The problem with this is that I have no way to mix such a signal with the input from the microphone. (A rough sketch of the encoding idea follows below.)
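For what it's worth, here is a minimal Python sketch of the encoding half of that idea; the sample rate, tone and gap lengths, and names are assumptions for illustration, and decoding the tones back into digits would need something like a Goertzel filter or an FFT on the right channel:

```python
import math
import time

# Standard DTMF (low, high) frequency pairs for the digits 0-9.
DTMF_FREQS = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "0": (941, 1336),
}

SAMPLE_RATE = 44100   # assumed sample rate of the recording
TONE_SEC = 0.08       # length of each digit tone (an assumption)
GAP_SEC = 0.04        # silence between digits (an assumption)

def timestamp_digits():
    """Current local time as a DDMMHHMM digit string."""
    return time.strftime("%d%m%H%M", time.localtime())

def dtmf_samples(digits):
    """Encode a digit string as a list of float samples (DTMF tone pairs)."""
    out = []
    for d in digits:
        low, high = DTMF_FREQS[d]
        for n in range(int(SAMPLE_RATE * TONE_SEC)):
            t = n / SAMPLE_RATE
            out.append(0.5 * (math.sin(2 * math.pi * low * t) +
                              math.sin(2 * math.pi * high * t)))
        out.extend([0.0] * int(SAMPLE_RATE * GAP_SEC))
    return out

# Every 10 seconds or so, a burst like this would be mixed onto the
# right channel while the microphone goes to the left channel.
stamp = timestamp_digits()
burst = dtmf_samples(stamp)
print("timestamp", stamp, "->", len(burst), "samples")
```

As for mixing it with the microphone, one option could be to do the whole thing in Pd: take the mic on one channel of [adc~], generate the tones for the other channel, and record both with a two-channel [writesf~], or feed the stereo pair to the recording PC's line input.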
Any suggestions would be greatly appreciated. Thanks.
