Some questions and problems
Hi all!
I've got some problems (just two, don't worry).
First: I'm trying to use [pix_multiblob], but every time I try to link it to [pix_texture] (linked to [pix_video] with my webcam as input), Pd literally crashes. So for the moment I use [pix_blob], but there are some functions in [pix_multiblob] I would like to use.
Second: I'm trying to map different color areas in a video stream. To do it, I link [pix_contrast] to [pix_texture] to clean up and saturate my video stream a bit, then I link in [pix_colorreduce] to create clear color areas (and it works pretty well). But I would like to know if it's possible to mask some colors (to turn them to black) so that just one color remains visible (for example, to let just red appear)?
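In case it helps, here is roughly my current chain as a .pd sketch (the [square 4] at the end is just a placeholder for whatever geo I texture onto, and a [gemwin] is assumed elsewhere):

#N canvas 0 0 450 320 10;
#X obj 50 30 gemhead;
#X obj 50 60 pix_video;
#X obj 50 90 pix_contrast;
#X obj 50 120 pix_colorreduce;
#X obj 50 150 pix_texture;
#X obj 50 180 square 4;
#X text 160 180 placeholder geo - gemwin assumed elsewhere;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 4 0 5 0;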
If anyone knows something about this stuff, please share with me ^^
Seeking Compatible Video Capture Card - Composite Video on Linux
hi all.
for a project we're working on at the moment, we've been looking for a USB composite video capture card that would work with Pd/GEM on Linux. we bought a Pinnacle PCTV USB2 that works fine on Linux with the usbvision driver, but somehow crashes GEM. we've posted the error log on the pd and gem-dev lists, with no luck so far. so here it is again in case someone around here can help:
we want to use [pix_video] with a USB video capture device (Pinnacle PCTV USB2). but when we send [1( to [pix_video], Pd says:
GEM: Start rendering
pix_videoNEW: starting transfer
cap: name Pinnacle PCTV USB 2 type 3 channels 3 maxw 720 maxh 576 minw 48 minh 32
picture: brightness 32896 depth 21571 palette 8
channel 0 name Television type 1 flags 1
channel 1 name Composite1 type 2 flags 0
channel 2 name S-Video type 2 flags 0
setting to channel 1
closing video
pix_texture: not using client storage
and all we can see is a white rectangle.
XTerm says: VIDIOCGMBUF: Invalid argument
when we stop rendering with [0(, Pd crashes.
we're on Ubuntu 6.06 LTS with Pd 0.39-2 / GEM 0.90. the USB video capture device is a Pinnacle PCTV Analog USB2.
we had no trouble getting the box to work with the video4linux drivers.
it works well with TVTime, but not with GEM. everything seems to go fine (using pix_videoNEW, listing the video channels, choosing the right one), and then comes this "closing video" message. why?
we've tried the patch on Windows XP and have no problems with [pix_video] and this video capture device.
Any ideas?
Multiple pix loading
I don't have all of your answers at the moment, but I'll start with a few. This response is long because it tries to be thorough.

You can make a [pix_image] object that receives a [gemhead]'s outlet in its inlet. You can add the file path as a creation argument, for example [pix_image /Users/Person/Folder/Folder/Picturename.psd], or alternatively make an [open( message with the same path, for example [open /Users/Person/Folder/Folder/Picturename.psd(. This [open( message's outlet goes to [pix_image]'s inlet, even if [pix_image] already has an argument.

To find your picture's path, it helps to first see how Pd reads and writes files on your system: if you save any patch, Pd's console will say "saved to: /Users/person/folder/folder/Patchname.pd". It also helps to use folder and file names without spaces, though there are many ways around this in different scenarios.

If you would like some advice on matching up your 40 keys with these [pix_image] picture loaders, let me know and I will. I can also show you ways to hide your already-opened pictures, and you can move them around with [translateXYZ]. For your upside-down pictures, [rotateXYZ] should do the job, though I have not tested it: send [pix_image]'s outlet to its inlet and [rotateXYZ]'s outlet to [pix_draw], then add number boxes to the three rightmost [rotateXYZ] inlets. This layout should give you an interface similar to the one in the Pd help patch; there is a rough sketch of the whole chain below. I'll respond if you have any more questions or anything else in a similar subject area.
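As a rough, untested sketch (saved as a .pd file; the path is only an example, so swap in your own picture's path), the whole chain might look like this:

#N canvas 0 0 520 300 10;
#X obj 50 30 gemhead;
#X msg 250 30 open /Users/Person/Folder/Folder/Picturename.psd;
#X obj 50 70 pix_image;
#X obj 50 110 rotateXYZ;
#X obj 50 150 pix_draw;
#X floatatom 140 80 5 0 0 0 - - -;
#X floatatom 190 80 5 0 0 0 - - -;
#X floatatom 240 80 5 0 0 0 - - -;
#X connect 0 0 2 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 3 0 4 0;
#X connect 5 0 3 1;
#X connect 6 0 3 2;
#X connect 7 0 3 3;

The three number boxes feed the X, Y and Z rotation inlets of [rotateXYZ], and clicking the [open( message loads a new picture into [pix_image].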
Pix_mix vertical flip problem
hello
I'm using pd from the installer (version 0.39.2-extended-test5)
gem 0.91-cvs
qt 6.5
winXP
when I'm using [pix_mix], [pix_add] or [pix_composite],
every time, the right-inlet video image is vertically flipped.
if I put a vertical [pix_flip] object before it, the flip alternates on/off.
strange problem, no?
the only solution I've found for the moment is to do the compositing in 3D,
but the OpenGL window is not recordable to QuickTime (with [pix_record]),
and that's what I want to do.
does anybody get the same problem? or, better, have a solution?
(not a fancy iSight) but a normal webcam
tx Megale for your answer. i'm under OS X, using Pd-0.39.2-extended-test1.app, and that's why i posed the question: in linux i know how to locate the webcam (simply /dev/video0), but under OS X i'm simply lost. i did search for what you said and got some patches to play with, but so far i get this when trying to load [pix_videoDS]:
... couldn't create
If i take away the DS part, it comes up:
pix_videoDarwin: height 320 width 240
pix_videoDarwin: could not make new SG channnel
pix_videoDarwin: could not set SG ChannelBounds
pix_videoDarwin: could not set SG ChannelUsage
pix_videoDarwin: set SG NormalQuality
pix_videoDarwin: Using YUV
but when trying to access any help file or documentation i get this:
sorry, couldn't find help patch for "Gem/pix_videoDarwin.pd"
which means that so far, without documentation, the learning curve gets really steep..
thanks for your answer, it opened a way.. but please, if possible (if you are under OS X), could you share a little example patch?
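in the meantime, here is the bare-bones patch i'm testing with, in case i'm doing something obviously wrong (a [gemwin] with a [1( message is assumed elsewhere):

#N canvas 0 0 400 250 10;
#X obj 50 30 gemhead;
#X obj 50 60 pix_video;
#X obj 50 90 pix_texture;
#X obj 50 120 square 4;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;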
tx,
/kad
Using RAM for multiple videos?
From http://gem.iem.at/manual/ListObjects.html
pix_film - use a movie file as a pix source for image-processing
pix_movie - use a movie file as a pix source and load it immediately into the texture-buffer
I don't use Gem myself, but reading that, it seems pix_movie would load the whole video file into RAM, and probably into the video RAM on the graphics card. Not knowing much about Gem, there may be a difference between "pix for image processing" and "pix for texture", which may or may not be important.
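Going only by those descriptions (untested, since as I said I don't use Gem), the two chains would presumably differ like this, with pix_movie doing the texturing itself while pix_film hands decoded frames to [pix_texture] (the file name is just a placeholder):

#N canvas 0 0 620 250 10;
#X obj 50 30 gemhead;
#X obj 50 60 pix_film movie.mov;
#X obj 50 90 pix_texture;
#X obj 50 120 rectangle 4 3;
#X obj 330 30 gemhead;
#X obj 330 60 pix_movie movie.mov;
#X obj 330 90 rectangle 4 3;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 4 0 5 0;
#X connect 5 0 6 0;

If that is right, pix_film would be the one to use when you want to process pixels before texturing, and pix_movie when you just want to play the file.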
Adjusting individual wav source volumes
as far as i know, there is no easy way to adjust audio volumes for different applications. there is nothing more irritating than when you are listening to a quietly recorded mp3 track, for example, and you hit the ceiling after being startled by a critical fault beep or whatever.
although these may not be ideal solutions, here are a few things to remedy the situation. one would be to disable system beeps. the other would be to edit the sound files that contain these sounds and lower the levels permanently.
it would be nice if software designers would always include an instance-specific volume control in their software. however, it has to control the volume the right way. one thing i have noticed is that in windows media player, if i decrease the volume, it doesn't affect other audio applications. however, if i adjust the volume in real player, it adjusts the master volume for all applications.
hilbert~
Loopback devices, virtual audio devices?
i'm looking for a free solution too. i don't think my original idea is going to work out because i don't have the time to implement it. maybe some of you have ideas for the problem i am trying to solve.
the problem is this:
i am helping a professor with some research. for his research he is doing a case study on 3 composers. he is asking them to record a narrative of their thoughts on the composition process as they compose. for this, the composers will be working on a mac studio workstation, putting the composition together in logic. a second computer, a pc running audacity, will be used to record their sounds. when a composer reaches what they consider a significant change in the program, we are asking them to save their project to a new file (so we end up with a series of files showing the various stages of the composition). we would like a way, however, to map the timestamps of those files to the 'timeline' of their narrative.
here are a few solutions that are not exactly desirable:
a. do not stop the recording at all and make a note of what time the recording started. this means that you can calculate what time speech is taking place by adding the number of minutes and seconds (and hours) to the time at which the recording started. the problem with this is that it will yield very large files which are not very practical, especially considering that we have to transcribe these files.
b. have the composers start each segment of narration with a timestamp: "it is now 9:15 on tuesday....". the problem is that, as part of the research methodology, this interferes with the flow of a more natural narrative of the compositional process.
c. have the composers save each segment of narration as a separate time-stamped file. the problem here is that this takes more time, and could create a lot of files that would be very annoying to work with when it comes to transcribing.
d. my idea was to have, instead of just input from the microphone, 2 streams of audio input, one on the left channel and one on the right channel. on the left would be the recorded narrative. on the right would be an audio signal that encodes a timestamp. i was thinking of simply converting a number such as DDMMHHMM (day, month, hour, minute) into DTMF tones, which could then be translated back into a timestamp. an 8-tone dtmf sequence would be generated every 10 seconds or so; this way, as long as the narrative segment was longer than 10 seconds, it would contain a timestamp. the problem with this is that i have no way to mix such a signal with the input from the microphone.
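to illustrate the dtmf part: each digit is just two fixed sine tones summed (the digit '1', for example, is 697 Hz + 1209 Hz). in pd terms, the sketch below puts the mic on the left channel and one such tone pair on the right, the way i described; sequencing the eight digits of a DDMMHHMM number would still need some [metro]/[select] logic on top of this, and of course it assumes the recording could be routed through pd in the first place, which is exactly the part i don't have:

#N canvas 0 0 460 280 10;
#X obj 50 30 adc~ 1;
#X obj 200 30 osc~ 697;
#X obj 310 30 osc~ 1209;
#X obj 200 80 +~;
#X obj 200 110 *~ 0.25;
#X obj 50 170 dac~;
#X text 130 170 left = narrative / right = timestamp tones;
#X connect 0 0 5 0;
#X connect 1 0 3 0;
#X connect 2 0 3 1;
#X connect 3 0 4 0;
#X connect 4 0 5 1;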
any suggestions would be greatly appreciated. thanks.