Processing to communicate with Pd
Hey Arif, thanks for the reply. I have input what I think is right. Still no joy. Now it is disagreeing with CoordsCalc. I have included the code. I already had some of the things from the example. I think I just need to add integers, but I don't know how or what to put.
import processing.video.*;
import oscP5.*;
import netP5.*;
OscP5 oscP5;
NetAddress myRemoteLocation;
Capture video;
int numPixels; // number of pixels in the video
int rectDivide = 4; // the stage width/height divided by this number is the video width/height
int vidW; // video width
int vidH; // video height
int[][] colouredPixels; // the different colour references for each pixel
int[][] colourCompareData; // captured r, g and b colours
int currR; // current red value
int currG; // current green value
int currB; // current blue value
int[][] squareCoords; // x, y, w + h of the coloured areas
color[] colours; // captured colours
int colourRange = 25; // colour threshold
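// (presumably the comparison happens in CoordsCalc: a pixel counts as a match
// when all three channel differences fall within this range; an assumption,
// since the CoordsCalc class itself isn't included in this post)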
int[][] centrePoints; // centres of the coloured squares
color[] pixelColours;
boolean isShowPixels = false; // determines whether the square and coloured pixels are displayed
int colourMax = 2; // max number of colours - also adjust the number of colours added to pixelColours in setup()
int coloursAssigned = 0; // number of colours currently assigned
CoordsCalc coordsCalc;
void setup()
{
size(640, 480);
vidW = width / rectDivide;
vidH = height / rectDivide;
video = new Capture(this, vidW, vidH, 30);
noStroke();
numPixels = vidW * vidH;
colouredPixels = new int[vidH][vidW];
colourCompareData = new int[colourMax][3];
squareCoords = new int[colourMax][4];
colours = new color[colourMax];
centrePoints = new int[colourMax][2];
color c1 = color(0, 255, 0);
color c2 = color(255, 0, 0);
pixelColours = new color[colourMax];
pixelColours[0] = c1;
pixelColours[1] = c2;
coordsCalc = new CoordsCalc();
oscP5 = new OscP5(this, 12000);
myRemoteLocation = new NetAddress("127.0.0.1", 12000);
}
void captureEvent(Capture video)
{
video.read();
}
void draw()
{
background(0); // clear first; left at the end of draw(), this wiped everything just drawn
noStroke();
fill(255, 255, 255);
rect(0, 0, width, height);
drawVideo();
coordsCalc.update();
for (int i = 0; i < coloursAssigned; i++)
{
if (isShowPixels) drawSquare(i);
}
}
void mousePressed() {
/* in the following different ways of creating osc messages are shown by example */
OscMessage myMessage = new OscMessage("/test");
// NB: colourLocation is never declared anywhere in this sketch; actual
// int values (e.g. tracked coordinates) need to go here
myMessage.add(colourLocation); /* add an int to the osc message */
myMessage.add(colourLocation);
/* send the message */
oscP5.send(myMessage, myRemoteLocation);
}
/* incoming osc messages are forwarded to the oscEvent method. */
void oscEvent(OscMessage theOscMessage) {
/* print the address pattern and the typetag of the received OscMessage */
print("### received an osc message.");
print(" addrpattern: " + theOscMessage.addrPattern());
println(" typetag: " + theOscMessage.typetag());
}
void drawVideo()
{
for (int i = 0; i < coloursAssigned; i++)
{
fill(colours[i]);
rect(i * 10, vidH, 10, 10);
}
image(video, 0, 0);
noFill();
stroke(255, 0, 0);
strokeWeight(2);
rect(vidW - 4, vidH - 4, 4, 4);
}
void drawSquare(int i)
{
int sqX = squareCoords[i][0];
int sqY = squareCoords[i][1];
int sqW = squareCoords[i][2];
int sqH = squareCoords[i][3];
noFill();
stroke(0, 0, 255);
strokeWeight(3);
rect(sqX, sqY, sqW, sqH);
//stroke(0, 0, 255);
//strokeWeight(4);
rect(sqX * rectDivide, sqY * rectDivide, sqW * rectDivide, sqH * rectDivide);
line(sqX * rectDivide, sqY * rectDivide, ((sqX * rectDivide) + (sqW * rectDivide)), ((sqY * rectDivide) + (sqH * rectDivide)));
line(((sqX * rectDivide) + (sqW * rectDivide)), sqY * rectDivide, sqX * rectDivide, (sqY * rectDivide + sqH * rectDivide));
}
void keyPressed()
{
println("key pressed = " + key);
color currPixColor = video.pixels[numPixels - (vidW * 2) - 3]; // pixel near the bottom-right target rectangle
int pixR = (currPixColor >> 16) & 0xFF; // red channel
int pixG = (currPixColor >> 8) & 0xFF; // green channel
int pixB = currPixColor & 0xFF; // blue channel
if (key == 'p')
{
isShowPixels = !isShowPixels;
}
if (key == '1')
{
coloursAssigned = 1;
colourCompareData[0][0] = pixR;
colourCompareData[0][1] = pixG;
colourCompareData[0][2] = pixB;
colours[0] = color(pixR, pixG, pixB);
}
if (colourMax < 2 || coloursAssigned < 1) return; // NB: this also blocks the '0' handler below until a colour is assigned
if (key == '2')
{
coloursAssigned = 2;
colourCompareData[1][0] = pixR;
colourCompareData[1][1] = pixG;
colourCompareData[1][2] = pixB;
colours[1] = color(pixR, pixG, pixB);
}
if (key == '0')
{
coloursAssigned = 0;
}
}
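For the "what ints to put" part, here is a minimal sketch of what mousePressed() could look like if the goal is to send the centre of the first tracked colour over OSC. This is only an illustration: it assumes CoordsCalc (not included above) fills in centrePoints.

// hypothetical version of mousePressed(): send the first tracked centre
// point as two ints. Assumes centrePoints[0] is filled in by CoordsCalc,
// which is not part of the posted code.
void mousePressed() {
  if (coloursAssigned < 1) return;          // nothing tracked yet
  OscMessage myMessage = new OscMessage("/test");
  myMessage.add(centrePoints[0][0]);        // x of the first colour's centre
  myMessage.add(centrePoints[0][1]);        // y of the first colour's centre
  oscP5.send(myMessage, myRemoteLocation);
}

Note also that the sketch both listens on port 12000 and sends to 127.0.0.1:12000, so the messages just loop back to the sketch itself. If Pd is the intended receiver, either let Pd listen on 12000 and give oscP5 a different listening port, or send to whatever port Pd actually listens on.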
Interaction Design Student Patches Available
Greetings all,
I have just posted a collection of student patches for an interaction design course I was teaching at Emily Carr University of Art and Design. I hope that the patches will be useful to people playing around with Pure Data in a learning environment, installation artwork and other uses.
The link is: http://bit.ly/8OtDAq
or: http://www.sfu.ca/~leonardp/VideoGameAudio/main.htm#patches
The patches include multi-area motion detection, colour tracking, live audio looping, live video looping, collision detection, real-time video effects, real-time audio effects, 3D object manipulation and more...
Cheers,
Leonard
Pure Data Interaction Design Patches
These are projects from the Emily Carr University of Art and Design DIVA 202 Interaction Design course for Spring 2010 term. All projects use Pure Data Extended and run on Mac OS X. They could likely be modified with small changes to run on other platforms as well. The focus was on education so the patches are sometimes "works in progress" technically but should be quite useful for others learning about PD and interaction design.
NOTE: This page may move, please link from: http://www.VideoGameAudio.com for correct location.
Instructor: Leonard J. Paul
Students: Ben, Christine, Collin, Euginia, Gabriel K, Gabriel P, Gokce, Huan, Jing, Katy, Nasrin, Quinton, Tony and Sandy
GabrielK-AsteroidTracker - An entire game based on motion tracking. This is a simple arcade-style game in which the user must navigate the spaceship through a field of oncoming asteroids. The user controls the spaceship by moving a specifically coloured object in front of the camera.
Features: Motion tracking, collision detection, texture mapping, real-time music synthesis, game logic
GabrielP-DogHead - Maps your face from the webcam onto different dog's bodies in real-time with an interactive audio loop jammer. Fun!
Features: Colour tracking, audio loop jammer, real-time webcam texture mapping
Euginia-DanceMix - Live audio loop playback of four separate channels. Loop selection is random for first two channels and sequenced for last two channels. Slow volume muting of channels allows for crossfading. Tempo-based video crossfading.
Features: Four channel live loop jammer (extended from Hardoff's ma4u patch), beat-based video cross-cutting
Huan-CarDance - Rotates 3D object based on the audio output level so that it looks like it's dancing to the music.
Features: 3D object display, 3D line synthesis, live audio looper
Ben-VideoGameWiiMix - Randomly remixes classic video game footage and music together. Uses the wiimote to trigger new video by DarwiinRemote and OSC messages.
Features: Wiimote control, OSC, tempo-based video crossmixing, music loop remixing and effects
Christine-eMotionAudio - Mixes together video with recorded sounds and music depending on the amount of motion in the webcam. Intensity level of music increases and speed of video playback increases with more motion.
Features: Adaptive music branching, motion blur, blob size motion detection, video mixing
Collin-LouderCars - Videos of cars respond to audio input level.
Features: Video switching, audio input level detection.
Gokce-AVmixer - Live remixing of video and audio loops.
Features: Video remixing, live audio looper
Jing-LadyGaga-ing - Remixes video from Lady Gaga's videos with video effects and music effects.
Features: Video warping, video stuttering, live audio looper, audio effects
KatyC_Bunnies - Triggers video and audio using multi-area motion detection. There are three areas on each side to control the video and audio loop selections. Video and audio loops are loaded from directories.
Features: Multi-area motion detection, audio loop directory loader, video loop directory loader
Nasrin-AnimationMixer - Hand animation videos are superimposed over the webcam image and chosen by multi-area motion sensing. Audio loop playback is randomly chosen with each new video.
Features: Multi-area motion sensing, audio loop directory loader
Quintons-AmericaRedux - Videos are remixed in response to live audio loop playback. Some audio effects are mirrored with corresponding video effects.
Features: Real-time video effects, live audio looper
Tony-MusicGame - A music game where the player needs to find how to piece together the music segments triggered by multi-area motion detection on a webcam.
Features: Multi-area motion detection, audio loop directory loader
Sandy-Exerciser - An exercise game where you move to the motions of the video above the webcam video. Stutter effects on video and live audio looper.
Features: Video stutter effect, real-time webcam video effects
VisualTracker - request for participants
Here is some more info:
This is development info of VisualTracker for pd (pre alpha 100404)
What is it:
VisualTracker is a sample sequencer: it triggers loaded samples at times defined on the timeline in the editor window. Samples can be played at their default speed and length, or can be fitted to the tempo. In "fit mode" you can define the number of bars to fit into, and also a multiplication factor for the file. All changes have a visual interpretation in the sample canvases.
VisualTracker for pd (pre alpha 100404) was developed in Pd version 0.41.4-extended, on Windows XP
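To illustrate the "fit mode" arithmetic, here is a sketch of the idea in Processing-style code. This assumes 4/4 time and is only an illustration; the patch's actual formula may differ.

// sketch of the "fit mode" idea only; 4/4 time assumed, and the patch's
// actual formula may differ
float barSeconds(float bpm) {
  return 4 * 60.0f / bpm;                    // one 4/4 bar at this BPM
}
float fitSpeed(float sampleSeconds, int bars, float bpm) {
  // speed ratio that squeezes the sample into the requested span;
  // > 1 means play faster, < 1 means play slower
  return sampleSeconds / (bars * barSeconds(bpm));
}

This also explains why the sample canvases are resized when the global BPM changes: the same sample covers a different number of bars at a different tempo.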
How to make it work:
- Open VisualTracker_(pre_alpha_100404).pd in pd.
- The editor and samples windows open automatically.
- The preset currentstate.vtp is loaded automatically; this is the preset saved before the patch was last closed.
- After the first run there are 3 empty sample boxes in the samples window and 3 corresponding sample canvases in the editor window.
- Load any wav (44100 Hz) by pressing "load" in the selected sample box. No space characters are allowed in the path or filename; loading of such files is aborted and an error message appears for several seconds. The name of a successfully loaded sample, including its full path, appears in the sample box and also on the sample canvas in the editor window. The size of the sample canvas changes according to the sample length. Try changing the global BPM; the size of the sample canvas is recalculated.
- Check "fit" to fit the sample to the current BPM. Set the length of the sample in bars and the multiplication.
- Add another sample by creating a [sample] object, or just copy an existing sample box in the samples window.
- Switch to the editor window, press CTRL+E to switch to pd edit mode, and drag sample canvases to the desired position on the timeline. Sample canvases snap automatically to bar columns and rows.
- Switch back to normal mode with CTRL+E.
- Press PLAY to play back your sample composition. Samples are played only if the corresponding sample canvas is placed in tracks 1-6; if a sample canvas is above the tracks, its sound is muted.
- Press "save" in the main VisualTracker window to save the current preset to a text file. Any name and extension without spaces is allowed.
- Before closing VisualTracker, press "save state & close". This saves the current state to the preset named currentstate.vtp and deletes all sample boxes from the samples window and all sample canvases from the editor window. Now you can close (and, if you wish, save) the VisualTracker patch. This step matters because information about samples and the composition is stored independently of the pd patch and should not be saved inside the patch; skipping it leads to sample canvases appearing twice. If some "orphaned" sample canvases are still hanging around, delete them manually.
Components:
- Samples window: a place for an unlimited number of [sample] abstractions. Once an abstraction is created (by copying or by creating the object), a corresponding sample canvas is created in the editor window. The [sample] abstraction sends data to its sample canvas (name, colour, size, snapping) and receives its position back. Triggering of sample playback is based on the position of the sample canvas.
- Editor window: a place for composing sample canvases on top of the timeline grid. Sample canvases can be moved with the mouse in pd edit mode (CTRL+E). The timeline grid will be extended and improved in future versions.
- Preset save/load: saves and loads presets to/from a text file using the [coll] object. A preset contains global values (number of samples in the composition, BPM) and local values for each sample box (filename, position, track, multiplication, number of bars, fit switch, colour and two unused values).
- Sequencer: located in the program subpatch. Very simple; functionality will be extended and improved in future versions.
- Other: located in the program subpatch. Contains other helpers such as the colour table, BPM manager, output, etc.
How to Enjoy Your Favorite Videos on Portable Devices at Will For Mac
Are you a Mac user?
Do you still feel frustrated that you can't enjoy your favorite videos on portable devices at will?
Now a professional application, Aiseesoft Video Converter for Mac (http://www.aiseesoft.com/video-converter-for-mac.html),
can help you solve all these problems. With it, you can convert between all popular video and audio formats (AVI, MP4, MOV, MKV, WMV, DivX, XviD, MPEG-1/2, 3GP, 3G2 and VOB video; MP3, AAC and AC3 audio) with fast conversion speed and high output quality. It can also extract audio from a video file and convert video to MP3, AC3 or AAC, as you wish.
OK, let's move on to how to use the software.
Step 1. Download and install Aiseesoft Video Converter for Mac.
Once it is installed, you will see the following interface:
http://www.aiseesoft.com/images/guide/dvd-converter-suite-mac/video.jpg
Step 2. Load Video
You can load your video by clicking the "Add File" button, or click "File" and choose "Add File" from the drop-down list.
Step 3. Output format and Settings
From the "Profile" drop-down list you can find one format that meets your requirement.
After doing the 3 steps above, you can click the "Start" button to begin the conversion.
Wait a little; the conversion will soon be finished.
Tips:
1. Trim
"Trim" function is for you to select the clips you want to convert.
There are 3 ways that you can trim your video.
a. You can drag the buttons (1) to set the start and end time.
b. You can preview the video first: click the left one of the pair of buttons (2) where you want the trim to start, and the right one where you want it to end.
c. You can set the exact start and end times on the right side of the pop-up window.
http://www.aiseesoft.com/images/guide/dvd-ripper-for-mac/trim.jpg
2. Crop
Cut off the black edges of the original movie video and watch in full screen using the "Crop" function.
There are 3 ways that you can crop your video.
a. We provide 7 modes in the "Crop Mode" list (1).
b. You can set your own mode on the right side of the pop-up window (2).
c. You can drag the frame to set your own crop area (3).
http://www.aiseesoft.com/images/guide/dvd-ripper-for-mac/crop.jpg
3. Snapshot and merge into one file
If you like the current image of the video, you can use the "Snapshot" option. Just click the "Snapshot" button and the image will be saved; you can click the "Open" button next to the "Snapshot" button to open your picture.
If you want to output several files as one, you can choose "Merge into one file".
If you are a Windows user, you can go to Aiseesoft Total Video Converter (http://www.aiseesoft.com/total-video-converter.html) for more information.
HELP: "shot" & translate pixels
This sounds interesting, although I don't understand quite a lot of it.
First, the easy part:
"Let's take a gemwin 9*9 pixels. So, 9 columns of pixels. In this gemwin we see the webcam or a video. Every column of pixels changes frame by frame (because the video or webcam goes on)."
You want to display a video or cam with a resolution of only 9x9 pixels in the gemwin? So you get 9x9 coloured squares that somewhat change colour when something happens in the video?
For that, you could use a gemframebuffer. I accidentally wrote a pixelizer-thingy recently while trying to figure out framebuffering.
In the attached patch I more or less just modified the framebuffer.readback help patch.
- Then you could use pix_snap2tex with a sufficient size and offset to snap the row of pixels you want and texture them back onto a rectangle the size and position of the left-hand row of pixels. With each frame of the video, those rectangles need to move further to the left to make space for a new rectangle.
I don't quite understand where the video is shown when one axis (the "central" one: horizontal or vertical?) is being snapped.
Would the video not be 9x9 pixels large?
Is the video only shown on the right part of the screen?
Would the left part be filled up by the shots, frame by frame?
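To make the scrolling idea concrete outside of GEM, here is a small Processing-style sketch of the same mechanism (my own illustration, not the attached patch): keep a 9x9 history of pixel columns, shift it left each frame, and append the newly sampled column on the right.

// illustration of the column-history idea, not the attached GEM patch:
// sample the centre column of the current frame, scroll the history left,
// append the new column on the right, and draw the grid of squares.
int N = 9;                                   // grid resolution
color[][] history = new color[N][N];         // [column][row], oldest on the left
void scrollAndCapture(PImage frame) {
  frame.loadPixels();
  for (int x = 0; x < N - 1; x++)            // shift all columns left
    history[x] = history[x + 1];
  color[] newest = new color[N];
  for (int y = 0; y < N; y++) {              // sample the centre column
    int sx = frame.width / 2;
    int sy = y * frame.height / N;
    newest[y] = frame.pixels[sy * frame.width + sx];
  }
  history[N - 1] = newest;
}
void drawHistory(float cell) {
  noStroke();
  for (int x = 0; x < N; x++)
    for (int y = 0; y < N; y++) {
      fill(history[x][y]);
      rect(x * cell, y * cell, cell, cell);
    }
}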
Problem opening video files with libavcodec
Hi everybody,
I'm trying to write a pdp external using the libavcodec library, to expand the video decoding capabilities of the system. I'm following Stephen Dranger's tutorial (available at http://www.dranger.com/ffmpeg/) because it's very well organized and clear. Still, I'm not able to open video files inside pd, while the same piece of code works perfectly in the sample standalone program.
This is the code of my external:
#include <stdlib.h>
#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include "pdp.h"
typedef struct pdp_in_struct {
t_object x_obj;
t_outlet *x_outlet0;
int x_packet0, x_queue_id;
u32 x_width, x_height;
} t_pdp_in;
static void pdp_in_sendpacket(t_pdp_in *x)
{
/* unregister and propagate if valid dest packet */
pdp_packet_pass_if_valid(x->x_outlet0, &x->x_packet0);
}
static void pdp_in_process(t_pdp_in *x)
{
}
int pdp_in_open(t_pdp_in *x, t_symbol *s)
{
const char *filename = s->s_name;
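/* editor's guess, not a confirmed fix: pd's working directory is not
   necessarily the patch's directory, so a relative path that works in the
   standalone program can fail here; worth testing with an absolute path */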
AVFormatContext *pFormatCtx;
av_register_all();
// Open video file
if(av_open_input_file(&pFormatCtx, filename, NULL, 0, NULL)!=0)
{
post("Can't open file");
return -1; // Couldn't open file
}
// Retrieve stream information
if(av_find_stream_info(pFormatCtx)<0)
return -1; // Couldn't find stream information
// Dump information about file onto standard error
dump_format(pFormatCtx, 0, filename, 0);
return 0;
}
t_class *pdp_in_class;
void pdp_in_free(t_pdp_in *x)
{
t_pdp_procqueue *q = pdp_queue_get_queue();
pdp_procqueue_finish(q, x->x_queue_id);
pdp_packet_mark_unused(x->x_packet0);
}
void *pdp_in_new(void)
{
t_pdp_in *x = (t_pdp_in *)pd_new(pdp_in_class);
x->x_outlet0 = outlet_new(&x->x_obj, &s_anything);
x->x_packet0 = -1;
x->x_queue_id = -1;
x->x_width = -1;
x->x_height = -1;
return (void *)x;
}
#ifdef __cplusplus
extern "C"
{
#endif
void pdp_in_setup(void)
{
pdp_in_class = class_new(gensym("pdp_in"), (t_newmethod)pdp_in_new,
(t_method)pdp_in_free, sizeof(t_pdp_in), 0, A_NULL); /* pdp_in_new takes no creation arguments */
class_addmethod(pdp_in_class, (t_method)pdp_in_open, gensym("open"), A_SYMBOL, A_NULL);
}
#ifdef __cplusplus
}
#endif
Am I missing something?
Thanks
Stefano
Semi-Automatic Video Remixer
I feel like this patch is almost done and I'm looking for some feedback now. It's my first real attempt at making something in PD, so I'm sure I took some very roundabout ways to do things. I should also say that this forum was one of my main resources for help; though I never asked anything myself, the past discussions were often very useful. Thanks.
The basic idea for this patch started with a joke about how there is a video remix of just about everything on YouTube. I suggested that someone should just make an automated program to make the remixes. Then I realized I could probably make something like that in PD. This is what I made.
Some basic instructions (I should write a readme for it):
1. Load a video or two using the appropriate buttons. (.mov)
2. Load a song, or not. A song that is beat-oriented looks best, in my opinion. (.wav) Wait for it to load into the array below.
3. Select whether you are using one or two videos.
4. Adjust the volume sliders for the videos and music.
5. Press 'Play' to start the music and video. (X11 opens here for me for the video; I imagine this will vary on another OS.)
6. Tap the spacebar in time to the music to set the tempo for the remixer. The tempo is visible at the top left. (A sketch of the tap-tempo idea follows after these instructions.)
7. Once you have the tempo set as you want it press 'Remix'.
8. The video will remix itself to the music.
Adjust the VideoFX Sensitivity to make the flashes occur more frequently. Adjust Reverse Playback to set how often the video plays backwards.
Also, you can press 'Record A/V' to save the playback to a .mov file, which should appear in the HD directory.
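For anyone wondering how the spacebar tap in step 6 can be turned into a tempo, the usual trick is to average the intervals between recent taps and convert to BPM. A minimal sketch of that general technique (not this patch's exact logic):

// minimal tap-tempo sketch: smooth the inter-tap interval and derive BPM
// (general technique only, not this patch's actual implementation)
long lastTap = 0;
float avgMs = 500;                          // running average interval; starts at 120 BPM
void tap() {                                // call on every spacebar press
  long now = System.currentTimeMillis();
  long gap = now - lastTap;
  if (lastTap > 0 && gap < 2000)            // ignore a stale first tap
    avgMs = 0.75f * avgMs + 0.25f * gap;    // exponential smoothing
  lastTap = now;
}
float bpm() { return 60000.0f / avgMs; }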
The remix is all random, which limits how effective the cuts can be, but it is also very satisfying when something ends up syncing perfectly in an unexpected way. I had considered letting the user scrub the video to choose specific points in the timeline to jump to, for more control over the cuts, but I decided it wasn't what I was going for. I may do another version that enables this, but I'm leaving it out of this one.
That's about all there is to it. I'm interested in any kind of feedback. One thing I haven't been able to figure out is how to stop the remixer function along with everything else. It's not a big deal, but if anyone has ideas please let me know.
Thank you.
http://www.pdpatchrepo.info/hurleur/Remixer_Dual_Video_Extra.pd
How to populate 1 array with 4 incomming number streams
hhh, I was wrong about how I thought [poly] assigned voices: I mistakenly thought it would always try the lowest voice first, so you could do voice 1 -> pad, voice 2 -> perc, voice 3 -> bass etc. to get what you want (which is: each blob gets one sound until it dies, with some sounds preferred when there aren't many blobs, as far as I understand; I might have misinterpreted though).
This is exactly what I want!
I am probably overcomplicating things. Triggering the tracks (and they are full studio tracks, not notes) requires a counter which converts bangs to numbers 1, 2, 3 etc. in the order they were received,
which then tells [line] to activate a fader in Reason. Every ID can be a separate number.
The problem is keeping track of which ID triggered which track, so that when that ID sends its OFF message (pulling the fader down), it only sends it to the track it triggered and doesn't cut off some other user's sound.
So I think I need to do the following: set up 2 arrays:
new array Tracks[12]; // holds the ID values currently assigned to the track indexes; written to each time an incoming ID appears.
// Tracks[0] = 4 means ID 4 has been assigned to the first track
new array IDs[12]; // here the IDs are the index, and the track number they are assigned to is the value
and then swap their values:
for (i = 0; i < IDs.length; i++) {
IDs[i] = Tracks[i]; // or something .......................
}
But I can't even get [tabread] to get the correct information out of the first array. If you open the subpatch "assign machine IDs to on messages", you can write to the first array, but it doesn't output the correct values.
Thanks a million for your help man,
looking forward to checking out your patch.
http://www.pdpatchrepo.info/hurleur/keeping_track_of_IDs.pd
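Since the bookkeeping described above is the crux of the problem, here is a minimal sketch of it in Java/Processing style (names like trackOwner and trackOfId are made up for illustration; the real logic would live in the Pd patch): each incoming global ID claims the lowest free track on its ON message and releases exactly that track on its OFF message.

// minimal sketch of the two-array bookkeeping (illustrative names only):
// ON claims the lowest free track, OFF releases exactly that track.
int NUM_TRACKS = 12;
int NUM_IDS = 16;                            // 4 machines x local IDs 0-3
int[] trackOwner = new int[NUM_TRACKS];      // global ID holding each track, -1 = free
int[] trackOfId = new int[NUM_IDS];          // track held by each global ID, -1 = none
void initTables() {
  java.util.Arrays.fill(trackOwner, -1);
  java.util.Arrays.fill(trackOfId, -1);
}
// a global ID could be machine * 4 + localId, so machines can never
// collide even though each one counts its own blobs from zero
int onMessage(int globalId) {                // returns the track to fade up, or -1
  for (int t = 0; t < NUM_TRACKS; t++)
    if (trackOwner[t] == -1) {               // lowest free track wins
      trackOwner[t] = globalId;
      trackOfId[globalId] = t;
      return t;
    }
  return -1;                                 // all tracks busy
}
int offMessage(int globalId) {               // returns the track to fade down, or -1
  int t = trackOfId[globalId];
  if (t != -1) {
    trackOwner[t] = -1;
    trackOfId[globalId] = -1;
  }
  return t;
}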
How to populate 1 array with 4 incomming number streams
Hi all,
This should be the easiest thing in the world, but I can't for the life of me figure it out.
I need to populate an array with input from four different number streams, where the order of appearance of numbers in the streams puts them into a queue that bangs messages from 0, 1, 2 etc.
I presume this is pretty easy when you know how, but a brief explanation of the project might be in order.
The idea is to back-project onto a series of screens and give people IR LED "paintbrushes" so they can paint with procedural graphics and sound.
We're using "touchlib" blob tracking software (and webcams) to differentiate between the blobs. The software assigns each blob a numbered ID for the length of its lifetime, based on the order in which the blobs come into existence: the first blob in existence is "ID 0" (until it dies, when it gives up its place in the queue), the second is "ID 1", etc.
These IDs allow us to assign specific graphics to different blobs in
Processing, and also to give each an individual piece of audio.
It's easy with just one machine sending these messages, as each ID corresponds nicely to the order of tracks to be triggered in the sequencer, but we're using four separate modular machines, each running touchlib, and we want the sound to be global.
We have networked the machines, and each of the four graphics modules can talk to the machine running the sound. The sound module is running PD, which receives messages from the other machines and then sends MIDI messages to the sequencer. So PD is getting four streams of numbers (say from zero to three) which correspond to the order in which touchlib blob IDs pop into existence (each stream local to its own machine); these numbers trigger a fade in/out of a mixer track (in, say, Reason). Ideally the first person who enters the space will trigger some pad sounds (fader 1 in Reason, say), regardless of which screen they paint on.
That way it will work even if there is only one person in the space. The next person would trigger some percussion, and the full track would build naturally. The alternative is to have every ID locked to a sound, meaning it would really only work for four people in the space.
So, to the question. There are 4 data streams coming into PD, literally numbers 0-3 in each number box, as you can see in the "four_machine_dilemma" patch attached.
What I need to do is fix it so that if (and only if) computer A has sent a message triggering track 1, then when computer B (or the next stream) sends its own "ID number 1", it is converted to ID number 2; that is, it occupies the next position in the global array, triggering track two (because track 1 is occupied), even though it thinks it is "ID number 1", and so on down the chain.
Is there some way to store a boolean for the track's on state and use it to reassign a value to the next incoming message? Or just to fill positions in an array with the incoming messages in the order they are received? It seems like it should be straightforward, but I'll be buggered blind if I can figure it out.
Hope this is not too long-winded for a simple question.
Thanks in advance,
wadeorz
Seeking Compatible Video Capture Card - Composite Video on Linux
Hi all,
for a project we're working on at the moment, we've been looking for a USB composite video capture card that works with pd/GEM on Linux. We bought a Pinnacle PCTV USB2 that works fine on Linux with the usbvision driver, but somehow crashes GEM. We've posted the error log on the pd and gem-dev lists, with no luck until now, so here it is again in case someone around here can help:
We want to use [pix_video] with a USB video capture device (Pinnacle PCTV USB2). But when we send [1( to [pix_video], pd says:
GEM: Start rendering
pix_videoNEW: starting transfer
cap: name Pinnacle PCTV USB 2 type 3 channels 3 maxw 720 maxh 576 minw 48 minh 32
picture: brightness 32896 depth 21571 palette 8
channel 0 name Television type 1 flags 1
channel 1 name Composite1 type 2 flags 0
channel 2 name S-Video type 2 flags 0
setting to channel 1
closing video
pix_texture: not using client storage
and all that we can see is a white rectangle.
XTerm says: VIDIOCGMBUF: Invalid argument
When we stop rendering with [0(, pd crashes.
We're on Ubuntu 6.06 LTS with Pd 0.39-2 / GEM 0.90. The USB video capture device is a Pinnacle PCTV Analog USB2; we had no trouble getting the box to work with the video4linux drivers.
It works well with TVTime, but not with GEM. Everything seems to go well (using pix_videoNEW, listing the video channels, choosing the right one), and then comes this message: "closing video". Why?
We've tried the patch on Windows XP and have no problems with [pix_video] and this video capture device.
Any ideas?