Getting DHT-22 sensor data to PD/PD-l2Ork RPi3
Hello, first post here as a PD newbie (but Max veteran)
I have managed to get PD working on my RPi3 (well, that was easy, as it is included in the Raspbian repository!) and I am currently installing PD-l2Ork. I hate to ask for help without having had a good stab at it myself first, but I just wanted to ask for tips on getting sensor information into PD.
I am going to take part in Tiree Tech Wave in a couple of weeks, and I want to make some kind of reactive biofeedback sample player. Scraping local weather data looks like it might be quite hard, so I thought of doing something a little more micro-scale: using something like the DHT-22 to get local ambient data and letting it influence a generative system that plays back samples from file.
I am fairly competent with Arduino-style C and have done a little work in Python (though Python does give me a headache!). I have managed to get the DHT-22 collecting and reporting data on the RPi with Python, and on the Arduino, individually. I have no idea right now how to go about getting that data into PD...
The DHT-22 has a basic microcontroller on it that sends the collected ambient readings as words over a single-wire data line. People have already made nice Arduino and Python libraries to handle that painlessly...
I have installed pd-comport, and I have the impression that PD-l2Ork has some good GPIO handling included. I think what I could do quite easily right now is bring an Arduino along as well, have the Arduino handle all the DHT-22 stuff, and also have it react to the data - this could result in something like triggers activating on several GPIO pins, all easy stuff. But it would be nicer, and I would learn more, if I could send the actual temperature and humidity numbers to PD and do stuff there... I have no idea how easy that would be. In Max/MSP I would use the serial object to get data from the Arduino, and it's pretty easy to filter what you see at either end.
Another possible benefit of serial is that I could use Bluetooth and locate the sensor somewhere more remote.
Of course, it's probably totally unnecessary to use an Arduino and it could all be handled by the RPi, but using Python makes my blood pressure go up.
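For what it's worth, the all-RPi route can be a very small amount of Python: read the sensor with whatever DHT-22 library you already have working, then fire the numbers at Pd over UDP using the FUDI protocol, which vanilla [netreceive] understands. A rough sketch - the port number and the temp/humid selectors are arbitrary choices of mine, and the actual sensor read is left as a stub:

```python
import socket

PD_ADDR = ("127.0.0.1", 3003)  # match a [netreceive -u 3003] in the Pd patch

def fudi(selector, *values):
    """Format a Pd FUDI message: space-separated atoms, semicolon-terminated."""
    atoms = [selector] + [str(v) for v in values]
    return (" ".join(atoms) + ";\n").encode("ascii")

def send_reading(sock, temperature, humidity):
    # On the Pd side: [netreceive -u 3003] -> [route temp humid]
    sock.sendto(fudi("temp", temperature), PD_ADDR)
    sock.sendto(fudi("humid", humidity), PD_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_reading(sock, 21.4, 63.0)  # in reality these come from the DHT-22 library
```

The same trick works over the network, so the sensor (and its Pi) could sit somewhere else entirely, which also covers the remote-sensor idea without Bluetooth.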
Anyway, whoops, I have written an essay... any thoughts, tips or words of encouragement would be welcome!
Rapid multiple triggering of tabread~ from vline~
What's the best way of preventing glitch artefacts when triggering a sample from a [vline~] object multiple times in rapid succession?
For example, if a [metro] object is being used to trigger the [vline~], which in turn is playing a sample from a [tabread~] object, the length of the [vline~] ramp can be set to match the metro period so that the amplitude drops to zero before the next trigger, thus avoiding any nasty glitch.
However, if the [metro] is accelerating, the length of the [vline~] ramp will be slightly longer than the elapsed time between one trigger and the next, since the length of a [vline~] ramp can only be set before it's triggered, not during playback. Hence the tail of one ramp will overlap with the start of the next, resulting in an audible step.
I have played around with adding a short fade-out on each trigger and delaying the start of the [vline~], but this seems cumbersome and doesn't work when you get to very high flamming speeds.
I'm thinking about using multiple [vline~]s and [tabread~]s and distributing the [metro] bangs between them consecutively, to allow the [vline~] from one trigger (bang) to complete its cycle before it gets banged again.
But if there is a way to prevent level discontinuities (glitches) while using a single [vline~]/[tabread~] setup, I'd love to hear it!
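The round-robin distribution above boils down to a counter. In Pd it would be something like [f ]x[+ 1] into [mod N] into [select 0 1 2]; the logic itself, sketched in Python for brevity:

```python
class RoundRobin:
    """Hand successive triggers to voices 0..n-1 in turn, so each
    voice's envelope has n trigger periods to finish before that
    voice is banged again."""
    def __init__(self, n_voices):
        self.n = n_voices
        self.count = 0

    def next_voice(self):
        voice = self.count % self.n
        self.count += 1
        return voice

rr = RoundRobin(3)
print([rr.next_voice() for _ in range(7)])  # -> [0, 1, 2, 0, 1, 2, 0]
```

With three voices, even an accelerating metro gives each [vline~] roughly three trigger periods to reach zero before its next bang.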
A collection of GLSL effects?
Hello everybody,
It's been a long time since I started wondering about getting some advanced visual effects out of Pd. I know "advanced visuals" could mean a lot of different things, but let's say I am thinking of pixel-level stuff like depth of field, bloom, glow and blurring. I have tried pretty much everything, from basic pix effects to Freeframe FX and GridFlow convolutions, but no matter what I do, since these effects are CPU-based, the resulting patch is always dead slow.
My first question is: as far as I know, Pd was born as audio software - does it make sense to keep pushing it into the domain of visuals?
Don't get me wrong, I love Pd and I know the amazing stuff you can get out of GEM and GridFlow. Think of all the 3D manipulation, sound visualization, video mixing, OpenCV stuff and pmpd physics simulation, just to name a few; you can get some wonderful visuals using only geos and simple texturing. But sometimes I find myself up against limitations, like the pixel effects I mentioned before, and I wonder if I should just leave Pd to what it's good at and move to video-driven software like vvvv or a "classic" programming environment like Processing.
I know a lot of the stuff I've been talking about could be achieved at negligible CPU cost by handing the calculations to the GPU. I think GLSL's potential is huge, and I have got some basic blurring, glowing and blooming effects I found on the web to work, but it still seems a little workaroundy to me (especially multipass rendering).
Here is the second question: could OpenGL and GLSL scripting be the answer to my first question? And what do you think about having a place where we could host a (hopefully growing) collection of ready-to-use GLSL effects along with example patches - maybe with a standard framework of objects for multi-texture effects and general GLSL handling?
Ok, that's all. Any feedback will be extremely appreciated.
Here follows a simple GLSL blooming effect applied to GEM particles (works on Mac OS X 10.5, Pd-extended, GEM 0.92.3).
Auto Volume
Hi Stephen,
You can use the [line] or [vline~] object to automate volume changes.
The [line] object doesn't cue up commands, so you'll have to use delays to sequence the automation.
[vline~] is an audio-rate object, is sample-accurate and can cue up commands.
Have a look at the difference in the attached patch.
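As a rough mental model of the cueing (a sketch of the semantics in Python, not how Pd implements it): a [vline~] message is a list of target / ramp-time / delay triples, all timed from the moment the message arrives, so one message can hold a whole automation curve:

```python
def vline_value(segments, t):
    """Evaluate a simplified vline~-style breakpoint list at time t (ms).
    segments: (target, ramp_ms, delay_ms) triples, sorted by delay and
    non-overlapping -- a simplification of the real object."""
    v = 0.0
    for target, ramp, delay in segments:
        if t < delay:
            break                    # this segment hasn't started yet
        if t >= delay + ramp:
            v = target               # segment done: value rests at target
        else:
            v += (target - v) * (t - delay) / ramp  # mid-ramp: interpolate
            break
    return v

# the message "1 1000, 0 1000 2000": rise to 1 over 1 s, hold, fall from 2 s
env = [(1.0, 1000, 0), (0.0, 1000, 2000)]
print(vline_value(env, 500), vline_value(env, 1500), vline_value(env, 2500))
# -> 0.5 1.0 0.5
```

With [line] you would instead need a [delay 2000] to fire the second ramp yourself, which is the "use delays to sequence the automation" part.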
Hope that helps.
I have a huge project. So, I begin with the kick.
You don't need [t3_bpe] or [t3_line~] anymore. That article and those externals were written before [vline~], which now comes with Pd-vanilla and is much more widely used. You should definitely use [vline~] for percussion sounds whenever possible because the timing is more accurate and you can get finer control of attacks.
A couple of envelope tips:
1. Squaring the output of the envelope will give you a curved, roughly exponential decay, which sounds much more natural.
[vline~]
| \
[*~ ]
2. Squaring the output also gives you a curved attack, which may not be so natural sounding and actually kind of sucks for percussion. However, I've been using a nice little trick to get better attacks from this. You can use a low-pass filter to smooth out the envelope (or any signal): by giving the envelope an instant attack and sending it through a [lop~], you can use the cut-off frequency to define a short, quick attack that has a more natural rise while keeping the curved decay:
[1 0, 0 100 0( <--instantly jump to one, then go back to zero over 100 ms
|
[vline~]
| \
[*~]
|
[lop~ 100]
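A quick numerical sanity check of both tips, with plain Python standing in for the signal chain (the one-pole below is only a rough stand-in for [lop~], whose real coefficient is derived from a cutoff in Hz, so the 0.5 here is just illustrative):

```python
def one_pole_lop(x, coeff):
    """Very rough stand-in for [lop~]: y[n] = y[n-1] + coeff * (x[n] - y[n-1]).
    (The real [lop~] derives this coefficient from a cutoff frequency in Hz.)"""
    y, out = 0.0, []
    for s in x:
        y += coeff * (s - y)
        out.append(y)
    return out

# Tip 1: squaring a linear 1 -> 0 decay pulls it below the straight line,
# i.e. a faster initial drop that sounds more natural than linear.
linear  = [1.0, 0.75, 0.5, 0.25, 0.0]
squared = [v * v for v in linear]   # [1.0, 0.5625, 0.25, 0.0625, 0.0]

# Tip 2: an instant attack (a step straight to 1) smoothed by the low-pass
# rises over a few samples instead of jumping:
step = [1.0, 1.0, 1.0, 1.0]
print(one_pole_lop(step, 0.5))  # -> [0.5, 0.75, 0.875, 0.9375]
```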
Tarmtott - ep by swamps up nostrils
This lil' thing just got released on the great netlabel Control Valve; it is 6 tracks of wacky looping made with PD. Download @:
inforz:
"ctrlvlv#007 IS NOW READY FOR DOWNLOADING!!!!!!!!!!!!!!!!
artist: swamps up nostrils title: tarmtott
artist statement: "Swamps Up Nostrils is a spatiotemporal mishap again and again focusing on both experimental wrongdoings and ancient traditional musical structures like beats and harmonies. What you hear is not what you get but it will however seem pretty close anyway, so why bother? Swirling the eternal wormholes between the familiar and the unknown, we hope to entertain but admit that to most people it must seem like meaningless idiocy, but then again most people seem like meaningless idiots too, including ourselves, so I guess it balances out. Only by admitting to being an idiot yourself will you understand what it means to be one, and understand just how many idiots there are around here. We, the failed abhorritions of monkey-like beings of ancient times, will not let us be controlled by our biological shortcomings, although we admit to them causing us both irritation and confusion. This irritation and confusion is not the source of this music. This music was made by utilizing magick and computers, if you believe in such stuff. If you do not, this music was made by utilizing science and computers, if you believe in such stuff. If you do not, this music was made by utilizing faith and computers, if you believe in such stuff. If you do not, this music was made by utilizing computers and computers, if you believe in such stuff. If you do not, this music was made by utilizing music and music, if you believe in such stuff. If you do not, this mucus was made by utilizing mucus and mucus, if you believe in such stuff. If you do not, this made not was believe and unbelief is by whom was finalized as not more. If you do not, please ignore all above statements as they are irrelevant to the audial experience anyway. There appears not more than what vibrates in your ear, and how your brain interprets that on the basis of your own very personal framework of reference. 
Anyone telling you otherwise is either trying to hijack your brain or may be lying, or may be convinced of otherwise and acting on a compulsion of good faith, although faith can never exist as something good outside someone's subjective definition of the matter, so the statement is meaningless. Now stop reading this nonsense and listen to the music instead, because, as implied in this body of textually represented idiocies, the point is not to read about this music, it is to listen to it. Get it?"
swamps up nostrils is arnfinn killingtveit from Trondheim, Norway. no one can ever be sure what will come out of the speakers when playing a swamps up nostrils release. the first time i heard one it was some sort of drum and bass mixed with circuit-bent electronics, with just a tad of field recordings. you might be getting some sort of techno, drone, noise, minimalism, analog, digital, etc... whatever it might be, it is always top-notch sound work, great composition, and a highly enjoyable listen. killingtveit also runs the superb CD-R label Krakilsk
6 tracks of looping, layered sound composition. 320 kbps mp3 + cover image"
Strange envelope..
I don't mind answering your questions at all, it is only from asking others that I know what I do anyway!
a) the current message tells vline~ to ramp back to 0 at the end, yes, but vline~ doesn't know that this will be the only message it receives in its lifetime. It may receive a different message that doesn't take it back to 0, and then, when it comes to providing your percussive attack, that attack will be lost because you didn't tell vline~ to start at zero.
vline~, line~ and line aren't intended solely for amplitude modulation (or envelope generation); they simply ramp numbers, for whatever purpose, however large or small your numbers. That's why vline~ accepts the rather counter-intuitive message format - for flexibility. If you want a dedicated percussive envelope generator, you would do well to stick with [ead~].
yeah, I still think that has to do with tiny timing differences. You see, metro is sending a bang to tabwrite at the same time as it sends 3 sequenced messages to the vline~ object. The shift in the x-axis disappears when you modify the [t b b b] object to properly sequence everything.
Strange envelope..
yes, 5 ms is probably too much. [vline~] is best for percussion,
so construct a message to vline~ that says:
go to 0 in 1 ms (from whatever the current level is)
go to 1 in 1 ms, after a delay of 1 ms
go to 0 in 80 ms, after a delay of 2 ms
[0 1, 1 1 1, 0 80 2(
|
[vline~]
arif's patch will show you how to make variable messages to send to vline~.
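One way to see the shape of such a variable message, with Python standing in for the [pack]/message-box plumbing (the 1 ms jump and attack values are just the ones from the message above; the function name is made up):

```python
def perc_envelope_msg(decay_ms, peak=1):
    """Build a vline~-style percussion envelope message:
    jump to 0 in 1 ms, up to peak in 1 ms after 1 ms,
    back to 0 over decay_ms after a 2 ms delay."""
    return f"0 1, {peak} 1 1, 0 {decay_ms} 2"

print(perc_envelope_msg(80))  # -> 0 1, 1 1 1, 0 80 2
```

In the patch, the equivalent is a [pack] feeding a message box like [0 1, $2 1 1, 0 $1 2(.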
Vline~ and delta time (variable rate vline)
@hardoff said:
theoretically, this approach should also work well for changing the pitch while playing, if I'm not totally mistaken. I have never tried to implement it myself, but since you send a message to [vline~], you also know at any time where [vline~] actually is. The idea is to measure the time between the initial message to [vline~] and the moment where you want to change the pitch. With that time value and the values from the initial message, you can calculate [vline~]'s actual position. Taking [vline~]'s actual position and the new pitch into account, you can generate a new message for [vline~]. Like that, it should be possible to change the pitch at any time with (sub-?)sample accuracy and without jumps in the playback.
Would anyone care to elaborate on how this might work? I can't get my walnut of a brain round how it wouldn't just restart every time.....
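A sketch of the arithmetic being described, assuming [vline~] is ramping through table indices at a 44.1 kHz sample rate (this is just my reading of the idea above, not a tested patch; the function and variable names are made up):

```python
SR_MS = 44.1  # samples per millisecond at a 44.1 kHz sample rate

def retarget(start, end, pitch, elapsed_ms, new_pitch):
    """After elapsed_ms of a [vline~] ramp scanning a table from index
    `start` towards `end` at playback rate `pitch`, return the ramp's
    current position and a new vline~ message that continues from that
    position at new_pitch -- no jump, because vline~ always ramps from
    wherever it currently is."""
    pos = start + pitch * SR_MS * elapsed_ms            # where the ramp is now
    remaining_ms = (end - pos) / (new_pitch * SR_MS)    # time left at new rate
    return pos, f"{end} {remaining_ms:g}"

pos, msg = retarget(0, 44100, 1.0, 500, 2.0)
# after 500 ms at normal speed the ramp is at sample 22050; at double
# speed the remaining 22050 samples take 250 ms, so the message is "44100 250"
```

It doesn't restart because the new message contains no jump segment: [vline~] simply bends its current trajectory towards the end index at the new rate.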
Timebase for audio looping
[mod 1] [mod 2] [mod 4] [mod 8]
[sel 0] [sel 0] [sel 0] [sel 0]
// the above generates bangs every 1/2/4/8 beats
// then the below converts the bangs into a phasor-like ramp that scans the length of the sample tables, taking 1/2/4/8 beats to do so
[0, 1 500(  [0, 1 1000( [0, 1 2000( [0, 1 4000(
[vline~] [vline~] [vline~] [vline~]
[*~ len1] [*~ len2] [*~ len4] [*~ len8]