97 result(s) matching "diy2-effects-sample-players-synths-sound-synthesis", (0.07 seconds)
DIY2 - Effects, Sample players, Synths and Sound Synthesis.
here is the updated version of my DIY library. it is a collection of various useful audio modules for music production. unfortunately my pd computer packed it in last week, so it is in a bit of a 90% completed state, but there should be enough stuff to play with until i get a chance to get in and get it all 100% done. open 000-introduction.pd to get started. [http://www.pdpatchrepo.info/hurleur/DIY2.zip] : http://www.pdpatchrepo.info/hurleur/DIY2.zip
Simple sound-file player
Hi, Is there a better way to construct the message for the \[line~\] object in this patch? Till now I was just using the \[phasor~\] to playback, so this issue never came up... Thanks [http://www.pdpatchrepo.info/hurleur/simple\_player.pd] : http://www.pdpatchrepo.info/hurleur/simple_player.pd
Hi, I have some newbie questions about Pd. I wanted to write a patch based on this one: [http://puredata.hurleur.com/sujet-643-sample-player] The sample player mentioned there has 2 sliders which control the start/end loop position, which is exactly what I was looking for. What do I want to do? I want to make a sample player patch. When I press a key, I want to loop from the current sample position, with a loop length according to the key I have pressed. An example: I load a loop that runs at 120 BPM and set my tempo somewhere. When I press "a" it starts looping a 1/4 note at the current play position. For this sort of thing the sample player seems perfect. But now the difficulties. How can I set the tempo right? (I have figured out how to calculate the length of any note value from the tempo.) How can I get my keyboard entries into the software? And last but not least: how can I get the patch to behave the way I'd like? I know... a dumb question, you know it too. But maybe there is someone who knows an answer. Many, many thanks.
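The tempo calculation the poster mentions can be sketched like this (a minimal illustration, assuming a quarter note gets one beat; the function name and values are made up for the example):

```python
# Loop length for a given note value at a given tempo.
# At 120 BPM one beat (a quarter note) lasts 60000 / 120 = 500 ms.

def note_length_ms(bpm, note_fraction):
    """Length in ms of a note that is the given fraction of a whole note,
    assuming a quarter note gets one beat."""
    beat_ms = 60000.0 / bpm          # one quarter note in milliseconds
    return beat_ms * 4.0 * note_fraction

print(note_length_ms(120, 1/4))   # quarter note at 120 BPM -> 500.0
print(note_length_ms(120, 1/16))  # sixteenth note -> 125.0
```

In a Pd patch the same arithmetic could feed the loop-length inlet of whatever sample player is used.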
Very simple noob effect....but how?
Hello all, I have a very simple question, but I cannot figure out how to make it work. Basically, what I want to do can be thought of as a ball bouncing up and down. First it bounces high and takes a relatively long time to hit the ground again; after a few seconds it does not bounce as high, and it hits the ground faster. I think it is called a vibrato effect in music terms? Anyway, how can I create this effect in combination with osc~ or phasor~? The sound of a ball bouncing up and down is basically the effect that I am aiming for. Please give me suggestions or maybe even post a quick example. Kind regards, Jelle
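One common way to model this timing: each bounce interval is the previous one scaled by a restitution factor below 1, so the hits speed up as the ball loses height. A hedged sketch (all names and values here are illustrative, not from the post); the resulting times could drive a trigger for osc~/phasor~ hits:

```python
# Bounce times for a decaying "bouncing ball" rhythm: intervals shrink
# geometrically until they fall below a minimum.

def bounce_times(first_interval_ms=500.0, restitution=0.8, min_interval_ms=10.0):
    t = 0.0
    interval = first_interval_ms
    times = [t]
    while interval > min_interval_ms:
        t += interval
        times.append(t)
        interval *= restitution
    return times

print(bounce_times()[:4])  # [0.0, 500.0, 900.0, 1220.0]
```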
EngineSound (sample-based, incl. sample): needs a window function or fades
Here I've tried a sample-based method to generate engine sound: [https://dl.dropboxusercontent.com/u/55618665/EngineSampler.zip] Concept: motor-engine sound via sampling. I've used only one sample, from standing still to max RPM; within it, a loop/loop centerpoint is dynamically controlled via the incoming engine speed. --> Start the patch and move the red fader. I think it would work very well if the patch got a window function, or generated fades while moving the centerpoint. I've attached the patch including the sample. Do you have a tip?
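The window function the poster asks about might look like this: a short raised-cosine fade applied at the edges of the loop region so the jump back to the loop start doesn't click. This is a generic sketch, not the patch's actual method; the function name and fade length are made up:

```python
import math

# Apply a short raised-cosine (Hann-style) fade-in and fade-out to the
# edges of a loop region, so moving the loop centerpoint doesn't click.

def fade_loop_edges(samples, fade_len=64):
    out = list(samples)
    n = len(out)
    fade_len = min(fade_len, n // 2)
    for i in range(fade_len):
        g = 0.5 - 0.5 * math.cos(math.pi * i / fade_len)  # ramps 0 -> 1
        out[i] *= g                  # fade in at the loop start
        out[n - 1 - i] *= g          # fade out at the loop end
    return out

faded = fade_loop_edges([1.0] * 1000)
print(faded[0], faded[-1])  # both edges silenced
```

In Pd the same shape is often built by reading a Hann window table with tabread4~ in sync with the loop phasor.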
Two DIFFERENT midi notes in PD send the SAME note to synth
Hi folks, Making a sequencer. Everything was going fine until suddenly this problem showed up, and I'm not exactly sure when or how, but now it is persistent. I have cut out nearly everything in the patch, just to show you the bare problem. I have PD going through IAC bus to Massive (also tried with other external synths), but MIDI note 52 and 53 sound the same. I have noticed this occurring with other notes too that are a semitone apart, like 57 and 58 sometimes. Anyway, if I manually put these numbers in instead of the sequencer, the external synth can suddenly play the two independent notes, but not if the sequencer sends the notes. Sending these MIDI notes to a PD oscillator works just fine. Any idea? This realllllly has me stumped... File as attached. Cheers lads! [http://www.pdpatchrepo.info/hurleur/NoteoutSequencerBetaXXX\_copy.pd] : http://www.pdpatchrepo.info/hurleur/NoteoutSequencerBetaXXX_copy.pd
March sound effects
New machines [http://obiwannabe.co.uk/html/toys/machineomatic/machine-o-matic.html] And I was playing about with different short energy impulses for explosions, so Fresh Guns [http://obiwannabe.co.uk/html/toys/gunsulike/gunsulike.html] Bombs [http://obiwannabe.co.uk/html/toys/bombfactory/bombfactory.html] and Fireworks [http://www.obiwannabe.co.uk/html/toys/fireworks/fireworks.html] I think you need \[ead~\] for some, sorry. Andy : http://obiwannabe.co.uk/html/toys/machineomatic/machine-o-matic.html : http://obiwannabe.co.uk/html/toys/gunsulike/gunsulike.html : http://obiwannabe.co.uk/html/toys/bombfactory/bombfactory.html : http://www.obiwannabe.co.uk/html/toys/fireworks/fireworks.html
A simple sound file recorder.
This abstraction is very simple, but I use it so often that I thought it might be useful to someone else. I use it instead of \[dac~\] as a master output, sending all audio to its inputs. When I need to record a 32-bit stereo sound file of what I hear, I press the record button and choose where to save the file and its name (there's no need to add the .wav extension). Recording starts immediately and goes on until the stop button is pressed. It is quite easy to modify for your needs. For instance, change the "open -bytes 4 $1, start" message text to "open -bytes 2 $1, start" for 16-bit recording, or delete the right inlet~ and change \[writesf~ 2\] to \[writesf~ 1\] for mono recording, etc. I think most people use only one recording format in most cases, so it has no mono/stereo or 8/16/24/32-bit switches, but you can add your own. :) [http://www.pdpatchrepo.info/hurleur/recorder.pd]
Filters for classic analogue synth sounds
Hi, I've been learning Pure Data, and as a learning exercise I thought I'd recreate the classic Roland SH-101 synth. I think I've nailed most of it: the oscillators, the LFO, the ADSR, the main audio routing, etc. The problem is... it doesn't sound much like a 101. Well, it sort of does, but the issue is the filter. I'm using \[resofilt~\] as my filter, as it seemed the closest to what I needed, but it's sadly lacking. It hasn't got that buzzy self-oscillation "bite" on the resonance, and a lot of the overall warmth and punch of the 101 is gone. Is there a good alternative to \[resofilt~\]? (I looked through the list of filters and was baffled!) Or should I maybe be using a series of filters? Anyone else got tips on making Pure Data sound better? Or am I simply asking too much of it? Thanks!
Effect chain : order of effects
Hi guys, I'm building an Android application, and I want to find a good balance in my output sound, so I need some advice. I've implemented this signal flow: \[source~\] | \[distortion~\] | \[+~\] | \[compressor~\] | \[reverb~\] | \[soft leveler~\] | \[soft clipper~\] except that before the \[+~\] object I've got 8 sources, each coupled to its own distortion. The source is actually an oscillator controlled by a spring simulation, so dynamics matter in this app. The speaker on my phone has a strong tendency to saturate when I engage many springs with wide oscillations (whereas it's fine with headphones). I want to keep the sound as clean as possible even on the built-in speaker. Does anyone have advice, or could you mention things I haven't done right? Should I use a compressor or more of a limiter? I'm also experiencing a strong Fletcher-Munson effect with those speakers; does anyone know of a vanilla dynamic EQ able to correct this on the fly? Any advice or tips about mastering in Pd would be valuable too. :) cheers
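For reference, a common soft-clipper shape is tanh: it squashes peaks smoothly instead of hard-clipping, which is one way to protect a small speaker. This is a generic sketch, not the app's actual \[soft clipper~\]; the drive parameter is illustrative:

```python
import math

# tanh soft clipper: output stays strictly inside (-1, 1) however hot
# the input gets. drive > 1 pushes more of the signal into the curve.

def soft_clip(x, drive=1.0):
    return math.tanh(drive * x)

for x in (0.1, 0.5, 1.0, 2.0):
    print(round(soft_clip(x, drive=2.0), 3))
```

A limiter differs from this in being time-variant (gain reduction with attack/release), which usually sounds cleaner on transients than static waveshaping.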
Horse sound effects
Greetings to everyone. I'm new to the forum and have just started using this program. I have to create a sound that simulates the sound of a horse's hooves. Can someone help me? I don't know where to start, and I need this for a university exam. Thanks =)
Sample playback sound issue
Just wondering if anyone can help. This patch plays and rewinds a sample. The problem is that the sample sound is 'wobbly', as if it were being shifted or affected by some sort of granular process; however, there is none in the patch... It has been adapted from the scrubber patch. Any thoughts, ideas how to fix it, or a better option appreciated. drew [http://www.pdpatchrepo.info/hurleur/sample\_play-rewind.pd]
Hi everyone - I'm new to the forum and to Pd, so... What I'm trying to do is make a simple audio player that will play and loop files and samples, and possibly also vary the speed and/or do other manipulations. I know this is a pretty basic patch, but the examples in the manual leap straight into scratching effects, etc. without really covering the basics enough for me. Can anyone suggest ways I might go about making such a patch? I've had a stab at a patch which just plays files from a location on your hard drive... but then I quickly get a bit lost making it do anything more interesting :) All suggestions welcome. Thanks!
Simple multi-samples player needed ( newbe...)
Hi all, I'm an absolute beginner with Pd, but I've made the big mistake of promising a friend of mine a simple "gift" for an exhibition he's doing right now. What I'm trying to achieve: 1) a trigger starts a sound, chosen from a bunch of them; 2) every time the trigger "bangs", a different sound is played; 3) if triggers overlap, the sounds overlap too. All done on a PC with Vista, Pd-extended, the internal audio interface, and an Eowave Eobody2 with a movement detector, via Max Runtime. As said, I'm an absolute beginner with Pd. I've read a lot about it here on the forum and in the manuals, but still can't find the easiest way to achieve this apparently simple task... Can someone introduce me, slowly and step by step, to this amazing world? Many thanks :-)
GEM synth/effect/sequencer layouts
Is GEM actually being used a lot to build custom synth GUIs? Or anything else? I hardly find any examples of Pd synth patches using GEM for the GUI.
Simple video player with a playlist option
Hi guys, I would like to make a simple patch which takes a bunch of videos from the same folder and plays them one after another: when the first ends, the second starts, and at the end of the folder it starts again from the first. I am trying to use the playlist object, but without any success. Could any of you point me in a direction: where to go, how to start? Playing one video with a loop on the second screen is already solved; I just don't know how to load the next videos in order. Thanks a lot in advance, Popesz
Adding a simple video and sound file to a patch
I'm a very basic user of Pure Data. I know how to add music and video using bang, openpanel, etc., but when I close and reopen the file I have to go to the open panel and search for the video/music location all over again. I was told to add a message box with the file location in it, and that it will need a bang. But what do I attach it to? Do I go bang, attached to a message box with the location, attached to openpanel, or do I get rid of openpanel? Help!! [http://www.pdpatchrepo.info/hurleur/111.jpg]
Simple Synth (aka the "Minimoog")
[http://en.flossmanuals.net/pure-data/ch017\_simple-synth/] The intention is not to recreate a Minimoog in Pd but to build the individual components which make it up, including filter, oscillators, amplifier, envelope, etc. Each section is separated into the "Audio tutorials" menu on the left-hand menu. I believe they were written in sequential order. : http://en.flossmanuals.net/pure-data/ch017_simple-synth/
Problem with sound / sound card
I hope someone can help me out here... I am, by the way, a newbie in Pd, but have used Max/MSP for some time. I have a couple of problems with Pd when it comes to the sound part. I am running OS X on a G5, with a Digidesign Digi 002 Rack as my main sound card. The thing is that I can't get sound even when I run the "Test Audio and MIDI" patch from "Media", nor can I see anything happening in the boxes under "pd ------ audio-----"; it's 0 all the way. This is with the Digi 002 sound card. The same thing happens when I run the Digi 002 sound driver through JackOSX (so the sound card I select, in other words, is Jack). On the other hand, MIDI in and out seems to work with the Digi 002. Of course, I have also tried with the built-in sound card, and with another sound card I've got (M-Audio Ozone), and I get sound with both! Thanks a lot if anyone can give a hand in figuring out what's wrong and how to fix it! Also, I have another problem: when I open a new patcher and want to create a sound object, it won't let me write ~ ! I have tried to write the same thing, the same way, in TextEdit and Max/MSP, and it works fine there, but not in Pd. Also, it won't let me copy and paste! If anyone knows anything about these issues, thanks! Hanstein
Sound-on-sound looper with clear option
A couple of days ago I wanted to make a simple sound-on-sound looping delay. It can be done with \[delwrite~\] and \[delread~\]. Then it turned out that \[delwrite~\] has no option to clear the delay buffer! ??krnchkrnch..@%!!\#etc So here is a patch using some Pd-extended objects like \[count~\] and \[poke~\], doing sound-on-sound looping, with a clear button to erase the loop content at once. Katja. Happy days to you all. [http://www.pdpatchrepo.info/hurleur/soslooper.pd.zip]
Effect Send and Return
I'm trying to make a bank of sends and returns for effects. The sends and returns are all in one patch, and I'm using the send~ and receive~ objects to send the original signal from a drum machine to another patch containing the effects, and back again. I want to be able to switch the relative positions of each effect module in the signal path, so I have tried to use the \[makefilename\] object to alter the ID of the send and return coming into and going out of the effect module. However, this doesn't seem to work: I get all sorts of unwanted feedback, delay effects and whining when changing the position of the effect. If I set the send~ name manually it's OK; it's just when I try to change the position automatically. Possibly something to do with not being allowed multiple send~s to a single receive~? Please refer to the attached patches. [http://www.pdpatchrepo.info/hurleur/panner.zip]
Sound stamp/timbre/formant (getting voices to all sound the same)
Hi guys, Was just wondering if anybody knows of any patch that allows recorded audio to appear to sound the same, some kind of filter control that would make anybody's voice sound the same. Would appreciate any help no matter how big or small, Cheers Seán
Lamixette - samples player for Reactable
Hi there, I made a patch for the Reactable that plays samples and lets you tune the pitch and playback speed. It's great fun to play with this patch on a Reactable, but if you don't have one, you can try it with the reacTIVision TUIO Simulator. It's over here: [http://git.tetalab.org/index.php/p/puredatareactable/] in "Source". The latest version is in 0\_lamixette\_current\_latest or in lamixette\_06. There is a zip file that will download everything you need. Have a look at the readme files. There is a screenshot in the Source root directory. There are some videos on lucas' channel on Vimeo: [http://www.vimeo.com/tetalab/videos/] I am quite new to Pd, so you might find the code not that smart. If you have good ideas to improve this patch, please tell me. I'd like to thank the people who share patches here, as I have reused parts of some.
Latest pd vanilla on osx 10.5.8 / no sound devices, hence no sound
dear forum, i successfully installed the latest version of pd vanilla on my mini mac. unfortunately there are no sound input/output devices to select in the configuration, hence testing the sound in/outputs fails. am i missing something here? any help would be greatly appreciated!
Loading sound in a table, sound is muted during the loading
Hi everybody, I made a big, nice patch for live sessions. The only problem I'm encountering concerns sound loading. As I'm playing several samples, I want to load other files, and each time I load a new sound, all the other sounds stop playing. In fact, I think there's some kind of memory shortage during the process, so the other sounds can't continue to play? Does anybody have an idea of the problem and how to solve it? Many thanks, guys ;)
Sample loop Player
Hi everybody! I'm looking for a good way to create a special sample player. The player gets its input via Java MIDI (or any MIDI controller). The idea is simple: I've got 10 samples of music, all about 2 seconds long, and I want them to be played in a loop until there is a note-off command. So, for example, I press down the C on my MIDI keyboard (note 60), and sample 5 starts playing over and over until I release the key. I've got \[ctlin\] working to listen to a specific controller, e.g. \[ctlin 50 6\] gives me the output of controller 50 on channel 6; that works fine. But I can't get \[notein 0 60\] to just give me the velocity output (to control the volume of an infinite loop of sample 5). Has anybody got a quick-and-dirty idea how I can create a simple Pd patch to make an easy keyboard sampler? Thanks in advance! Jan
Simple FM synth
very simple fm synth with sequenced random sounds [http://www.pdpatchrepo.info/hurleur/snter.rar] : http://www.pdpatchrepo.info/hurleur/snter.rar
Sample table showing up outside of abstraction upon sample load
I followed the excellent tut/explanation linked here: [http://puredata.hurleur.com/sujet-1187-abstraction-why-use-etc]. However, when I load the sampler as an object and load a sample, the sample table appears in the main patch, outside of any graph-on-parent. It just kind of pops out to the right of the object. Here is the sampler object. First post. I am new. Holla.
Synthesizing metal bar sounds
Hi, I'm working on an installation based on this application made in Java: [http://www.vimeo.com/993580] I communicate with Pd via OSC. For each collision, Pd receives a bang with two parameters: tube height and tube position. I'm looking for synthesized metal bar sounds to turn this "thing" into a musical instrument. There are samples here: [http://obiwannabe.co.uk/html/sound-design/sound-design-audio.html] [http://obiwannabe.co.uk/sounds/effect-clonk-002-bar.mp3] [http://obiwannabe.co.uk/sounds/effect-clonk-004-iron.mp3] [http://obiwannabe.co.uk/sounds/effect-clonk-006-bar.mp3] What kind of simple patch should I make to achieve this? au revoir, Denis
That userfriendly, loopable sound player, multi output kind of patch
Hi again :) I work with some friends on a horror-tunnel-like live role-playing game, as sound designer and sound technician. This means I go to a house, hide speakers around, and run long cables to a sound control room, where my MOTU 828 Mk2, my MacBook, me and Pure Data are ready to scare the shit out of those poor participants. I need to program a patch that lets me: - select sounds from my pool in a user-friendly, fast, non-glitchy way - play them, stop them, loop them, control their volume - select the output (room1, room2, room3) in a user-friendly, fast, non-glitchy way. It would be good to be able to use this patch as an abstraction, with various instances of it. So, the thing is that I already did this with Max/MSP, but with Pure Data it is being a bit difficult. Could someone please tell me which objects would be the most appropriate to use for: - the sound-selecting part (some kind of menu... popup?... that I can populate with my sounds) - the sound-player part (with loop on/off, play, stop, gain...) - the output-selection part (in Max/MSP I was using a menu for this too). I already did a search on this but can't find the answers I need. Btw, I have Pd-0.39.3-extended-macosx104-i386 but I can't create an \[sfplay~\]; I thought it came with the extended package? Thanks to all of you for your help.
Simple problem, simple answer (probably).
I'm trying to find the distance between two points, which would seem simple enough: sqrt(x^2 + y^2). But the 'pow' object in Pd doesn't support float exponents! So how can I get a square root? Cheers
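The computation the poster wants is just the Euclidean distance; if a float exponent isn't available, a dedicated square-root function does the job (Pd also has \[sqrt\] and \[expr sqrt($f1)\] for this). A minimal sketch:

```python
import math

# Euclidean distance between two points: sqrt(dx^2 + dy^2).

def distance(x1, y1, x2, y2):
    dx, dy = x2 - x1, y2 - y1
    return math.sqrt(dx * dx + dy * dy)

print(distance(0, 0, 3, 4))  # -> 5.0
```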
TR808~ sample player
[http://puredata.info/Members/claudiusmaximus/tr808/] (screenshot: http://puredata.info/Members/claudiusmaximus/tr808/tr808-20050123.png) Some samples (13MB worth) played with \[tabplay~\]; some sounds have more variants than others. It uses a callback system where a $0-receive-name plus parameters is sent to a receive within an abstraction with the same $1 as the caller, which sends back the name of the table to play using \[ ; ... ( . This enables many players to share the same sample tables (thus saving memory) or not (can't think of a reason why at the moment, but hey).
Does PD work with cent as unit for pitches?
I'm working with just intonation and wonder whether Pd can digest "cents", the logarithmic conversion of frequencies measured in Hz. I'm used to it, and for many applications it is the only sensible thing. Most important for me is to have a high resolution for pitches: the higher the better, it can't be high enough... What is the limit of precision in Pd? Thank you very much in advance for your answers. 2357matrix
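For reference, the cents conversion itself is simple: 1200 cents per octave, so cents = 1200 * log2(f2/f1). A minimal sketch:

```python
import math

# Convert between a frequency ratio and cents (1200 cents = one octave,
# 100 cents = one equal-tempered semitone).

def ratio_to_cents(ratio):
    return 1200.0 * math.log2(ratio)

def cents_to_ratio(cents):
    return 2.0 ** (cents / 1200.0)

print(ratio_to_cents(2.0))            # octave -> 1200.0
print(round(ratio_to_cents(3 / 2), 3))  # just fifth -> 701.955
```

In Pd the same mapping is available via \[mtof\]/\[ftom\] (semitones, so MIDI pitch * 100 gives cents) or \[expr\].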
Hi. If you are interested in sound synthesis algorithms, you may like this evolving online book: Digital Sound Generation, [www.icst.net/dsgdownload]. Regards, bf
\[writesf~\] sample rate bug?
Hello all- So...my soundcard is set to use a sampling rate of 96kHz, and my PD startup preferences are set to "-r 96000". When I send an "open" message to \[writesf~\], I include a "-rate 96000" flag, but the file always gets created at 44.1kHz for some reason. (The sound is annoyingly pitched down an octave or so.) Is this a bug, or am I just missing something? I have included a copy of the subpatch below; any help would be greatly appreciated! -Tim

#N canvas 224 44 449 382 10;
#X msg 129 286 start;
#X msg 198 286 stop;
#X obj 43 336 writesf~ 2;
#X obj 153 71 time;
#X obj 16 173 pack s s s s s s;
#X obj 16 96 makefilename %u;
#X obj 26 147 makefilename %u;
#X obj 153 96 makefilename %u;
#X obj 161 122 makefilename %u;
#X obj 169 147 makefilename %u;
#X obj 16 71 date;
#X obj 21 122 makefilename %u;
#X obj 43 260 inlet~;
#X obj 100 260 inlet~;
#X obj 129 336 print record;
#X text 123 173 time & date stamp;
#X text 10 4 [diskRecorder~];
#X text 13 18 writes an audio file to disk.;
#X text 43 244 main-l;
#X text 101 244 main-r;
#X obj 16 50 bng 15 250 50 0 empty empty empty 0 -6 0 10 -262144 -1 -1;
#X text 231 223 open 24-bit/96kHz file;
#X msg 16 224 open -rate 96000 -bytes 3 -aiff \$1;
#X obj 16 198 makesymbol %s-%s-%s_%s-%s-%s;
#X text 36 48 push me;
#X connect 0 0 2 0;
#X connect 0 0 14 0;
#X connect 1 0 2 0;
#X connect 1 0 14 0;
#X connect 3 0 7 0;
#X connect 3 1 8 0;
#X connect 3 2 9 0;
#X connect 4 0 23 0;
#X connect 5 0 4 0;
#X connect 6 0 4 2;
#X connect 7 0 4 3;
#X connect 8 0 4 4;
#X connect 9 0 4 5;
#X connect 10 0 5 0;
#X connect 10 1 11 0;
#X connect 10 2 6 0;
#X connect 11 0 4 1;
#X connect 12 0 2 0;
#X connect 13 0 2 1;
#X connect 20 0 3 0;
#X connect 20 0 10 0;
#X connect 22 0 2 0;
#X connect 23 0 22 0;
#X connect 23 0 14 0;
Using Pd to edit external synths like Oberheim Xpander
I'd like to develop a control structure to edit/control my Oberheim Xpander using Pd, so that I can avoid splashing out a whopping $399 for such universal editors as Midiquest or Unisyn. It seems that sysex is required to control the parameters, and I have come across various files with info about what sysex numbers control what parameters, but nothing about how to actually implement it. I don't really know anything about sysex, so could anyone point me in the right direction for tutorials on this? Better still, if anyone knows of any freeware editor for Xpander on the mac that would be great. I have tried to use Soundiver but the version I downloaded did not come with a manual and it scares me to use it in case it wipes out all my patches forever!!
Kids these days and their sampling devices
Since this subforum doesn't get much traffic, I thought I'd post up some tracks that I've made with my Pd lovechild/teaching aide, the Womanipulator(TM). It's all sample manipulation, live and controlled by the computer keyboard, without any sequencing. I'm working on a much cleaner, better, modular version of the same patch that I'll be using to build up to a live performance rig, but for now... hope you enjoy! ----- The 1st track is an ambient number I did last winter: http://www.tindeck.com/audio/my/cels/nov26-badluck The 2nd track is a simple hip-hop beat I threw together recently after getting my grubby paws on some new CDs: http://www.tindeck.com/audio/my/azeb/May19_JustLikeBaby
Store message to be sent later?
Hi, Is there something like the \[float\] object (right inlet stores, left inlet sends), but for messages? I have an array that I'm sending "set newarray" into, but I don't want it to actually be sent until it gets a bang. The reason is that one MIDI note picks the array to send, but I want a second MIDI note to trigger the actual sending. The first note picks it, the second note sends it; therefore I need to store it first. Basically, I am wondering if there's an object that can store messages and will send them when it gets a bang. Any thoughts? It would be better if it weren't an extended object, also :-\
Some synth and other things
At this address: [http://solipse.free.fr/site/puredata/patches/index.html] you will find patches and abstractions; play with them, they're for you! kaos. 07/10/07 New version of "drummer": some improvements, and the addition of "memo.patterns" to memorize up to ten patterns. kaos.
Looping a Sample
Hello, I am making this patch that plays loops... I can get the loops to load and play once fine, but I can't get them to loop or repeat... Can anyone help me? Attached is the loop player I modified from drumpad... [http://www.pdpatchrepo.info/hurleur/drumPad\_loop\_player.pd]
how to send commands to terminal with shell
Hello everybody, I'm fairly new to Pd and this is my first question on the forum. I hope somebody can help me, and sorry if this has been covered before (at least I haven't been able to find it). I'm trying to send messages to the OS X Terminal from Pd; what I want to do is basically copy a certain file to a new path with a new name. The text in my message is something like this: cp ./jpeg/001.jpg ./output/NEW01.jpg I have this connected to a shell object, and I have \[print STDOUT\] and \[print DONE\] objects on the left and right outlets, just as in the Pd help examples. I checked my command and it works if I enter it directly in the Terminal, no problems there (being in the right directory in the first place, of course). I also tried to send a "cd" message to the shell object just before sending the other message, in case Pd doesn't tell the Terminal which folder we are in. The only noticeable difference is that in the console I see "DONE: 0" when I send the "cd" message and "DONE: 1" when I send the "cp" message, but no files are copied on my system whatsoever. What can I do? I hope I made myself clear enough. Thanks in advance! :)
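A likely explanation (an assumption about how the external behaves, not confirmed from the post): each message to the shell object is typically run in its own child shell, so a separate "cd" message changes the directory of a process that exits immediately and never affects the later "cp". Chaining both commands into one message, or using absolute paths, avoids this. A sketch of the difference, with throwaway paths created just for the demo:

```python
import os
import subprocess
import tempfile

# Build a throwaway ./jpeg/001.jpg so the copy has something to act on.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "jpeg"))
os.makedirs(os.path.join(base, "output"))
open(os.path.join(base, "jpeg", "001.jpg"), "w").close()

# A lone "cd" in its own shell is useless: the shell exits right after.
subprocess.run("cd %s" % base, shell=True, check=True)

# Chaining "cd" and "cp" in ONE command (one shell) works, like sending
# "cd <dir>; cp ./jpeg/001.jpg ./output/NEW01.jpg" as a single message.
subprocess.run("cd %s; cp ./jpeg/001.jpg ./output/NEW01.jpg" % base,
               shell=True, check=True)
print(os.path.exists(os.path.join(base, "output", "NEW01.jpg")))
```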
visualizing amplitude of partials in additive synthesis
Hello, I have made a little visualization of the amplitude of the partials used to produce Risset's bell (based on a patch by Puckette about additive synthesis: http://msp.ucsd.edu/techniques/v0.11/book-html/node71.html). The graphic is made with Processing; OSC is used to export the data. The sound is here: http://m.larri.eu/RissetsBell_f369.wav ![visuRissBell_329HzCaptFreq50.png](/uploads/upload-a1714eb6-a696-4543-a92c-a6aa435f3898.png)
6th Annual Sound Travels Intensive with Pauline Oliveros
For anyone in Toronto (or able to make the trip here), the 6th Annual Sound Travels Intensive is coming up on August 19th, 2014. There is an introductory track on Max baked into the session, but expands further with other areas like Arduino and Field Recording. Unfortunately I don't know if there is anything Pure Data specific, but there's a good chance that some fellow participants use it (like myself). "The Sound Travels Intensive is an opportunity for artists from across Canada and around the world to create and present new work in Toronto, exchange ideas with others, and hone electroacoustic skills with the guidance of a diverse group of instructors including world renowned artists Pauline Oliveros, Emilie LeBel, Darren Copeland, Ian Jarvis and Hector Centeno. Five intense days of workshop sessions, private instruction and creative activity culminate in a public concert presentation at Toronto’s Artscape Wychwood Barns." [Registration](http://naisa.ca/festivals/sound-travels/intensive/) [Video from last year with Barry Truax](https://www.youtube.com/watch?v=7zNhwpK21kA)
Cleaning up recorded sound
Hi all I'm newly registered here, but have actually been using this forum as a resource for months now, so feel like I know most of you already. I wanted some advice about cleaning up sound on PD. I've been building a sampler which has the ability to skip from one part of the sample to another, but I find that this often results in clicks. Is there any way of getting around this? I've tried using a [lop~], but it doesn't seem to have helped. If a filter is the solution, can someone recommend the right settings for it? My patch can also re-record small parts of the sampler, using a [start $1( message for [tabwrite~]. But I find that this sometimes records dirty sound, almost as if there were a trace of the original recording still there, as with magnetic tape. Has anyone else experienced this? I'd be interested to hear how people deal with these, and any other techniques for achieving clean, high quality recorded sound on PD. This topic has probably been covered before, so feel free to point me towards other posts.
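The clicks come from the waveform jumping discontinuously at the skip point; the usual cure is not a filter but a short crossfade between the old and new playback positions. A hedged sketch of the idea (the function and segment values are illustrative, not from the poster's patch):

```python
# Click-free jump between two playback positions: instead of cutting
# instantly from segment a to segment b, mix linearly from one to the
# other over a few milliseconds' worth of samples.

def crossfade(a, b):
    """Equal-length linear crossfade from segment a into segment b."""
    n = len(a)
    return [a[i] * (1 - i / (n - 1)) + b[i] * (i / (n - 1)) for i in range(n)]

out = crossfade([1.0] * 5, [0.0] * 5)
print(out)  # [1.0, 0.75, 0.5, 0.25, 0.0]
```

In Pd this is commonly done with two overlapping read voices whose gains are ramped in opposite directions with \[line~\]; a \[lop~\] after the fact can't remove a click that is already in the signal.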
13 sample-based compositions
![Cardboard City Cassette medium.jpg](/uploads/upload-7c45f1b9-38b0-443d-8c3a-2f424b4228bb.jpg) hi everyone. thought i'd share some sample-based compositions i created using PD [.mf - Cardboard City](http://capillarywaves.bandcamp.com/album/cardboard-city)
sort of yamaha OPL2 sounding greek/african kalimba ensemble ;-)
I just did this piece where I'm emulating the obscure OPL2 2-op FM chip. The FM sounds are run through an analog filter, and the bassdrum is from an 808. All the patterns are based on 5 against 12. All recorded on a Tascam 644 cassette deck. There is an extensive description on Muffwiggler: http://www.muffwiggler.com/forum/viewtopic.php?p=1595749#1595749 High-res audio: http://tindeck.com/listen/idwc SoundCloud: https://soundcloud.com/lilakmonoke/helike-23 The patch (there is no way to include imgs?): http://googledrive.com/host/0B1beo8lTIeKlOVlESHpWbkpIR2c/helike.png
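For readers unfamiliar with the term, "2-op FM" means one modulator oscillator phase-modulating one carrier. A minimal sketch of that topology (the frequencies, index, and sample rate here are made up, and real OPL2 voices add envelopes and feedback on top):

```python
import math

# One modulator sine phase-modulates one carrier sine: the basic
# 2-operator FM voice. index controls how bright/metallic it gets.

def fm_sample(t, fc=220.0, fm=440.0, index=2.0):
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

sr = 44100
block = [fm_sample(n / sr) for n in range(64)]
print(round(block[0], 6))  # phase 0 -> 0.0
```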
Techtar: a guitar-synth based on pd and arduino
Hi, I'd like to share my techtar project with you. I've just made a new video showing one of its main features: the ability to play itself :) [http://frankpiesik.info/2014/07/20/techtar-diary3/] greetings, frank : http://frankpiesik.info/2014/07/20/techtar-diary3/
Simple Logic Question
Well I have to say I'm a bit of a beginner with PD so this may be a very simple answer. I'm working on building a MIDI/OSC controller with a RPi running PD but that doesn't really apply here. What I need to figure out is how to change where the signal from the buttons goes. I want to have the buttons bang one thing if a master switch is set to one thing but bang another if the switch is set to something else. I hope that makes sense. I'm going to have the buttons bang a part of the patch that outputs a MIDI message and then when the switch is flipped I want them to bang a part of the patch that outputs OSC. This seems really simple but for some reason I can't wrap my brain around which objects/combinations of objects I need to use. I included a couple of ideas I had in the attached patch. I marked the places where I need the switch. Make sure to ask if you don't get something. Thanks in advance, Reuben [http://www.pdpatchrepo.info/hurleur/Switch\_Problem\_Example.pd] : http://www.pdpatchrepo.info/hurleur/Switch_Problem_Example.pd
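The usual Pd answer here is a master toggle feeding two [spigot]s (with a [== 0] in front of one of them), so each button bang passes through only one branch. Stripped of the patching, the logic is tiny; here's a sketch in Python where send_midi/send_osc are hypothetical stand-ins for the two output sections of the patch:

```python
def make_router():
    """Mimic a master toggle gating two [spigot]s: bangs go to the MIDI
    branch while the switch is 0, and to the OSC branch while it is 1."""
    state = {"mode": 0}   # the master switch
    log = []              # records which branch fired, for demonstration

    def send_midi():      # stand-in for the MIDI-output part of the patch
        log.append("midi")

    def send_osc():       # stand-in for the OSC-output part of the patch
        log.append("osc")

    def set_switch(mode):
        state["mode"] = mode

    def bang():           # one button press
        (send_osc if state["mode"] else send_midi)()

    return set_switch, bang, log
```

Flipping the switch between bangs redirects them without touching the buttons themselves, which is exactly what the two-[spigot] arrangement does in a patch.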
Touch osc + ipad + macbook + hardware synth
how can i connect: iPad + layout in TouchOSC + TouchOSC Bridge + MacBook + ? => hardware synth?
below you'll find my lsmod info. echomixer, the alsa-toolkit utility for echo audio products, did work after doing `# alsaconf`. however, I tried to test my config simply by doing `# aplay -vv *`:

```
ALSA lib confmisc.c:670:(snd_func_card_driver) cannot find card '0'
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_card_driver returned error: No such device
ALSA lib confmisc.c:391:(snd_func_concat) error evaluating strings
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_concat returned error: No such device
ALSA lib confmisc.c:1070:(snd_func_refer) error evaluating name
ALSA lib conf.c:3500:(_snd_config_evaluate) function snd_func_refer returned error: No such device
ALSA lib conf.c:3968:(snd_config_expand) Evaluate error: No such device
ALSA lib pcm.c:2143:(snd_pcm_open_noupdate) Unknown PCM default
aplay: main:550: audio open error: No such device
```

So there is still a missing piece.

```
Module                 Size  Used by
snd_layla24           36356  0
snd_seq_oss           40084  0
snd_seq_midi           9792  0
snd_seq_midi_event     8160  2 snd_seq_oss,snd_seq_midi
snd_seq               60456  5 snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_rawmidi           28992  2 snd_layla24,snd_seq_midi
snd_seq_device         9708  4 snd_seq_oss,snd_seq_midi,snd_seq,snd_rawmidi
firmware_class        11744  1 snd_layla24
snd_pcm_oss           52032  0
snd_mixer_oss         20704  1 snd_pcm_oss
snd_pcm               91396  2 snd_layla24,snd_pcm_oss
snd_timer             26500  2 snd_seq,snd_pcm
snd                   65908  9 snd_layla24,snd_seq_oss,snd_seq,snd_rawmidi,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer
soundcore             11204  1 snd
snd_page_alloc        11304  2 snd_layla24,snd_pcm
```
Midi in on linux
@Gimmeapill said:
> do you have the alsa module snd-seq loaded ?

```
$ lsmod | grep snd_seq
snd_seq_dummy          4996  2
snd_seq_oss           36480  5
snd_seq_midi           9984  2
snd_rawmidi           27264  3 snd_usb_lib,snd_mpu401_uart,snd_seq_midi
snd_seq_midi_event     8960  2 snd_seq_oss,snd_seq_midi
snd_seq               59120  6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_timer             25348  3 snd_rtctimer,snd_pcm,snd_seq
snd_seq_device         9868  5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
snd                   58372 16 snd_usb_audio,snd_hwdep,snd_mpu401,snd_mpu401_uart,snd_seq_oss,snd_intel8x0,snd_ac97_codec,snd_rawmidi,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq,snd_timer,snd_seq_device
```

Thanks, Gimmeapill, but it has been loaded all along and of course I am not getting midi.
"Morphine" - fx-morphing engine
for some reason, the list of error messages became even longer...... :(

```
mix.switch.nice 101 ... couldn't create
init.post.dollarg ... couldn't create
flow.receive ... couldn't create
flow.send ... couldn't create
list.split 1 ... couldn't create
flow.send ... couldn't create
flow.send ... couldn't create
flow.receive ... couldn't create
[makesymbol] part of zexy-2.2.3 (compiled: Feb 25 2009) Copyright (c) 1999-2008 IOhannes m zmölnig, forum::für::umläute & IEM
flow.@parse ... couldn't create
flow.receive ... couldn't create
init.dollar.zero.top ... couldn't create
flow.receive ... couldn't create
flow.receive ... couldn't create
list.build ... couldn't create
[demultiplex] part of zexy-2.2.3 (compiled: Feb 25 2009) Copyright (c) 1999-2008 IOhannes m zmölnig, forum::für::umläute & IEM
init.make.unique ... couldn't create
flow.receive ... couldn't create
flow.receive ... couldn't create
flow.send ... couldn't create
list.split 1 ... couldn't create
list.split 1 ... couldn't create
list.length ... couldn't create
flow.send ... couldn't create
flow.send ... couldn't create
wahwah~: an audio wahwah, version 0.1 (email@example.com)
expr, expr~, fexpr~ version 0.4 under GNU General Public License
```

(the same block of "... couldn't create" messages then repeats three more times, minus the library banners, followed by:)

```
error: inlet: expected '' but got 'symbol' ... you might be able to track this down from the Find menu.
error: inlet: expected '' but got 'symbol'
error: inlet: expected '' but got 'list'
```

(and so on, alternating 'symbol' and 'list', a couple dozen more times.)

i'd really love to check out what you have built there! ;)
Interaction Design Student Patches Available
Greetings all, I have just posted a collection of student patches for an interaction design course I was teaching at Emily Carr University of Art and Design. I hope that the patches will be useful to people playing around with Pure Data in a learning environment, installation artwork and other uses. The link is: [http://bit.ly/8OtDAq] or: [http://www.sfu.ca/~leonardp/VideoGameAudio/main.htm#patches] The patches include multi-area motion detection, colour tracking, live audio looping, live video looping, collision detection, real-time video effects, real-time audio effects, 3D object manipulation and more... Cheers, Leonard [http://www.VideoGameAudio.com]

-------------- Pure Data Interaction Design Patches ------------------------------------

These are projects from the Emily Carr University of Art and Design DIVA 202 Interaction Design course for the Spring 2010 term. All projects use Pure Data Extended and run on Mac OS X. They could likely be modified with small changes to run on other platforms as well. The focus was on education, so the patches are sometimes "works in progress" technically but should be quite useful for others learning about PD and interaction design. NOTE: This page may move, please link from: [http://www.VideoGameAudio.com] for the correct location.

Instructor: Leonard J. Paul
Students: Ben, Christine, Collin, Euginia, Gabriel K, Gabriel P, Gokce, Huan, Jing, Katy, Nasrin, Quinton, Tony and Sandy

- **GabrielK-AsteroidTracker** - An entire game based on motion tracking. This is a simple arcade-style game in which the user must navigate the spaceship through a field of oncoming asteroids. The user controls the spaceship by moving a specifically coloured object in front of the camera. Features: Motion tracking, collision detection, texture mapping, real-time music synthesis, game logic
- **GabrielP-DogHead** - Maps your face from the webcam onto different dogs' bodies in real-time with an interactive audio loop jammer. Fun! Features: Colour tracking, audio loop jammer, real-time webcam texture mapping
- **Euginia-DanceMix** - Live audio loop playback of four separate channels. Loop selection is random for the first two channels and sequenced for the last two. Slow volume muting of channels allows for crossfading. Tempo-based video crossfading. Features: Four-channel live loop jammer (extended from Hardoff's ma4u patch), beat-based video cross-cutting
- **Huan-CarDance** - Rotates a 3D object based on the audio output level so that it looks like it's dancing to the music. Features: 3D object display, 3D line synthesis, live audio looper
- **Ben-VideoGameWiiMix** - Randomly remixes classic video game footage and music together. Uses the wiimote to trigger new video via DarwiinRemote and OSC messages. Features: Wiimote control, OSC, tempo-based video crossmixing, music loop remixing and effects
- **Christine-eMotionAudio** - Mixes together video with recorded sounds and music depending on the amount of motion in the webcam. The intensity of the music and the speed of video playback increase with more motion. Features: Adaptive music branching, motion blur, blob-size motion detection, video mixing
- **Collin-LouderCars** - Videos of cars respond to the audio input level. Features: Video switching, audio input level detection
- **Gokce-AVmixer** - Live remixing of video and audio loops. Features: Video remixing, live audio looper
- **Jing-LadyGaga-ing** - Remixes video from Lady Gaga's videos with video effects and music effects. Features: Video warping, video stuttering, live audio looper, audio effects
- **KatyC_Bunnies** - Triggers video and audio using multi-area motion detection. There are three areas on each side to control the video and audio loop selections. Video and audio loops are loaded from directories. Features: Multi-area motion detection, audio loop directory loader, video loop directory loader
- **Nasrin-AnimationMixer** - Hand animation videos are superimposed over the webcam image and chosen by multi-area motion sensing. Audio loop playback is randomly chosen with each new video. Features: Multi-area motion sensing, audio loop directory loader
- **Quintons-AmericaRedux** - Videos are remixed in response to live audio loop playback. Some audio effects are mirrored with corresponding video effects. Features: Real-time video effects, live audio looper
- **Tony-MusicGame** - A music game where the player needs to find how to piece together the music segments triggered by multi-area motion detection on a webcam. Features: Multi-area motion detection, audio loop directory loader
- **Sandy-Exerciser** - An exercise game where you move to the motions of the video above the webcam video. Stutter effects on video and live audio looper. Features: Video stutter effect, real-time webcam video effects

: http://bit.ly/8OtDAq : http://www.sfu.ca/~leonardp/VideoGameAudio/main.htm#patches : http://www.VideoGameAudio.com
Building a modular synth
I'm not sure I entirely understand your first question. Why exactly would plugin support help your modular design in Pd? As for routing audio, I use jackOSX with Pd all the time. Here's how it works for me:

- Open JackPilot.
- In the preferences, set your buffer size and such to your liking. The virtual ins and outs are important: these are the number of connections that will be allocated to each app.
- Start the server.
- Open Pd and, under the media menu, select jack mode. This will open the audio settings dialog with jack set as the input and output device. Adjust the channels to whatever you need.
- Open whatever other app you're using and set the input and output devices to jack server or whatever it says.
- Now, go back to JackPilot and click the Routing button. In the send ports (which are your apps' outputs) unfold Pd's and select an output. Under receive ports, unfold the other app and double-click the input you want to connect to. Or vice versa if you're feeding into Pd. Also, make sure that you're routing the app with the mixed signal (probably the DAW, if you're using that) to the sound card's inputs. This is always labeled "system."

Yes, there will always be latency, but on OSX it seems to be minimized with jack. Even if you're using Pd on its own, you can get lower latency if you route through jack and then into the sound card. You will get added latency every time you go in and out, so try to minimize that. And multiple cores won't make a difference if you're only using Pd, as it will only run on a single core. If you're using other apps, then it might help, as they may run on a different core than Pd. As for the standardization, I know there are some who would like to see everything scaled in a 0-1 range. But I don't think it's that big of a deal in Pd when it comes to sharing patches, as Pd gives you what you need to convert ranges if you need to. I would just go with what works best for your particular needs.
Sometimes standards set by others can get in the way. :-) If you haven't yet, check out hardoff's DIY2 library. It might give you some ideas. [http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] Hope that answers some of your questions. : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
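On the 0-1 scaling point: converting between any two control ranges is just the usual linear map, which in a patch is a short chain of [-], [/], [*], [+] objects (or one [expr]). A throwaway sketch of the arithmetic:

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear range conversion: map x from [in_lo, in_hi] to [out_lo, out_hi].
    This is the same arithmetic a Pd patch does to adapt a 0-1 control
    to, say, a MIDI 0-127 or a frequency range."""
    return (x - in_lo) / (in_hi - in_lo) * (out_hi - out_lo) + out_lo
```

So a 0-1 "standard" costs nothing to adopt or abandon per-patch, which is why the lack of one rarely hurts.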
Inside on a rainy day
Something to share that combines a few different models in a linked way. Start with a wind model based on turbulence; objects in the path vary their signals according to wind speed and their size and texture. [http://www.obiwannabe.co.uk/sounds/effect-wind3.mp3] And a rain model with carefully distributed droplets that make little clicks according to a range of textures they hit... [http://www.obiwannabe.co.uk/sounds/effect-plainrain.mp3] Next is a window pane built around a square lamina with glass-like character. Here's a few knocks on the virtual window with a virtual stick. [http://www.obiwannabe.co.uk/sounds/effect-knockonwindow.mp3] And finally I combine them all in the same auditory scene with causal linkage, so the rain lashes against the window... [http://www.obiwannabe.co.uk/sounds/effect-rainywindow.mp3] (Total object count 80 operators) Andy : http://www.obiwannabe.co.uk/sounds/effect-wind3.mp3 : http://www.obiwannabe.co.uk/sounds/effect-plainrain.mp3 : http://www.obiwannabe.co.uk/sounds/effect-knockonwindow.mp3 : http://www.obiwannabe.co.uk/sounds/effect-rainywindow.mp3
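The wind patch itself isn't posted here, but the classic recipe this kind of turbulence model builds on is noise shaped by slowly drifting filters. A crude numpy sketch of that idea (every constant here is my guess for illustration, nothing from Andy's actual patch):

```python
import numpy as np

def wind(n, sr=44100, seed=0):
    """Very rough wind sketch: white noise through a one-pole lowpass whose
    cutoff coefficient drifts slowly, standing in for gusting turbulence."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-1.0, 1.0, n)
    # slowly drifting "gust" control: a random walk squashed into a small
    # range of lowpass coefficients (higher = brighter = stronger gust)
    gust = np.cumsum(rng.uniform(-1.0, 1.0, n))
    gust = 0.02 + 0.08 * (gust - gust.min()) / (np.ptp(gust) + 1e-12)
    out = np.empty(n)
    y = 0.0
    for i in range(n):
        y += gust[i] * (noise[i] - y)   # one-pole lowpass, time-varying coeff
        out[i] = y
    return out
```

Resonant bandpasses driven by the same gust signal (for whistles around edges and wires) are the usual next layer on top of this.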
Simple noise gate
there's one in the DIY2 library check out mono-gate or st-gate [http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
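For the gist of what such a gate does before opening the patch: follow the input level, open above a threshold, and ease shut below it. A minimal sketch (this is the generic technique, not the DIY2 mono-gate's actual design or constants):

```python
import numpy as np

def noise_gate(x, threshold=0.05, attack=0.9, release=0.999):
    """Minimal noise gate: a peak envelope follower drives a smoothed gain
    that opens (fast) above the threshold and closes (slow) below it."""
    out = np.empty_like(x)
    env = 0.0    # envelope follower state
    gain = 0.0   # smoothed gate gain
    for i, s in enumerate(x):
        env = max(abs(s), env * 0.999)             # peak follower with decay
        target = 1.0 if env > threshold else 0.0   # open or closed
        coeff = attack if target > gain else release
        gain = coeff * gain + (1.0 - coeff) * target
        out[i] = s * gain
    return out
```

The smoothing on the gain is what keeps the gate itself from clicking; a hard on/off multiply would just add a new transient.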
I have a huge project. So, I begin with the kick.
my kick i made a couple of years ago still sounds ok to me, and i have not needed to modify it much. it's basically modelled on the 909 and has a bit more editability than the real thing. [http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
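The 909-style recipe such kicks follow is well known: a sine oscillator whose pitch sweeps down quickly, under an exponential amplitude decay (the real 909 adds click and noise transients on top). A bare-bones sketch with ballpark constants of my own choosing, not the values from hardoff's patch:

```python
import numpy as np

def kick(dur=0.5, f_start=150.0, f_end=50.0, sweep=0.07, sr=44100):
    """909-flavoured kick: exponential pitch sweep from f_start down to
    f_end (sweep = time constant in seconds), exponential amp decay."""
    t = np.arange(int(dur * sr)) / sr
    freq = f_end + (f_start - f_end) * np.exp(-t / sweep)   # pitch envelope
    phase = 2 * np.pi * np.cumsum(freq) / sr                # integrate to phase
    return np.exp(-t / 0.2) * np.sin(phase)
```

Integrating the frequency envelope to phase (rather than multiplying it into a fixed-phase sine) is what keeps the sweep free of its own clicks; in Pd the [phasor~]/[osc~] pair does that integration for you.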
Live Looping Patch
Hi, Try this pack (DIY2 by hardoff), i saw some 808\_state there .. [http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] xray303 : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
Wanted: pure pd sounds / tracks / patches
details: 'pure data' means pure data vanilla or pure data extended. it excludes the use of any other sound processing application to make or sequence sounds, and excludes the use of VST, ladspa, audio unit etc plugins within pure data itself. triggering and controlling sounds with external hardware such as midi knobs and faders, arduino, etc, is totally fine. using an audio editor to record and do simple edits (crop, normalize, etc) of pd material is also fine, as long as the sound is not resequenced and no effects are added. basically the sound needs to start and end in pd. this doesn't mean it all has to happen realtime (i have been having lots of fun lately running sampled pd sounds through various sampling devices and effects). my goal at the moment is to make some 'sample packs' of pure pd sounds which i will then release here and in other places under the creative commons license, so any submissions would need to be covered by an equally free agreement. after that, what i do with those samples, and what anyone else does with those samples, is up to us, and would be covered by creative commons. my final goal is to make an album's worth of music and sounds using only pure data. i can be contacted by email at firstname.lastname@example.org
Dunno what snd_pcm is returning there, but you should see a separate driver for the Layla24 like this:

```
$ lsmod
snd-seq-midi            5152   0 (unused)
snd-virmidi             2080   0
snd-seq-virmidi         5128   0 [snd-virmidi]
snd-seq-midi-event      6240   0 [snd-seq-midi snd-seq-virmidi]
snd-seq                48784   0 [snd-seq-midi snd-seq-virmidi snd-seq-midi-event]
snd-layla24           149732   3   <-- *here*
snd-pcm                85860   2 [snd-layla24]   <-- and pcm is using it
```

You don't have to recompile the kernel or anything; find a driver and use insmod. Apparently there's a utils package at the Alsa Project website for the Echo Layla24 that sets up everything. Have you tried that one? Also, there's an ALSA Wiki up now that may help you.
@daisy said:
> I have read some where that "if a voice is at same pitch and same loudness and still if one recognize that two voices are different, it is becuase of TIMBRE (tone quality)". (I agree there are other features as well who need to consider).

Timbre is another word for spectrum. The spectrum of a sound is the combination of basic sine waves that are mixed together to make it. Every sound (except a sine wave) is a mixture of sine waves. You can make any sound by adding the right sine waves together. This is called synthesis.

@daisy said:
> First Question:
> So how we can calculate the TIMBRE of voice? as fiddle~ object is used to determine the pitch of voice? what object is used for TIMBRE calculation?.

The [fft~] object splits up the spectrum of a sound. Think of it like a prism acting on a ray of light. Sound, which is a mixture of sines, goes in like white light. A rainbow of different colours comes out. Now you can see how much red, blue, yellow or green light was in the input. That's called analysis. So the calculation that gives the spectrum doesn't return a single number. Timbre is a vector, or list of numbers, which gives the frequencies and amplitudes of the sine waves in the mixture. We sometimes call these "partials". If you use sine wave oscillators to make a bunch of new sine waves and add them together according to this recipe, you get the original sound back! That's called resynthesis.

@daisy said:
> Second Question:
> And how one can change TIMBRE? as pitch shifting technique is used for pitch? what about timbre change?
>
> Thanks.

Many things change timbre. The simplest is a filter. A high pass filter removes all the low bits of the spectrum, a bandpass only lets through some of the sine waves in the middle, and so on... Another way to change timbre is to do analysis with [fft~], then shift or remove some of the partials, and then resynthesise the sound.

@daisy said:
> I have a kind of general idea (vcoder). but how to implement it? and how to change formant?

A vocoder is a bank of filters and an analysis unit. Each partial that appears in the analysis affects the amplitude of a filter. The filter itself operates on another sound (often in real time). We can take the timbre of one sound by analysing it and get it to shape another sound that is fed through the filters. The second sound takes on some of the character of the first sound. This is called cross-synthesis.

/doc/4.fft.examples/05.sheepgoat.pd
Help -> 7.Stuff -> Sound file tools -> 6.Vocoder
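The analysis/resynthesis loop described above can be shown in a few lines. This is a toy numpy version of the idea, not how [fft~] patches are actually structured, and the windowing/normalisation choices are mine:

```python
import numpy as np

SR = 44100

def top_partials(x, n_partials=4, sr=SR):
    """Toy analysis: find the strongest bins of a windowed FFT and report
    them as (frequency, amplitude) partials -- the 'timbre vector'."""
    win = np.hanning(len(x))
    mags = np.abs(np.fft.rfft(x * win))
    bins = np.argsort(mags)[-n_partials:]   # strongest bin comes last
    freqs = bins * sr / len(x)
    amps = 2.0 * mags[bins] / win.sum()     # undo FFT size and window gain
    return list(zip(freqs, amps))

def resynth(partials, n, sr=SR):
    """Toy resynthesis: one sine oscillator per analysed partial, summed."""
    t = np.arange(n) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)
```

Changing timbre then means editing the (freq, amp) list between the two steps: scaling frequencies shifts partials, zeroing amplitudes filters, and swapping in another sound's amplitudes is the cross-synthesis idea behind the vocoder.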
Live altering sounds
[http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
ISO Rad Synth
Check out DIY 2 by Hardoff: [http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
Freeverb~ but cheaper?
@hardoff said: > (search this forum if you need to find it) [http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
check; [http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis] : http://puredata.hurleur.com/sujet-1982-diy2-effects-sample-players-synths-sound-synthesis
Pd/rjdj skillshare @ Eyebeam, NYC, Dec 5th
[http://eyebeam.org/events/rjdj-skillshare] December 5, 2009 12:00 -- 1:30 PM : Introductory workshop on Pd with Hans-Christoph Steiner 2:00 -- 6:00 PM : SkillShare w/Steiner and members of RjDj programming team Free, capacity for up to 30 participants RSVP HERE: [http://tinyurl.com/ykaq3l3] Hans-Christoph Steiner returns to Eyebeam with members of the RjDj programming team from Europe to help turn your iPhone or iPod-Touch into a programmable, generative, and interactive sound-processor! Create a variable echo, whose timing varies according to the phone's tilt-sensor, or an audio synthesizer that responds to your gestures, accelerations and touches. Abuse the extensive sound capabilities of the Pure Data programming language to blend generative music, audio analysis, and synthy goodness. If you're familiar with the awesome RjDj, then you already know the possibilities of Pure Data on the iPhone or iPod Touch (2nd and 3rd generation Touch only). Creating and uploading your own sound-processing and sound-generating patches can be as easy as copying a text file to your device! In this 4-hour hands-on SkillShare, interactive sound whiz and Pure Data developer Hans-Christoph Steiner and several of the original RjDj programmers will lead you through all the steps necessary to turn your phone into a pocket synth. How Eyebeam SkillShares work Eyebeam's SkillShares are Peer-to-Peer working/learning sessions that provide an informal context to develop new skills alongside leading developers and artists. They are for all levels and start with an introduction and overview of the topic, after which participants with similar projects or skill levels break off into small groups to work on their project while getting feedback and additional instruction and ideas from their group. It's a great way to level-up your skills and meet like-minded people. This SkillShare is especially well-suited for electronic musicians and other people who have experience programming sound. 
Some knowledge of sound analysis and synthesis techniques will go a long way. We'll also take a lunch break in the afternoon, including a special informal meeting about how to jailbreak your iPhone! Your Skill Level All levels of skill are OK as long as you have done something with Pd or Max/MSP before. If you consider yourself a beginner, it would help a lot to run through the Pd audio tutorials before attending. NOTE: On the day of the SkillShare we will hold an introductory workshop from 12:00 until 1:30 PM, led by Steiner, for those who want to make sure they're up to speed before the actual SkillShare starts at 2:00. The introductory workshop is for people who have done something in Pd or Max/MSP but are still relative beginners in the area of electronic sound programming. What You Should Bring You'll need to bring your iPhone or iPod Touch (2nd or 3rd generation Touch only), your own laptop, a headset with a built-in mic (especially if using an iPod Touch) and the data cable you use to connect your device to your laptop. Owing to a terrific hack, you won't even need an Apple Developer License for your device! More Information RjDj is an augmented reality app that uses the power of the new generation of personal music players like iPhone and iPod Touch to create mind-blowing hearing sensations. The RjDj app makes a number of downloadable scenes from different artists available, as well as the opportunity to make your own and share them with other users. RjDj.me Pd (aka Pure Data) is a real-time graphical programming environment for audio, video, and graphical processing. Pd is free software, and works on multiple platforms, and therefore is quite portable; versions exist for Win32, IRIX, GNU/Linux, BSD, and MacOS X running on anything from a PocketPC to an old Mac to a brand new PC. Recent developments include a system of abstractions for building performance environments, and a library of objects for physical modeling for sound synthesis. 
---------------------------------------------------------------------------- kill your television : http://eyebeam.org/events/rjdj-skillshare : http://tinyurl.com/ykaq3l3
Yep. Things are going that way. Making it a standard is just my wishful vision! :) Not everyone is going to settle on Pd. There are other dataflow type interfaces to different unit generator sets like CPS, but there's a distinct movement in the direction of dataflow as a method for building procedural audio code...as it should be ;) I started to advocate this years ago as many here know, but found out only recently that EA have indeed ported Pd into some games, mainly for generative music scoring. Sony have something in R&D for the PS3 and certain game audio engine manufacturers have certainly considered it. I continue to knock on their doors, thump my bible and try to convince them to accept the good news into their hearts :) It would be wonderful to establish Pd as the main audio component in games for runtime production because it's the correct tool to break down the barrier between sound designer and audio programmer, that's the way to push things forwards. If you want to support this direction, the title to run out and buy is Spore. Brian Eno and others wrote procedural music scores using a cut down version called EAPd, which Mark Danks (GEM author, now at Sony) led the charge to embed as the audio engine. More than one chapter of the book I'm working on is devoted to designing patches for game applications, how to do dynamic level of detail and interface to event streams from world controllers and physics engines. I'd say dataflow programmers, whether audio or visual, have a good future ahead for commercial employment (but then I'm (very) biased ;) Here's some types of things in dev, these are components for planes, sort of thing you'd use in air combat games or whatever. One of them is developed as a practical example in the book. (I'm trying to get an accurate Supermarine Spitfire working at the moment...) 
[http://obiwannabe.co.uk/sounds/effect-jetengine.mp3] [http://obiwannabe.co.uk/sounds/effect-three-synthetic-jets-flypast.mp3] [http://obiwannabe.co.uk/sounds/effect-singleprop-cockpit.mp3] : http://obiwannabe.co.uk/sounds/effect-jetengine.mp3 : http://obiwannabe.co.uk/sounds/effect-three-synthetic-jets-flypast.mp3 : http://obiwannabe.co.uk/sounds/effect-singleprop-cockpit.mp3
Frank told me to make this tutorial to figure out [http://lists.puredata.info/pipermail/pd-list/attachments/20070528/967bc319/attachment-0001.bin] Thanks for his help. If anybody has the same problem, here is the message he wrote me:

> > I have some newbie questions about Pd. I wanted to write a Patch
> > which is based on this one (maybe):
> > [http://puredata.hurleur.com/sujet-643-sample-player]
> > The mentioned Sample Player has 2 Sliders which control the Start-/
> > End-Loop position which is the exact thing what i was looking for.
>
> Attached is a slightly different sampler, actually not a sampler
> itself, but a tutorial on how to build your own sampler.
>
> > What i want to do:
> > I want to make a patch, a sample player. When i press a button i want
> > to loop the actual sample position according to the key i have pressed.
> >
> > I give you an example. I load a loop which is 120 BPM fast. I set
> > somewhere my tempo. When i press "a" it starts looping 1/4th at the
> > actual play position.
> >
> > For this sort of thing the Sample Player seems to be perfect. But
> > now there are the difficulties.
> >
> > - How can i set the tempo right? I figured out how to calculate the
> > tempo for any note length.
>
> If you work through the attached tutorial, maybe some of the necessary
> calculations (as: duration(smps) => duration(msec) etc.) become clearer.
>
> > - How can i get my keyboard entries into the software?
>
> Use the [keyname] or [key] objects.
>
> > - And last but not least: how can i get the Patch to act how i would
> > like?
>
> Just build it! If you get stuck, try to make a patch that illustrates
> where you got stuck and send it here.
>
> You may want to start with empty subpatches, that divide your
> problem/approach into smaller problems/steps. Like first do a
> completely empty patch and put some empty subpatches in there:
>
> [pd load_file]
> [pd get_duration_in_msec]
> [pd convert_duration_to_BPM]
> [pd get_keypresses]
> [pd play_sample]
>
> or similar. Then give your subpatches inlets and outlets and connect
> them in order. And last, fill in these subpatches with the real patches
> one by one, always checking if every subpatch does what it should do.
>
> > I know...dumb question you know too.
>
> It's not a dumb question at all. While playing samples isn't exactly
> magic, it's also not trivial to do, especially for the first time.

: http://lists.puredata.info/pipermail/pd-list/attachments/20070528/967bc319/attachment-0001.bin : http://puredata.hurleur.com/sujet-643-sample-player
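The duration conversions Frank mentions are one-liners once written out: samples divided by the sample rate give seconds, and 60000/BPM gives the length of one beat in milliseconds. A sketch of the arithmetic (sample rate assumed to be 44.1 kHz):

```python
SR = 44100

def samples_to_ms(n_samples, sr=SR):
    """duration(smps) => duration(msec), as in Frank's note."""
    return n_samples / sr * 1000.0

def note_len_ms(bpm, fraction=0.25):
    """Length of a note at a given tempo. fraction is relative to a 4/4 bar,
    so 0.25 is a quarter note and 0.125 an eighth."""
    quarter_ms = 60000.0 / bpm   # one beat (quarter note)
    return quarter_ms * (fraction / 0.25)
```

So for the 120 BPM example in the question, pressing "a" should loop a 500 ms window starting at the current play position.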
Playing with wavetables
This thread reminded me of two things: 1\. Walsh transforms and binary synthesis, where the Fourier transform is turned on its head and now everything is based on squarewaves. I read about this in "Computer Sound Synthesis" by Eduardo Reck Miranda - a most fascinating book that takes you on a whirlwind tour of the many angles people have taken for synthesising sounds. Sidetracking for one moment: _ A great way to teach yourself PD is to take a "sound synthesis method" and try to build it in PD. But the real fun comes when you try (thanks to the flexibility of PD) to COMBINE synthesis methods: how about a wavetable-based spectral modeling synth, or a waveshaping additive sampler with FM? _ Synthesising things with squarewaves alone sounds delicious to me! But then I am a fan of chiptunes and bleepy sound chips, so I guess I'm biased! Talking of bleeping and chiptunes, trackers, but most importantly **wavetables**... 2\. Klaar Jaytrax - see [http://www.klaar.com/cms/] Screenshot: [http://www.sonicspot.com/jaytrax/jaytrax2.gif] Now freeware, but unfortunately closed source - this is a tracker (think ProTracker, FastTracker, OctaMED, ImpulseTracker) style interface with wavetables instead of samples as the instruments... But the cool thing is that any wavetable can modulate any other pair of wavetables through a complicated effect matrix of upsamplers, downsamplers, pixel distortions, transforms. The wavetables are "draw with the mouse-able", "create mathematically-able", and all the modulation can be done in realtime - whilst the wavetable is being played back. He's made a PocketPC version too - called Syntrax, which is crippled shareware, but it's so damn flexible I had to buy it! 
There is a version he made called the "desktop edition" or "desktop standalone version" or similar that allows you to use all the new modulation techniques and improved interface of the PocketPC version but on your desktop - that version is free. PocketPC Syntrax screenshot: [http://www.clickapps.com/products/syntrax/screenshots/ppc/large/screen1.gif] I've yet to try this; I guess it didn't occur to me until I read this thread, but it would be fantastic to emulate **and EXTEND** the behaviour of all this in puredata... Both Jaytrax and Syntrax are lightning fast to calculate realtime changing wavetables then play back 16 channels of polyphonic data even on tiny underpowered machines - I'm doubtful that the way puredata works would allow this, but we'll see... : http://www.klaar.com/cms/ : http://www.sonicspot.com/jaytrax/jaytrax2.gif : http://www.clickapps.com/products/syntrax/screenshots/ppc/large/screen1.gif
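As a toy illustration of the "everything from squarewaves" idea - not an actual Walsh transform, just a naive additive sum of square waves with hypothetical 1/k weights:

```python
def square(phase):
    """Naive square wave: +1 for the first half of the cycle, -1 for the second."""
    return 1.0 if (phase % 1.0) < 0.5 else -1.0

def binary_additive(freq, partials, sr=44100, n=1024):
    """Sum of square waves at harmonics of `freq`, weighted 1/k.
    A crude 'binary synthesis' sketch, nothing like a real Walsh basis."""
    out = []
    for i in range(n):
        t = i / sr
        s = sum(square(freq * k * t) / k for k in range(1, partials + 1))
        out.append(s)
    return out
```

In PD this would just be a few \[phasor~\]-driven square oscillators summed into one \[dac~\]; the point is that only two amplitude levels per component are ever needed.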
Some new bird sounds
[http://www.obiwannabe.co.uk/sounds/effect-rainforestbirds.mp3] [http://www.obiwannabe.co.uk/sounds/effect-riverbirds.mp3] [http://www.obiwannabe.co.uk/sounds/effect-seabirds.mp3] Background reading and inspiration [http://ccrma.stanford.edu/~tamara/publications/] [http://www.acoustics.hut.fi/research/avesound/pubs/akusem04.pdf] [http://www.csounds.com/ezine/winter2000/realtime/] [http://www.obiwannabe.co.uk/tutorials/html/tutorial\_birds.html] [http://www.indiana.edu/~songbird/pubs/publications\_index.html] [http://web.mit.edu/fee/Public/Publications/Fee\_etal1998.pdf] : http://www.obiwannabe.co.uk/sounds/effect-rainforestbirds.mp3 : http://www.obiwannabe.co.uk/sounds/effect-riverbirds.mp3 : http://www.obiwannabe.co.uk/sounds/effect-seabirds.mp3 : http://ccrma.stanford.edu/~tamara/publications/ : http://www.acoustics.hut.fi/research/avesound/pubs/akusem04.pdf : http://www.csounds.com/ezine/winter2000/realtime/ : http://www.obiwannabe.co.uk/tutorials/html/tutorial_birds.html : http://www.indiana.edu/~songbird/pubs/publications_index.html : http://web.mit.edu/fee/Public/Publications/Fee_etal1998.pdf
\> Andy, \*astonishing\* sounds. \> 100% Pure Pure Pure PureData? :-) \> (allowed answers: YES!!!). Thanks very much Alberto, as you surmise...yes indeed. Not just pure Pd but very efficient Pd. One tries to re-factor the equations and models, transforming between methods and looking for shortcuts, boiling each one down to the least number of operations. There are nicer sounds, but these ones are developed to use low CPU and run multiple instances in real-time. \> About EA: which games? Truth be told, I don't know. If I did I would probably have to observe an NDA anyway. Which is one reason I'm not working on them, because I am going to publish all my methods in a coherent and structured thesis - it's the best strategy to push procedural audio forwards for all. Maybe it will be personally rewarding later down the line. But I do talk to leading developers and R&D people, and am slowly working towards a strategic consensus. All the same, I'd be rather cautious about saying who is doing what; games people like to keep a few surprises back :) \> So this means designing an audio engine which is \> both responsive to the soundtrack/score, as well as \> to the actual action and human input of the game? \> Why wouldn't PD be the natural choice? Pd \_would\_ be the natural choice. Not least of all, its BSD-type license means developers can just embed it. But it has competitors (far less capable ones imho) that have established interests in the game audio engine market, including a vast investment of skills (game sound designers are already familiar with them). So rather than let Pd simply blow them out of the water one needs a more inclusive approach by saying "hey guys..you should be embedding Pd into your engines". Many hard decisions are not technical, but practical. For example you can't just replace all sample based assets, and you need to plan and build toolchains that fit into existing practices. 
Games development is big team stuff, so Pd type procedural audio has to be phased in quite carefully. Also, we want to avoid hype. The media have a talent for seizing on any new technological development and distorting it to raise unrealistic expectations. They call it "marketing", but it's another word for uninformed bullshit. This would be damaging to procedural audio if the marketers hyped up a new title as "revolutionary synthetic sound" and everyone reviewed it as rubbish. So the trick is to stealthily sneak it in under the media radar - the best we can hope for with procedural audio to begin with is that nobody really notices :) Then the power will be revealed. \> Obi, I've noticed that a lot of your tutorials and \> patches are based on generative synthesis/modelling, \> rather than samples. Is this the standard in the game world? No. The standard is still very much sample based, which is the crux of the whole agenda. Sample based game audio is extremely limited from an interactive POV, even where you use hybrid granular methods. My inspiration and master, a real Jedi who laid the foundations for this project, is a guy called Perry Cook; he's the one who wrote the first book on procedural audio, but it was too far ahead of the curve. Now we have multi-core CPUs there's actually a glut of cycles and execs running around saying "What are we going to use all this technology for?". The trick in moving from Perry's models to practical synthetic game audio is all about parameterisation, hooking the equations into the physics of the situation. A chap called Kees van den Doel did quite a lot of the groundwork that inspired me to take a mixed spectral/physical approach to parameterisation. This is how I break down a model and reconstruct it piecewise. \> Is this chiefly to save space on the media? Not the main reason. But it does offer a space efficiency of many orders of magnitude!!!! 
:) Just as a bonus :) I don't think many games developers have realised or understood this profound fact. Procedural methods \_have\_ been used in gaming, for example Elite was made possible by tricks that came from the demo scene to create generative worlds, and this has been extended in Spore. But you have to remember that storage is also getting cheaper, so going in the other direction you have titles like Heavenly Sword that use 10GB of raw audio data. The problem with this approach is that it forces the gameplay to take a linear narrative, they become pseudo-films, not games. \> Cpu cycles? No, the opposite. You trade off space for cycles. It is much much more CPU intensive than playing back samples. \> Or is it simply easier to create non-linear sound design \> this way? Yes. In a way, it's the only way to create true non-linear (in the media sense) sound design. Everything else is a script over a matrix of pre-determined possibilities. oops rambled again... back to it... a.
Anyone working with chiptunes or console emulation?
Well for now I'm concerned about getting an authentic sound without worrying too much about emulating the specific operation of the hardware. I also want to add a few extras that the original didn't have, like vibrato, sweeping of the triangle channel, and maybe some other small odds and ends. The pulse channel sound is simple to emulate, especially if you aren't concerned about timing their length & envelope data against other components, like the frame counter or interrupt lines. The triangle is a bit trickier to get authentic. The noise channel is particularly difficult to emulate, at least for the inexperienced like me. The NES noise sound in itself is easy to reproduce as a sample using 4bit level quantized noise. The 2A03 actually uses a long shift register and a XOR gate to generate a new pseudo-random bitstream for noise samples. Rather than use my very own enveloping like I did for pulse and triangle channels, I will have to reproduce the native specs of the counters/timers and decay envelope modes, especially to get the looped-decay noise channel mode to sound authentic. So I guess I will be using some of the same dataflow and control logic that the hardware uses, but I want to cut as many corners as I can right now, especially where I can easily provide userdata through the GUI instead of poking 6502 assembly. Then I can use my own, simpler methods for programmable manipulation of all of the inputs, but ideally get the same-sounding output as I would programming the actual hardware. Right now I'm going through this document to try and get a full picture of the hardware: [http://nesdev.parodius.com/NESSOUND.txt] I believe that has everything needed to directly emulate the channels, I just gotta keep studying the hell out of it until I can determine all of the specifics on timing, mode switching, sample sizes and such. The zenpho patch looks similar to what I want to use eventually for making real music, I'll probably refer to that a few times. 
I see he uses a completely different PWM routine than I do. Once I get the NES channels sounding right, I plan to keep adding voices from other old sound chips I enjoy, along with more extras, and use it as my main synth. Thanks a lot for that headlessbarbie link. Really amazing stuff. I've had thoughts about later on trying to emulate the 2A03 hardware directly, so that I could possibly put pd on a board with a fast CPU (maybe a SuperH) that would be small enough to fit in a NES cart. Then I could use pd as just an interpreter between the user and the real live sound hardware. : http://nesdev.parodius.com/NESSOUND.txt
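For the noise channel, the shift-register-plus-XOR scheme described in the NESSOUND document can be sketched in a few lines. This is a simplification - the real timer periods, output inversion and frame-counter interaction are glossed over, and the tap positions (bit 1 for the long mode, bit 6 for the 93-step "short" mode) are my reading of the doc:

```python
def nes_noise(steps, short_mode=False, seed=1):
    """Sketch of a 2A03-style noise generator: a 15-bit shift register,
    feedback = bit0 XOR bit1 (or bit6 in short mode), shifted in at bit 14.
    Returns the stream of output bits."""
    reg = seed & 0x7FFF
    tap = 6 if short_mode else 1
    out = []
    for _ in range(steps):
        fb = (reg & 1) ^ ((reg >> tap) & 1)
        reg = (reg >> 1) | (fb << 14)
        out.append(reg & 1)  # bit 0 drives the DAC level
    return out
```

Each output bit would then be scaled by the 4-bit envelope level, which is where the looped-decay behaviour mentioned above comes in.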
I think this is a cultural question, not a technical one Shankar, and there's a cultural answer. For almost 15 years in "the west" there's been an industry pushing quick and easy solutions to making music. That quick and easy approach is "buy our sample libraries". I could write you a PhD-thesis-type essay on why this sucks, why the sample peddlers have triumphed over the possibility of human-programmable synthesisers, why the cult of emulation and "hip hop" producers sampling loops of other people's records has brought music down to the lowest common denominator. But I can sum it up in one... creativity is hard. (And nobody wants to pay for it any longer, or invest time in cultivating it.) I'm trying, in my own gentle way, to spread a little understanding and fresh enthusiasm for what I think has become a hidden art. Really understanding sound and synthesis is orders of magnitude (if there were such a scale) more difficult than grabbing a breakbeat from a record or going out with a microphone to collect material. Music making with preset tools has become so easy, and producers so lazy, that even the top-paid studio producers do little except arrange other people's work, and many lack even the most basic engineering skills to do recording and preproduction work on live material. Everyone wants to be a musician these days and put their "original" creations up on MySpace, and they can be - with Acid, FruityLoops and Reason you can just audition a few loops, press the "good" button and voila! Except your "original creation" is just a permutation on the same sound everybody else is making. That's why much music is so dreary, predictable and stale these days, I think. The mainstream tools have become so rigid that it's impossible to subvert their use, and subversion is the essence of creative art. Anyway, it would be arrogant to judge other people's approach to music making this way. 
I myself spent many years hooked into the cult of sampling and making music from other people's work - it just became a boring creative cul-de-sac. However, I would argue as a professional producer who has seen the industry go through many changes that the easy route to music making with sample libraries, combined with the mainstream media's greed for fast and cheap products, has basically killed off a generation of really creative musicians and producers. I've revised my paraphrasing of Miller about "undoing the sampling revolution". There never was a sampling revolution. Sampling is the status quo, and the synthetic revolution is still waiting to happen. I say, stick with Pd, put in the effort to really understand manipulating and creating sound from first principles, and you will harvest the fruits of its power and let your genuine creativity shine through.
"Frozen reverb" is a misnomer. It belongs in the Chindogu section along with real-time timestretching, inflatable dartboards, waterproof sponges and ashtrays for motorbikes. Why? Because reverb is by definition a time variant process, or a convolution of two signals one of which is the impulse response and one is the signal. Both change in time. What you kind of want is a spectral snapshot. 1) Claudes suggestion above, a large recirculating delay network running at 99.99999999% feedback. Advantages: Sounds really good, its a real reverb with a complex evolution that's just very long. Problems: It can go unstable and melt down the warp core. Claudes trick of zeroing teh feedback is foolproof, but it does require you to have an apropriate control level signal. Not good if you're feeding it from an audio only source. Note: the final spectrum is the sum of all spectra the sound passes through, which might be a bit too heavy. The more sound you add to it, with a longer more changing sound, the closer it eventually gets to noise. 2) A circular scanning window of the kind used in a timestretch algorithm Advantages: It's indefinitely stable, and you can slowly wobble the window to get a "frozen but still moving" sound Problems: Sounds crap because some periodicity from the windowing is always there. Note: The Eventide has this in its infiniverb patch. The final spectrum is controllable, it's just some point in the input sound "frozen" by stopping the window from scanning forwards (usually when the input decays below a threshold). Take the B.14 Rockafella sampler and write your input to the table. Use an \[env~\]-\[delta\] pair to find when the input starts to decay and then set the "precession percent" value to zero, the sound will freeze at that point. 3) Resynthesised spectral snapshot Advantages: Best technical solution, it sounds good and is indefinitely stable. Problems: It's a monster that will eat your CPUs liver with some fava beans and a nice Chianti. 
Note: the 11.PianoReverb patch is included in the FFT examples. The description is something like "It punches in new partials when there's a peak that masks what's already there". You can only do this in the frequency domain. The final spectrum will be the maxima of the unique components in the last input sound that weren't in the previous sound. Just take the 11.PianoReverb patch in the FFT examples and turn the reverb time up to lots.
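Option 1 above - the recirculating delay with near-unity feedback that can be zeroed to flush it - reduces to a few lines of pseudo-DSP. The delay length and feedback value here are arbitrary placeholders, not tuned reverb parameters:

```python
def recirculating_delay(input_sig, delay=2205, feedback=0.999):
    """Mono recirculating delay line: out[n] = in[n] + feedback * out[n - delay].
    With feedback near 1.0 the tail decays extremely slowly ('frozen');
    setting feedback to 0 empties the line, like Claude's zeroing trick."""
    buf = [0.0] * delay
    out = []
    for i, x in enumerate(input_sig):
        y = x + feedback * buf[i % delay]
        buf[i % delay] = y
        out.append(y)
    return out
```

A real version would use a network of several such lines at mutually prime delay lengths (as in \[rev2~\] style reverbs) so the "frozen" spectrum doesn't comb-filter on one period.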
Swept sine deconvolution
Hi Guys, I was referred to this thread by Serafino Di Rosario, and I will test this PD patch for performing ESS measurements. Everything is very interesting for me, and it seems that Katja did a very good job! Regarding the problems encountered, here is some information: 1) sine-phase-matched sweep. This method is very useful when performing distortion measurements, or computing multiple-order IRs to be used in a non-linear convolution processor (for emulating the nonlinearities of a device). For the method to work, it is mandatory that the sine sweep is sine-phased not only at the beginning, but also at the end of each octave. This way, each harmonic-order IR will be phase matched with the linear IR. The provided formulation solves this problem, and it is very good to see it explained here so simply. The importance of using a phase-synced exponential sweep was first discovered by Antonin Novak, a Ph.D. student of the universities of Prague and Le Mans. 2) The ripple at low frequencies can be controlled by a proper fade-in. The choice of the "optimal" fade law is still a big subject under scientific discussion. Hann windowing is just a very initial, suboptimal approach. I plan to investigate further the choice of the optimal fade law, and publish something on this topic soon. 3) The concept of cutting away everything before the arrival of the direct sound is wrong, in my opinion. The "silence" before the arrival of the direct sound has a very important physical meaning: it is the "time-of-flight" of the sound, and provides an accurate measurement of the distance between the source and the receiver. Furthermore, it contains "background noise", which is a very important quantity to know, for example when deriving STI from the IR measurement. So PLEASE, do not cut away this initial silence! 
If the IR has to be used as a filter for a convolution-based reverb plugin, the plugin must be intelligent enough to analyze the IR and give the user the possibility to keep this initial silence or cut it away. For example, IR-1 from Waves gives these possibilities. In any case, a measured IR of a room should always contain the time-of-flight... Publishing "pre-cut" IRs is wrong, and in the long run will cause a lot of trouble... 4) The "fractional delay", for the same reasons, should NOT be corrected! If the time-of-flight is fractional, good, let's stay with this fact. As pointed out, cutting (time-shifting) the measured IR improperly can alter its spectrum. So, please, keep every measured IR as it comes out from the convolution with the inverse sweep... If the higher-order distortion products are not needed, it makes sense to keep only the linear part, but always starting from the true "zero time". Let's make an example: I generate a 20 s long sweep at 48 kHz, that is 960,000 samples. The inverse sweep will also be 960,000 samples long. I play the sweep, and record the room response for, say, 1,200,000 samples, to be sure of capturing the complete reverberant tail even at the higher frequencies. Now I convolve the recorded signal (1,200,000 samples) with the inverse sweep (960,000 samples), and I get a convolved signal which is 2,159,999 samples long. If I want to keep a 4 s long IR, containing only the linear response, I should throw away the first 959,999 samples, and keep the following 192,000 samples. As this signal starts from the true "zero time", the main peak will not be at the very beginning, but delayed by an amount corresponding to the source-receiver distance. If it were 10 m, it will be 10/340 = 0.0294 seconds... 5) For performing efficiently the convolution of very long filters (in the example above, the inverse sweep was nearly 1 million points) it is advisable to employ a partitioned convolution scheme. 
That is, the filter is split into a number of blocks, so that instead of performing a single, very long FFT, a number of shorter FFTs is performed instead. On my web site you will find a couple of papers explaining the partitioned convolution algorithm. This is the same algorithm employed in the well-known BruteFIR open-source program by Anders Torger. Bye! Angelo Farina
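The sample bookkeeping in Angelo's example (point 4) can be checked mechanically - this just restates his arithmetic, with the 340 m/s speed of sound taken from his own example:

```python
SPEED_OF_SOUND = 340.0  # m/s, the value used in the example above

def linear_ir_slice(sweep_len, rec_len, keep_len):
    """Where the linear IR sits in the deconvolved signal:
    the full convolution has rec_len + sweep_len - 1 samples, and the
    'zero time' sits right after the first sweep_len - 1 samples.
    Returns (total_length, start_index, end_index)."""
    total = rec_len + sweep_len - 1
    start = sweep_len - 1  # samples to throw away before zero time
    return total, start, start + keep_len

def time_of_flight(distance_m):
    """Delay of the direct sound for a given source-receiver distance, in seconds."""
    return distance_m / SPEED_OF_SOUND
```

With the numbers from the post (960,000-sample sweep, 1,200,000-sample recording, 4 s kept at 48 kHz) this reproduces the 2,159,999-sample result, the 959,999 discarded samples, and the 192,000 kept ones.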
PD audio recognition
\[fiddle~\] gives you the main pitch of the incoming sound. I am doing a project with sound analysis and sound production driven by that analysis. So far I have been inspired by the book "Interactive Music Systems" by Robert Rowe. I split the patch in two parts: LISTENER (analysis) and PLAYERS (sound production). For the moment I have defined 8 different styles of incoming sound, very primitive, based on 3 pairs of parameters: CHAOS / REGULARITY, LONG / SHORT, STRONG / LOW. These have to be weighted differently for different situations, but I think it's a good way to start. Now I'm a little unsure about the actions of the PLAYERS. I'm torn between two positions: use a lot of different sounds (audio files of crowds, weather, voices, drums, plus synthesis and live recording and playing), or focus on a limited range of sounds and use them to death. If you'd like to see the project, it's in French, sorry: [http://impala.utopia.free.fr/projets/index.php?mode=plus&id=1] : http://impala.utopia.free.fr/projets/index.php?mode=plus&id=1
\[block~ 64 1 0.25\] The '64' is the number of samples pd will process in one 'block'. That is the default blocksize... you could set it to any power of 2 if you wanted to. I'm not too sure, but I think that by processing a block of samples at a time, rather than processing sample by sample, pd can cut CPU usage. The '1' is the oversampling (overlap) amount. If it's just 1, then samples will be processed one block at a time, but if you set it to 2 or 4, or whatever, then the sound will be processed 2 or 4 times in parallel, with an offset. Sorry, it's hard for me to explain how that works. But importantly here, the '0.25' is the downsampling rate. Because pd defaults to a 44100 Hz sampling rate, your 11025 Hz samples will be played 4 times too fast, therefore you need to downsample to get the correct rate. But what object are you using to play back your sounds? \[tabread4~\] / \[tabplay~ \] / \[readsf~ \] ??? I think there are ways for all of these to play back samples at the correct rate without changing \[block~\] settings.
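The alternative mentioned at the end - playing the table back at the right rate instead of reblocking the whole patch - amounts to reading the array at a fractional speed. A 2-point linear-interpolation sketch (the real \[tabread4~\] uses 4-point interpolation, so this is only the idea):

```python
def play_at_rate(table, rate, n_out):
    """Read `table` at fractional speed `rate` with linear interpolation,
    like [tabread4~] driven by a ramp whose slope is `rate`."""
    out = []
    pos = 0.0
    for _ in range(n_out):
        i = int(pos)
        if i + 1 >= len(table):
            break  # ran off the end of the table
        frac = pos - i
        out.append(table[i] * (1 - frac) + table[i + 1] * frac)
        pos += rate
    return out

# an 11025 Hz file played in a 44100 Hz patch needs rate = 11025/44100 = 0.25
```

In Pd terms: scale the index ramp feeding \[tabread4~\] by (file sample rate / patch sample rate) and no \[block~\] changes are needed.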
Hey guys, Wikipedia these, and net-search - I did for my mémoire a few years ago... Walt Disney's Fantasia: they toured with live mixers in surround for that film. They used the pan pots developed by Blumlein. (Brit) Blumlein was the guy who showed mono can imitate space by being panned between two distribution sources (speakers). There was Fletcher (Am) - Mr WALL OF SOUND - each source, a speaker... Too expensive - and impossible for mass distribution. Blum wins... Nowadays, with PD... Surround dac 1 2 3 4 5 6 7 8 then link with adat 9 10 11 12 etc.... (Soundcards - Protools Digi 001, MOTU 828 now sell for under 300 used!! - MOTU drivers are current, but not for Linux. RME is Linux/Win/Mac ready.) (Sound on Sound has great articles on the following!!!) If you follow Dolby... channels 1 2 3: 1 and 3 ambient front, 2 centre voice; 4 and 5 - left and right behind, a mix of the front with 5-20 ms delay, depending on image, with high and low cuts, so it imitates how we hear sound when it is behind us; 6 bass - dumped to the woofer below 110-80 Hz. Haven't read up on DTS (check Sound on Sound). All the numbers are different depending on the setup of your program of choice, and sound card... but the distribution is l/r front, l/r rear with delay and filter cut, centre front, and bass. This creates a uniform distribution method for cinema - good to know if you have mixed a film in stereo and do not want the Dolby effect in place!!! Or, when you get a Dolby mixer to work on your film, she or he is preparing your film for this in the cinema... Test it: do a surround mix, saving your work. Play it back on multitrack through a Dolby amp from a surround DVD with coaxial... then connect to the amp from a DVD reader with each track connected by a coaxial cable... then connect a multitrack sound card directly to your amp, with each track linked to an out..... you should hear some differences... But the man at IEM, Mr PD and GEM, has been working on their surround for years - and has distributed the patch recently. 
Plus Ambisonics is around for Max... and I thought there was a PD port.... But it begs the question of mass distribution versus unique design and experiences of sound....
[musical] -- Music Scale
Creates music scales using simple lists or by sending new pitches to individual notes. The more creation arguments there are, the more inlets there will be, similar to \[pack\] in that regard. It sends a frequency value from the first outlet and the unaltered midi value from the second. \[musical 40 4 7\] <-- sending 0, 1, and 2 to this object will give you respectively 40, 44, and 47\. It assumes you want the scale to continue to higher/lower octaves, so 3, 4 returns 52, 56 and -1, -2 returns 35, 32\. If you send \[musical 40 4 7\] a list like \[5 9(, 5 and 9 would replace 4 and 7\. If you send a message with 'k' at the beginning, like \[k 45(, it changes the root note (40) to 45\. You could also send a full message like \[k 45 5 9(. By default, \[musical\] reserves space for 12 intervals in a scale, but if you specify more than 12 creation arguments, your object will reserve more space. You can also send it an 'oct' message to change the octave. The default oct value at creation is 12\. Say you have \[musical 40 4 7\] and you send \[oct 5(, the first value after 47, instead of being 52, would be 45 (40 + 5). I've included some patches to showcase the object. [http://www.pdpatchrepo.info/hurleur/musical.zip] : http://www.pdpatchrepo.info/hurleur/musical.zip
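The mapping described above can be restated as ordinary code. This is my reconstruction from the examples in the post, not the object's actual source:

```python
def musical(root, intervals, index, oct_size=12):
    """Sketch of the [musical] mapping: `intervals` are semitone offsets
    from the root; indices past the scale wrap into higher/lower octaves
    (octave span settable via the 'oct' value). Returns (midi, frequency)."""
    degrees = [0] + list(intervals)        # degree 0 is the root itself
    octave, degree = divmod(index, len(degrees))
    midi = root + degrees[degree] + octave * oct_size
    freq = 440.0 * 2 ** ((midi - 69) / 12)  # standard MIDI-to-Hz conversion
    return midi, freq
```

With root 40 and intervals 4 and 7 this reproduces the examples above: 0, 1, 2 give 40, 44, 47; 3, 4 give 52, 56; -1, -2 give 35, 32; and with oct set to 5, index 3 gives 45.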
Building a modular synth
I have been working a lot with PD lately and have built some synths that I think make some pretty interesting sounds. I have also built some sequencers that produce complex patterns. I have lately been researching modular analogue sequencers, the big rack-type things. This has been giving me ideas about building a complete modular system with PD. I have been thinking I could use my synths and sequencers together much more easily if they were modular. Also, if a modular standard was developed, like standard CV ranges in analogue gear, we could trade module patches back and forth and build huge systems. I have a few things I have been trying to figure out. - Since this system is modular, would I be better off with something with VST/AU/RTAS/LADSPA etc. support? Like a DAW with Reaktor? I would be able to patch back and forth between the effects and sound generators. - Does anyone know a good way to run the output from a program like Reason or a DAW into PD on OSX? I can't figure out how to do it with Jack. I have been running out to my sound card and then back into it with a patch cable (completely ridiculous). - It appears there will always be some type of delay/lag/buffer/latency with PD, maybe with all software synths? In my masterplan giant modular system with a computer and some external analogue effects / sound generators, I could patch out of the computer and then back into it. Will the latency kill me doing this? Will a crazy quad-core, tons-of-RAM computer deal with super low latency? I know that is a bunch of questions in one post, but if anyone has any ideas let me know. Thanks
PD SYNTH - PLEASE HELP!
Hello, my name is Martin. I am doing a simple PD patch but I have a lot of doubts and problems with it. What I need is a simple synth to control ADSR, have 4 presets, simple modulation (AM, FM), and include a free improvised composition. What I have done is this (please see my pd patch synth.pd). What I really want is something like this: [http://music.ucsd.edu/~tre/] - a simple 6-voice synth with presets. Can anyone help? Thank you very much, Martin. [http://www.pdpatchrepo.info/hurleur/synth.pd] : http://music.ucsd.edu/~tre/ : http://www.pdpatchrepo.info/hurleur/synth.pd
Variable delay patch - need help
If you're interested, I'm working on a DSP system for wave field synthesis. WFS is an audio rendering technique where an array of loudspeakers is used to reproduce the sound field within a region. [http://en.wikipedia.org/wiki/Wave\_field\_synthesis] You can simulate Doppler effects etc using a standard variable delay, but it is not an entirely accurate way to simulate sound emitted from a moving source. For example the effects of a change in delay will be heard instantly for a standard variable delay, whereas for a real moving source a change in position will only be heard after the transit time delay. You are right of course about the skipping of samples. My MATLAB tests have shown me that this gives rise to a type of aliasing when the sound is slowed down by a lot. Thankfully, with the extrapolation for writing to fractional positions, the effect is rather subtle although by no means inaudible. This could be gotten around by looping through every sample between the position change, but I'm pretty sure that would be impossible in pd. I am doing this because I want to compare various methods of realising moving sources. : http://en.wikipedia.org/wiki/Wave_field_synthesis
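The fractional-position read the poster mentions can be sketched as a \[vd~\]-style interpolated tap into a circular buffer; the transit-time point above then just means the delay in samples should track distance/c * sample_rate rather than jumping instantly (linear interpolation here, where a real implementation might use 4-point):

```python
def delayed_read(buf, write_pos, delay_samples):
    """Read from a circular buffer at a fractional delay behind write_pos,
    with linear interpolation between the two neighbouring samples."""
    n = len(buf)
    pos = (write_pos - delay_samples) % n
    i = int(pos)
    frac = pos - i
    return buf[i] * (1 - frac) + buf[(i + 1) % n] * frac

# transit-time delay for a source d metres away, in samples (c assumed 343 m/s):
def transit_delay_samples(d, sr=44100, c=343.0):
    return d * sr / c
```

For a moving source you would ramp `delay_samples` smoothly between positions instead of setting it directly, which is what produces the Doppler shift.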
Guitar multi-effects rig
This is my live guitar effects rig as of Feb 14, 2009\. Please let me know if you find it useful or have any ideas for effects or other improvements. If you make some music with it, I'd love to hear it! Once I have some more time to program it, my next effect will probably be a vocoder. Run effectsrig.pd to load it up. A midi expression pedal is recommended for the best experience - but it's not required. It contains the following effects: whammy~ ------- Digitech Whammy style pitch shifter. Allows for smooth changes to the pitch shift amount. Based on the one posted by "kenn" on the puredata.info forums (which in turn is based on the pd example code). shimmer~ -------- A "shimmer" synth-like effect. This is done with a pitch shift in a feedback loop of a very short delay. octfuzz~ -------- Octave-up distortion like you can obtain with the classic transformer and two-diode rectifier circuit. Basically it just full-wave rectifies the audio signal. This one really brings out the high frequencies (sometimes a little too much!). leslie~ ------- A stereo Leslie (rotating speaker) simulator. This is one of my favorites. If modulation is turned all the way down it becomes tremolo. Take one of the outlets for mono use. Try it in stereo for super-swirley bliss! When using an expression pedal to control the rate, heel down will bypass the effect. Expression pedal control is done by expression.pd. It simply reads in MIDI and scales it to a 0-\>1 range. You can change the midi channel used by editing this file. The preset system is a little hack-ish, but it works for me. If anyone has any better ideas on how to do this, I'd love to hear them. When you load up the main effectsrig.pd file, you will see a bunch of message boxes. These are quick-settings buttons - just click one to apply that effect. They are designed so you can click a couple in a row to quickly apply a few different settings. To start over, click the big "default" one on the left. 
It can also load presets based on midi messages. I use this with my Eventide TimeFactor pedal. When I change presets on the TimeFactor, PD follows along. This is handled by the box in the top right. The symbol box is for song titles, and the number boxes show the current TimeFactor preset. Open this box to see how I've done a couple of example midi-controlled presets. "pd your\_love\_never\_fails" is a more complicated example that changes the expression pedal behavior slightly. If you want to use a different midi channel for listening to program changes, just edit preset.pd and presetnum.pd. preset.pd outputs a bang when the preset number supplied as a parameter is chosen. presetnum.pd just outputs the number of the selected preset. [http://www.pdpatchrepo.info/hurleur/effectsrig.zip] : http://www.pdpatchrepo.info/hurleur/effectsrig.zip
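The octfuzz~ idea - full-wave rectification doubling the apparent pitch - is essentially one line plus a DC correction. A sketch of the technique, not the patch's exact code:

```python
def octfuzz(samples):
    """Full-wave rectify and re-centre: abs() folds the negative half-cycles
    up, doubling the fundamental's apparent frequency (the octave-up fuzz),
    then the introduced DC offset is subtracted back out."""
    rect = [abs(x) for x in samples]
    dc = sum(rect) / len(rect)  # crude block-average DC estimate
    return [x - dc for x in rect]
```

In the patch the DC removal would more likely be a high-pass filter than a block average, which also tames some of the excess high-frequency content mentioned above.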
Sample slicer with user-selectable slices
for a long time, i have been using 'cut everything into 16 equal sized chunks and then rearrange them' sample slicers. however, unless you use really even 4/4 loops, the end result never sounds really good. so today i made a couple of abstractions that let me visually choose which sections of a loop i want to trigger. the first is [breakpoint-maker], which you can use to line up your sample cut points and create a message box with all your cut-point data. and then the second is a little player abstraction called [smp-slice] that allows for pitched playback and sequencing of the sample sections. much nicer and cleaner sample cutting for sure. (edit - just added a little 5ms envelope to the player to avoid some clicks) http://www.pdpatchrepo.info/hurleur/sample-slicer.zip
Using arrays and signals in a same external
Hi, I have been trying to make externals using C and I am having difficulty finding information on a possibly rough subject: I need to use several arrays within an external, which must be modified by the current samples and returned once updated. For instance, I am considering the following problem:

- I need to pass a signal sample (inlet~) named e[n] (current input sample);
- I need to pass two arrays of the same size N_tab, tab1 and tab2 (for instance corresponding to whole signals);
- I need to output one signal sample (outlet~) s[n] (current output sample);
- I need to output a modified version of array tab2.

The core DSP processing could be (for one sample, or to be repeated inside a loop if externals can only take a signal buffer instead of a single sample):

tab2 = tab2 + e[n] * tab1;
s[n] = tab2[0];
// In fact, I will use a circular buffer and pass the index of the buffer as an inlet, and return
// the updated value of this index as another outlet.

I would very much appreciate any help with my problem (use of _array, _garray - interesting to plot the arrays - or another type of variable?) as I do not see, for the moment, how to mix signal samples and (maybe) rather huge arrays in the same external. I'm "lost in Pd translation" at this time, while I intend to write several externals to process sound in realtime using Pd and offer them (sources and binaries) by the end of the year, as soon as the whole set of objects is efficient and documented. Sincerely yours, Laurent Millot
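Setting the Pd-specific plumbing aside (in a real external you would fetch tab1/tab2 with garray_getfloatwords() inside the perform routine), the per-sample core described above could look like this sketch; the struct, names, and table size are illustrative only:

```c
#define N_TAB 8   /* illustrative size; a real table would be larger */

typedef struct {
    float tab1[N_TAB];   /* fixed coefficient table                  */
    float tab2[N_TAB];   /* accumulating table, updated in place     */
    int   idx;           /* circular read index                      */
} t_tabs;

/* One sample of the described processing:
 *   tab2 = tab2 + e[n] * tab1;   (element-wise update)
 *   s[n] = tab2[idx];            (circular read, index advances)   */
float process_sample(t_tabs *t, float e)
{
    for (int i = 0; i < N_TAB; i++)
        t->tab2[i] += e * t->tab1[i];
    float s = t->tab2[t->idx];
    t->idx = (t->idx + 1) % N_TAB;
    return s;
}
```

In the dsp/perform routine this function body would sit inside the loop over the signal block, with `idx` kept in the object's struct between blocks (or exposed via an outlet, as the post suggests).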
How would I make these sounds?
Thank you all for your help. The samphold~ thing seems to do the trick, producing a sound that sounds much closer to the clip I posted than white noise. It sounds a little bit shakier than the clip, though, but this is a good start. @nestor said: > I think what you're looking at IS the noise function, which seems to be the largest portion of the sample you posted. Perhaps the noise function is just as you described it, a random "flip-flop." Near the end of the clip, there are some noise hi-hat sounds which seem to be actual noise, varying randomly within a range as opposed to most of the sounds which oscillate between two values. It seems strange that sounds in a "noise channel" can either be actual noise or just a random pulse wave. With 44100 Hz white noise I can't tell the difference, though at a lower sampling rate the pulse wave sounds a bit sharper.
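The two kinds of "noise" discussed here can be sketched side by side: a two-level random flip-flop (the kind of thing old console noise channels produce) versus true white noise, which takes a fresh value in the full range every sample. The PRNG below is a generic xorshift, purely for illustration:

```c
#include <stdint.h>

/* tiny xorshift32 PRNG so the sketch is self-contained */
static uint32_t rng_next(uint32_t *s)
{
    *s ^= *s << 13; *s ^= *s >> 17; *s ^= *s << 5;
    return *s;
}

/* "flip-flop" noise: the output is always +1 or -1,
 * flipping with probability 1/2 on each tick */
float flipflop_noise(uint32_t *s, float *level)
{
    if (rng_next(s) & 1)
        *level = -*level;
    return *level;
}

/* white noise: a fresh uniform value in [-1, 1) every sample */
float white_noise(uint32_t *s)
{
    return (rng_next(s) / 2147483648.0f) - 1.0f;
}
```

At the full sample rate the two are hard to tell apart by ear, as noted above; clock the flip-flop slower (hold each level for several samples, as samphold~ does) and its pulse-wave character comes out.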
Playsound with a sensitive detection ?
Hello, I previously wrote a topic about how to detect motion coming from a webcam and play a soundfile in consequence. I wanted to play that sound and not interrupt it until it finished. I got a good answer here: http://puredata.hurleur.com/sujet-1486-webcam-detection-play-sound-entirely Now, I have a more complicated issue. Imagine we have four people coming into a room. Each time somebody passes the door, a sound is played. If these four individuals entered the room too quickly, the soundfile would be re-triggered several times in a short time and we couldn't hear it entirely. That's why I'll use a spigot connected to a delay box (answer given above). However, imagine these four people come into the room close together. They'd hear a single sound until it ends; after that, there would be nothing unless a new person came into the room or those people left it. To put it simply, the idea is that everyone has a sound assigned. If 10 individuals come in, then 10 sounds should be played one after another. Please, would you have any advice on how to store the number of entries into the room, and then, when a sound is finished, run the next one? Many thanks!
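One way to "store the entries" is a first-in-first-out queue: each person detected pushes their sound number onto the queue, and whenever the current sound finishes you pop the next one and play it. The logic, sketched in C (the names and capacity are illustrative; in Pd the same thing can be built with message lists or a [textfile] used as a queue):

```c
#define QMAX 32   /* illustrative capacity */

typedef struct {
    int items[QMAX];
    int head, count;
} queue;

/* push a sound number when motion is detected; returns 0 if full */
int q_push(queue *q, int sound)
{
    if (q->count == QMAX) return 0;
    q->items[(q->head + q->count) % QMAX] = sound;
    q->count++;
    return 1;
}

/* pop the next sound when playback finishes; returns -1 if empty */
int q_pop(queue *q)
{
    if (q->count == 0) return -1;
    int sound = q->items[q->head];
    q->head = (q->head + 1) % QMAX;
    q->count--;
    return sound;
}
```

Ten people arriving at once simply leaves ten entries queued, and the end-of-playback bang drains them one sound at a time, in arrival order.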
Modular signal chain?
> You could also put _all_ effects in a sub patch three times in a row and then from the main patch control what should be on/off inside these patches, which makes it kinda modular...but it would be a cheap hack and perhaps it will consume cpu even if some effects are OFF? <

that's what i do. as long as you thoroughly [switch~] all audio and [spigot] all inputs to control calculations, the cpu won't rise TOO badly. in my modular synth, i currently have: 8 sampler voices, 4 synth voices, 8 drum voices, 9 effects channels.. each with 5 stages of effect functions, with 25 effects in each stage. so that's: (8+4+8+9) * 5 * 25 = 3625 effects units!!! all loaded at once, but only switched on when i need them. this uses about 50% of my cpu when all effects are off, so the cpu usage is climbing by loading them...but not intolerably so.
Swings and Roundabouts
I've been working on my first proper piece of Pd-written music. I've learnt a lot about Pd and made some really freaky noises in the process. I'm trying to move away from sequenced-everything into a more organic "live" process of music making. The piece is called Swings and Roundabouts, with the Swings being noises similar to the creaking of an unoiled playground swing and the Roundabouts being rolling gurgling bass. So far I have only made the patches for the Swings; the sound comes from an emulation of the Amiga computer's Paula sound chip (it plays one-shot and looping samples at a discrete set of frequencies). The sound parameters are controlled in real time using a Fostex Mixtab MIDI mixer control tablet - 2 channels for each of the 4 Swings, the left of the pair controlling the rate of change of a parameter and the right controlling the base value of the parameter. There are 5 parameters per Swing - volume, sample loop start point, sample loop length, sample playback period, and clock loop length. The parameters can take any value, but when they reach the Paula emulation they are quantized appropriately. A short recording of my friend Trel controlling swingsandroundabouts.pd after about an hour of practice, with the source waveform being a short sample of a 303 type sound: http://pure-data.iem.at/Members/claudiusmaximus/copyme/swnrb/swnrb200403301910.ogg More audio demos, the Pd patches, and screenshots: http://pure-data.iem.at/Members/claudiusmaximus/copyme/swnrb/ (Audio license: Creative Commons Attribution-NonCommercial-ShareAlike License; Pd patch license: GNU General Public License) (Warning: paulachannel~.pd must be initialised with a non-zero loop length before use, otherwise Pd goes into an infinite loop in control time => crash. I should really fix this bug sometime...) Comments please!
Especially suggestions for improvement, because it definitely needs improvement ;)
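For readers unfamiliar with Paula: the chip derives its sample rate from an integer "period" divider, which is what quantizes the playback frequencies mentioned above. A sketch of that quantization step (the PAL clock value is the documented 3546895 Hz; the period limits are approximate, and paulachannel~.pd may of course do this differently):

```c
/* Amiga Paula pitch quantization: the chip plays samples at a rate
 * of clock/period, where period is an integer register value, so an
 * arbitrary target rate must be rounded to the nearest achievable
 * one. Clamp values are approximate hardware limits. */
#define PAULA_CLOCK 3546895.0f   /* PAL color clock, ticks per second */

float paula_quantize_rate(float target_hz)
{
    int period = (int)(PAULA_CLOCK / target_hz + 0.5f); /* round */
    if (period < 124)   period = 124;    /* shortest usable period */
    if (period > 65535) period = 65535;  /* 16-bit period register */
    return PAULA_CLOCK / (float)period;
}
```

The quantization error grows at high pitches (neighbouring periods are far apart in Hz), which is part of the characteristic Amiga sound.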
Using array as a sampler
it sounds like your message box is just reading [read Files/pd/doc/sound/nikon.wav< but you actually need [read -resize Files/pd/doc/sound/nikon.wav arrayname< check this patch (cut the text into a txt file, and rename the txt file with a .pd extension):

#N canvas 65 114 844 381 10;
#N canvas 0 0 450 300 graph1 0;
#X array sample 100 float 0;
#X coords 0 1 99 -1 200 140 1;
#X restore 274 175 graph;
#X msg 10 59 read -resize Files/pd/doc/sound/nikon.wav sample;
#X obj 10 84 soundfiler;
#X floatatom 10 110 15 0 0 0 - - -;
#X text 122 110 <= size of your sample;
#X text 360 59 -resize flag changes the size of your array to fit the sample;
#X text 60 42 click this box to load;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
Fuck i love pd
ha ha , you guys are so nice. come to beijing on july 25th! i'm playing in a pool bar with panda twin. the patch takes care of the sample looping. of course i had to cut all my samples so that they were exactly 4 , 8 , 16 , 32...etc beats long, but the actual looping is controlled by the patch. ....you just do calculations based on the sample length output by soundfiler to determine the speed each sample needs to be played at. the reasons it doesn't glitch: 1) the samples (about 70 samples at a total of 20meg i guess) are preloaded into arrays. this means no glitch from loading new samples mid set. 2) every time a "cut" is made, a line~ object sends the signal to zero for about 10 ms...this stops the pop sound you get when you randomly splice two pieces of audio together. my mac is 800mhz, but the processes in that patch only use a fraction of cpu. by the way zenpho, i have posted other tracks in this forum...older ones...should be in the output: feed ears section still. i just hope i still have them online. thanks for the positive feedback. i'll let you guys know how my china gig goes. matt
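The "calculations based on the sample length" step can be made concrete: the playback rate is the loop's natural duration divided by the duration it should occupy at the set tempo. A sketch (the function and names are mine, not from the patch):

```c
/* playback rate so that a loop of `nsamples` samples recorded at
 * `sr` Hz lasts exactly `beats` beats at `bpm` */
float loop_rate(float nsamples, float sr, float beats, float bpm)
{
    float natural = nsamples / sr;          /* seconds as recorded */
    float target  = beats * (60.0f / bpm);  /* seconds wanted      */
    return natural / target;                /* speed factor        */
}
```

For example, a 2-second loop (88200 samples at 44.1 kHz) cut to exactly 4 beats plays at rate 1.0 at 120 BPM, and at 140/120 ≈ 1.17 at 140 BPM - which is why cutting the samples to exact power-of-two beat lengths makes the math trivial.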
PD within external feedback loops
Hi.. Normally I use the feedback loop of my guitar FX pedals to create my sounds, and Pd to process samples of those sounds. Today I tried something different, unfortunately without the results I was hoping for. I put my laptop WITHIN the feedback chain, using it as any other pedal. Without Pd everything sounds and behaves as normal. But when Pd is running, the sound and the feedback loop characteristics are not as usual, even with the most basic patch: adc~ -> (volume) -> dac~. (I haven't tried any other program; Pd is the only audio software I use.) Normally these kinds of feedback loops give you all sorts of strange (and unexpected) sound effects - like when tweaking just a normal volume pot of an FX pedal, the pitch and color change - but with Pd these characteristics are mostly gone (volume works as a volume pot again, BUT there's still feedback created). I found out that changing the delay time in the audio settings menu alters the sound, so I was thinking the problem occurs because it's not realtime (although I set Pd to realtime priority). Am I right? I hope not.. I just use the onboard audio from the laptop (mic in, line out), running WinXP.. Can anyone help?? (My English is not the best; I hope I'm clear enough.)
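A hedged guess at why this happens: the round trip through adc~ -> dac~ adds Pd's audio buffer latency to the loop, and a feedback loop behaves like a comb filter that resonates at multiples of 1/T, where T is the total round-trip delay. Adding tens of milliseconds to T pushes those resonances down into a dense low region, which would flatten the pitch-bending behaviour an all-analog loop shows. The relation itself is one line (illustrative numbers only):

```c
/* a feedback loop resonates at multiples of 1/T, where T is the
 * total round-trip delay in seconds; software buffering makes T
 * much longer than an analog pedal chain's */
float comb_fundamental_hz(float loop_delay_secs)
{
    return 1.0f / loop_delay_secs;
}
```

With an analog pedal chain T might be well under a millisecond (resonances in the kHz range, so small changes are very audible), while a 50 ms software buffer pins the fundamental around 20 Hz with closely spaced partials, consistent with "changing the delay time in the audio settings alters the sound".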
DIY2 - Effects, Sample players, Synths and Sound Synthesis.
@unstable said: > Hey I got a question. in the compression patches I can kind of get my head around the amp factor section. But I don't get any of the att/rel section. Can anyone explain what that sgn~ (signum) is/does? Secondly I can see the table is affected by the release but not really by the attack. I can hear the attack. The block size is 2. I'm guessing if the default block size is 64 samples then 2 means 128?? Or is it actually 2? This outputs a frequency between 0.019 and 57 into a VCF along with the original amp factor. Any advice?

ok, so the [amp-factor] stage is giving the amplitude of the signal, scaled according to our threshold and ratio settings. This will either be positive if the amplitude is rising, or negative if it's falling. next we go into the [att-rel-filtering] section, where we separate the attacks (positive) from the releases (negative). [block 2] really does mean that the blocksize is only 2 samples. This is so that the [tabsend~] and [tabreceive~] objects deal with packets of two samples at a time, giving us only a two-sample (one block) delay between the input signal and the signal received by [tabreceive~]. If the blocksize were the default of 64, then we would have a 64-sample delay, and our compressor would not work at all well. sgn~ just gives the sign of the signal, so -1 for negative numbers and 1 for positive ones. note that we are dealing now with just the AMPLITUDE of the signal, which has been scaled in the [amp-factor] section so that rising amplitude (ie, attack) is positive and falling amplitude (release) is negative. that is then split apart using [max~ 0] with attack sent to the left outlet and release sent to the right outlet. The attack and release stages are both scaled separately (attack scaled by 0.019 to 57, and release by, i think, 0.00019 to 5.7) (and i don't know exactly WHY 57 was chosen, i'm sure the patch would work just as well with 50 or 60). then we go through the [vcf~].
Although vcf~ is normally used to shape the frequency content of a waveform, in this case it has a different use: it is smoothing the amplitude signal. So, if we set a fast attack, then the vcf~ will have a cutoff of 57Hz, and our compressor will attack within about 20ms. if we set a slow attack, then the vcf~ will have a frequency of 0.019Hz, and the compressor will take a few seconds to fully attack. finally, the original signal is multiplied by the compression factor, and sent along its way. There are some quick mods you can do to this patch, too. A sidechain compressor, essential for any sort of 'french' electro sound, can be made by adding another inlet~ for a second audio signal, and taking the inverse of the compression factor, like this:

[pd att-rel-filtering]
|
[-~ 1]
|
[*~ -1]

and then multiply your second signal by that. also, it is fun to take the compression factor output to its own [outlet~] and use it as a modulation source for filter cutoffs for synth sounds, etc. anyway, hope that clears things up a bit? have fun!
Granular Cloud Delay
Hello everyone, I'm relatively new to Pd and working on a granular delay. I am stuck and would appreciate it if someone could help me. What I've been trying to build is a delay module with which I can scrub through the last five seconds of input and process that input with granular resynthesis. What I need for that is an array which doesn't record five seconds at once, but constantly records and erases its content. I tried several things but none really worked. I think I have a clue what to do, but not how to write it in Pd. I figured I need to create an array which has 88 samples as the smallest unit (as 88 samples should be the shortest grain length I would like to use). The array should record the first 88 samples into position 1, then move those 88 samples to position 2 and write 88 new samples into position 1. The array should do that constantly, and when a package of 88 samples reaches a point after 5 seconds (220500 samples) that package should be erased. (picture 1) So basically I want to handle the input as a sample that constantly changes. I built something similar using multiple arrays (example 1), but I don't think using 2505 arrays can be the solution, because it would be difficult to code and to change the grain length. So that is my problem. I would really appreciate it if somebody could help me. greetings Raffael http://www.pdpatchrepo.info/hurleur/P1010219.JPG
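A much simpler route than shifting 88-sample packages from position to position (or juggling 2505 arrays) is a single circular (ring) buffer: one write position wraps around a 5-second array and always overwrites the oldest audio, and grains then read at offsets behind the write head. The idea in C (sizes follow the post: 220500 samples at 44.1 kHz):

```c
#define RING_LEN 220500   /* 5 s at 44100 Hz, as in the post */

typedef struct {
    float buf[RING_LEN];
    long  w;              /* write position, wraps around */
} ring;

/* record one input sample, overwriting the oldest one */
void ring_write(ring *r, float x)
{
    r->buf[r->w] = x;
    r->w = (r->w + 1) % RING_LEN;
}

/* read a sample `delay` samples behind the write head
 * (delay = 1 is the most recently written sample) */
float ring_read(const ring *r, long delay)
{
    long i = r->w - delay;
    if (i < 0) i += RING_LEN;
    return r->buf[i];
}
```

In Pd itself, [delwrite~] with a 5000 ms buffer already maintains exactly this kind of ring buffer, and multiple [vd~] or [delread~] taps can then pull grains of any length from anywhere in the last five seconds; nothing ever needs to be moved or erased.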
Fairly efficient analog drums
For years I've been processing acoustic input sound, but last week I wanted to do something with synthetic resonators. A friend suggested trying a recorded attack sample as the resonator input. That was a nice idea, but there was of course an important amplitude effect depending on the sample's frequency content. So I was looking for a way to construct a nice attack sample with a 'white' spectrum. After hours spent on noise bursts and sweeps I gave up; it just did not work. Today, when loading Obiwannabee's old patch 'efficient-drums', it occurred to me that 'efficient-snare' could be such an almost-white-spectrum attack. Well, it's not completely white but it comes close. So this is indeed the ideal resonator starter, and produces a more interesting resonance than a 1-sample pulse. Attached is a demo patch [efficient-percussion] featuring a bateria de samba. All sounds are generated by one monophonic synth. Katja http://www.pdpatchrepo.info/hurleur/efficient-percussion.pd.zip
Different ways of Implementing Delay Loops
I've been trying to emulate a delay pedal I have, but having no luck. The effect I'm after is a pitch change of a stable interval when you change the delay time - you can tune the sound going around the delay loop up or down by a fixed interval by shortening or lengthening the delay time, just as you could tune sample playback by changing the rate of a phasor~ reading an array. If I use a vd~ reading from a delay line (as in the rotating-tape-head pitch shifter example, but with added recirculating feedback) I can get pitch change effects (pretty great pitch change effects tbh) but they're unstable - they keep rising or falling in pitch with each circulation, as they get pitch shifted each time they go around the loop. I figured one way to implement the effect would be to change the write speed and the read speed by the same amount (I'd guess this is how the pedal is doing it) - but I have no idea how to change the delay write speed in Pd. I thought of changing the sample rate using block~ in a subpatch containing the delay line, but that doesn't seem to give the desired results (or in fact any sound at all - so I'm guessing you can't upsample a delay line that way). Any ideas???
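For the vd~ behaviour described above: while the delay time is ramping, the read head moves relative to the write head and you get a Doppler shift whose ratio depends only on the slope of the ramp - the standard moving-tape-head relation, written here as a sketch (names are mine):

```c
/* Pitch ratio heard from a delay tap while the delay time ramps
 * linearly from d0 to d1 seconds over `ramp` seconds. Lengthening
 * the delay (d1 > d0) pitches down; shortening it pitches up. */
float doppler_ratio(float d0, float d1, float ramp)
{
    return 1.0f - (d1 - d0) / ramp;
}
```

This is why the shift only lasts while the ramp runs: once the delay time stops changing, the ratio returns to 1, and in a feedback loop each pass through a ramping vd~ gets shifted again, matching the "keeps rising or falling" behaviour. A pedal that holds a stable interval presumably resamples the loop contents (shifting pitch once per pass by a fixed ratio) rather than relying on a continuously moving read head.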