Ofelia Jump On Click Slider
@cuinjune Thanks. I am sure the overlapping problem is solvable; I think I need to change the mouse behaviour of the GUI templates (for me) for that case. I am also quite positive about the results so far, and motivated to try further. Somehow I needed to change every loadbang to ofWindowLoadBang to make the dynamic patching work (?).

Now it really makes sense, like you mentioned earlier, to think about a concept. I have some ideas, but what I definitely would like is for everybody to be able to put their own modules easily inside a kind of template and connect them with others. Whether there should be something like a main mixer or sequencer is also a question for me, as is whether I want a "Reason like" rack, a more modular interface, or something else.

Saving is a good question too. It would be nice if it were possible to save globally and locally, so that there are presets for a global setting that can also save the local settings of the modules. Saving them as patches sounds like a good idea. I am thinking about four module categories: instruments, effects, sequencers and visualizers, or something in between. And, analogous to Pure Data, it would make sense to keep audio and control connections separated.

I think more difficult than creating and destroying modules with dynamic patching is creating cord connections with dynamic patching, but perhaps that is also solvable, because there is a fixed number of objects in every module. So maybe, theoretically, one just needs the module ID and the inlet/outlet numbers, and then it could work with the connect and disconnect messages. I am not sure if I am thinking too far ahead with the concept; the initial idea was more to create some modules, connect them and see what happens.
Ofelia Jump On Click Slider
@Jona Looks great! Yes, it should be possible. But I think you would need to replace [ofTouchListener] with [_locRcv]s to listen to the event from the main patch, so the module interface can listen to the mouse click according to the render order (just like how it's done in the draggableShapes example).
[_locRcv] should be used one level below the main patch. This is so it uses the main patch's local variable name while still communicating with other abstractions. Also note that if you use [ofMouseListener], it will not handle multitouch on mobile devices (it will respond to one finger at a time). That's why I used [ofTouchListener] in the pdgui abstraction. But if you're targeting desktop only, [ofMouseListener] is enough and easier to handle.
If you really want to build the modular environment using multiple modules, I suggest you first consider what behaviour all modules have in common and build a minimal module that only has those common attributes (e.g. window bar, interface section, render order). Then it would be easier to maintain and to create other modules later on, since you only need to add the non-common parts on top of your minimal module.
I think creating such a large system requires thorough planning and a clear idea of how things should work in the first place; otherwise it is likely that you will continuously run into unexpected problems and have to rework things many times.
P.S.: You probably know this, but your patch currently uses the left audio channel only.
FM Feedback Hack
@Zygomorph Hi, thanks for the explanation of your method. I'll try to make a patch from your diagrams and see how it sounds. Just by looking at it, though, and if I understand it correctly, shouldn't the output of [*~] on the right-hand side of the first diagram go into [wrap~] and then [+~] before the last [cos~]? I think that in order to sound correct, the feedback needs to be summed with the signal from the other operators, and then everything wrapped before adding it to the original phase.
I'll try to explain my method in the linked patch: in the post we both linked, there was a patch that used [tabsend~] and [tabreceive~] with [block~ 1] for the feedback, and to me that sounds a lot better than doing it with [block~ 64] (this is shown in the attached feedback.fm patch).
When I'm modulating one operator with the output of another one, I can't really hear a difference between the two block sizes (you can hear this in the fm.blocksize patch).
However, in a multi-operator situation where there is also feedback, all the phases get added together. If the phase from the other operators is only updated every 64 samples while the feedback works at block size 1, it sounds awful, so I figured it was best to have all phase modulation work at block size 1.
So, the way I've done it is to have a table for each operator to pull the phases from, and then the output of each operator is sent to each of these tables according to the indexes specified in the matrix. You can see these tables in the "tables" abstraction inside PMops.
Therefore, if for example operators 2 and 1 both send some of their output to operator 1 (the latter being the feedback), the phase table for operator 1 gets the sum of those two outputs (both updated at block size 1), which is then added to the [phasor~] phase of operator 1.
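Roughly, here is what the per-sample version computes (a Python sketch of the idea, not the actual patch; the frequencies, modulation index and feedback amount are made-up values):

```python
import numpy as np

# Per-sample sketch of two-operator phase modulation where operator 1 receives
# both operator 2's output and its own previous output (the feedback), all
# updated every sample, i.e. the equivalent of running everything at block size 1.
sr = 44100
n = sr                           # one second
f1, f2 = 220.0, 440.0            # carrier and modulator frequencies (example values)
mod_index = 0.5                  # how much of op2 goes into op1's phase
fb_amount = 0.3                  # how much of op1's own output feeds back

out = np.zeros(n)
phase1 = phase2 = 0.0
prev1 = 0.0                      # operator 1's previous output sample (the feedback)

for i in range(n):
    op2 = np.cos(2 * np.pi * phase2)
    # sum all phase contributions, then wrap before the cosine lookup
    total_phase = (phase1 + mod_index * op2 + fb_amount * prev1) % 1.0
    op1 = np.cos(2 * np.pi * total_phase)
    out[i] = op1
    prev1 = op1
    phase1 = (phase1 + f1 / sr) % 1.0
    phase2 = (phase2 + f2 / sr) % 1.0
```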
I think it sounds more complex than it actually is, and of course the patch is not very readable because of the dollar sign values I had to use in order to re-use the same abstraction for each operator.
FFT freeze help
Brace for wall of text:
My patch is still a little messy, and I think I'm still pretty naive about this frequency domain stuff. I'd like to get it cleaned up more (i.e. less incompetent and embarrassing) before sharing. I'm not actually doing the time stretch/freeze here since I was going for a real time effect (albeit with latency), but I think what I did includes everything from Paulstretch that differs from the previously described phase vocoder stuff.
I actually got there from a slightly different angle: I was looking at decorrelation and reverberation after reading some stuff by Gary S. Kendall and David Griesinger. Basically, you can improve the spatial impression and apparent source width of a signal if you spread it over a ~50 ms window (the integration time of the ear). You can convolve it with some sort of FIR filter that has a flat (allpass) magnitude response and a random phase response, something like a short burst of white noise. With several of these, you can get multiple decorrelated channels from a single source; it's sort of an ideal mono-to-surround effect. There are some finer points here, too. You'd typically want low frequencies to stay more correlated, since the wavelengths are longer. This also gives a very natural-sounding bass boost when multiple channels are mixed.
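As a rough illustration, a decorrelation FIR of that kind could be generated like this (a sketch with illustrative values; the 500 Hz "keep the lows correlated" corner is my own guess, not something from Kendall or Griesinger):

```python
import numpy as np

# Build a ~46 ms FIR with flat magnitude and random phase; low bins get a
# smaller phase spread so the bass stays more correlated.
sr = 44100
n = 2048                                   # 2048 samples ~ 46 ms at 44.1 kHz
freqs = np.fft.rfftfreq(n, 1.0 / sr)

max_spread = np.pi                                        # full randomization up high
spread = max_spread * np.clip(freqs / 500.0, 0.0, 1.0)    # smaller spread at low frequencies
phase = np.random.uniform(-1.0, 1.0, len(freqs)) * spread
phase[0] = phase[-1] = 0.0                                # keep DC and Nyquist purely real

fir = np.fft.irfft(np.exp(1j * phase), n)  # unit magnitude -> allpass, noise-burst-like IR

# Convolving a mono source with several different FIRs like this one
# gives multiple mutually decorrelated channels.
```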
Of course you can do this in the frequency domain if you just add some offset signal to the phase. The resulting output signal is smeared in time over the duration of the FFT frame, and enveloped by the window function. Conveniently, 50 ms corresponds to a frame size of 2048 at 44.1 kHz. The advantage of the frequency domain approach here is that the phase offset can be arbitrarily varied over time. You can get a time variant phase offset signal with a delay/wrap and some small amount of added noise: not "running phase" as in the phase vocoder but "running phase offset". It's also sensible here to scale the amount of added noise with frequency.
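A per-frame sketch of that running phase offset (illustrative only; the frequency scaling here is just one possible choice):

```python
import numpy as np

# Each frame, the per-bin phase offset drifts by a little noise (scaled up with
# frequency), gets wrapped, and is added to the analysis phases before resynthesis.
# With noise_amount at maximum this degenerates into fully random phase per frame.
def process_frame(spectrum, phase_offset, noise_amount, freqs, sr):
    drift = np.random.uniform(-np.pi, np.pi, len(spectrum))
    drift *= noise_amount * (freqs / (sr / 2.0))            # more smearing up high
    phase_offset = np.angle(np.exp(1j * (phase_offset + drift)))  # wrap to (-pi, pi]
    return spectrum * np.exp(1j * phase_offset), phase_offset
```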
Say that you add a maximum amount of noise to the running phase offset: now the delay/wrap part is irrelevant and the phase is completely randomized for each frame. This is what Paulstretch does (though it just throws out the original phase data and replaces it with noise). This completely destroys the sub-bin frequency resolution, so small FFT sizes will sound "whispery". You need a quite large FFT of 2^16 or 2^17 for adequate "brute force" frequency resolution.
You can add some feedback here for a reverberation effect. You'll want to fully randomize everything here, and apply some filtering to the feedback path. The frequency resolution corresponds to the reverb's modal density, so again it's advantageous to use quite large FFTs. Nonlinearities and pitch shift can be nice here as well, for non-linear decays and other interesting effects, but this is going into a different topic entirely.
With such large FFTs you will notice a quite long Hann window shaped "attack" (again 2^16 or 2^17 represents a "sweet spot" since the time domain smearing is way too long above that). I find the Hann window is best here since it's both constant voltage and constant power for an overlap factor of 4. So the output signal level shouldn't fluctuate, regardless of how much successive frames are correlated or decorrelated (I'm not really 100% confident of my assessment here...). But the long attack isn't exactly natural sounding. I've been looking for an asymmetric window shape that has a shorter attack and more natural sounding "envelope", while maintaining the constant power/voltage constraint (with overlap factors of 8 or more). I've tried various types of flattened windows (these do have a shorter attack), but I'd prefer to use something with at least a loose resemblance to an exponential decay. But I may be going off into the Twilight Zone here...
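For what it's worth, the constant-voltage/constant-power property of the Hann window at an overlap factor of 4 is easy to check numerically (a quick sketch):

```python
import numpy as np

# Check that a periodic Hann window overlap-adds to a constant at 75% overlap,
# both in amplitude (sum of windows) and in power (sum of squared windows).
N = 2048
hop = N // 4
w = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(N) / N))   # periodic Hann

length = N + 8 * hop
amp = np.zeros(length)
pwr = np.zeros(length)
for start in range(0, length - N + 1, hop):
    amp[start:start + N] += w
    pwr[start:start + N] += w * w

middle = slice(N, length - N)                 # ignore the fade-in/fade-out edges
print(amp[middle].min(), amp[middle].max())   # both ~2.0 -> constant voltage
print(pwr[middle].min(), pwr[middle].max())   # both ~1.5 -> constant power
```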
Anyway I have a theory that much of what people do to make a sound "larger", i.e. an ensemble of instruments in a concert hall, multitracking, chorus, reverb, etc. can be generalized as a time variant decorrelation effect. And if an idealized sort of effect can be made that's based on the way sound is actually perceived, maybe it's possible to make an algorithm that does this (or some variant) optimally.
FFT freeze help
Yeah, the running phase is really important. Without it, the [rifft~] will keep resetting the phase to the same value, when really it should be adding the incoming phase difference to the previous output phase for continuity. This is how the phase vocoder is able to faithfully synthesize the original frequencies, despite the fact that it has finite resolution. Each bin is essentially a bandpass filter, and any frequency that gets through the filter is analyzed for magnitude and phase. But, if you can assume that only one frequency made it through the filter, the difference between the current frame's phase and the previous one will tell you the exact frequency, because only one frequency could have advanced its phase by that amount over that interval. So when you modify and resynthesize, you need to take that phase difference and accumulate it so the "oscillators" can continuously fine-tune themselves.
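In code, the running-phase step looks roughly like this (a generic phase vocoder sketch, not any particular patch; hop and n_fft are whatever your analysis uses):

```python
import numpy as np

# For each bin, the measured phase advance between analysis frames (minus the
# advance the bin centre itself would produce) gives the true frequency, and
# accumulating it keeps the output phase continuous from frame to frame.
def accumulate_phase(curr_frame, prev_frame, out_phase, hop, n_fft):
    bins = np.arange(len(curr_frame))
    expected = 2 * np.pi * hop * bins / n_fft          # phase advance of each bin centre
    delta = np.angle(curr_frame) - np.angle(prev_frame) - expected
    delta = np.angle(np.exp(1j * delta))               # wrap the deviation to (-pi, pi]
    out_phase = out_phase + expected + delta           # accumulated ("running") phase
    return np.abs(curr_frame) * np.exp(1j * out_phase), out_phase
```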
Beatmaker Abstract
http://www.2shared.com/photo/mA24_LPF/820_am_July_26th_13_window_con.html
I conceptualized this the other day. The main reason I wanted to make this is because I'm a little tired of complicated Ableton Live. I wanted to just be able to right-click parameters and tell them to follow MIDI tracks.
The big feature in this abstract is a "Midi CC Module Window" that contains an unlimited (or potentially very large) number of Midi CC Envelope Modules. In each Midi CC Envelope Module are Midi CC Envelope Clips. These clips hold a waveform that is plotted on a tempo-divided graph. The waveform is played in a loop and synced to the tempo according to how long the loop is. Only one clip can be playing per module. If a parameter is right-clicked, you can choose "Follow Midi CC Envelope Module 1", and the parameter will then follow the envelope that is looping in "Midi CC Envelope Module 1".
Midi note clips function in the same way. Every instrument will be able to select one Midi Notes Module. If you right-clicked "Instrument Module 2" in the "Instrument Module Window" and selected "Midi input from Midi Notes Module 1", then the notes coming out of "Midi Notes Module 1" would play through the single virtual instrument you placed in "Instrument Module 2".
If you want the sound to come out of your speakers, then navigate to the "Bus" window. Select "Instrument Module 2" with a drop-down check off menu by right-clicking "Inputs". While still in the "Bus" window look at the "Output" window and check the box that says "Audio Output". Now the sound is coming through your speakers. Check off more Instrument Modules or Audio Track Modules to get more sound coming through the same bus.
Turn the "Aux" on to put all audio through effects.
Work in "Bounce" by selecting inputs like "Input Module 3" by right clicking and checking off Input Modules. Then press record and stop. Copy and paste your clip to an Audio Track Module, the "Sampler" or a Side Chain Audio Track Module.
Work in "Master Bounce" to produce audio clips by recording whatever is coming through the system for everyone to hear.
Chop and screw your audio in the sampler with highlight-and-right-click processing effects. Glue your sample together and put it in an Audio Track Module or a Side Chain Audio Track Module.
Use the "Threshold Setter" to perform long linear modulation. Right click any parameter and select "Adjust to Threshold". The parameter will then adjust its minimum and maximum values over the length of time described in the "Threshold Setter".
The "Execution Engine" is used to make sure all changes happen in sync with the music.
E.g., if you selected a subdivision of 2 and a length of 2, it would take four quarter beats (starting from the next quarter beat) for the change to take place. So if you're somewhere in the "a" of beat 1 (counting "1 e + a"), you will have to wait for beats 2, 3, 4 and 5 to pass, and your change would happen on 6.
E.g., if you selected a subdivision of 1 and a length of 3, you would have to wait 12 quarter beats, starting on the next quarter beat.
E.g., if you selected a subdivision of 8 and a length of 3, you would have to wait one and a half quarter beats, starting on the next 8th note.
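In other words, the wait works out to length * (4 / subdivision) quarter beats, counted from the next subdivision boundary; a tiny sketch (my own formulation of the rule described above):

```python
# Treating a "subdivision" of n as a 1/n note, the wait in quarter beats is
# length * (4 / subdivision), starting from the next subdivision boundary.
def wait_in_quarter_beats(subdivision, length):
    return length * 4.0 / subdivision

print(wait_in_quarter_beats(2, 2))   # 4.0 quarter beats
print(wait_in_quarter_beats(1, 3))   # 12.0 quarter beats
print(wait_in_quarter_beats(8, 3))   # 1.5 quarter beats
```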
http://www.pdpatchrepo.info/hurleur/820_am,_July_26th_13_window_conception.png
JKP - Bangboum
Sorry, I'm getting errors; can you help me?
I've installed the Montreal (mtl) library, but I still have too many missing objects:
mtl/qompander~ /id compander
... couldn't create
mtl/clkMaster 120
... couldn't create
setting pattern to default: /home/leopard/Documenti/Music/_sperimentazioni/_PURE DATA/Bangboum/./moteur/*
[msgfile] part of zexy-2.2.3 (compiled: Sep 22 2010)
Copyright (l) 1999-2008 IOhannes m zmölnig, forum::für::umläute & IEM
mtl/player~
... couldn't create
mtl/clkSlave 4 16
... couldn't create
mtl/kick808~
... couldn't create
mtl/clkSlave 1 4
... couldn't create
setting pattern to default: /home/leopard/Documenti/Music/_sperimentazioni/_PURE DATA/Bangboum/./moteur/*
mtl/player~
... couldn't create
(the last three lines repeat several more times)
Recreating analogue valve compressor in PD
It will probably be a waste of time trying to faithfully model the compressor in Pd. It's easy to create a simple compressor, but what you really need to do is physically model the circuitry. That way all the non-linearities present in the circuit will be recreated, and the compressor will sound like a real analogue compressor. I wanted to model analogue filters in Pd, but it can't be done; what people have done in the past is write them in C and compile them as externals.
That's if you wanted to do a realistic imitation of it, which for a uni project I assume you would want to do. Analogue modeling is a difficult one. I just completed a uni project on physical modeling of a guitar using Pd, and I wanted to model a Moog filter to pass the guitar signal through, but I can't write C.
Actually, I remember reading a thesis where someone modeled the tone controls of a guitar in C; to get a difference equation for a filter, they built the circuit in SPICE, which is freeware for PC (there is a Mac version too). Once you build the circuit, it plots graphs of the output, and you can swap components around. I don't know if that's any help, but it could be a start, seeing as I assume you built the compressor yourself and so will have the circuit diagrams for it.
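For what it's worth, the "simple compressor" mentioned above is just an envelope follower plus a static gain curve, e.g. something like this Python sketch (illustrative parameter values, and no attempt at the valve circuit behaviour):

```python
import numpy as np

# Bare-bones feed-forward compressor: peak envelope follower with attack/release,
# then a static threshold/ratio gain computation applied sample by sample.
def compress(x, sr, threshold_db=-20.0, ratio=4.0, attack_ms=10.0, release_ms=100.0):
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level       # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)           # static compression curve
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```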
[zerox~]
Thanks, you two. You both seem to be on the same page as I am in terms of what we want Pd to do. I am working about 30 hours a week on building something like a Casio VZ/OPL2 chip, with both phase modulation and phase distortion, plus a sync function for the phase accumulator. Also, sine functions will be able to flip shape to halfsine, quartersine, etc., much like the 2-op Yamaha OPL2 chip. LarsXI: I use your [expr~ if($v1 < $v2, ($v1*(1/$v2))/2, ((($v1-$v2)*(1/(1-$v2)))/2)+0.50)] for PDist DAILY.
THANK YOU!
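For reference, that [expr~] mapping written out in Python (my own sketch; the knee value in the example is arbitrary):

```python
import numpy as np

# The same piecewise-linear phase mapping as the [expr~] above: phase below the
# knee is stretched into the first half cycle, phase above it into the second
# half. Feeding the result to cos() gives the CZ-style phase-distorted waveform.
def phase_distort(phase, knee):
    # phase and knee both in (0, 1)
    return np.where(phase < knee,
                    (phase / knee) / 2.0,
                    ((phase - knee) / (1.0 - knee)) / 2.0 + 0.5)

phase = np.arange(0.0, 1.0, 1.0 / 64)
wave = np.cos(2 * np.pi * phase_distort(phase, 0.25))
```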
Also, LarsXI, I am well aware of the CZ waveforms you linked me to. I have never owned a CZ, but I assume they are derived by hard-syncing the phase accumulator (or the sine function... for some reason?). Do you have any idea whether the knee-adjusting phase distortion occurred before or after the hard-syncing on CZs? I am just wondering what order of functions would be best, and I am leaning toward:
[phasor~] --> knee adjustor --> hard-sync --> phase modulation input --> [cos~] --> [dac~]
But really, I am just taking stabs in the dark for most of that.
Any suggestions?
JF
Sympathetic strings, rods, etc implementation
I've just recently been getting into physical modeling myself, so I only have so much to offer here, but your post is also raising some questions for me. Mainly, what is "rule of thumb" modeling? And what makes you think you need an FFT for this? I mean, is there some FFT synthesis method you're thinking of for modeling resonant systems?
If you go the physical modeling route, though, sympathetic vibrations or resonance is actually pretty simple as it is inherently part of the system. For example, to create the sympathetic vibrations of a string (as in a sitar or open strings of a guitar) you would simply feed the output of the plucked strings into a model of the open strings, possibly attenuated and filtered first. The model will only resonate with frequencies that are harmonic to it, as it should; other frequencies will quickly die out. If I remember right, the ideal place to tap the plucked string would be at the "bridge" of the model as that is where the coupling would be in a physical instrument, though I'm not certain it is a hard and fast rule (you can definitely get strings to resonate from energy propagating through air).
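As a minimal illustration, an "open string" you can drive with another signal might look like this (a Karplus-Strong style sketch; the tuning and damping values are arbitrary):

```python
import numpy as np

# A delay line with a damping (lowpass) filter in its feedback path: it only
# rings at (near-)harmonics of its own tuning, so driving it with another
# string's bridge output gives sympathetic resonance.
def sympathetic_string(excitation, sr, freq=196.0, damping=0.996):
    delay_len = int(round(sr / freq))
    delay = np.zeros(delay_len)
    out = np.zeros(len(excitation))
    prev = 0.0
    idx = 0
    for i, x in enumerate(excitation):
        y = delay[idx]
        # simple two-point average as the loop's damping filter
        filtered = damping * 0.5 * (y + prev)
        prev = y
        delay[idx] = x + filtered          # inject the driving signal into the loop
        out[i] = y
        idx = (idx + 1) % delay_len
    return out
```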