Beatmaker Abstract
http://www.2shared.com/photo/mA24_LPF/820_am_July_26th_13_window_con.html
I conceptualized this the other day. The main reason I wanted to make this is that I'm a little tired of how complicated Ableton Live is. I just want to be able to right-click parameters and tell them to follow MIDI tracks.
The big feature in this abstract is a "Midi CC Module Window" that contains an unlimited (or potentially very large) number of Midi CC Envelope Modules. Each Midi CC Envelope Module holds Midi CC Envelope Clips. A clip holds a waveform plotted on a tempo-divided graph; the waveform plays in a loop, synced to the tempo according to how long the loop is. Only one clip can play per module at a time. If you right-click a parameter, you can choose "Follow Midi CC Envelope Module 1", and that parameter will then follow the envelope looping in "Midi CC Envelope Module 1".
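To make the playback mechanics concrete, here is a rough Python sketch (not part of the original abstract; the clip structure, breakpoint format and function name are made up for illustration) of how a module could sample a looping envelope clip at the current transport position:

def envelope_value(clip, beat_position):
    """Sample a looping envelope clip at the transport position (in beats).

    clip is assumed to be a dict with:
      'length_beats': loop length in beats
      'points': (beat, value) breakpoints covering 0..length_beats
    """
    phase = beat_position % clip['length_beats']
    points = clip['points']
    # linear interpolation between the surrounding breakpoints
    for (b0, v0), (b1, v1) in zip(points, points[1:]):
        if b0 <= phase <= b1:
            t = (phase - b0) / (b1 - b0) if b1 > b0 else 0.0
            return v0 + t * (v1 - v0)
    return points[-1][1]

# Example: a 4-beat clip ramping from CC value 0 up to 127 and back down.
clip = {'length_beats': 4, 'points': [(0, 0), (2, 127), (4, 0)]}
print(envelope_value(clip, 5.0))   # one beat into the second loop -> 63.5

A parameter set to "Follow Midi CC Envelope Module 1" would simply read this value on every control tick.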
Midi Note Clips function in the same way. Every instrument can select one Midi Notes Module. If you right-clicked "Instrument Module 2" in the "Instrument Module Window" and selected "Midi input from Midi Notes Module 1", the notes coming out of "Midi Notes Module 1" would play through the single virtual instrument you placed in "Instrument Module 2".
If you want the sound to come out of your speakers, navigate to the "Bus" window. Right-click "Inputs" and check off "Instrument Module 2" in the drop-down menu. While still in the "Bus" window, look at the "Output" window and check the box that says "Audio Output". Now the sound is coming through your speakers. Check off more Instrument Modules or Audio Track Modules to route more sound through the same bus.
Turn the "Aux" on to put all audio through effects.
Work in "Bounce" by selecting inputs like "Input Module 3" by right clicking and checking off Input Modules. Then press record and stop. Copy and paste your clip to an Audio Track Module, the "Sampler" or a Side Chain Audio Track Module.
Work in "Master Bounce" to produce audio clips by recording whatever is coming through the system for everyone to hear.
Chop and screw your audio in the "Sampler" with highlight-and-right-click processing effects. Glue your sample together and put it in an Audio Track Module or a Side Chain Audio Track Module.
Use the "Threshold Setter" to perform long linear modulation. Right click any parameter and select "Adjust to Threshold". The parameter will then adjust its minimum and maximum values over the length of time described in the "Threshold Setter".
The "Execution Engine" is used to make sure all changes happen in sync with the music.
E.g.: If you selected a subdivision of 2 and a length of 2, it would take four quarter beats (starting from the next quarter beat) for the change to take place. So if you're somewhere in the "a" of beat 1 (1 e + a), you would wait for beats 2, 3, 4 and 5 to pass, and your change would happen on 6.
E.g.: If you selected a subdivision of 1 and a length of 3, you would have to wait 12 quarter beats, starting on the next quarter beat.
E.g.: If you selected a subdivision of 8 and a length of 3, you would have to wait one and a half quarter beats, starting on the next 8th note.
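Here is a minimal Python sketch of the timing rule these examples imply. It assumes a subdivision of n means 1/n notes, that the wait lasts length * (4 / subdivision) quarter beats, and that counting starts on the next quarter beat (or on the next subdivision tick when the subdivision is finer than a quarter); the function name and the 0-based beat positions are illustrative, not part of the abstract.

import math

def execution_time(position_beats, subdivision, length):
    # position_beats: transport position in quarter beats (beat 1 = 0.0)
    # subdivision:    1 = whole note, 2 = half, 4 = quarter, 8 = eighth, ...
    # length:         how many of those subdivisions to wait
    unit = 4.0 / subdivision              # one subdivision, in quarter beats
    grid = min(unit, 1.0)                 # tick the wait starts counting from
    start = math.floor(position_beats / grid) * grid + grid
    return start + length * unit

# The three examples above, with the transport somewhere in the 'a' of beat 1:
print(execution_time(0.8, 2, 2))   # -> 5.0 (0-based), i.e. the change lands on beat 6
print(execution_time(0.8, 1, 3))   # -> 13.0, 12 quarter beats after the next quarter beat
print(execution_time(0.8, 8, 3))   # -> 2.5, 1.5 quarter beats after the next 8th note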
http://www.pdpatchrepo.info/hurleur/820_am,_July_26th_13_window_conception.png
Low pass filter for smooth fade
@Maelstorm said:
I fairly recently was trying to make one of those smoothed-out sample-and-hold LFOs and discovered that [lop~] generates asymmetric curves: logarithmic going up and exponential going down.
Well, it's actually exponential both ways, but it's always exponentially decaying toward the destination value. So going up it's an inverted exponential decay.
But if you're considering something like an amplitude envelope, you may want the rising portions to be an exponential decay (most analog envelope generators work this way, moving to the next stage at a preset threshold). A DX7 has a true exponentially increasing attack (since it's a linear function passed through an exponential lookup table), but it sounds strange and sort of unmusical, like a sound played in reverse.
As suggested in the tutorials, quartic envelopes are a nice compromise. The curves are exponential-like, but they reach the destination values in a finite amount of time.
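For comparison, here is a rough Python sketch (not from the thread) of the two behaviours being discussed: a one-pole lowpass, like [lop~], always decays exponentially toward its target and never quite arrives, while a quartic envelope in the style of the Pd audio examples (ramp linearly between the 4th roots of the endpoints, then raise to the 4th power) looks exponential-like but lands exactly on the target in finite time. The coefficient formula and function names are illustrative assumptions, not [lop~]'s exact internals.

import math

def one_pole_step(current, target, cutoff_hz, sample_rate=44100.0):
    # One step of a one-pole lowpass: exponential approach toward the target.
    coeff = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    return current + coeff * (target - current)

def quartic_env(t, duration, start, end):
    # Quartic envelope: ramp linearly between the 4th roots of the endpoints,
    # then raise to the 4th power (fast-then-slow on the way down).
    x = min(max(t / duration, 0.0), 1.0)
    r = start ** 0.25 + (end ** 0.25 - start ** 0.25) * x
    return r ** 4

After 'duration' the quartic envelope sits exactly at 'end'; the one-pole output is still only most of the way there, however long you wait.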
Pd/rjdj skillshare @ Eyebeam, NYC, Dec 5th
http://eyebeam.org/events/rjdj-skillshare
December 5, 2009
12:00 -- 1:30 PM : Introductory workshop on Pd with Hans-Christoph Steiner
2:00 -- 6:00 PM : SkillShare w/Steiner and members of RjDj programming team
Free, capacity for up to 30 participants
RSVP HERE: http://tinyurl.com/ykaq3l3
Hans-Christoph Steiner returns to Eyebeam with members of the RjDj programming team from Europe to help turn your iPhone or iPod Touch into a programmable, generative, and interactive sound processor! Create a variable echo whose timing varies according to the phone's tilt sensor, or an audio synthesizer that responds to your gestures, accelerations and touches. Abuse the extensive sound capabilities of the Pure Data programming language to blend generative music, audio analysis, and synthy goodness. If you're familiar with the awesome RjDj, then you already know the possibilities of Pure Data on the iPhone or iPod Touch (2nd and 3rd generation Touch only).
Creating and uploading your own sound-processing and sound-generating patches can be as easy as copying a text file to your device! In this 4-hour hands-on SkillShare, interactive sound whiz and Pure Data developer Hans-Christoph Steiner and several of the original RjDj programmers will lead you through all the steps necessary to turn your phone into a pocket synth.
How Eyebeam SkillShares work
Eyebeam's SkillShares are Peer-to-Peer working/learning sessions that provide an informal context to develop new skills alongside leading developers and artists. They are for all levels and start with an introduction and overview of the topic, after which participants with similar projects or skill levels break off into small groups to work on their project while getting feedback and additional instruction and ideas from their group. It's a great way to level-up your skills and meet like-minded people. This SkillShare is especially well-suited for electronic musicians and other people who have experience programming sound. Some knowledge of sound analysis and synthesis techniques will go a long way.
We'll also take a lunch break in the afternoon including a special informal meeting about how to jailbreak your iPhone!
Your Skill Level
All levels of skill are OK as long as you have done something with Pd or Max/MSP before. If you consider yourself a beginner, it would help a lot to run through the Pd audio tutorials before attending.
NOTE: On the day of the SkillShare we will hold an introductory workshop from 12:00 until 1:30 PM, led by Steiner, for those who want to make sure they're up to speed before the actual SkillShare starts at 2:00. The introductory workshop is for people who have done something in Pd or Max/MSP but are still relative beginners in the area of electronic sound programming.
What You Should Bring
You'll need to bring your iPhone or iPod Touch (2nd or 3rd generation Touch only), your own laptop, a headset with a built-in mic (especially if using an iPod Touch) and the data cable you use to connect your device to your laptop. Owing to a terrific hack, you won't even need an Apple Developer License for your device!
More Information
RjDj is an augmented reality app that uses the power of the new generation of personal music players like the iPhone and iPod Touch to create mind-blowing hearing sensations. The RjDj app makes a number of downloadable scenes from different artists available, as well as the opportunity to make your own and share them with other users. RjDj.me
Pd (aka Pure Data) is a real-time graphical programming environment for audio, video, and graphical processing. Pd is free software and works on multiple platforms, so it is quite portable; versions exist for Win32, IRIX, GNU/Linux, BSD, and Mac OS X, running on anything from a PocketPC to an old Mac to a brand-new PC. Recent developments include a system of abstractions for building performance environments, and a library of objects for physical modeling for sound synthesis.
kill your television
Transdetect~ and transcomp~: transient shaping and detection
transcomp~ uses transdetect~ to shape the initial attack and release of a signal.
Requires IEM's FIR~, fexpr~ and dbtorms~, which are provided in Pd-extended.
To work properly, the transdetect folder should be added to Pd's path.
Start by opening help-transcomp~.pd
01 Implementation:
transdetect~ works by using two pairs of envelope followers. The first pair
subtracts an envelope follower with a slow attack from an accurate follower,
the result of which is a signal containing the initial attack. For the initial
release, the second pair subtracts an accurate envelope follower from one with
a slow release.
An envelope follower measures the mean square power of a signal over time (see 3.audio.examples/H06.envelope.follower.pd for details on implementing an envelope follower). To do this we must use a low-pass filter at a very low frequency. To achieve an accurate follower, a linear-phase FIR filter was used (via IEM's FIR~ external). Unfortunately, this introduces a phase delay.
To facilitate the use of different envelope follower implementations, transdetect~ takes a filter type as a creation argument, implemented in followernameTransDetectEF~.pd. Four linear-phase FIR implementations are provided: 181-, 251-, 451- and 501-tap filters. The 501-tap filter is the most accurate, but has a phase delay of 5.668 ms at 44.1 kHz (raise the sampling rate to lower the phase delay). They were all generated using http://www.dsptutor.freeuk.com/FIRFilterDesign/FIRFiltDes102.html with a cutoff frequency between 5 and 10 Hz.
A compromise between accuracy and phase delay might be achieved by using minimum-phase FIR filters. A fifth implementation using Pd's native lop~ object is also provided under the designation iir (FIR~ not required).
Along with the different possible envelope follower implementations, transdetect~ also requires an attack and a hold type, implemented in attacknameTransDetectAttackShape~.pd and holdnameTransDetectHoldShape~.pd respectively. These dictate the kind of attack and release curves used on the envelope followers (linear, slow[er|est] and fast[er|est]). All implementations provided use fexpr~; a more efficient external could be made to take fexpr~'s place.
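For readers who want the gist without opening the patches, here is a rough Python approximation of the two-follower idea described above. It is not part of the library: simple one-pole smoothing stands in for the linear-phase FIR followers, and the millisecond time constants are made-up defaults.

import math

def one_pole_coeff(time_ms, sample_rate=44100.0):
    # Smoothing coefficient for a given time constant in milliseconds.
    return 1.0 - math.exp(-1.0 / (time_ms * 0.001 * sample_rate))

def transient_signals(samples, sample_rate=44100.0,
                      fast_ms=1.0, slow_attack_ms=50.0, slow_release_ms=200.0):
    fast_c = one_pole_coeff(fast_ms, sample_rate)
    atk_c = one_pole_coeff(slow_attack_ms, sample_rate)
    rel_c = one_pole_coeff(slow_release_ms, sample_rate)

    accurate = slow_attack = slow_release = 0.0
    attacks, releases = [], []
    for x in samples:
        p = x * x                                    # instantaneous power
        accurate += fast_c * (p - accurate)          # fast/accurate follower
        # slow attack, fast release
        slow_attack += (atk_c if p > slow_attack else fast_c) * (p - slow_attack)
        # fast attack, slow release
        slow_release += (fast_c if p > slow_release else rel_c) * (p - slow_release)
        attacks.append(max(accurate - slow_attack, 0.0))    # initial attack
        releases.append(max(slow_release - accurate, 0.0))  # initial release
    return attacks, releases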
02 Use
In the help-transcomp~.pd patch, enable start and pay attention to the snap in the hit. Disable the green toggle button to turn off the compression and make the snap go away. Check out the tables on the left to see the results of the transient compression.
transcomp~ is useful on recorded drums to maximize or minimize their transients (to make them punchier or to make snare drums less clappy).
transcomp~ uses transdetect~. By itself, transdetect~ can be used to synthesize hits from a recording. For example, take a bass drum recording and use the signals generated by transdetect~ to shape the frequency and envelope of a synthesized kick drum.
Would love to have some feedback and some help in turning the linear phase filters into minimum phase filters.
CeC 2009 goes to Kumaon
Dear Friends in the Pd Forum,
We are delighted to report that the 4th annual Carnival of e-Creativity (CeC) is scheduled to be held February 27 to March 1, 2009 (a Friday-Saturday-Sunday in springtime/late winter, as usual), in the sylvan spaces of Sattal Estate, just above Bhimtal, near Nainital, in the Lower Kumaon of the new Himalayan Indian state of Uttarakhand.
A basic webpage on this has now been posted at http://www.theaea.org/cec_cac/cec09/index.htm, with just a few images and overviews of the location, venue(s) and some participant accommodation, for planning and general information. Links are provided to the iterations of 2006-07-08, but the 2009 site itself will be fully updated only just before, and then just after, the actual event.
As you'd expect, this migration of The Carnival of e-Creativity has come
about as a result of given circumstances and several months of consultations
with advisors, trustees, past-participants, partners and others, all over
the world.
Sattal was eventually deemed to be the best option out of the 4 outstanding
alternatives that were before us from around India, including Delhi itself,
on account of its immediate promise and also its longer term potential.
As CeC looks to progress into the future, it cannot constructively do so on
an ad hoc year-to-year basis. And, the potential to actually connect with a
lively local community on an ongoing basis, as opposed to featuring once a
year in the unending lists of entertainments and diversions of a
market-driven metropolis, goes a long way towards achieving our broad
general intent that the event should be *meaningful* to society at large,
rather than to just the participants and some small random coteries.
However, participants will find that the opportunity for networking and working amongst themselves, as well as with select local practitioners, which emerged as one of the most beautiful aspects of this event in the past, will be even stronger in the new situation.
It therefore gives us immense pleasure and satisfaction that the Sattal
Estate was in fact the very first venue we had thought of, and even explored
as a possibility, 3-4 years ago.
Nonetheless, we are very specially grateful to India International Centre
and our past participants in Delhi, for having unstintingly and
enthusiastically given us the three years of robust momentum and credibility
upon which we can now so confidently build this huge, and possibly-risky,
move.
Yes, there are "downsides" ~ for example, it will be cold, and living
conditions inside the cottages are very austere; some potential participants
and partners will prefer city options, for various reasons; others will just
not have the extra time for the extra travel; and so on.
On the upside ~ we can have bonfires, even crowd up in the better cottages
to get cozy; no one will feel alone amongst millions; there are all sorts of
good reasons and lovely nearby places that could make one want to actually
spend more time here; and so on.
What do you think?
We look forward to having your goodwill, association, support, participation and content in CeC 2009 and onwards.
Keep well ~ Shankar
Shankar Barua
Managing Trustee ~ The Academy of e-Arts
.... presently at:
The 2nd Last Resort, on The Chill Street
near Nal Damyanti Tal / Sattal
Bhimtal (Uttarakhand) - 236136
INDIA
tel: (91-5942) 247598, mobile: (91) 9868732876, or 9837667355
Gridflow on agnula demudi?
arrrrrgggg:
This is the GridFlow 0.8.0 configurator within Ruby version 1.8.2
[fast] Compile for speed (and not debuggability): enabled
[gcc3] GNU C++ Compiler 3: missing (undefined method `<' for nil:NilClass)
[stl] C++ Standard Template Library: missing (gcc compilation error)
[gcc64] GNU C++ in 64-bit mode: missing (gcc compilation error)
[libruby] Ruby as a dynamic library: missing (gcc compilation error)
[librubystatic] Ruby as a static library: missing (gcc compilation error)
[pentium] Pentium-compatible CPU: missing (gcc compilation error)
[mmx] MMX-compatible CPU (using NASM): disabled (would need pentium)
[simd] SIMD (MMX/SSE/Altivec) (using GCC): disabled (would need pentium)
[profiler] profiler (speed measurements): disabled (would need pentium)
[usb] USB Library: missing (where is usb.h ?)
[ieee1394] IEEE1394 Libraries for Linux (raw1394/dc1394): disabled (by author)
[x11] X11 Display Protocol: missing (where is X11/Xlib.h ?)
[x11_shm] X11 acceleration through shared memory: disabled (would need x11)
[sdl] Simple Directmedia Layer (experimental support): missing (where is SDL/SDL.h ?)
[objcpp] GNU/Apple ObjectiveC++ Compiler: missing (where is objc/Object.h ?)
[quartz] Apple Quartz/Cocoa Display: disabled (would need objcpp)
[aalib] Ascii Art Library: missing (where is aalib.h ?)
[jpeg] JPEG Library: missing (where is jpeglib.h ?)
[png] PNG Library <libpng12/png.h>: missing (where is libpng12/png.h ?)
[png] PNG Library <png.h>: missing (where is png.h ?)
[videodev] Video4linux Digitizer Driver Interface: missing (gcc compilation error)
[mpeg3] HeroineWarrior LibMPEG3 <libmpeg3/libmpeg3.h>: missing (where is libmpeg3/libmpeg3.h ?)
[mpeg3] HeroineWarrior LibMPEG3 <libmpeg3.h>: missing (where is libmpeg3.h ?)
[quicktimeapple] Apple's QuickTime: missing (gcc compilation error)
[quicktimehw] HeroineWarrior QuickTime4Linux (or LibQuickTime) (try #1): missing (gcc compilation error)
[quicktimehw] HeroineWarrior QuickTime4Linux (or LibQuickTime) (try #2): missing (gcc compilation error)
[xine] Xine movie decoder: disabled (by author)
[puredata] Miller Puckette's Pure Data: disabled (would need libruby or librubystatic)
generating ./config.make
generating config.h
creating Makefile
Hmm... it will probably compile, but it seems it won't be able to do much without jpeg, png, video4linux, X11, etc.
I'll wait for a .deb package...