Building a game sound engine for a mobile app : flexible sampler
Hi, I'm building an audio sound engine for a mobile game using libpd and the rjlib library. I want it to be CPU-efficient so it runs without glitches even on lower-end Android phones.
My first assumption is that playing back audio is less costly than using synths: is this a fact, or am I wrong from the start?
The idea is to build a flexible sampler with distinct attack, sustain and release parts. I could then load any synth sound I exported from my DAW and play it almost as if it were the original synth (without all the modulations, of course). Kind of like the Ableton sampler: I could choose the start point, the sustain loop portion, and the release part of the sound. That would let me add portamento and amplitude, pitch and filter envelopes, hold notes forever, and change the pitch instantly, with no glitches between parts... It would have to sound like I was playing the original synth preset from my DAW, inside Pd!
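To make the idea concrete, here's a rough Python sketch of the playback logic per voice (names and indices are purely illustrative, not an existing library): play the attack once, wrap around inside the sustain loop while the note is held, then play the release on note-off.

```python
# Sketch of segment-based sampler playback: attack once, loop the
# sustain region while the note is held, then play the release.
# All names/indices here are illustrative.

def render_note(sample, loop_start, loop_end, hold_samples):
    out = list(sample[:loop_start])      # attack: played once from the start
    i = loop_start
    while len(out) < hold_samples:       # sustain: wrap inside the loop region
        out.append(sample[i])
        i += 1
        if i >= loop_end:
            i = loop_start               # jump back to keep the note going
    out.extend(sample[loop_end:])        # release: tail after note-off
    return out
```

In Pd this would presumably become a [phasor~]-driven table reader per segment, with crossfades at the loop points to avoid clicks.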
However, this looks a bit trickier than I thought and seems to require a fair amount of patching. In the end I'm wondering if it's not going to end up costing more CPU than using a Pd synth?
So before I start building this thing I'm wondering: is it a good idea, or am I missing something here? And also, does it not already exist?
I hope I've managed to be somewhat clear; I'd love to hear your feedback!
Cheers
Simon
String Machine - Granular
And for those who are interested, here's an alternative version using the iemguts library. It works differently, using "live" dynamic patching. It might not work on all platforms, depending on which versions of the library are available.
And here is the patch that I used to automatically generate all the [granule~] objects in the string machine v1.6.
samphold-ing a previous signal value
Here's the final version.
At first, it didn't seem to work -- the "old" and "new" [samphold~] objects were outputting the same value. Eventually I figured out that it was because I was trying to clean up the patch's appearance by passing the [phasor~] through a [send~] / [receive~] pair (going into the "old" [samphold~]). Apparently (which I didn't realize at first) this introduces a one-block delay -- so I was reading the "old" value too late.
Using direct signal connections for all of them, this does exactly what I want.
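For anyone curious, that one-block delay is easy to model outside Pd. Here's a tiny Python sketch (block size shrunk to 4 for readability; Pd's default is 64, and this is only an illustration, not Pd's internals) showing why the "old" value arrived a block late:

```python
# Toy model of the one-block delay a [send~]/[receive~] pair introduces.
BLOCK = 4  # shrunk for readability; Pd's default block size is 64

def send_receive(blocks):
    """The receiving end sees each block one block late."""
    return [[0.0] * BLOCK] + blocks[:-1]

blocks = [[float(n)] * BLOCK for n in range(3)]  # block n holds the value n
direct = blocks                                  # a direct signal connection
via_pair = send_receive(blocks)
# During block 2, the direct path reads 2.0 while the
# send/receive path still reads 1.0 -- the "old" value, too late.
```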
This is part of a multimode LFO abstraction that I'm giving to students. I'll go ahead and attach that: the range is -1..+1, and the LFO shapes are sine, triangle, sawtooth, pulse, random sample-and-hold, and random with linear interpolation.
Thanks for the tips! Learned something.
hjh
waveforms and filters in standard synths
Honestly, I never really used regular computer synths much, but I started doing some daily ear training with Syntorial and got to trying to remake some of the synth sounds in Pd. Two questions:
How are low-pass filters constructed in a standard computer synth? They have a 0-1 cutoff value in Syntorial (and presumably other synths). To reproduce that in Pd, I'm multiplying the 0-1 value by 30 (an arbitrary value so the sound stays pretty bright when the value is at 1), then by the frequency of the pitch being filtered, and feeding the result into a chain of [lop~] objects...
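For what it's worth, that mapping and the chain of one-pole lowpasses can be sketched in Python like this. The filter coefficient is a rough approximation in the spirit of [lop~], not Pd's exact code, and `brightness=30` is just the arbitrary constant from above:

```python
import math

SR = 44100.0

def one_pole_lowpass(x, cutoff_hz):
    # Rough one-pole lowpass; the coefficient formula is an
    # approximation, not [lop~]'s exact implementation.
    c = min(1.0, 2.0 * math.pi * cutoff_hz / SR)
    y, out = 0.0, []
    for s in x:
        y += c * (s - y)
        out.append(y)
    return out

def knob_to_cutoff(knob, note_hz, brightness=30.0):
    # knob in 0..1, scaled relative to the note's pitch as described above
    return knob * brightness * note_hz

def lowpass_chain(x, cutoff_hz, stages=4):
    # chaining identical one-pole stages steepens the overall slope
    # (roughly 6 dB/octave per stage)
    for _ in range(stages):
        x = one_pole_lowpass(x, cutoff_hz)
    return x
```

A real synth filter is usually a resonant 12 or 24 dB/octave design rather than a plain [lop~] cascade, but the knob-to-cutoff scaling idea is the same.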
Syntorial's sawtooth and pulse (square) sound pretty clean, but in Pd they have a lot of frequencies below the fundamental that make the sound very noisy. Are these filtered out with [hip~]s?
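One standard way "clean" synths get their saws is band-limiting rather than filtering after the fact: only harmonics below the Nyquist frequency are generated, so nothing aliases back down as inharmonic noise. A small additive sketch in Python (illustrative only; Syntorial's actual oscillators may work differently):

```python
import math

SR = 44100.0

def bandlimited_saw(freq, n_samples):
    # Additive band-limited sawtooth: sum sin(k*w*t)/k for every
    # harmonic below Nyquist, so nothing folds back as aliasing.
    n_harm = int((SR / 2.0) // freq)
    out = []
    for n in range(n_samples):
        t = n / SR
        s = sum(math.sin(2.0 * math.pi * k * freq * t) / k
                for k in range(1, n_harm + 1))
        out.append(s * (2.0 / math.pi))  # scale roughly into -1..1
    return out
```

A naive [phasor~]-based saw, by contrast, contains every harmonic up to infinity, and the ones above Nyquist fold back down, which may be the noise you're hearing.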
Also, does anyone know any nice Pd patches that implement a standard synth setup? Might be interesting to analyze them.
edit: just noticed the thread two lines below this one, so I'll check that out...
Dictionary object in pd-extended?
Btw-- here's a nice article that outlines some of the common hash table terminology in the context of realtime embedded systems:
http://users.cs.northwestern.edu/~sef318/docs/hashtables.pdf
The discussion about performance as the number of keys per bucket increases is particularly relevant. Same with the desire for the ratio between the worst-case and average performance to approach one.
It's also instructive to notice the difference in complexity between the simplistic approach Pd uses and their proposed realtime-safe memory-constrained approach. Pd's algo is simple enough that you can read the code and understand what it does straightaway. In fact I'm not sure anyone has ever done the most basic testing on it, or even checked to make sure that it distributes keys uniformly across the buckets (though I assume it does). Yet people seem to be able to use Pd in performances, even large complex patches, without symbol table growth becoming a performance bottleneck.
Edit: to be fair, most users aren't doing things like text processing of arbitrary input during performance. So the limitations of the core language may work to keep users from ever hitting that limit. But I think if you had a big patch with thousands of abstractions containing lots of "$0-"-prefixed symbols inside them you might be able to experience the problem.
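A quick way to see the "keys per bucket" issue concretely is to hash a batch of generated symbol names into a fixed number of buckets and compare the fullest bucket to the average. This toy Python uses a simple multiplicative string hash, not Pd's actual function:

```python
from collections import Counter

N_BUCKETS = 1024

def bucket_of(name):
    # Toy multiplicative string hash (NOT Pd's actual function)
    h = 0
    for ch in name:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h % N_BUCKETS

# hash a batch of generated, "$0-"-style symbol names
names = [f"$0-{inst}-param-{i}" for inst in range(100) for i in range(50)]
counts = Counter(bucket_of(n) for n in names)
avg = len(names) / N_BUCKETS   # average keys per bucket (~4.9 here)
worst = max(counts.values())   # fullest bucket
# worst / avg is the ratio the article suggests keeping close to one
```

Swapping in the real hash function and real symbol workloads would be the "basic testing" mentioned above.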
Web Audio Conference 2019 - 2nd Call for Submissions & Keynotes
Apologies for cross-postings
Fifth Annual Web Audio Conference - 2nd Call for Submissions
The fifth Web Audio Conference (WAC) will be held 4-6 December, 2019 at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. WAC is an international conference dedicated to web audio technologies and applications. The conference addresses academic research, artistic research, development, design, evaluation and standards concerned with emerging audio-related web technologies such as Web Audio API, WebRTC, WebSockets and JavaScript. The conference welcomes web developers, music technologists, computer musicians, application designers, industry engineers, R&D scientists, academic researchers, artists, students and people interested in the fields of web development, music technology, computer music, audio applications and web standards. The previous Web Audio Conferences were held in 2015 at IRCAM and Mozilla in Paris, in 2016 at Georgia Tech in Atlanta, in 2017 at the Centre for Digital Music, Queen Mary University of London in London, and in 2018 at TU Berlin in Berlin.
The internet has become much more than a simple storage and delivery network for audio files, as modern web browsers on desktop and mobile devices bring new user experiences and interaction opportunities. New and emerging web technologies and standards now allow applications to create and manipulate sound in real-time at near-native speeds, enabling the creation of a new generation of web-based applications that mimic the capabilities of desktop software while leveraging unique opportunities afforded by the web in areas such as social collaboration, user experience, cloud computing, and portability. The Web Audio Conference focuses on innovative work by artists, researchers, students, and engineers in industry and academia, highlighting new standards, tools, APIs, and practices as well as innovative web audio applications for musical performance, education, research, collaboration, and production, with an emphasis on bringing more diversity into audio.
Keynote Speakers
We are pleased to announce our two keynote speakers: Rebekah Wilson (independent researcher, technologist, composer, co-founder and technology director for Chicago’s Source Elements) and Norbert Schnell (professor of Music Design at the Digital Media Faculty at the Furtwangen University).
More info available at: https://www.ntnu.edu/wac2019/keynotes
Theme and Topics
The theme for the fifth edition of the Web Audio Conference is Diversity in Web Audio. We particularly encourage submissions focusing on inclusive computing, cultural computing, postcolonial computing, and collaborative and participatory interfaces across the web in the context of generation, production, distribution, consumption and delivery of audio material that especially promote diversity and inclusion.
Further areas of interest include:
- Web Audio API, Web MIDI, WebRTC and other existing or emerging web standards for audio and music.
- Development tools, practices, and strategies of web audio applications.
- Innovative audio-based web applications.
- Web-based music composition, production, delivery, and experience.
- Client-side audio engines and audio processing/rendering (real-time or non real-time).
- Cloud/HPC for music production and live performances.
- Audio data and metadata formats and network delivery.
- Server-side audio processing and client access.
- Frameworks for audio synthesis, processing, and transformation.
- Web-based audio visualization and/or sonification.
- Multimedia integration.
- Web-based live coding and collaborative environments for audio and music generation.
- Web standards and use of standards within audio-based web projects.
- Hardware and tangible interfaces and human-computer interaction in web applications.
- Codecs and standards for remote audio transmission.
- Any other innovative work related to web audio that does not fall into the above categories.
Submission Tracks
We welcome submissions in the following tracks: papers, talks, posters, demos, performances, and artworks. All submissions will be single-blind peer reviewed. The conference proceedings, which will include both papers (for papers and posters) and extended abstracts (for talks, demos, performances, and artworks), will be published open-access online with Creative Commons attribution, and with an ISSN number. A selection of the best papers, as determined by a specialized jury, will be offered the opportunity to publish an extended version in the Journal of the Audio Engineering Society.
Papers: Submit a 4-6 page paper to be given as an oral presentation.
Talks: Submit a 1-2 page extended abstract to be given as an oral presentation.
Posters: Submit a 2-4 page paper to be presented at a poster session.
Demos: Submit a work to be presented at a hands-on demo session. Demo submissions should consist of a 1-2 page extended abstract including diagrams or images, and a complete list of technical requirements (including anything expected to be provided by the conference organizers).
Performances: Submit a performance making creative use of web-based audio applications. Performances can include elements such as audience device participation and collaboration, web-based interfaces, Web MIDI, WebSockets, and/or other imaginative approaches to web technology. Submissions must include a title, a 1-2 page description of the performance, links to audio/video/image documentation of the work, a complete list of technical requirements (including anything expected to be provided by conference organizers), and names and one-paragraph biographies of all performers.
Artworks: Submit a sonic web artwork or interactive application which makes significant use of web audio standards such as Web Audio API or Web MIDI in conjunction with other technologies such as HTML5 graphics, WebGL, and Virtual Reality frameworks. Works must be suitable for presentation on a computer kiosk with headphones. They will be featured at the conference venue throughout the conference and on the conference web site. Submissions must include a title, 1-2 page description of the work, a link to access the work, and names and one-paragraph biographies of the authors.
Tutorials: If you are interested in running a tutorial session at the conference, please contact the organizers directly.
Important Dates
March 26, 2019: Open call for submissions starts.
June 16, 2019: Submissions deadline.
September 2, 2019: Notification of acceptances and rejections.
September 15, 2019: Early-bird registration deadline.
October 6, 2019: Camera ready submission and presenter registration deadline.
December 4-6, 2019: The conference.
At least one author of each accepted submission must register for and attend the conference in order to present their work. A limited number of diversity tickets will be available.
Templates and Submission System
Templates and information about the submission system are available on the official conference website: https://www.ntnu.edu/wac2019
Best wishes,
The WAC 2019 Committee
Which version for an absolute noob?
@CrouchingPython Pd has MIDI output, and you could connect an external synth and select it from Pd's "Media" > "MIDI Settings".
But if it is a software synth on your computer, then you need to loop back the output of Pd to the input of the software synth.
On Windows you could use loopMIDI: http://www.tobias-erichsen.de/software.html
Open it and set up a loop (there may well be one opened straight away by loopMIDI), then open Pd and connect to it, and open your software synth and connect to it.
Windows 10 might still have the "Microsoft GS Wavetable Synth" built in, but some say it doesn't work, and Google will find you plenty of free and better ones.
The Arturia is a controller: it outputs MIDI messages the same as Pd does, and still needs a synth (something to make the sounds when triggered by MIDI messages).
David.
Final Solution: Anyone looking to control Ableton Live...easily
Hi All
A little bit of work to set up, but forget MIDI mapping... google it if you don't believe me.
After a lot of time spent trying to find a simple but sophisticated way (using a minimal 8-button floorboard) to control Live on Windows 10, I thought I would share this particular solution to possibly help others (especially after the help offered here on this forum). I tried a number of scenarios, even buying Max for Live, but it turns out to be a lot simpler than that. It needs 3 main areas set up:
FOOT CONTROLLER BEHAVIOURS/GESTURES
Create a Pd patch that gives you "behaviours" per switch. I'll be happy to share mine, but I'm just cleaning them up at the moment.
E.g., I have 4 standard behaviours that don't take too much time to master:
- Action A: a quick click (less than 500 ms). Always the primary action.
- Action B: a long click, i.e. one click down and pedal up after 500 ms. I use this, e.g., as a negative ramp for things like lowering volume, but if it's just held down and released in a natural way, it is the secondary action of the switch.
- Action C: click-and-hold, i.e. one quick down, up, and then hold down. I use this for a positive ramp, e.g. volume up.
- Action D: a double click. Always a cancel.
These are all mapped to note/ctrl outs that match the 'Selected Track Control' script below.
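For anyone sketching similar behaviours outside Pd, the gesture logic boils down to timing the down/up events. Here's a minimal Python classifier for the four actions; the 500 ms threshold is from above, while the maximum gap between the clicks of a double-click is my own assumption:

```python
# Sketch of the four switch gestures as a timing classifier.
THRESH = 0.5   # seconds: quick vs long boundary, per the post
DOUBLE = 0.3   # assumed max gap between the clicks of a double-click

def classify(presses):
    """presses: list of (down_time, up_time) pairs for one gesture."""
    if len(presses) == 1:
        down, up = presses[0]
        return "A" if up - down < THRESH else "B"   # quick vs long click
    if len(presses) == 2:
        _, first_up = presses[0]
        second_down, second_up = presses[1]
        if second_down - first_up > DOUBLE:
            return None                             # two separate gestures
        if second_up - second_down >= THRESH:
            return "C"                              # quick click, then hold
        return "D"                                  # double click
    return None
```

In Pd the same thing falls out of [timer] plus a couple of [delay] objects per switch.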
PLUGIN
Use PdVst to create a plugin version of your patch. This is loaded into Live as a control track. Live manages the connection of your floorboard etc. into the actual track, so you don't wrestle with the I/O. I always use track 1 for click (forget Live's metronome; this is much more flexible and can have feel/swing etc.), so I dedicate track 2 to control.
Use loopMIDI to create a virtual MIDI cable that will go from this track and be fed into the remote script.
REMOTE SCRIPT: 'Selected Track Control'
Download latest from http://stc.wiffbi.com/
Install it into Live and make sure your notes/controls conform.
Enable this as a control surface in Live and connect the MIDI in from the plugin. Think about giving the guy a donation... massive amount of work and he deserves it!
I use it to control 8 tracks x 8 scenes, all driven by 3 switches:
- Scene control up and down (A = down, B = up)
- Track control, same as scene
- Rec/Fire/Undo and volume (A = fire/rec, B = volume down, C = volume up, D (double click) = undo)
The scenes and tracks wrap around, so there isn't too much foot tapping.
There is quite a bit more to it of course, and maybe no one else needs this, but it would have saved me a couple of weeks of time, so I'm happy to help anyone who wants to gig without a massive floor rig and needs an easy way to map and remember controls.
HTH someone
Cheers
mark
Sound distorts when going through send~ and receive~
I'm fairly new to Pure Data, though I'm excited by what I've learned so far. I followed this tutorial -- http://designingsound.org/2013/04/pure-data-wavetable-synth-part-1/ -- up through the second-to-last step (with some adjustments for things that didn't quite work when I followed them in the tutorial), and then began modifying it to add more modulation and other features I wanted. I'm currently trying to implement what I'm thinking of as "suboscillators" that will be able to be tuned in relation to the main oscillator, mixed in with it, modulate pitch and amplitude, and take their own envelopes. I also want the suboscillators to be able to take modulation from either of the two "LFO"s [in quotes since the first LFO actually outputs in the audible range] (adding a second to what's specified in the tutorial, along with additional layers of LFOs below them), so I've been moving things around and redesigning the patch a little bit, breaking it off and on, and trying to get it working again.
While working on this, I noticed a behavior that's baffling me: the signal of the first LFO appears to distort when sent out through [send~] and back in through [receive~]. If I connect the output of the LFO directly to [dac~], it sounds fine. Send it through [send~], back in through [receive~], and then to [dac~], and it sounds louder, with some other frequencies seemingly present, as if it's clipping a little. I thought [send~] and [receive~] were functionally the same as connection cords, so I'm seriously confused by this. For the purposes of the patch, having the LFO signal distort is not good, so I feel like I need to figure this out before I go forward with the features I want to implement. Being new, I imagine it's something simple I'm overlooking.
Here's a link to part of the patch where the problem is occurring: http://imgur.com/a/GkRUI I haven't tested yet to figure out if other send~ receive~ pairs are causing distortion. Any clarification or ideas about why this is happening would be much appreciated.
[edit: There's also apparently an error message going along with this: "consistency check failed: signal_free 3" I thought it was some other part of the larger patch generating the error, but I copied just the LFO generator section to a separate patch to play with, and that message still appears in the console. The weird distortion with receive~ still occurs too.]
Audio Ideas (AI) Collection (placeholder, currently only links)-effects, controllers, mmp, etc.
Audio Ideas (AI) Collection (placeholder) currently only links
Per @LiamG's kind suggestion, I have begun the process of consolidating my abstractions, patches, etc. into a single location/zip file for possible upload to GitHub.
Just to get the ball (and me) rolling and to scope the work, I have gathered the links to my shares in a single place, to be consolidated later into the single AI Collection.
For now at least, please bear with me (and the links below), as ideas I am more passionate about are currently demanding my attention. (Funnily enough, those will probably also be included in the set, wherever they are shared.)
Thanks for your patience and all you do for the Pure Data family.
Sincerely,
Scott
abstract~
pushdelay-envelope-env-driven-delay-line-with-both-delay-time-and-feedback-dependent
numpad-abstraction-for-entry-of-large-numbers-via-click-instead-of-sliders-includes-basic-calculator
abs_delay_fbw-feedbackwards-lifo-last-in-first-out-delay
abs_sequences_by_formula-sequences-by-formula-abstraction-ex-collatz
abs_effects_router-60-effects-in-one-abstraction-router-from-diy2-stamp-album-my-abs
visualcontrolsurface-vsl-values-set-by-their-location-on-the-screen-req-ggee-shell
abs_4-8-14_way_toggle-pair-2-toggles-resulting-in-4-8-or-14-states
audioflow-delay-to-forward-backward-looper-using-speed-control
5-band-equalizer-with-bezier-controller-eq5_mey_w_bezier_sv-pd-updated-to-8-band-below
forward-backward-looper-orig-abs-from-residuum-whale-av
abs_rgb2hex-rgb-0-255-colors-to-hexadecimal-values
pseudo-12-string-effect-6-string-guitar-to-sound-like-a-12-string
jack_midi2pd_2sys_connector_sv-jack-midi_out-to-pd-sys_playback-switcher
abs_4to16pads_bin_conv_sv-convert-4-midi-pads-from-a-binary-value-to-a-decimal-for-rerouting
abs_automatedslider_sv-automated-control-changer-pd-and-mobmuplat-editor-versions
idea-for-effects-stack-ing-technique-control-mother
micin-_abs-abstraction-convert-signal-to-notein-ex-using-a-midi-synth-as-a-guitar-pedal
curve_abs-tri-way-curve-switch-to-change-control-values-in-either-linearly-convex-or-concave-manner
a-preset-control-abstraction-for-saving-parameters-presets-to-text-files
4-tap-delay-with-pitch-shifter-per-delay-line-adaptation-of-diy2-patches
patch~
extra
the-15-owl-faust-patches-compiled-as-32bit-linux-externals-attached
libpd
mmponboardeditortemplate-mmp-for-creation-of-mobmuplat-files-directly-on-the-handheld-android-only
3d-synth-webpd-tree-js-webgl_camera_cinematic-html-example
Off topic