[ANN] Diversity tickets for Web Audio Conference 2019
We have a limited number of diversity tickets available to attend the Web Audio Conference 2019 in Trondheim (4-6 December, 2019), thanks to our sponsors (https://ntnu.edu/wac2019/sponsors). A diversity ticket may cover your conference ticket and/or travel, whether you attend as an author or otherwise. We have also tried to keep the conference fees as low as possible to encourage interested attendees to come (https://www.ntnu.edu/wac2019/registration).
The diversity tickets will be prioritized for attendees from lower-income countries and backgrounds, and for communities underrepresented in web audio.
To be eligible, you need to fill in the following online survey, which should take 5-10 minutes, telling us how we can help you: http://tiny.cc/WAC19-call-diversity
An expert diversity committee has been constituted and will decide among the candidates by September 15, 2019.
Important dates
- September 8, 2019 (Anywhere on Earth): Deadline for diversity ticket applications.
- September 15, 2019: Notification of grantees.
- September 22, 2019: Early-bird ticket deadline.
- December 4-6, 2019: The conference.
Please do not hesitate to contact the diversity committee, diversity@wac2019.ntnu.no, if you have questions. You can find more information about the diversity theme here: https://www.ntnu.edu/wac2019/diversity-theme
Also, you can find a preliminary draft of the programme and a list of conference venues on our website:
https://www.ntnu.edu/wac2019/programme
https://www.ntnu.edu/wac2019/venues-and-maps
Best wishes,
The WAC 2019 Diversity Committee
Algorithmic Composition // Catarina
A 100% synthesized algorithmic composition made using Pure Data Vanilla 0.49-1. Made for the upcoming Muff Wiggler Discord Collective Album.
Instruments and sounds used are part of my library: pd-mkmr
https://github.com/MikeMorenoAudio/pd-mkmr
Patch available here: https://patchstorage.com/algorithmic-composition-catarina/
Full version on YouTube.
Web Audio Conference 2019 - 2nd Call for Submissions & Keynotes
Apologies for cross-postings
Fifth Annual Web Audio Conference - 2nd Call for Submissions
The fifth Web Audio Conference (WAC) will be held 4-6 December, 2019 at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. WAC is an international conference dedicated to web audio technologies and applications. The conference addresses academic research, artistic research, development, design, evaluation and standards concerned with emerging audio-related web technologies such as the Web Audio API, WebRTC, WebSockets and JavaScript. The conference welcomes web developers, music technologists, computer musicians, application designers, industry engineers, R&D scientists, academic researchers, artists, students and people interested in the fields of web development, music technology, computer music, audio applications and web standards. The previous Web Audio Conferences were held in 2015 at IRCAM and Mozilla in Paris, in 2016 at Georgia Tech in Atlanta, in 2017 at the Centre for Digital Music, Queen Mary University of London, and in 2018 at TU Berlin in Berlin.
The internet has become much more than a simple storage and delivery network for audio files, as modern web browsers on desktop and mobile devices bring new user experiences and interaction opportunities. New and emerging web technologies and standards now allow applications to create and manipulate sound in real-time at near-native speeds, enabling the creation of a new generation of web-based applications that mimic the capabilities of desktop software while leveraging unique opportunities afforded by the web in areas such as social collaboration, user experience, cloud computing, and portability. The Web Audio Conference focuses on innovative work by artists, researchers, students, and engineers in industry and academia, highlighting new standards, tools, APIs, and practices as well as innovative web audio applications for musical performance, education, research, collaboration, and production, with an emphasis on bringing more diversity into audio.
Keynote Speakers
We are pleased to announce our two keynote speakers: Rebekah Wilson (independent researcher, technologist, composer, co-founder and technology director of Chicago's Source Elements) and Norbert Schnell (professor of Music Design in the Digital Media Faculty at Furtwangen University).
More info available at: https://www.ntnu.edu/wac2019/keynotes
Theme and Topics
The theme for the fifth edition of the Web Audio Conference is Diversity in Web Audio. We particularly encourage submissions focusing on inclusive computing, cultural computing, postcolonial computing, and collaborative and participatory interfaces across the web in the context of generation, production, distribution, consumption and delivery of audio material that especially promote diversity and inclusion.
Further areas of interest include:
- Web Audio API, Web MIDI, WebRTC and other existing or emerging web standards for audio and music.
- Development tools, practices, and strategies of web audio applications.
- Innovative audio-based web applications.
- Web-based music composition, production, delivery, and experience.
- Client-side audio engines and audio processing/rendering (real-time or non real-time).
- Cloud/HPC for music production and live performances.
- Audio data and metadata formats and network delivery.
- Server-side audio processing and client access.
- Frameworks for audio synthesis, processing, and transformation.
- Web-based audio visualization and/or sonification.
- Multimedia integration.
- Web-based live coding and collaborative environments for audio and music generation.
- Web standards and use of standards within audio-based web projects.
- Hardware and tangible interfaces and human-computer interaction in web applications.
- Codecs and standards for remote audio transmission.
- Any other innovative work related to web audio that does not fall into the above categories.
Submission Tracks
We welcome submissions in the following tracks: papers, talks, posters, demos, performances, and artworks. All submissions will be single-blind peer-reviewed. The conference proceedings, which will include both papers (for papers and posters) and extended abstracts (for talks, demos, performances, and artworks), will be published open-access online with a Creative Commons attribution license and an ISSN number. A selection of the best papers, as determined by a specialized jury, will be offered the opportunity to publish an extended version in the Journal of the Audio Engineering Society.
Papers: Submit a 4-6 page paper to be given as an oral presentation.
Talks: Submit a 1-2 page extended abstract to be given as an oral presentation.
Posters: Submit a 2-4 page paper to be presented at a poster session.
Demos: Submit a work to be presented at a hands-on demo session. Demo submissions should consist of a 1-2 page extended abstract including diagrams or images, and a complete list of technical requirements (including anything expected to be provided by the conference organizers).
Performances: Submit a performance making creative use of web-based audio applications. Performances can include elements such as audience device participation and collaboration, web-based interfaces, Web MIDI, WebSockets, and/or other imaginative approaches to web technology. Submissions must include a title, a 1-2 page description of the performance, links to audio/video/image documentation of the work, a complete list of technical requirements (including anything expected to be provided by conference organizers), and names and one-paragraph biographies of all performers.
Artworks: Submit a sonic web artwork or interactive application which makes significant use of web audio standards such as Web Audio API or Web MIDI in conjunction with other technologies such as HTML5 graphics, WebGL, and Virtual Reality frameworks. Works must be suitable for presentation on a computer kiosk with headphones. They will be featured at the conference venue throughout the conference and on the conference web site. Submissions must include a title, 1-2 page description of the work, a link to access the work, and names and one-paragraph biographies of the authors.
Tutorials: If you are interested in running a tutorial session at the conference, please contact the organizers directly.
Important Dates
March 26, 2019: Open call for submissions starts.
June 16, 2019: Submissions deadline.
September 2, 2019: Notification of acceptances and rejections.
September 15, 2019: Early-bird registration deadline.
October 6, 2019: Camera ready submission and presenter registration deadline.
December 4-6, 2019: The conference.
At least one author of each accepted submission must register for and attend the conference in order to present their work. A limited number of diversity tickets will be available.
Templates and Submission System
Templates and information about the submission system are available on the official conference website: https://www.ntnu.edu/wac2019
Best wishes,
The WAC 2019 Committee
A few beginner questions...
Actually, personally I think I'm going to go for the opposite: SuperCollider for composition and Pure Data for synthesis, because SuperCollider has Routines and Patterns, which are more useful for composition than what Pure Data has to offer, IMO. There is no such thing as a routine in Pure Data; the closest you get is the qlist/text sequencer. SuperCollider has conditional branching etc., and it's just easier to do algorithmic composition in text.
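To make that concrete, here is a rough analogy in Python rather than SuperCollider (the pitches and branching rules are invented for illustration): a generator behaves much like an SC Routine, yielding one event at a time, and plain if/else gives you the conditional branching that is clumsy to patch graphically.

import random

def melody():
    # Yield (midi_pitch, duration_in_beats) pairs forever,
    # branching on a simple condition mid-sequence.
    pitch = 60
    while True:
        if pitch > 72:                       # too high: leap back down
            pitch -= random.choice([5, 7, 12])
        else:                                # otherwise keep climbing
            pitch += random.choice([2, 4, 5])
        yield pitch, random.choice([0.25, 0.5])

notes = melody()
for _ in range(8):
    print(next(notes))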
I would use Pd for synthesis mainly because the signal-flow representation is easier to visualize in Pd, and because I started on Pd. That being said, SuperCollider is the best for creating "voices" on the fly and having something like 500 voices playing at the same time.
Kangtaum - Scriptophonic microtonal metal generator
Kangtaum.zip
Kangtaum is an attempt at a real-time implementation of the text-to-(Black Metal)music algorithm proposed by Dave Tremblay.
The algorithm was intended for use with Tolkien's writing; this version will produce a microtonal scriptophonic 'metal' composition from any text file that is input. The default is the poem She Bomber by Eliza Gregory; the text currently being converted is printed to the Pd window.
Here's how it works:
·The octave is divided into 26 equal steps (26 microtones, one for each letter).
·The ordering of the letters, from low to high, is:
E-O-V-I-Q-C-F-A-J-Z-P-H-B-Y-S-R-K-D-T-L-X-M-N-G-W-U, based on their frequency of use in the sample text (Tolkien).
·One letter represents a duration of 1/8th note.
·A comma, parenthesis or semicolon is a 'tie': the last note played (normally 1/8th) is continued to a 1/4.
·A full stop, colon, exclamation mark or question mark is a pause of 1/8th of a measure.
·There are three transcription tracks:
- The melody: each letter and punctuation mark in the text, played one after the other.
- The chords: a chord lasts as long as a word in the melody (for example, a three-letter word will last 3 eighth notes, and a nine-letter word 9 eighths). It consists of all the different notes/letters of the word played at once.
- The bass: the bass track, or chord root, consists of the first letter of each word played for the duration of the word. For example, the words THAN and NATH have the same notes and length, but the root and the melody of the chords will be different.
·Reverberation for the composition is inversely proportional to the size of the current text chunk or sentence being converted into music. The bigger the chunk, the smaller the virtual space.
·The stereo position of the melody is controlled by the length of the word currently being sonified.
·A counterpoint line is generated by reordering the current word/chord's notes from low to high and arpeggiating them. The balance between melody and counterpoint is controlled by word length: the longer the word, the more melody is present.
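For anyone who would rather read that mapping as code than prose, here is a minimal Python sketch of it (this is not the Pd patch; the base pitch is my own assumption, and ties, rests, reverb, panning and the counterpoint line are left out):

# Letters from low to high, ordered by frequency of use in the Tolkien sample.
LETTER_ORDER = "EOVIQCFAJZPHBYSRKDTLXMNGWU"
BASE_HZ = 110.0  # assumed pitch for the lowest step

def letter_to_hz(letter):
    # One of 26 equal steps per octave (26-EDO).
    step = LETTER_ORDER.index(letter.upper())
    return BASE_HZ * 2 ** (step / 26)

def transcribe(text):
    # Return (track, frequencies, duration_in_eighths) events.
    events = []
    for word in text.split():
        letters = [c for c in word if c.isalpha()]
        if not letters:
            continue
        for c in letters:                    # melody: one eighth per letter
            events.append(("melody", [letter_to_hz(c)], 1))
        chord = sorted({letter_to_hz(c) for c in letters})
        events.append(("chord", chord, len(letters)))    # distinct letters at once
        events.append(("bass", [letter_to_hz(letters[0])], len(letters)))  # first letter, held
    return events

# THAN and NATH give the same chord but different melody and bass.
for track, freqs, dur in transcribe("THAN NATH"):
    print(track, [round(f, 1) for f in freqs], dur)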
This isn't a finished patch; I've kind of reached a dead end with it, so I thought I'd open it up and see if anyone else would like to chip in. I'm really interested in developing the synthesis side of things and making it sound more METAL, I suppose.
If nothing else, hopefully some of the list processing is useful to someone working to convert text to music. Many thanks to those on the forum who helped me work some of it out or provided solutions.
This is a video of a slightly earlier version:
Here is an example of Dave's music produced with the original algorithm.
I am creating an interactive tutorial for a data flow workshop, anyone care to share ideas?
Hello again Gummi...... good idea..... a few thoughts.
So your (first) workshop will be "data-flow"......
I tried a while ago to build a patch that might demonstrate "what is a dataflow language". When I first came across Pd it reminded me of a very brief lesson at school (long long ago) where I learnt (a little) about the data-flow diagram. But Pd is more sophisticated than that, and I had some trouble demonstrating the possibilities without just chaining a whole load of spigots together. I suppose I tried to make it too simple. I will be very interested to see your patch!
However, I think you should start with this.........a simple drawing.....in traditional "diamond" boxes.....
Is there a good film on?... yes/no
shall we go?.......yes/no
Is it raining?......yes/no
Do we have 2 umbrellas?.........yes/no
etc.
There will be no need to resolve the chart..... but it gives a hint about how Pd works. That is the bit I forget every day....... that Pd also deals much of the time with logic...... 1 and 0..... true and false...
Because Pure Data is in fact a formidable system engineering platform........... that is used most of the time for audio and music.
And then I agree with @Liam about all of the points he made, but I think you will need to "whet their appetites" before you deal with "nitty gritty". They should be given http://puredata.info/docs/manuals/pd/x2.htm for absolutely compulsory homework 
I think I would start with an OSC control and not an [osc~] sound....... and show them how you can have a touch fader on a tablet sending OSC messages that control a GUI in Pd via UDP binary packets (maybe not too much detail there at first though?) over a Wi-Fi connection, and then transform that data into some visuals and sounds so as to get them excited.......... because.......
....... if you can see the beginning and the end of the journey before you start, it is much easier to understand all of the decisions that were taken on the "rocky road" in between.......
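For anyone curious what those UDP binary packets actually look like on the wire, here is a minimal Python sketch (nothing to do with Pd itself; the /fader1 address, the IP and the port are made up for illustration) that builds a raw OSC float message by hand and sends it. On the Pd side you could receive it with [netreceive -u -b 9001] feeding [oscparse].

import socket
import struct

def osc_string(s):
    # OSC strings are null-terminated, then padded to a multiple of 4 bytes.
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_float(address, value):
    # One OSC message: address pattern, type-tag string ",f",
    # then the argument as a big-endian 32-bit float.
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_float("/fader1", 0.75), ("192.168.1.20", 9001))  # tablet -> Pd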
If Pd was a system for growing vegetables I would have said "and then show them some vegetables". I am a fan of the data-flow more than the vegetables..... sorry, more than the audio-visual...... even though I am an audio-visual engineer. Pd does all of the "stuff" that you cannot do elsewhere with audio-visual software.
Make sure, as @Liam says, that you have [trigger] and [route] and other important objects in the chain (unavoidable really), and go through the whole chain with "magic glass" showing them what is going on in the "string" (of course you will need a "slow metro" at the head of the chain for that). Don't bother showing them how you generate the sounds and visuals.... that will be too much for day one....... but explain how you got the "data" to "flow"..............
And then finish up with more of the bin ops and [expr] maybe to show the "power at your fingertips".........
I wish I had better understood [list] from day one (and float, symbol, anything, pointer).
My favourite tools are the list tools..... split, drip etc. and I cannot live without [cxc_split] and especially [slist]!
David.
Luna - A New Visual And Textual Programming Language
Luna is not "designed for DSP" because it is a general-purpose programming language. It would be better to say that Luna is designed "not only for DSP". It basically means that you can use it to create games, computer graphics, visually program hardware, create batch scripts, system or web applications, or process digital signals. In fact it is superior to traditional textual programming in all domains that can be visualised and interactively developed. Luna does not ship with any specialized library at the moment, so there are no DSP or game-specific functions included, but such libraries should appear in the future. Currently there is a startup that uses Luna to make high-performance image compositing software (www.flowbox.io) and we're also collaborating with a few others, including on visual IoT programming.
Answering your third question - although Luna is a very powerful language and has some rocket science built in, its visual representation is understandable and usable by non-technical people. In fact the visual form lowers the barrier to entry without limiting professional programmers in any way - and this is coming from a group of Haskell geeks
Did I answer your questions? 
Do you often use the struct objects?
@Guest: the [struct] object and scalars have nearly no graphical properties by default. If you define a [struct foo] and create a scalar from it, that scalar only computes a single pixel bounding box that you can "grab" with the mouse. But that is less than what a normal object has (visible rectangle, text, and possibly inlets and outlets).
There are use cases for [struct] with no visual properties-- for example, data structure arrays can be very powerful (if clunky to use).
Of course, there are draw commands like [filledpolygon] which do associate data structures with visual elements. But even there, it's not necessary to have GUI code like Tcl strings in the C code. You can just send messages to the GUI saying that some data changed, and let the GUI figure out how to update the visual appearance accordingly. You can even figure out which visual attributes need to change and target only those in the GUI message without being too inefficient.
The only time it starts to become a problem is when you try to compute visual properties-- like bounding boxes, for example-- in Pd. So if you have a data structure visualized with, say, 20 polygons, Pd needs to recalculate the bounding box every time the mouse moves over the patch. That's one of the reasons why they are so inefficient. (But the same goes for a canvas filled with lots of object boxes.)
I tried to mitigate this somewhat in Pd-l2ork by caching the bbox. But this stuff gets a lot easier if you just do all the GUI manipulation in the GUI. Data structures don't make that separation harder, but they do get easier to manage with a more sensible design.
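As a toy illustration of that caching idea (Python pseudocode, nothing like Pd-l2ork's actual C implementation): the expensive bounding-box computation runs once and is only redone after the data actually changes, instead of on every mouse move.

class Drawing:
    def __init__(self, points):
        self.points = points      # list of (x, y) vertices
        self._bbox = None         # cached bounding box

    def move_vertex(self, i, x, y):
        self.points[i] = (x, y)
        self._bbox = None         # data changed: invalidate the cache

    def bbox(self):
        # Hit testing may call this on every mouse motion; the
        # recomputation only happens after an edit invalidated the cache.
        if self._bbox is None:
            xs = [p[0] for p in self.points]
            ys = [p[1] for p in self.points]
            self._bbox = (min(xs), min(ys), max(xs), max(ys))
        return self._bbox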
I've kind of "doubled down" on data structures in the GUI port of Pd-l2ork-- you can now group drawings together, perform affine transformations, change opacity, and even draw sprites. I also added some classes to receive events and change variables without having to use pointers, which makes it much easier to use.
Last reminder EMERGEANDSEE media arts festival 2011
Last reminder EMERGEANDSEE media arts festival:
submit your SHORT FILMS / ABSTRACTS / INSTALLATIONS and other AUDIO/VISUAL WORKS until MARCH 15! ___ This year's topic is EN DÉTAIL
Call for participation
Since 2000, the EMERGEANDSEE media arts festival has offered an independent showcase of young audio-visual work from all over the globe. Through the interplay of a short film competition, an audio-visual exhibition and lectures, different works and opinions come together. With EMERGEANDSEE 2011 the festival's focus turns to "En Détail" and the smallest things on the fringes of our daily perception, thinking and creating, which challenge our attention and sharpen our wits, independently of their connection to the big scheme of things.
For this we are looking for your position: artistic and theoretical perspectives on, and examinations of, the detail, to be presented in Berlin in June 2011. With pictures, thoughts, sound or gestures you can add your detail to EMERGEANDSEE 2011...
Call for entries/papers:
<http://emergeandsee.org/call-for-entries/>
PDF version:
<http://emergeandsee.org/wordpress/wp-content/uploads/2010/12/Cfe-2011-en.pdf>
You can submit:
- Short films up to 20 minutes long for the big screen
- Audio/visual installations, pictures, loops, performances for the exhibition
- 20-minute contributions for the lectures, leaving some room for discussion and exchange
>> Short films
Within the short film competition, innovative feature films, animations, documentaries and experimental worlds of images from around the globe will explore the topic of the detail before the eyes of the jury and the audience.
>> Exhibition
Within the exhibition no boundaries should restrict the artistic medium: performances, sound productions for the “audio-cinema”, visual artistic works, video installations, cross-medial works, etc. all toy around with the detail.
>> Lectures
Within the lectures, thinkers and makers from diverse areas (social & cultural sciences, natural sciences, media and art, etc.) will gather. Here, we are looking for new theories and ideas presented via short lectures.
Submissions for all sections are open until 15.03.2011 at:
http://submit.emergeandsee.org
For more details on each section please have a look at the webpage:
http://emergeandsee.org
Philipp from the EMERGEANDSEE-team
A collection of GLSL effects?
Hello everybody,
it's been a long time since I started wondering about getting some advanced visual effects out of Pd. I know "advanced visuals" could mean a lot of different things, but let's say I am thinking of pixel stuff like depth of field, bloom, glow, blurring. I kind of tried everything, from basic pix effects to FreeFrame FX and GridFlow convolutions, but no matter what I do, since these effects are CPU-based, the resulting patch is always dead slow.
My first question is: as far as I know, Pd was born as audio software; does it make sense to keep pushing it into the domain of visuals?
Don't get me wrong, I love Pd and I know the amazing stuff you can get out of Gem and GridFlow. Think of all the 3D manipulations, sound visualization, video mixing, OpenCV stuff, pmpd physics simulations, just to name a few. You can get wonderful visuals using only geos and simple texturing. But sometimes I find myself up against limitations, like the pixel effects I mentioned before, and I wonder if I should just leave Pd to what it's good at and move to video-driven software like vvvv or a "classic" programming environment like Processing.
I know a lot of the stuff I've been talking about could be achieved at negligible CPU cost by leaving the calculations to the GPU. I think GLSL's potential is huge, and I got some basic blur, glow and bloom effects I found on the web working, but it still seems a little "workaroundy" to me (especially multipass rendering).
Here is the second question: could OpenGL and GLSL scripting be the solution to my first question? And what do you think about having a place where we can host a (hopefully growing) collection of ready-to-use GLSL effects along with example patches? Maybe with a standard framework of objects for multi-texture effects and general GLSL handling?
Ok, that's all. Any feedback will be extremely appreciated.
Here follows a simple GLSL blooming effect applied to Gem particles (works on Mac OS X 10.5, Pd-extended, Gem 0.92.3).