• chmod

    @chmod

    I ended up ditching the ring buffers and doing it like this — so far I haven't seen any issues tapping from an mp4 input without them:

    if (context->frameSize && outputBufferSize > 0) {
        if (bufferListInOut->mNumberBuffers > 1) {
            float *left = (float *)bufferListInOut->mBuffers[0].mData;
            float *right = (float *)bufferListInOut->mBuffers[1].mData;

            // manually interleave the two channels into the scratch buffer
            // (outputBufferSize is the total interleaved sample count, i.e. frames * 2)
            for (int i = 0; i < outputBufferSize; i += 2) {
                context->interleaved[i] = left[i / 2];
                context->interleaved[i + 1] = right[i / 2];
            }
            // process in place; ticks must equal frames / [PdBase getBlockSize] (64)
            [PdBase processFloatWithInputBuffer:context->interleaved outputBuffer:context->interleaved ticks:64];
            // de-interleave back into the tap's non-interleaved buffers
            for (int i = 0; i < outputBufferSize; i += 2) {
                left[i / 2] = context->interleaved[i];
                right[i / 2] = context->interleaved[i + 1];
            }
        } else {
            // mono: a single buffer is already in the layout libpd expects,
            // so point the scratch pointer at it and process in place
            context->interleaved = (float *)bufferListInOut->mBuffers[0].mData;
            [PdBase processFloatWithInputBuffer:context->interleaved outputBuffer:context->interleaved ticks:32];
        }
    }
    

    posted in libpd / webpd read more
  • chmod

    Hi there, I'm working on a project that involves streaming audio from an AVPlayer video player object into libpd. For the process loop of the tap, I used PdAudioUnit's render callback code as a guide, but I realized recently that the audio format expected by libpd is not the same as the audio coming from the tap — the tap provides two buffers of non-interleaved audio data in the incoming AudioBufferList, whereas libpd expects interleaved samples. Does anyone know of a way I can work around this?

    I think I need to somehow create a new AudioBufferList or float buffer and interleave the samples in place, but that seems expensive to me. If anyone could give me some pointers I would greatly appreciate it!

    static void tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
    {
        OSStatus status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut);
        if (noErr != status) {
            NSLog(@"Error: MTAudioProcessingTapGetSourceAudio: %d", (int)status);
            return;
        }
        
        TapProcessorContext *context = (TapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
        
        // (re)create the input and output ring buffers whenever the tap's frame count changes
        if (context->frameSize != numberFrames) {
            NSLog(@"creating ring buffers with size: %ld", (long)numberFrames);
            createRingBuffers((UInt32)numberFrames, context);
        }
        
        //adapted from PdAudioUnit.m
        float *buffer = (float *)bufferListInOut->mBuffers[0].mData;
        
        if (context->inputRingBuffer && context->outputRingBuffer) { // both buffers are used below
            
            // output buffer info from ioData
            UInt32 outputBufferSize = bufferListInOut->mBuffers[0].mDataByteSize; // * 2 solved faint avplayer issue
            UInt32 outputFrames = (UInt32)numberFrames;
    //        UInt32 outputChannels = bufferListInOut->mBuffers[0].mNumberChannels;
            
            // input buffer info from ioData *after* rendering input samples
            UInt32 inputBufferSize = outputBufferSize;
            UInt32 inputFrames = (UInt32)numberFrames;
            UInt32 framesAvailable = (UInt32)rb_available_to_read(context->inputRingBuffer) / context->inputFrameSize;
                    
            //render input samples
            
            while (inputFrames + framesAvailable < outputFrames) {
                // pad input buffer to make sure we have enough blocks to fill auBuffer,
                // this should hopefully only happen when the audio unit is started
                rb_write_value_to_buffer(context->inputRingBuffer, 0, context->inputBlockSize);
                framesAvailable += context->blockFrames;
            }
            rb_write_to_buffer(context->inputRingBuffer, 1, buffer, inputBufferSize);
            
            // input ring buffer -> context -> output ring buffer
            char *copy = (char *)buffer;
            while (rb_available_to_read(context->outputRingBuffer) < outputBufferSize) {
                rb_read_from_buffer(context->inputRingBuffer, copy, context->inputBlockSize);
                [PdBase processFloatWithInputBuffer:(float *)copy outputBuffer:(float *)copy ticks:1];
                rb_write_to_buffer(context->outputRingBuffer, 1, copy, context->outputBlockSize);
            }
            
            // output ring buffer -> audio unit
            rb_read_from_buffer(context->outputRingBuffer, (char *)buffer, outputBufferSize);
        }
    }
    

    posted in libpd / webpd read more
  • chmod

    It was brought to my attention that the zip file in my last post was being flagged by some browsers as a virus — here are the files contained in that zip:

    bellkit_test.pd
    bellkit_test.wav
    bellkit_test2.wav

    posted in technical issues read more
  • chmod

    Hi there —

    I'm trying to detect pitches from a Bell Kit instrument for a music education app, and so far I'm having trouble picking up MIDI pitches above the 90s/100s, which is the target range for this instrument.

    Here's a test patch I'm using to determine whether [sigmund~] or [helmholtz~] does a better job of reporting higher-pitched content. It seems that [sigmund~] is generally better at detecting the higher pitches, but it stops picking things up above the MIDI pitch 100 range, which is the range of the second recording in the zip file. Can anyone tell me if there is an upper limit on these objects' frequency detection, or suggest parameter configurations or different methods for tracking this instrument?
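
    For reference, this is the kind of parameter experiment I've been making — the npts and hop values here are just guesses to vary, not recommendations:

    [adc~]
    |
    [sigmund~ -npts 2048 -hop 512 pitch notes]
    |                  \
    [print pitch]      [print notes]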

    posted in technical issues read more
  • chmod

    @ablue

    Did you make any progress with this issue? I'm in a similar situation myself — I've been asked to build an automated offline testing system for a pitch-tracking rhythm game. Since timing is a really important aspect of my patch (it tracks not only pitches but rhythms), I'm not sure that this kind of system would be feasible in my case. I'm thinking my best bet would be a system that automatically goes through each level one by one using a test MIDI file as input (with random variances to test accuracy). The game currently works with instruments as low as the tuba, so I'm not sure if I'd be able to use oversampling in my case — but please let me know if I'm incorrect!

    posted in technical issues read more
  • chmod

    Actually, I found a relatively simple solution — if headphones are not present, I add a [hip~ 1500] object to the synth to filter out the fundamental. Since the iPhone speakers are tinny anyway, this does the trick without making the synth sound too faint.

    posted in technical issues read more
  • chmod

    Hi everyone,

    I've run into an interesting problem with an iPhone rhythm game project I'm working on:

    Basically the game has a scrolling score of music that you can play with your instrument at the same time. My patch detects pitches in real-time and then marks notes on the screen as correct or not.

    The problem is that there is a built-in guide synth that plays each note as it scrolls (this is toggleable, however). Right now the guide synth's output is being picked up by the microphone and scoring notes correctly without the user providing any input.

    Obviously this issue goes away once the user wears headphones, but when audio is playing on the phone's speaker at anywhere above 50% volume, notes get triggered automatically no matter what. I have tried several techniques (a noise gate, some filters, the minpower settings on [helmholtz~] and [sigmund~]) that reduce the problem but don't eliminate it.

    That being said, I'm not hoping to eliminate this problem completely (that's most likely not possible), but I was wondering if anyone had suggestions for techniques that could differentiate a real-life instrument from this built-in synth playing on the iPhone speaker, when both are playing the same note?

    Thank you :)

    posted in technical issues read more
  • chmod

    @weightless Hi again —

    I was trying to look at the envelope of the signal using [env~] and subtracting a delayed copy of its output from the current value to find where the envelope rises and falls. This seems to work pretty well for most of the instrument samples I was working with, but I ran into a problem with a flute recording that has a pretty heavy vibrato. Each "vibration" of the vibrato was detected as its own attack, since that's where the waveform peaks as well.
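
    For reference, this is roughly the shape of the patch I described — an untested sketch, where the window size, delay time, and threshold all need tuning per instrument:

    [adc~]
    |
    [env~ 4096]
    |
    [t f f]
    |     \
    |     [pipe 50]
    |     /
    [- ]
    |
    [> 6]
    |
    [change]
    |
    [sel 1]
    |
    [print attack]

    (The [pipe] holds the delayed envelope value in [- ]'s right inlet, so each new [env~] reading is compared against the reading from 50 ms earlier.)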

    Your suggestion seems like it could work — I have all of my audio files in an Ableton project right now; is it alright if I send it to you that way (with all the samples in the project folder, that is)?

    posted in technical issues read more
  • chmod

    @weightless

    I tried using that method today and found that if the notes are "tongued" and there is still a continuous pitch between every articulation, the -1500 does not appear when needed and the individual attacks are not picked up. This was especially apparent when I tested with my voice (saying "laaalaaalaaalaaa", for example) — there is a continuous pitch being picked up by [sigmund~], but I am articulating the note four times.

    I've been trying to look at sudden jumps in the envelope instead, but I need something that works "one size fits all" across different kinds of instruments and different volume levels as well, so it's been pretty tricky.

    posted in technical issues read more
  • chmod

    @weightless that's a great idea — I hadn't thought of the fact that [sigmund~] outputs -1500 during silences in pitch mode.

    I knew I had to use a combination of objects in some way, thanks a lot!
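
    Something like this is what I'm picturing — an untested sketch, with -1400 as an arbitrary threshold sitting between -1500 and any real pitch:

    [adc~]
    |
    [sigmund~ pitch]
    |
    [> -1400]
    |
    [change]
    |
    [sel 1]
    |
    [print note-start]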

    posted in technical issues read more
  • chmod

    Hi everyone,

    I'm faced with a challenging problem involving rhythm detection for brass and wind instruments.

    If a saxophone (or similar instrument) plays four quarter notes of the same pitch continuously (let's say C), what do you think would be the best way to detect the start of each note — specifically when the notes are played smoothly, with no clear separation between them?

    When the notes are clearly articulated (with a silence between them) I have been using [sigmund~]'s notes output to detect the attacks, but when the notes form a continuous stream with very subtle attacks, this does not work so well.

    I have also experimented with [bonk~], but I can't seem to find parameter settings that keep it from giving "too much" output and detecting attacks that aren't there or come from other sound sources.

    Any tips would be greatly appreciated, thanks!

    posted in technical issues read more
  • chmod

    Never mind — a bit more searching and I found the proper way to do it, from this part of the documentation:

    https://github.com/libpd/libpd/wiki/Adding-Pure-Data-external-libraries-to-your-project

    The big difference is that you have to forward-declare the setup function a bit differently for C++ — you have to do it like this:

    extern "C" {
        void helmholtz_tilde_setup();
    }
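
    One more note for anyone who finds this later: the setup function also has to be called after libpd is initialized and before the patch that uses the external is opened — a minimal sketch:

    // call once after libpd is initialized,
    // before opening any patch that uses [helmholtz~]:
    helmholtz_tilde_setup();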
    

    posted in libpd / webpd read more
  • chmod

    @RonHerrema

    Hi there, did you ever end up getting this to work? I'm trying to integrate helmholtz~ into a libpd project as well, and I'm having trouble either building it as part of the libpd library (it doesn't like .cpp files in the project) or including it in my Objective-C++ app.

    posted in libpd / webpd read more
  • chmod

    I just compiled helmholtz~ this evening. Really nice! Do you know if it does attack detection though?

    posted in technical issues read more
  • chmod

    Hey everyone~

    I'm working on a project right now that's essentially a rhythm-game (think Guitar Hero or Dance Dance Revolution) style app. It needs to listen to input from an instrument playing notes on a scrolling score of music.

    For the pitch tracking I only need to track monophonic voices, so that isn't a big issue and there are a lot of options — however, I'm having a bit of trouble with the rhythm tracking of each note. I need to detect precise attack times for each note, at sixteenth-note accuracy at the very least, hopefully better.

    So far I've been using [sigmund~]'s note mode to get the pitch and attack at around the same time, but there seems to be a bit of a delay at any combination of npts and hop sizes. I've also tried [fiddle~] for the attack detection, with [sigmund~] taking care of the pitches.

    [bonk~] seems interesting for responsive attack times, but there's the possibility of it picking up non-pitch noises that aren't related to the instrument.

    I think it's just going to take a lot more experimentation on my end, but would anyone have any advice on a good strategy for accurately tracking the articulation and pitch of notes in real-time?

    posted in technical issues read more
  • chmod

    I like the weighted random idea —

    I think I'm going to start by implementing the bare-bones core structure of a piece, starting with the root note and mode, and then generating a chord structure based on this mode. The idea of Schenkerian analysis is to start by analyzing the larger chord structure of a piece (to find the overarching I-V-I motion) and then move inwards, layer by layer of complexity, until the whole piece is analyzed down to the individual notes. I'm going to approach generation in the same way — when I get to the point where I'm generating the smaller chord progressions, I will probably use a weighted random scheme of commonly used motives.
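
    For that last part, I'm picturing the classic [random]/[moses] cascade for weighted choice — an untested sketch with made-up weights (60% / 25% / 15%) and placeholder motive messages:

    [bang(
    |
    [random 100]
    |
    [moses 60]
    |          \
    [motive-1(  [moses 85]
                |          \
                [motive-2(  [motive-3(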

    posted in technical issues read more
  • chmod

    Hi there—

    I've started research for an interesting project that requires real-time generation of simple music for infants and children. I'm reading up on different implementations of Schenkerian-style music generation — or rather, Schenkerian analysis done in reverse to create original musical pieces. It seems that Pure Data would be an easy way to accomplish this goal — would anyone be able to give me some advice on where to begin, or point me to other Pure Data projects that deal with the same topic? I'd like to explore as many different examples as possible before making my own attempt.

    Thanks!

    -Chris

    posted in technical issues read more
  • chmod

    Hi there,

    I'm trying to find out if there's any way to send a bang when a signal changes in value.

    For input, I have a signal that can range from 0 to 20. I would like bangs sent from separate outlets whenever the signal's integer value changes to 0, 1, 2, 3, and so on. Is there any way to do this?
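
    To make it concrete, here's the kind of thing I'm imagining — an untested sketch (the [metro] rate is a guess; a faster rate gives finer timing, and the [select] list would extend up to 20):

    [inlet~]   [metro 10]
    |          /
    [snapshot~]
    |
    [int]
    |
    [change]
    |
    [select 0 1 2 3]

    Each outlet of [select] then bangs for its value.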

    posted in technical issues read more
  • chmod

    Hi there,

    I'm making a synth that uses LFOs to modulate the pitch of some oscillators. I'm using this to determine the output frequency of the desired pitch bend:

    [inlet~]      [inlet~]
    |             |
    |   [sig~ 2]  |
    |      \      |
    |      [pow~]
    |      /
    [*~]
    |
    [outlet~]

    The left inlet takes a frequency like 440 Hz or something, and the right inlet determines the pitch bend in octaves (a value of 1 in the right inlet will pitch the frequency in the left up an octave).

    Is there a more efficient way to do this? I'm running into some CPU issues with the [pow~] object (I'm running libpd on mobile) and I'm trying to figure out the fastest way to do pitch calculations. Thanks in advance.
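
    One idea I've been meaning to try, since 2^x = e^(x·ln 2): replace [pow~] with a scale by ln(2) ≈ 0.69314718 into [exp~] — an untested sketch, and whether it's actually faster on a given device is something I'd have to profile:

    [inlet~]       [inlet~]
    |              |
    |      [*~ 0.69314718]
    |              |
    |           [exp~]
    |           /
    [*~]
    |
    [outlet~]

    This also drops the [sig~ 2], since the base is folded into the multiply.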

    posted in technical issues read more
  • chmod

    Hi Everyone,

    I'm trying to recreate all the filter types in Logic's ES2 synthesizer.

    So essentially, I want to build:

    Filter 1:

    • a resonant lowpass
    • resonant highpass
    • 'peak' filter
    • band reject filter
    • band pass filter

    Filter 2:

    • 12 dB resonant lowpass
    • 18 dB resonant lowpass
    • 24 dB resonant lowpass

    with an optional "fat" setting that restores low frequencies when the resonance is turned up.

    I'm very much a noob when it comes to low-level filter design. Are there any good tutorials, documents, or places to start for a project like this?
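
    For what it's worth, the only ready-made vanilla objects I've spotted so far are [vcf~] (a resonant bandpass) and [bob~] (a Moog-ladder-style resonant lowpass in Pd's extra folder) — e.g. a bare-bones sketch of the Filter 2 case, if I have the inlets right:

    [inlet~]   [sig~ 800]   [sig~ 3]
    |          |            |
    [bob~]                  <- signal in, cutoff (Hz), resonance
    |
    [outlet~]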

    Thanks,

    -chmod

    posted in technical issues read more