I have a query regarding iOS libpd (Audio Unit).

The libpd sample code uses the following code to hook into the Audio Unit render callback:

static OSStatus AudioRenderCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData) {
    PdAudioUnit *pdAudioUnit = (PdAudioUnit *)inRefCon;
    Float32 *auBuffer = (Float32 *)ioData->mBuffers[0].mData;
    if (pdAudioUnit->inputEnabled_) {
        AudioUnitRender(pdAudioUnit->audioUnit_, ioActionFlags, inTimeStamp, kInputElement, inNumberFrames, ioData);
    }
    // A faster way of computing (inNumberFrames / blockSize), i.e. inNumberFrames / 64
    int ticks = inNumberFrames >> pdAudioUnit->blockSizeAsLog_;
    [PdBase processFloatWithInputBuffer:auBuffer outputBuffer:auBuffer ticks:ticks];
    return noErr;
}

So this code depends on inNumberFrames being a multiple of 64 (Pd's fixed DSP block size), so that it yields a whole number of ticks for Pd. However, my understanding of the Audio Unit framework is that the buffer size is not deterministic: the callback could potentially execute with any number of frames. It is therefore the responsibility of any custom Audio Unit code to explicitly handle arbitrary buffer sizes. With the shift above, any remainder frames (inNumberFrames mod 64) are simply never processed.

An example scenario is where sample rate conversion is applied: connecting a Bluetooth audio device, or simply plugging headphones into an iPhone 6s, triggers SRC and produces bad audio. The problem is reproducible in the libpd sample projects (PDTest01 etc.).

So I cannot understand how this can ever work reliably, yet libpd appears to be used in shipping apps, which leaves me confused.

Any help or perspective on this issue would be greatly appreciated!