
[OsX/CoreAudio] How to handle removed (actively streaming) devices? #194

the-drunk-coder opened this issue Mar 27, 2019 · 10 comments

the-drunk-coder commented Mar 27, 2019

Hi,

I ran into the following problem while trying to make my application safer to use.

When my application is streaming on a USB device and I remove the device, the following happens (or doesn't happen):

1.) There's no exception or feedback whatsoever; on the console I can only see the following:

2019-03-27 09:38:47.258247+0100 SfearSpatializer[1525:23633] [AudioHAL_Client] HALC_ProxyIOContext.cpp:1399:IOWorkLoop:  HALC_ProxyIOContext::IOWorkLoop: failed to send the final message to the server, Error: 0x10000003
2019-03-27 09:38:47.259504+0100 SfearSpatializer[1525:23689] [AudioHAL_Client] HALC_ProxyIOContext.cpp:958:IOWorkLoop:  HALC_ProxyIOContext::IOWorkLoop: the server failed to start, Error: 0x6E6F7065

But I'd like to give the user some feedback that the application stopped because the device has been removed. Is there a way to retrieve that information?

2.) When trying to restart audio processing on a different device, the application freezes while stopping the old stream. isStreamRunning() still returns true, but the freeze happens when calling stopStream(). The only option then is to force-quit the application.

In the case of a USB device this might be fine: hey, if somebody removes the device while the application is running, what do you expect, right? But it turns out that certain Apple models, like the recent Mac mini, show your headphones as a separate audio device once you plug them in, and remove that device when you unplug them. An application crashing because you unplug your headphones is significantly less expected.

So, is there a way to handle this more gracefully?
Ideally I'd like to give the user feedback that the device got unplugged and that they might want to select a new one. Also, of course, not freeze when switching devices.

Any hints?


garyscavone commented Mar 27, 2019 via email


the-drunk-coder commented Mar 27, 2019

The listener itself seems to be pretty straightforward; see this post:

https://stackoverflow.com/questions/23350779/how-to-detect-when-an-audio-device-is-disconnected-in-coreaudio

I tried it, and the disconnection is actually detected.

Still trying to figure out what to do when it's detected ...


the-drunk-coder commented Mar 27, 2019

Using the following as a callback on disconnect at least allows restarting with another device without freezing.
I think the freezing stems from waiting on some condition variable, but I'm not sure yet.

There might be a more elegant solution, though. Also, this doesn't give the user any feedback.

// Invoked when kAudioDevicePropertyDeviceIsAlive changes, i.e. the device
// the stream is running on has been removed.
static OSStatus disconnectCallback( AudioObjectID inObjectID,
                                    UInt32 inNumberAddresses,
                                    const AudioObjectPropertyAddress inAddresses[],
                                    void* apiPointer )
{
  RtApiCore* api_ = (RtApiCore*) apiPointer;

  api_->closeStream();

  return kAudioHardwareUnspecifiedError;
}

Register it at the end of RtApiCore::probeDeviceOpen(...):

// add listener for detecting when a device is removed
// (here 'property' and 'id' refer to the locals used earlier in probeDeviceOpen)
property.mSelector = kAudioDevicePropertyDeviceIsAlive;
property.mScope = kAudioObjectPropertyScopeGlobal;

AudioObjectAddPropertyListener( id, &property, disconnectCallback, (void*) this );
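
One follow-up thought (an assumption on my side, not something the snippet above covers): the listener should probably be unregistered again when the stream is closed, so it cannot fire on a stale RtApiCore pointer later. A minimal sketch, assuming the device id and a matching property address are available in RtApiCore::closeStream():

// Sketch only: unregister the disconnect listener when the stream is closed,
// mirroring the registration done in probeDeviceOpen.
AudioObjectPropertyAddress property;
property.mSelector = kAudioDevicePropertyDeviceIsAlive;
property.mScope    = kAudioObjectPropertyScopeGlobal;
property.mElement  = kAudioObjectPropertyElementMaster;

AudioObjectRemovePropertyListener( id, &property, disconnectCallback, (void*) this );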


virusys commented Feb 9, 2021

Hi there!

From what I understand, there's no way to handle audio interruption/route-change notifications with RtAudio at the moment?

I'm attempting to integrate RtAudio into a libpd-based project running on Mac Catalyst. libpd has its own audio unit controller for iOS, but on Mac it has been suggested to use RtAudio or PortAudio to handle audio rendering.

When I switch my audio device from the default system output to my hardware soundcard while the app is running, I get a bunch of errors from CoreAudio.

What's funny is that I can still hear output from libpd, but things like AVAudioPlayer and objects related to AVFoundation stop working. Here are the errors I receive in the console, as well as my audio controller code. Do you have any suggestions?

2021-02-08 19:17:30.256855-0500 Monster Musician[13766:573179] [aqme] AQMEIO_HAL.cpp:1553:IOProc: AQDefaultDevice: Abandoning I/O cycle because reconfig pending (1).
2021-02-08 19:17:30.258992-0500 Monster Musician[13766:572707]  HALC_ProxySystem::GetObjectInfo: got an error from the server, Error: 560947818 (!obj)
2021-02-08 19:17:30.259080-0500 Monster Musician[13766:572707]  HALC_ShellObject::HasProperty: there is no proxy object
2021-02-08 19:17:30.262117-0500 Monster Musician[13766:572707]  HALC_ProxySystem::GetObjectInfo: got an error from the server, Error: 560947818 (!obj)
2021-02-08 19:17:30.262177-0500 Monster Musician[13766:572707]  HALC_ShellObject::HasProperty: there is no proxy object
2021-02-08 19:17:30.262266-0500 Monster Musician[13766:572707]  AudioObjectRemovePropertyListener: no object with given ID 129
2021-02-08 19:17:30.262345-0500 Monster Musician[13766:572707]  AudioObjectRemovePropertyListener: no object with given ID 129
2021-02-08 19:17:30.262391-0500 Monster Musician[13766:572707]  AudioObjectRemovePropertyListener: no object with given ID 129
2021-02-08 19:17:30.262429-0500 Monster Musician[13766:572707]  AudioObjectRemovePropertyListener: no object with given ID 129
//
//  MMAudioController.m
//
//  Created by Christopher Niven on 2020-11-09.
//

#import "MMAudioController.h"
#include "z_libpd.h"
#import "DeviceUtils.h"

#if TARGET_OS_MACCATALYST
#include <iostream>
#include <unistd.h>
#include <stdlib.h>
#include "RtAudio.h"
#include "PdObject.h"


RtAudio audio;
pd::PdBase lpd;
PdObject pdObject;

int audioCallback(void *outputBuffer, void *inputBuffer, unsigned int nBufferFrames, double streamTime, RtAudioStreamStatus status, void *userData){
    
    // pass audio samples to/from libpd
    int ticks = nBufferFrames / 64;
    lpd.processFloat(ticks, (float *)inputBuffer, (float*)outputBuffer);
    
    return 0;
}

#endif

@interface MMAudioController ()
#if TARGET_OS_IPHONE
@property (nonatomic, retain) PdAudioController *audioController;
#endif
@end

@implementation MMAudioController

extern "C" {
    void helmholtz_tilde_setup();
    void mp3play_tilde_setup();
    void fiddle_tilde_setup();
    void fmodf_setup();
    void seq_setup();
    void triangle_tilde_setup();
    void midiflush_setup();
    void midiparse_setup();
}

#if TARGET_OS_IPHONE
@synthesize audioController = audioController_;
#endif
@synthesize dispatcher;

+ (MMAudioController *)sharedInstance {
    static MMAudioController *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[self alloc] init];
    });
    
    return sharedInstance;
}

- (void)initRtAudio {
    //setup input
    RtAudio::StreamParameters inParameters;
    inParameters.deviceId = audio.getDefaultInputDevice();
    RtAudio::DeviceInfo deviceInfo = audio.getDeviceInfo(inParameters.deviceId);
    

    inParameters.nChannels = deviceInfo.inputChannels;
    unsigned int sampleRate = deviceInfo.preferredSampleRate;
    unsigned int bufferFrames = 128;
            
    // use the RtAudio API to connect to the default audio device
    if(audio.getDeviceCount()==0){
        std::cout << "There are no available sound devices." << std::endl;
        exit(1);
    }
    
    RtAudio::DeviceInfo devInfo;
    devInfo = audio.getDeviceInfo(inParameters.deviceId);
    
    std::cout << "input device: " << devInfo.name << std::endl;
    
    RtAudio::StreamParameters outParameters;
    outParameters.deviceId = audio.getDefaultOutputDevice();
    outParameters.nChannels = 2;
    
    devInfo = audio.getDeviceInfo(outParameters.deviceId);
    std::cout << "output device: " << devInfo.name << std::endl;
    
    RtAudio::StreamOptions options;
    options.streamName = "libpd rtaudio test";
    options.flags = RTAUDIO_SCHEDULE_REALTIME;
    if(audio.getCurrentApi() != RtAudio::MACOSX_CORE) {
        options.flags |= RTAUDIO_MINIMIZE_LATENCY; // CoreAudio doesn't seem to like this
    }
    try {
        audio.openStream( &outParameters, &inParameters, RTAUDIO_FLOAT32, sampleRate, &bufferFrames, &audioCallback, NULL, &options );
        audio.startStream();
    }
    catch(RtAudioError& e) {
        std::cerr << e.getMessage() << std::endl;
        exit(1);
    }
    
    // init pd
    if(!lpd.init(deviceInfo.inputChannels, 2 /*deviceInfo.outputChannels*/, sampleRate)) {
        std::cerr << "Could not init pd" << std::endl;
        exit(1);
    }
    // send DSP 1 message to pd
    lpd.computeAudio(true);
    [PdBase openFile:@"main.pd" path:[[NSBundle mainBundle] resourcePath]];
}

- (instancetype)init {
    if ( (self = [super init]) )
    {
        // Do the actual initialization work.
        libpd_init();

        helmholtz_tilde_setup();
        mp3play_tilde_setup();
        fiddle_tilde_setup();
        fmodf_setup();
        triangle_tilde_setup();
        seq_setup();
        midiflush_setup();
        midiparse_setup();

#if TARGET_OS_MACCATALYST

        [self initRtAudio];
#else
        
        // Override point for customization after application launch.
        self.audioController = [[PdAudioController alloc] init];
        [self.audioController configurePlaybackWithSampleRate:44100 numberChannels:2 inputEnabled:YES mixingEnabled:NO];
        [PdBase openFile:@"main.pd" path:[[NSBundle mainBundle] resourcePath]];
        [self.audioController setActive:YES];
        [self.audioController print];
        NSLog(@"iphone");
#endif

        //handle interruptions
        
        [NSNotificationCenter.defaultCenter addObserver:self
                                               selector:@selector(interruptionOccurred:)
                                                   name:AVAudioSessionInterruptionNotification
                                                 object:nil];
        [NSNotificationCenter.defaultCenter addObserver:self
                                               selector:@selector(routeChanged:)
                                                   name:AVAudioSessionRouteChangeNotification
                                                 object:nil];
        
        dispatcher = [[PdDispatcher alloc] init];
        [PdBase setDelegate:dispatcher];
        
    }
    return self;
}

//TODO: adapt for RtAudio?
- (BOOL)isHeadsetPluggedIn {
    AVAudioSessionRouteDescription* route = [[AVAudioSession sharedInstance] currentRoute];
    for (AVAudioSessionPortDescription* desc in [route outputs]) {
        // make sure pd knows about adding BT compensation delay
        NSLog(@"sending %f to bluetooth_in_use", (float)([[desc portType] isEqualToString:AVAudioSessionPortBluetoothLE]
                                                         || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP]
                                                         || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP]));
        [PdBase sendFloat:(float)([[desc portType] isEqualToString:AVAudioSessionPortBluetoothLE]
                                  || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP]
                                  || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP]) toReceiver:@"bluetooth_in_use"];
        
        if ([[desc portType] isEqualToString:AVAudioSessionPortHeadphones]
            || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothLE]
            || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP]
            || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP]) {
            return YES;
        }
        
    }
    return NO;
}

- (void) interruptionOccurred:(NSNotification *)notification {
    NSLog(@"Interruption Happened!");
    [self routeChanged:notification];
}

- (void) routeChanged:(NSNotification *)notification {
    NSLog(@"Route Changed! %@", [notification userInfo]);
    
    NSNumber *headsetStatus = [NSNumber numberWithBool:[self isHeadsetPluggedIn]];
    
    [[NSNotificationCenter defaultCenter] postNotificationName:@"HeadphonesStatus" object:headsetStatus];
    [[NSUserDefaults standardUserDefaults] setObject:headsetStatus forKey:@"HeadphonesStatus"];
    [PdBase sendFloat:[headsetStatus floatValue] toReceiver:@"headphones_in_use"];
    
#if TARGET_OS_MACCATALYST
    if (audio.isStreamOpen()) {
        audio.closeStream();
        [self initRtAudio];
    }
#else
    [self.audioController setActive:NO];
    
    //    // figure out if we need to restart the audio unit at a different sampling rate
    
    if ([[[notification userInfo] objectForKey:AVAudioSessionRouteChangeReasonKey] intValue] == AVAudioSessionRouteChangeReasonNewDeviceAvailable ||
        [[[notification userInfo] objectForKey:AVAudioSessionRouteChangeReasonKey] intValue] == AVAudioSessionRouteChangeReasonOldDeviceUnavailable ||
        [[[notification userInfo] objectForKey:AVAudioSessionRouteChangeReasonKey] intValue] == AVAudioSessionRouteChangeReasonRouteConfigurationChange) {
        
        float samplingRate = [[AVAudioSession sharedInstance] preferredSampleRate];
        
        // need to check for default sampling rate here

        NSLog(@"restarting audio unit at sampling rate: %f", samplingRate);
        [self.audioController configurePlaybackWithSampleRate:samplingRate numberChannels:2 inputEnabled:YES mixingEnabled:NO];
        //        [self.pdAudio configureAmbientWithSampleRate:samplingRate numberChannels:2 mixingEnabled:YES];
    }
    [self.audioController setActive:YES];
#endif
}

- (BOOL)isSamplingRateSwitchNeeded {
    AVAudioSessionRouteDescription* route = [[AVAudioSession sharedInstance] currentRoute];
    //TODO: Inter-App Audio?
    for (AVAudioSessionPortDescription* desc in [route outputs]) {
        // make sure pd knows about adding BT compensation delay
        NSLog(@"sending %f to bluetooth_in_use", (float)([[desc portType] isEqualToString:AVAudioSessionPortBluetoothLE]
                                                         || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP]
                                                         || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP]));
        [PdBase sendFloat:(float)([[desc portType] isEqualToString:AVAudioSessionPortBluetoothLE]
                                  || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP]
                                  || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP]) toReceiver:@"bluetooth_in_use"];
        
        if ([[desc portType] isEqualToString:AVAudioSessionPortBluetoothLE]
            || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP]
            || [[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP]
            || [[desc portType] isEqualToString:AVAudioSessionPortUSBAudio]) {
            return YES;
        }
    }
    return NO;
}

@end
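
As an aside, and purely as a sketch (this is not an RtAudio feature): on Mac Catalyst you could also watch the system default output device directly through the HAL, instead of relying on AVAudioSession notifications, and rebuild the RtAudio stream when it changes. defaultDeviceChanged and reinitAudio below are made-up names; the property selector and listener call are standard CoreAudio.

#include <CoreAudio/CoreAudio.h>

// Placeholder for whatever closes and re-opens the RtAudio stream
// (roughly what initRtAudio / the MACCATALYST branch of routeChanged: does).
static void reinitAudio() { /* close old stream, reprobe devices, open new stream */ }

// Fired by CoreAudio whenever the system default output device changes.
// Note: this runs on a HAL thread, so a real implementation should hop to
// the main thread before touching application state.
static OSStatus defaultDeviceChanged( AudioObjectID objectID,
                                      UInt32 numberAddresses,
                                      const AudioObjectPropertyAddress addresses[],
                                      void *userData )
{
    reinitAudio();
    return noErr;
}

// Registration, e.g. once at the end of initRtAudio:
static void registerDefaultDeviceListener()
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectAddPropertyListener( kAudioObjectSystemObject, &addr,
                                    defaultDeviceChanged, NULL );
}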


Be-ing commented Mar 18, 2021

I was excited when I saw this in RtAudio's README:

> support dynamic connection of devices

Apparently this is meant to be taken very literally. Dynamic connection of devices is only one third of the puzzle. The other pieces are dynamic disconnection and persistent identification of devices across disconnection and reconnection.

The API documentation says:

> Note that the device enumeration is system specific and will change if any devices are plugged or unplugged by the user. Thus, the device numbers should be verified immediately before opening a stream. As well, if a user unplugs a device while an open stream is using that device, the resulting stream behaviour will be undefined (a system error will likely be generated).

Are there any efforts to really support hotplug in RtAudio? Is anyone interested in working on this? This is crucial for live-performance software, and AFAICT no cross-platform library currently supports it: neither RtAudio, nor PortAudio, nor CPAL.
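
For context, the re-verification the documentation asks for looks roughly like the sketch below. This is my own illustration against the 5.x-style, index-based API (getDeviceCount()/getDeviceInfo()); desiredName is a placeholder for whatever device the user selected earlier.

#include "RtAudio.h"
#include <string>

// Re-resolve a device index by name immediately before opening a stream,
// since indices can shift whenever devices are plugged or unplugged.
int findDeviceByName( RtAudio &audio, const std::string &desiredName )
{
  unsigned int count = audio.getDeviceCount();
  for ( unsigned int i = 0; i < count; i++ ) {
    RtAudio::DeviceInfo info = audio.getDeviceInfo( i );
    if ( info.probed && info.name == desiredName )
      return (int) i;
  }
  return -1; // not found: device was unplugged, ask the user to pick another one
}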


garyscavone commented Mar 18, 2021 via email


Be-ing commented Mar 18, 2021

> The main issue is whether there is a way in each API to detect such a disconnect.

There may be limitations in what information the different APIs provide that prevent a full hotplug implementation for all of them, but it would nevertheless be good to do the best that is possible.

> The persistent identification of devices is a harder problem and no clear solution has been proposed. As it stands, RtAudio gives the user enough information to keep track of devices through periodic checks of what is available (the user would need to manage her own list and keep it periodically updated). The one issue that has come up in the past is that two identical devices of the same make / model may not always be well distinguished by the underlying API but that is probably a relatively rare issue.

Yeah, handling multiple identical devices is a challenging edge case. A few example use cases would be two Behringer U-Phono UFO 202s for timecode vinyl control, or two CDJs. Most likely only one device would be unplugged at a time, so there would be no ambiguity when it is replugged. However, a realistic edge case could be plugging two identical devices into one USB hub and the hub getting unplugged and replugged. The consequence of mixing up the devices would not be great, but IMO it is probably better than requiring the user to restart or reconfigure the application. In this context, with Mixxx, the software decks would swap which hardware mixer channels they go to, which would be odd and confusing, but the show could go on.

Audio APIs generally abstract over multiple physical interfaces (USB, Thunderbolt, Bluetooth), so they cannot provide a uniform means to access unique identifying information of the device such as USB vendor ID, product ID, serial number, or USB port. Depending on the internal architecture of the OS it could be possible for the API to track that information internally to create unique identifiers but AFAIK none of the OS APIs do this. Please correct me if I'm wrong about that because it would be very useful if they do.

> For me, I don’t have much time and the situation with my work is looking to make that even more problematic.

I probably will not invest much time in this for RtAudio either. Right now I am investigating different options for audio I/O in Mixxx in the future. We have been using PortAudio for many years. Unfortunately both the PortAudio and RtAudio APIs have major limitations when working with JACK which would also apply to PipeWire. Specifically, PortAudio and RtAudio both couple stream creation with usage. This is contrary to what is required for JACK or PipeWire routing applications and session managers to work. Those need the application to create all its ports regardless of whether they get connected to any other application or hardware port. Overcoming this would require a major breaking change to the API. Without this, the user experience is not great. CPAL's API does not have this problem so I think I'll be investing my efforts in that library. However if hotplug is implemented in RtAudio, I would reconsider if it might be worth the effort to switch Mixxx to RtAudio.

@FlashHit

Anything new about this?

@garyscavone

I put a lot of work into the newdeviceselection branch in late December / early January. I think it is basically done for all APIs. As part of that, I tried to deal with device removal issues, though it only seemed possible in some of the APIs (this would need to be verified). In OS-X, removal should be detected and the stream should automatically stop if the device in use is removed mid-stream. Also, almost none of the APIs provide a way (through a notification protocol) to know when new devices are added, so it is still necessary to reprobe the system from time to time to detect new devices.
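
For anyone who needs that reprobing right now, a polling loop along these lines is probably the simplest stopgap. This is only a sketch against the 5.x-style index-based enumeration (the ID-based calls on the new branch would be analogous), and in a real application the probing should be serialized with stream open/close operations.

#include "RtAudio.h"
#include <chrono>
#include <iostream>
#include <set>
#include <string>
#include <thread>

// Periodically re-enumerate devices and report additions/removals by name.
void watchDevices( RtAudio &audio )
{
  std::set<std::string> known;
  while ( true ) {
    std::set<std::string> current;
    unsigned int count = audio.getDeviceCount();
    for ( unsigned int i = 0; i < count; i++ )
      current.insert( audio.getDeviceInfo( i ).name );

    for ( const std::string &name : current )
      if ( known.count( name ) == 0 )
        std::cout << "device added: " << name << std::endl;
    for ( const std::string &name : known )
      if ( current.count( name ) == 0 )
        std::cout << "device removed: " << name << std::endl;

    known = current;
    std::this_thread::sleep_for( std::chrono::seconds( 2 ) );
  }
}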

@garyscavone

The new device selection updates have been merged into master now. I'll leave this issue open in case someone wants to dig more deeply to see what may be done. For OS-X, RtAudio does detect if a device in use is removed and closes the stream in that case. That is all that can be done, as far as I can tell. It also works for JACK, but I don't think I was able to manage it for the other APIs.
