AEAudioController Class Reference

Main controller class. More...

#import <AEAudioController.h>

Inherits NSObject.

Instance Methods

(NSTimeInterval) - AEAudioControllerInputLatency
 Input latency (in seconds)
 
(NSTimeInterval) - AEAudioControllerOutputLatency
 Output latency (in seconds)
 
(AudioTimeStamp) - AEAudioControllerCurrentAudioTimestamp
 Get the current audio system timestamp.
 
(void) - setAudiobusSenderPort:forChannel:
 Set an Audiobus sender port to send audio from a particular channel.
 
(void) - setAudiobusSenderPort:forChannelGroup:
 Set an Audiobus sender port to send audio from a particular channel group.
 
(ABFilterPort *) - audiobusFilterPort
 Audiobus filter port. (Deprecated)
 
(ABSenderPort *) - audiobusSenderPort
 Audiobus sender port. (Deprecated)
 
Channel and channel group management
(void) - addChannels:
 Add channels.
 
(void) - addChannels:toChannelGroup:
 Add channels to a channel group.
 
(void) - removeChannels:
 Remove channels.
 
(void) - removeChannels:fromChannelGroup:
 Remove channels from a channel group.
 
(NSArray *) - channels
 Obtain a list of all channels, across all channel groups.
 
(NSArray *) - channelsInChannelGroup:
 Get a list of channels within a channel group.
 
(AEChannelGroupRef) - createChannelGroup
 Create a channel group.
 
(AEChannelGroupRef) - createChannelGroupWithinChannelGroup:
 Create a channel sub-group within an existing channel group.
 
(void) - removeChannelGroup:
 Remove a channel group.
 
(NSArray *) - topLevelChannelGroups
 Get a list of top-level channel groups.
 
(NSArray *) - channelGroupsInChannelGroup:
 Get a list of sub-groups contained within a group.
 
(void) - setVolume:forChannelGroup:
 Set the volume level of a channel group.
 
(float) - volumeForChannelGroup:
 Get the volume level of a channel group.
 
(void) - setPan:forChannelGroup:
 Set the pan of a channel group.
 
(float) - panForChannelGroup:
 Get the pan of a channel group.
 
(void) - setPlaying:forChannelGroup:
 Set the playing status of a channel group.
 
(BOOL) - channelGroupIsPlaying:
 Get the playing status of a channel group.
 
(void) - setMuted:forChannelGroup:
 Set the mute status of a channel group.
 
(BOOL) - channelGroupIsMuted:
 Get the mute status of a channel group.
 
Filters
(void) - addFilter:
 Add an audio filter to the system output.
 
(void) - addFilter:toChannel:
 Add an audio filter to a channel.
 
(void) - addFilter:toChannelGroup:
 Add an audio filter to a channel group.
 
(void) - addInputFilter:
 Add an audio filter to the system input.
 
(void) - addInputFilter:forChannels:
 Add an audio filter to the system input.
 
(void) - removeFilter:
 Remove a filter from system output.
 
(void) - removeFilter:fromChannel:
 Remove a filter from a channel.
 
(void) - removeFilter:fromChannelGroup:
 Remove a filter from a channel group.
 
(void) - removeInputFilter:
 Remove a filter from system input.
 
(NSArray *) - filters
 Get a list of all top-level output filters.
 
(NSArray *) - filtersForChannel:
 Get a list of all filters currently operating on the channel.
 
(NSArray *) - filtersForChannelGroup:
 Get a list of all filters currently operating on the channel group.
 
(NSArray *) - inputFilters
 Get a list of all input filters.
 
Output receivers
(void) - addOutputReceiver:
 Add an output receiver.
 
(void) - addOutputReceiver:forChannel:
 Add an output receiver.
 
(void) - addOutputReceiver:forChannelGroup:
 Add an output receiver for a particular channel group.
 
(void) - removeOutputReceiver:
 Remove an output receiver.
 
(void) - removeOutputReceiver:fromChannel:
 Remove an output receiver from a channel.
 
(void) - removeOutputReceiver:fromChannelGroup:
 Remove an output receiver from a particular channel group.
 
(NSArray *) - outputReceivers
 Obtain a list of all top-level output receivers.
 
(NSArray *) - outputReceiversForChannel:
 Obtain a list of all output receivers for the specified channel.
 
(NSArray *) - outputReceiversForChannelGroup:
 Obtain a list of all output receivers for the specified group.
 
Input receivers
(void) - addInputReceiver:
 Add an input receiver.
 
(void) - addInputReceiver:forChannels:
 Add an input receiver, specifying a channel selection.
 
(void) - removeInputReceiver:
 Remove an input receiver.
 
(void) - removeInputReceiver:fromChannels:
 Remove an input receiver.
 
(NSArray *) - inputReceivers
 Obtain a list of all input receivers.
 
Timing receivers
(void) - addTimingReceiver:
 Add a timing receiver.
 
(void) - removeTimingReceiver:
 Remove a timing receiver.
 
(NSArray *) - timingReceivers
 Obtain a list of all timing receivers.
 
Metering
(void) - outputAveragePowerLevel:peakHoldLevel:
 Get output power level information since this method was last called.
 
(void) - outputAveragePowerLevels:peakHoldLevels:channelCount:
 Get output power level information for multiple channels since this method was last called.
 
(void) - averagePowerLevel:peakHoldLevel:forGroup:
 Get output power level information for a particular group, since this method was last called.
 
(void) - averagePowerLevels:peakHoldLevels:forGroup:channelCount:
 Get output power level information for a particular group, since this method was last called.
 
(void) - inputAveragePowerLevel:peakHoldLevel:
 Get input power level information since this method was last called.
 
(void) - inputAveragePowerLevels:peakHoldLevels:channelCount:
 Get input power level information for multiple channels since this method was last called.
 
Utilities
(AudioStreamBasicDescription *) - AEAudioControllerAudioDescription
 Get access to the configured AudioStreamBasicDescription.
 
(AudioStreamBasicDescription *) - AEAudioControllerInputAudioDescription
 Get access to the input AudioStreamBasicDescription.
 
(long) - AEConvertSecondsToFrames
 Convert a time span in seconds into a number of frames at the current sample rate.
 
(NSTimeInterval) - AEConvertFramesToSeconds
 Convert a number of frames into a time span in seconds.
 
(BOOL) - AECurrentThreadIsAudioThread
 Determine if the current thread is the audio thread.
 

Properties

NSString * audioSessionCategory
 Audio session category to use.
 
BOOL allowMixingWithOtherApps
 Whether to allow mixing audio with other apps.
 
BOOL useMeasurementMode
 Whether to use the "Measurement" Audio Session Mode for improved audio quality and bass response.
 
BOOL avoidMeasurementModeForBuiltInSpeaker
 Whether to avoid using Measurement Mode with the built-in speaker.
 
BOOL boostBuiltInMicGainInMeasurementMode
 Whether to boost the input volume while using Measurement Mode with the built-in mic.
 
BOOL muteOutput
 Mute output.
 
float masterOutputVolume
 Access the master output volume.
 
BOOL enableBluetoothInput
 Enable audio input from Bluetooth devices.
 
BOOL inputGainAvailable
 Determine whether input gain is available.
 
float inputGain
 Set audio input gain (if input gain is available)
 
BOOL voiceProcessingEnabled
 Whether to use the built-in voice processing system.
 
BOOL voiceProcessingOnlyForSpeakerAndMicrophone
 Whether to only perform voice processing for the SpeakerAndMicrophone route.
 
AEInputMode inputMode
 Input mode: How to handle incoming audio.
 
NSArray * inputChannelSelection
 Input channel selection.
 
NSTimeInterval preferredBufferDuration
 Preferred buffer duration (in seconds)
 
NSTimeInterval currentBufferDuration
 Current buffer duration (in seconds)
 
NSTimeInterval inputLatency
 Input latency (in seconds)
 
NSTimeInterval outputLatency
 Output latency (in seconds)
 
BOOL automaticLatencyManagement
 Whether to automatically account for input/output latency.
 
BOOL running
 Determine whether the audio engine is running.
 
BOOL playingThroughDeviceSpeaker
 Determine whether audio is currently being played through the device's speaker.
 
BOOL recordingThroughDeviceMicrophone
 Determine whether audio is currently being recorded through the device's mic.
 
BOOL audioInputAvailable
 Whether audio input is currently available.
 
BOOL inputEnabled
 Whether audio input is currently enabled.
 
BOOL outputEnabled
 Whether audio output is currently enabled.
 
int numberOfInputChannels
 The number of audio channels that the current audio input device provides.
 
AudioStreamBasicDescription inputAudioDescription
 The audio description defining the input audio format.
 
AudioStreamBasicDescription audioDescription
 The audio description that the audio controller was setup with.
 
AudioUnit audioUnit
 The Remote IO audio unit used for input and output.
 
AUGraph audioGraph
 The audio graph handle.
 
ABReceiverPort * audiobusReceiverPort
 Audiobus receiver port.
 

Setup and start/stop

(AudioStreamBasicDescription) + interleaved16BitStereoAudioDescription
 16-bit stereo audio description, interleaved
 
(AudioStreamBasicDescription) + nonInterleaved16BitStereoAudioDescription
 16-bit stereo audio description, non-interleaved
 
(AudioStreamBasicDescription) + nonInterleavedFloatStereoAudioDescription
 Floating-point stereo audio description, non-interleaved.
 
(BOOL) + voiceProcessingAvailable
 Determine whether voice processing is available on this device.
 
(id) - initWithAudioDescription:
 Initialize the audio controller system, with the audio description you provide.
 
(id) - initWithAudioDescription:inputEnabled:
 Initialize the audio controller system, with the audio description you provide.
 
(id) - initWithAudioDescription:options:
 Initialize the audio controller system, with the audio description you provide.
 
(id) - initWithAudioDescription:inputEnabled:useVoiceProcessing:
 Initialize the audio controller system, with the audio description you provide.
 
(id) - initWithAudioDescription:inputEnabled:useVoiceProcessing:outputEnabled:
 Initialize the audio controller system, with the audio description you provide.
 
(BOOL) - start:
 Start audio engine.
 
(void) - stop
 Stop audio engine.
 
(BOOL) - setAudioDescription:error:
 Set a new audio description.
 
(BOOL) - setInputEnabled:error:
 Enable or disable input.
 
(BOOL) - setOutputEnabled:error:
 Enable or disable output.
 
(BOOL) - setAudioDescription:inputEnabled:outputEnabled:error:
 Composite update method.
 

Realtime/Main thread messaging system

AEMessageQueue * messageQueue
 The asynchronous message queue used for safe communication between main and realtime thread.
 
(void) - performAsynchronousMessageExchangeWithBlock:responseBlock:
 Send a message to the realtime thread asynchronously, if running, optionally receiving a response via a block.
 
(BOOL) - performSynchronousMessageExchangeWithBlock:
 Send a message to the realtime thread synchronously, if running.
 
(void) - AEAudioControllerSendAsynchronousMessageToMainThread
 Send a message to the main thread asynchronously.
 
(void) - beginMessageExchangeBlock
 Begins a block of messages to be performed consecutively.
 
(void) - endMessageExchangeBlock
 Ends a consecutive block of messages.
 

Detailed Description

Main controller class.

Use:

  1. Initialise (see initWithAudioDescription:), with the desired audio format.
  2. Set required parameters.
  3. Add channels, input receivers, output receivers, timing receivers and filters, as required. Note that all these can be added/removed during operation as well.
  4. Call start: to begin processing audio.
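
For example, a minimal setup might look like the following. This is a sketch only: "player" stands in for any object conforming to AEAudioPlayable that you create elsewhere in your app.

// Create the controller with a standard non-interleaved float stereo format, input enabled.
AEAudioController *audioController = [[AEAudioController alloc]
    initWithAudioDescription:AEAudioStreamBasicDescriptionNonInterleavedFloatStereo
                inputEnabled:YES];

audioController.preferredBufferDuration = 0.005;  // optional: trade CPU overhead for lower latency

// Add channels, receivers and filters as required.
[audioController addChannels:@[player]];

// Begin processing audio.
NSError *error = nil;
if ( ![audioController start:&error] ) {
    NSLog(@"Couldn't start audio engine: %@", error);
}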

Method Documentation

+ (AudioStreamBasicDescription) interleaved16BitStereoAudioDescription

16-bit stereo audio description, interleaved

Deprecated:
Use AEAudioStreamBasicDescriptionInterleaved16BitStereo instead.
+ (AudioStreamBasicDescription) nonInterleaved16BitStereoAudioDescription

16-bit stereo audio description, non-interleaved

Deprecated:
Use AEAudioStreamBasicDescriptionNonInterleaved16BitStereo instead.
+ (AudioStreamBasicDescription) nonInterleavedFloatStereoAudioDescription

Floating-point stereo audio description, non-interleaved.

Deprecated:
Use AEAudioStreamBasicDescriptionNonInterleavedFloatStereo instead.
+ (BOOL) voiceProcessingAvailable

Determine whether voice processing is available on this device.

Older devices are not able to perform voice processing - this determines whether it's available. See voiceProcessingEnabled for info.

- (id) initWithAudioDescription: (AudioStreamBasicDescription)  audioDescription

Initialize the audio controller system, with the audio description you provide.

Creates and configures the audio unit and initial mixer audio unit.

This initialises the audio system without input (from microphone, etc) enabled. If you desire audio input, use initWithAudioDescription:inputEnabled:useVoiceProcessing:.

Parameters
audioDescription: Audio description to use for all audio
- (id) initWithAudioDescription: (AudioStreamBasicDescription)  audioDescription
inputEnabled: (BOOL)  enableInput 

Initialize the audio controller system, with the audio description you provide.

Creates and configures the input/output audio unit and initial mixer audio unit.

Parameters
audioDescription: Audio description to use for all audio
enableInput: Whether to enable audio input from the microphone or another input device
- (id) initWithAudioDescription: (AudioStreamBasicDescription)  audioDescription
options: (AEAudioControllerOptions)  options 

Initialize the audio controller system, with the audio description you provide.

Creates and configures the audio unit and initial mixer audio unit.

Parameters
audioDescription: Audio description to use for all audio
options: Options to enable input, voice processing, etc. (See AEAudioControllerOptions).
- (id) initWithAudioDescription: (AudioStreamBasicDescription)  audioDescription
inputEnabled: (BOOL)  enableInput
useVoiceProcessing: ("use initWithAudioDescription:options: instead")  __deprecated_msg 

Initialize the audio controller system, with the audio description you provide.

Creates and configures the input/output audio unit and initial mixer audio unit.

Parameters
audioDescription: Audio description to use for all audio
enableInput: Whether to enable audio input from the microphone or another input device
useVoiceProcessing: Whether to use the voice processing unit (see voiceProcessingEnabled and voiceProcessingAvailable).
Deprecated:
Use initWithAudioDescription:options: instead
- (id) initWithAudioDescription: (AudioStreamBasicDescription)  audioDescription
inputEnabled: (BOOL)  enableInput
useVoiceProcessing: (BOOL)  useVoiceProcessing
outputEnabled: ("use initWithAudioDescription:options: instead")  __deprecated_msg 

Initialize the audio controller system, with the audio description you provide.

Creates and configures the input/output audio unit and initial mixer audio unit.

Parameters
audioDescription: Audio description to use for all audio
enableInput: Whether to enable audio input from the microphone or another input device
useVoiceProcessing: Whether to use the voice processing unit (see voiceProcessingEnabled and voiceProcessingAvailable).
enableOutput: Whether to enable audio output. Sometimes when recording from external input-only devices at high sample rates (96k) you may need to disable output for the sample rate to be actually used.
Deprecated:
Use initWithAudioDescription:options: instead
- (BOOL) start: (NSError **)  error

Start audio engine.

Parameters
error: On output, if not NULL, the error
Returns
YES on success, NO on failure
- (void) stop

Stop audio engine.

- (BOOL) setAudioDescription: (AudioStreamBasicDescription)  audioDescription
error: (NSError **)  error 

Set a new audio description.

This will cause the audio controller to stop, teardown and recreate its rendering resources, then start again (if it was previously running).

Parameters
audioDescription: The new audio description
error: On output, the error, if one occurred
Returns
YES on success, NO on failure
- (BOOL) setInputEnabled: (BOOL)  inputEnabled
error: (NSError **)  error 

Enable or disable input.

This will cause the audio controller to stop, teardown and recreate its rendering resources, then start again (if it was previously running).

Parameters
inputEnabled: Whether to enable input
error: On output, the error, if one occurred
Returns
YES on success, NO on failure
- (BOOL) setOutputEnabled: (BOOL)  outputEnabled
error: (NSError **)  error 

Enable or disable output.

This will cause the audio controller to stop, teardown and recreate its rendering resources, then start again (if it was previously running).

Parameters
outputEnabled: Whether to enable output
error: On output, the error, if one occurred
Returns
YES on success, NO on failure
- (BOOL) setAudioDescription: (AudioStreamBasicDescription)  audioDescription
inputEnabled: (BOOL)  inputEnabled
outputEnabled: (BOOL)  outputEnabled
error: (NSError **)  error 

Composite update method.

This convenience method updates the audio description, and the input and output enabled status.

Parameters
audioDescription: The new audio description
inputEnabled: Whether to enable input
outputEnabled: Whether to enable output
error: On output, the error, if one occurred
Returns
YES on success, NO on failure
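
As a sketch, switching the engine to a 48 kHz version of its current format while keeping input and output enabled might look like this (reusing the controller's existing audioDescription):

AudioStreamBasicDescription newDescription = audioController.audioDescription;
newDescription.mSampleRate = 48000.0;

NSError *error = nil;
if ( ![audioController setAudioDescription:newDescription
                              inputEnabled:YES
                             outputEnabled:YES
                                     error:&error] ) {
    NSLog(@"Couldn't update audio format: %@", error);
}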
- (void) addChannels: (NSArray *)  channels

Add channels.

Takes an array of one or more objects that implement the AEAudioPlayable protocol.

Parameters
channels: An array of id<AEAudioPlayable> objects
- (void) addChannels: (NSArray *)  channels
toChannelGroup: (AEChannelGroupRef)  group 

Add channels to a channel group.

Parameters
channels: Array of id<AEAudioPlayable> objects
group: Group identifier
- (void) removeChannels: (NSArray *)  channels

Remove channels.

Takes an array of one or more objects that implement the AEAudioPlayable protocol.

Parameters
channels: An array of id<AEAudioPlayable> objects
- (void) removeChannels: (NSArray *)  channels
fromChannelGroup: (AEChannelGroupRef)  group 

Remove channels from a channel group.

Parameters
channels: Array of id<AEAudioPlayable> objects
group: Group identifier
- (NSArray*) channels

Obtain a list of all channels, across all channel groups.

- (NSArray*) channelsInChannelGroup: (AEChannelGroupRef)  group

Get a list of channels within a channel group.

Parameters
group: Group identifier
Returns
Array of id<AEAudioPlayable> objects contained within the group
- (AEChannelGroupRef) createChannelGroup

Create a channel group.

Channel groups cause the channels within the group to be pre-mixed together, so that one filter can be applied to several channels without the added performance impact.

You can create trees of channel groups using addChannels:toChannelGroup:, with filtering at each branch, for complex filter chaining.

Returns
An identifier for the created group
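
For example (a sketch; player1, player2 and reverbFilter are assumed to be AEAudioPlayable- and AEAudioFilter-conforming objects created elsewhere):

// Pre-mix two channels in a group so one filter and one volume setting apply to both.
AEChannelGroupRef group = [audioController createChannelGroup];
[audioController addChannels:@[player1, player2] toChannelGroup:group];
[audioController addFilter:reverbFilter toChannelGroup:group];
[audioController setVolume:0.8 forChannelGroup:group];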
- (AEChannelGroupRef) createChannelGroupWithinChannelGroup: (AEChannelGroupRef)  group

Create a channel sub-group within an existing channel group.

With this method, you can create trees of channel groups, with filtering steps at each branch of the tree.

Parameters
group: Group identifier
Returns
An identifier for the created group
- (void) removeChannelGroup: (AEChannelGroupRef)  group

Remove a channel group.

Removes channels from the group and releases associated resources.

Parameters
group: Group identifier
- (NSArray*) topLevelChannelGroups

Get a list of top-level channel groups.

Returns
Array of NSValues containing pointers (group identifiers)
- (NSArray*) channelGroupsInChannelGroup: (AEChannelGroupRef)  group

Get a list of sub-groups contained within a group.

Parameters
group: Group identifier
Returns
Array of NSNumber containing sub-group identifiers
- (void) setVolume: (float)  volume
forChannelGroup: (AEChannelGroupRef)  group 

Set the volume level of a channel group.

Parameters
volume: Group volume (0 - 1)
group: Group identifier
- (float) volumeForChannelGroup: (AEChannelGroupRef)  group

Get the volume level of a channel group.

Parameters
group: Group identifier
Returns
Group volume (0 - 1)
- (void) setPan: (float)  pan
forChannelGroup: (AEChannelGroupRef)  group 

Set the pan of a channel group.

Parameters
pan: Group pan (-1.0, left, to 1.0, right)
group: Group identifier
- (float) panForChannelGroup: (AEChannelGroupRef)  group

Get the pan of a channel group.

Parameters
group: Group identifier
Returns
Group pan (-1.0, left, to 1.0, right)
- (void) setPlaying: (BOOL)  playing
forChannelGroup: (AEChannelGroupRef)  group 

Set the playing status of a channel group.

If this is NO, then the group will be silenced and no further render callbacks will be performed on child channels until set to YES again.

Parameters
playing: Whether group is playing
group: Group identifier
- (BOOL) channelGroupIsPlaying: (AEChannelGroupRef)  group

Get the playing status of a channel group.

Parameters
group: Group identifier
Returns
Whether group is playing
- (void) setMuted: (BOOL)  muted
forChannelGroup: (AEChannelGroupRef)  group 

Set the mute status of a channel group.

If YES, group will be silenced, but render callbacks of child channels will continue to be performed.

Parameters
muted: Whether group is muted
group: Group identifier
- (BOOL) channelGroupIsMuted: (AEChannelGroupRef)  group

Get the mute status of a channel group.

Parameters
group: Group identifier
Returns
Whether group is muted
- (void) addFilter: (id< AEAudioFilter >)  filter

Add an audio filter to the system output.

Audio filters are used to process live audio before playback.

Parameters
filter: An object that implements the AEAudioFilter protocol
- (void) addFilter: (id< AEAudioFilter >)  filter
toChannel: (id< AEAudioPlayable >)  channel 

Add an audio filter to a channel.

Audio filters are used to process live audio before playback.

You can apply audio filters to one or more channels - use channel groups to do so without the extra performance overhead by pre-mixing channels together first. See createChannelGroup.

You can also apply more than one audio filter to a channel - each audio filter will be performed on the audio in the order in which the filters were added using this method.

Parameters
filter: An object that implements the AEAudioFilter protocol
channel: The channel on which to perform audio processing
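
For example (a sketch; highPassFilter and delayFilter are assumed AEAudioFilter-conforming objects, and player an AEAudioPlayable channel):

// Filters run in the order they were added: high-pass first, then delay.
[audioController addFilter:highPassFilter toChannel:player];
[audioController addFilter:delayFilter toChannel:player];

// Later, stop filtering the channel:
[audioController removeFilter:delayFilter fromChannel:player];
[audioController removeFilter:highPassFilter fromChannel:player];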
- (void) addFilter: (id< AEAudioFilter >)  filter
toChannelGroup: (AEChannelGroupRef)  group 

Add an audio filter to a channel group.

Audio filters are used to process live audio before playback.

Create and add filters to a channel group to process multiple channels with one filter, without the performance hit of processing each channel individually.

Parameters
filter: An object that implements the AEAudioFilter protocol
group: The channel group on which to perform audio processing
- (void) addInputFilter: (id< AEAudioFilter >)  filter

Add an audio filter to the system input.

Audio filters are used to process live audio.

Parameters
filter: An object that implements the AEAudioFilter protocol
- (void) addInputFilter: (id< AEAudioFilter >)  filter
forChannels: (NSArray *)  channels 

Add an audio filter to the system input.

Audio filters are used to process live audio.

Parameters
filter: An object that implements the AEAudioFilter protocol
channels: An array of NSNumbers identifying by index the input channels to filter, or nil for default (the same as addInputFilter:)
- (void) removeFilter: (id< AEAudioFilter >)  filter

Remove a filter from system output.

Parameters
filter: The filter to remove
- (void) removeFilter: (id< AEAudioFilter >)  filter
fromChannel: (id< AEAudioPlayable >)  channel 

Remove a filter from a channel.

Parameters
filter: The filter to remove
channel: The channel to stop filtering
- (void) removeFilter: (id< AEAudioFilter >)  filter
fromChannelGroup: (AEChannelGroupRef)  group 

Remove a filter from a channel group.

Parameters
filter: The filter to remove
group: The group to stop filtering
- (void) removeInputFilter: (id< AEAudioFilter >)  filter

Remove a filter from system input.

Parameters
filter: The filter to remove
- (NSArray*) filters

Get a list of all top-level output filters.

- (NSArray*) filtersForChannel: (id< AEAudioPlayable >)  channel

Get a list of all filters currently operating on the channel.

Parameters
channel: Channel to get filters for
- (NSArray*) filtersForChannelGroup: (AEChannelGroupRef)  group

Get a list of all filters currently operating on the channel group.

Parameters
group: Channel group to get filters for
- (NSArray*) inputFilters

Get a list of all input filters.

- (void) addOutputReceiver: (id< AEAudioReceiver >)  receiver

Add an output receiver.

Output receivers receive audio that is being played by the system. Use this method to add a receiver to receive audio that consists of all the playing channels mixed together.

Parameters
receiver: An object that implements the AEAudioReceiver protocol
- (void) addOutputReceiver: (id< AEAudioReceiver >)  receiver
forChannel: (id< AEAudioPlayable >)  channel 

Add an output receiver.

Output receivers receive audio that is being played by the system. Use this method to add a callback to receive audio from a particular channel.

Parameters
receiver: An object that implements the AEAudioReceiver protocol
channel: A channel
- (void) addOutputReceiver: (id< AEAudioReceiver >)  receiver
forChannelGroup: (AEChannelGroupRef)  group 

Add an output receiver for a particular channel group.

Output receivers receive audio that is being played by the system. By registering a callback for a particular channel group, you can receive the mixed audio of only that group.

Parameters
receiver: An object that implements the AEAudioReceiver protocol
group: A channel group identifier
- (void) removeOutputReceiver: (id< AEAudioReceiver >)  receiver

Remove an output receiver.

Parameters
receiver: The receiver to remove
- (void) removeOutputReceiver: (id< AEAudioReceiver >)  receiver
fromChannel: (id< AEAudioPlayable >)  channel 

Remove an output receiver from a channel.

Parameters
receiver: The receiver to remove
channel: Channel to remove receiver from
- (void) removeOutputReceiver: (id< AEAudioReceiver >)  receiver
fromChannelGroup: (AEChannelGroupRef)  group 

Remove an output receiver from a particular channel group.

Parameters
receiver: The receiver to remove
group: A channel group identifier
- (NSArray*) outputReceivers

Obtain a list of all top-level output receivers.

- (NSArray*) outputReceiversForChannel: (id< AEAudioPlayable >)  channel

Obtain a list of all output receivers for the specified channel.

Parameters
channel: A channel
- (NSArray*) outputReceiversForChannelGroup: (AEChannelGroupRef)  group

Obtain a list of all output receivers for the specified group.

Parameters
group: A channel group identifier
- (void) addInputReceiver: (id< AEAudioReceiver >)  receiver

Add an input receiver.

Input receivers receive audio that is being received by the microphone or another input device.

Note that the audio format provided to input receivers added via this method depends on the value of inputMode.

Check the audio buffer list parameters to determine the kind of audio you are receiving (for example, if you are using an interleaved format such as interleaved16BitStereoAudioDescription, then the audio->mBuffers[0].mNumberChannels field will be 1 for mono and 2 for stereo audio). If you are using a non-interleaved format such as nonInterleaved16BitStereoAudioDescription, then audio->mNumberBuffers will be 1 for mono and 2 for stereo.

Parameters
receiver: An object that implements the AEAudioReceiver protocol
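
As a sketch of the layout check described above, inside the render callback of an input receiver (where audio is the incoming AudioBufferList):

// Determine whether we received mono or stereo audio, for either buffer layout.
BOOL isStereo;
if ( audio->mNumberBuffers > 1 ) {
    isStereo = YES;                                        // non-interleaved: one buffer per channel
} else {
    isStereo = (audio->mBuffers[0].mNumberChannels == 2);  // interleaved: channels share one buffer
}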
- (void) addInputReceiver: (id< AEAudioReceiver >)  receiver
forChannels: (NSArray *)  channels 

Add an input receiver, specifying a channel selection.

Input receivers receive audio that is being received by the microphone or another input device.

This method allows you to specify which input channels to receive by providing an array of NSNumbers with indexes identifying the selected channels.

Note that the audio format provided to input receivers added via this method depends on the value of inputMode.

Check the audio buffer list parameters to determine the kind of audio you are receiving (for example, if you are using an interleaved format such as interleaved16BitStereoAudioDescription, then the audio->mBuffers[0].mNumberChannels field will be 1 for mono and 2 for stereo audio). If you are using a non-interleaved format such as nonInterleaved16BitStereoAudioDescription, then audio->mNumberBuffers will be 1 for mono and 2 for stereo.

Parameters
receiver: An object that implements the AEAudioReceiver protocol
channels: An array of NSNumbers identifying by index the input channels to receive, or nil for default (the same as addInputReceiver:)
- (void) removeInputReceiver: (id< AEAudioReceiver >)  receiver

Remove an input receiver.

If receiver is registered for multiple channels, it will be removed for all of them.

Parameters
receiver: Receiver to remove
- (void) removeInputReceiver: (id< AEAudioReceiver >)  receiver
fromChannels: (NSArray *)  channels 

Remove an input receiver.

Parameters
receiver: Receiver to remove
channels: Specific channels to remove receiver from
- (NSArray*) inputReceivers

Obtain a list of all input receivers.

- (void) addTimingReceiver: (id< AEAudioTimingReceiver >)  receiver

Add a timing receiver.

Timing receivers receive notifications for when time has advanced. When called from an input context, the call occurs before any input receiver calls are performed. When called from an output context, it occurs before any output receivers are performed.

This mechanism can be used to trigger time-dependent events.

Parameters
receiver: An object that implements the AEAudioTimingReceiver protocol
- (void) removeTimingReceiver: (id< AEAudioTimingReceiver >)  receiver

Remove a timing receiver.

Parameters
receiver: An object that implements the AEAudioTimingReceiver protocol
- (NSArray*) timingReceivers

Obtain a list of all timing receivers.

- (void) performAsynchronousMessageExchangeWithBlock: (void (^)())  block
responseBlock: (void (^)())  responseBlock 

Send a message to the realtime thread asynchronously, if running, optionally receiving a response via a block.

This is a synchronization mechanism that allows you to schedule actions to be performed on the realtime audio thread without any locking mechanism required. Pass in a block, and the block will be performed on the realtime thread at the next polling interval.

Important: Do not interact with any Objective-C objects inside your block, or hold locks, allocate memory or interact with the BSD subsystem, as all of these may result in audio glitches due to priority inversion.

If provided, the response block will be called on the main thread after the message has been sent. You may exchange information from the realtime thread to the main thread via a shared data structure (such as a struct, allocated on the heap in advance), or __block variables.

If running is NO, then message blocks will be performed on the main thread instead of the realtime thread.

Parameters
block: A block to be performed on the realtime thread.
responseBlock: A block to be performed on the main thread after the handler has been run, or nil.
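
For example, a sketch of updating a value used by the realtime thread (_targetVolume is assumed to be a plain float instance variable read by a render callback elsewhere):

[audioController performAsynchronousMessageExchangeWithBlock:^{
    // Runs on the realtime thread: a plain memory write, no locking or allocation.
    _targetVolume = 0.5f;
} responseBlock:^{
    // Runs afterwards on the main thread.
    NSLog(@"Volume change applied");
}];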
- (BOOL) performSynchronousMessageExchangeWithBlock: (void (^)())  block

Send a message to the realtime thread synchronously, if running.

This is a synchronization mechanism that allows you to schedule actions to be performed on the realtime audio thread without any locking mechanism required. Pass in a block, and the block will be performed on the realtime thread at the next polling interval.

Important: Do not interact with any Objective-C objects inside your block, or hold locks, allocate memory or interact with the BSD subsystem, as all of these may result in audio glitches due to priority inversion.

This method will block the current thread until the block has been performed on the realtime thread. You may pass information from the realtime thread to the calling thread via the use of __block variables.

If all you need is a checkpoint to make sure the Core Audio thread is not mid-render, etc, then you may pass nil for the block.

If running is NO, then message blocks will be performed on the main thread instead of the realtime thread.

If the block is not processed within a timeout interval, this method will return NO.

Parameters
block: A block to be performed on the realtime thread.
Returns
YES if the block could be performed, NO otherwise.
- (void) AEAudioControllerSendAsynchronousMessageToMainThread (__unsafe_unretained AEAudioController *)  audioController
(AEMessageQueueMessageHandler)  handler
(void *)  userInfo
(int)  userInfoLength 

Send a message to the main thread asynchronously.

This is a synchronization mechanism that allows you to schedule actions to be performed on the main thread, without any locking or memory allocation. Pass in a function pointer and optionally a pointer to data to be copied and passed to the handler, and the function will be called on the realtime thread at the next polling interval.

Tip: To pass a pointer (including pointers to __unsafe_unretained Objective-C objects) through the userInfo parameter, be sure to pass the address to the pointer, using the "&" prefix:

AEAudioControllerSendAsynchronousMessageToMainThread(audioController, myMainThreadFunction, &pointer, sizeof(void*));

or

AEAudioControllerSendAsynchronousMessageToMainThread(audioController, myMainThreadFunction, &object, sizeof(MyObject*));

You can then retrieve the pointer value via a void** dereference from your function:

void * myPointerValue = *(void**)userInfo;

To access an Objective-C object pointer, you also need to bridge the pointer value:

MyObject *object = (__bridge MyObject*)*(void**)userInfo;
Parameters
audioController: The audio controller.
handler: A pointer to a function to call on the main thread.
userInfo: Pointer to user info data to pass to handler - this will be copied.
userInfoLength: Length of userInfo in bytes.
- (void) beginMessageExchangeBlock

Begins a block of messages to be performed consecutively.

Calling this method will cause message processing on the realtime thread to be suspended until endMessageExchangeBlock is called.
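
For example, a sketch that moves a channel between two groups as a single consecutive batch (player, groupA and groupB are assumed to have been created earlier):

[audioController beginMessageExchangeBlock];
[audioController removeChannels:@[player] fromChannelGroup:groupA];
[audioController addChannels:@[player] toChannelGroup:groupB];
[audioController endMessageExchangeBlock];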

- (void) endMessageExchangeBlock

Ends a consecutive block of messages.

- (void) outputAveragePowerLevel: (Float32 *)  averagePower
peakHoldLevel: (Float32 *)  peakLevel 

Get output power level information since this method was last called.

Parameters
averagePower: If not NULL, on output will be set to the average power level of the most recent output audio, in decibels
peakLevel: If not NULL, on output will be set to the peak level of the most recent output audio, in decibels
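
For example, a sketch of polling the levels from a UI timer to drive a level meter:

Float32 average = 0;
Float32 peak = 0;
[audioController outputAveragePowerLevel:&average peakHoldLevel:&peak];
NSLog(@"Output: %.1f dB average, %.1f dB peak", average, peak);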
- (void) outputAveragePowerLevels: (Float32 *)  averagePowers
peakHoldLevels: (Float32 *)  peakLevels
channelCount: (UInt32)  count 

Get output power level information for multiple channels since this method was last called.

Parameters
averagePowers: If not NULL, each element of the array on output will be set to the average power level of the most recent output audio for each channel up to count, in decibels
peakLevels: If not NULL, each element of the array on output will be set to the peak level of the most recent output audio for each channel up to count, in decibels
channelCount: The number of channels to fill in the averagePowers and peakLevels array parameters
- (void) averagePowerLevel: (Float32 *)  averagePower
peakHoldLevel: (Float32 *)  peakLevel
forGroup: (AEChannelGroupRef)  group 

Get output power level information for a particular group, since this method was last called.

Parameters
averagePower: If not NULL, on output will be set to the average power level of the most recent audio, in decibels
peakLevel: If not NULL, on output will be set to the peak level of the most recent audio, in decibels
group: The channel group
- (void) averagePowerLevels: (Float32 *)  averagePowers
peakHoldLevels: (Float32 *)  peakLevels
forGroup: (AEChannelGroupRef)  group
channelCount: (UInt32)  count 

Get output power level information for a particular group, since this method was last called.

Parameters
averagePowers: If not NULL, each element of the array on output will be set to the average power level of the most recent audio for each channel, in decibels
peakLevels: If not NULL, each element of the array on output will be set to the peak level of the most recent audio for each channel, in decibels
group: The channel group
channelCount: The number of channels to fill in the averagePowers and peakLevels array parameters
- (void) inputAveragePowerLevel: (Float32 *)  averagePower
peakHoldLevel: (Float32 *)  peakLevel 

Get input power level information since this method was last called.

Parameters
averagePower: If not NULL, on output will be set to the average power level of the most recent input audio, in decibels
peakLevel: If not NULL, on output will be set to the peak level of the most recent input audio, in decibels
- (void) inputAveragePowerLevels: (Float32 *)  averagePowers
peakHoldLevels: (Float32 *)  peakLevels
channelCount: (UInt32)  count 

Get input power level information for multiple channels since this method was last called.

Parameters
averagePowers: If not NULL, each element of the array on output will be set to the average power level of the most recent input audio for each channel up to count, in decibels
peakLevels: If not NULL, each element of the array on output will be set to the peak level of the most recent input audio for each channel up to count, in decibels
channelCount: The number of channels to fill in the averagePowers and peakLevels array parameters
- (AudioStreamBasicDescription*) AEAudioControllerAudioDescription (__unsafe_unretained AEAudioController *)  audioController

Get access to the configured AudioStreamBasicDescription.

- (AudioStreamBasicDescription*) AEAudioControllerInputAudioDescription (__unsafe_unretained AEAudioController *)  audioController

Get access to the input AudioStreamBasicDescription.

- (long) AEConvertSecondsToFrames (__unsafe_unretained AEAudioController *)  audioController
(NSTimeInterval)  seconds 

Convert a time span in seconds into a number of frames at the current sample rate.

- (NSTimeInterval) AEConvertFramesToSeconds (__unsafe_unretained AEAudioController *)  audioController
(long)  frames 

Convert a number of frames into a time span in seconds.

- (BOOL) AECurrentThreadIsAudioThread (void) 

Determine if the current thread is the audio thread.

- (NSTimeInterval) AEAudioControllerInputLatency (__unsafe_unretained AEAudioController *)  controller

Input latency (in seconds)

To account for hardware latency, if automaticLatencyManagement is NO, you can use this function to offset audio timestamps. Note that if automaticLatencyManagement is YES (the default), you should not use this method.

For example:

timestamp.mHostTime -= AEHostTicksFromSeconds(AEAudioControllerInputLatency(audioController));

Note that when connected to Audiobus input, this function returns 0.

Parameters
controller: The audio controller
Returns
The currently-reported hardware input latency
- (NSTimeInterval) AEAudioControllerOutputLatency (__unsafe_unretained AEAudioController *)  controller

Output latency (in seconds)

To account for hardware latency, if automaticLatencyManagement is NO, you can use this function to offset audio timestamps. Note that if automaticLatencyManagement is YES (the default), you should not use this method.

For example:

timestamp.mHostTime += AEHostTicksFromSeconds(AEAudioControllerOutputLatency(audioController));

Note that when connected to Audiobus, this value will automatically account for any Audiobus latency.

Parameters
controller: The audio controller
Returns
The currently-reported hardware output latency
- (AudioTimeStamp) AEAudioControllerCurrentAudioTimestamp (__unsafe_unretained AEAudioController *)  controller

Get the current audio system timestamp.

For use on the audio thread; returns the latest audio timestamp, either for the input or the output bus, depending on when this method is called.

Parameters
controller: The audio controller
Returns
The last-seen audio timestamp for the most recently rendered bus
- (void) setAudiobusSenderPort: (ABSenderPort *)  senderPort
forChannel: (id< AEAudioPlayable >)  channel 

Set an Audiobus sender port to send audio from a particular channel.

When assigned to a channel and connected via Audiobus, audio for the given channel will be sent out the Audiobus sender port.

Parameters
senderPort: The Audiobus sender port, or nil to remove the port
channel: Channel for the sender port

Provided by category AEAudioController(AudiobusAdditions).

- (void) setAudiobusSenderPort: (ABSenderPort *)  senderPort
forChannelGroup: (AEChannelGroupRef)  channelGroup 

Set an Audiobus sender port to send audio from a particular channel group.

When assigned to a channel group and connected via Audiobus, audio for the given group will be sent out the Audiobus sender port.

Parameters
senderPort: The Audiobus sender port, or nil to remove the port
channelGroup: Channel group for the sender port

Provided by category AEAudioController(AudiobusAdditions).

- (ABFilterPort*) audiobusFilterPort

Audiobus filter port.

Deprecated: No longer in use.

Set this property to an Audiobus filter port to let TAAE correctly update the number of input channels when connected.

Provided by category AEAudioController(AudiobusAdditions).

- (ABSenderPort*) audiobusSenderPort

Audiobus sender port.

Deprecated: Use ABSenderPort's audio unit initializer (with AEAudioController's audioUnit property) instead.

This property has been deprecated, as it doesn't support synchronization and latency compensation.

Provided by category AEAudioController(AudiobusAdditions).

Property Documentation

- (AEMessageQueue*) messageQueue
read, nonatomic, strong

The asynchronous message queue used for safe communication between main and realtime thread.

If running is NO, then message blocks passed to this instance will be performed on the main thread instead of the realtime thread.

- (NSString*) audioSessionCategory
readwrite, nonatomic, assign

Audio session category to use.

See discussion in the Audio Session Programming Guide. The default value is AVAudioSessionCategoryPlayAndRecord if audio input is enabled, or AVAudioSessionCategoryPlayback otherwise, with mixing with other apps enabled.

- (BOOL) allowMixingWithOtherApps
readwrite, nonatomic, assign

Whether to allow mixing audio with other apps.

When this is YES, your app's audio will be mixed with the output of other applications. If NO, then any other apps playing audio will be stopped when the audio engine is started.

Note: If you are using remote controls with UIApplication's beginReceivingRemoteControlEvents, setting this to YES will stop the remote controls working. This is an iOS limitation.

Default: YES

- (BOOL) useMeasurementMode
readwrite, nonatomic, assign

Whether to use the "Measurement" Audio Session Mode for improved audio quality and bass response.

Note that when the device's built-in mic is being used, TAAE can automatically boost the gain, as this is very low while Measurement Mode is enabled. See boostBuiltInMicGainInMeasurementMode.

Default: NO

- (BOOL) avoidMeasurementModeForBuiltInSpeaker
readwrite, nonatomic, assign

Whether to avoid using Measurement Mode with the built-in speaker.

When used with the built-in speaker, Measurement Mode results in quite low audio output levels. Setting this property to YES causes TAAE to avoid using Measurement Mode with the built-in speaker, avoiding this problem.

Default is YES.

- (BOOL) boostBuiltInMicGainInMeasurementMode
readwrite, nonatomic, assign

Whether to boost the input volume while using Measurement Mode with the built-in mic.

When the device's built-in mic is being used while Measurement Mode is enabled (see useMeasurementMode), TAAE can automatically boost the gain, as this is very low with Measurement Mode. This takes place independently of the inputGain setting.

Default is YES.

- (BOOL) muteOutput
readwrite, nonatomic, assign

Mute output.

Set to YES to mute all system output. Note that even if this is YES, playback callbacks will still receive audio, as the silencing happens after output receiver callbacks are called.

- (float) masterOutputVolume
readwrite, nonatomic, assign

Access the master output volume.

Note that this value affects the output of the audio engine; it doesn't modify the hardware volume setting.

- (BOOL) enableBluetoothInput
readwrite, nonatomic, assign

Enable audio input from Bluetooth devices.

Note that setting this property to YES may have implications for input latency.

Default is NO.

- (BOOL) inputGainAvailable
read, nonatomic, assign

Determine whether input gain is available.

- (float) inputGain
readwrite, nonatomic, assign

Set audio input gain (if input gain is available)

Value must be in the range 0-1

- (BOOL) voiceProcessingEnabled
readwrite, nonatomic, assign

Whether to use the built-in voice processing system.

This can be useful for removing echo/feedback when playing through the speaker while simultaneously recording through the microphone. Not suitable for music, but works adequately well for speech.

Note that changing this value will cause the entire audio system to be shut down and restarted with the new setting, which will result in a break in audio playback.

Enabling voice processing in short buffer duration environments (< 0.01s) may cause stuttering.

Default is NO.

- (BOOL) voiceProcessingOnlyForSpeakerAndMicrophone
readwrite, nonatomic, assign

Whether to only perform voice processing for the SpeakerAndMicrophone route.

This causes voice processing to only be enabled in the classic echo removal scenario, when audio is being played through the device speaker and recorded by the device microphone.

Default is YES.

- (AEInputMode) inputMode
readwrite, nonatomic, assign

Input mode: How to handle incoming audio.

If you are using an audio format with more than one channel, this setting defines how the system receives incoming audio.

See AEInputMode for a description of the available options.

Default is AEInputModeFixedAudioFormat.

- (NSArray*) inputChannelSelection
readwrite, nonatomic, strong

Input channel selection.

When there are more than one input channel, you may specify which of the available channels are actually used as input. This is an array of NSNumbers, each referring to a channel (starting with the number 0 for the first channel).

Specified input channels will be mapped to output channels in the order they appear in this array, so the first channel specified will be mapped to the first output channel (the only output channel if output is mono, or the left channel for stereo output), the second input to the second output (the right channel), and so on.

By default, for devices with more than one input channel, the first two inputs will be used.
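
For example, a sketch that takes input channels 3 and 4 of a multichannel interface and maps them to the left and right outputs:

audioController.inputChannelSelection = @[@2, @3];   // channel indexes are zero-based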

- (NSTimeInterval) preferredBufferDuration
readwrite, nonatomic, assign

Preferred buffer duration (in seconds)

Set this to low values for better latency, but more processing overhead, or higher values for greater latency with lower processing overhead. This parameter affects the length of the audio buffers received by the various callbacks.

System default is ~23ms, or 1024 frames.

- (NSTimeInterval) currentBufferDuration
read, nonatomic, assign

Current buffer duration (in seconds)

This is the current hardware buffer duration, which may or may not be the same as the preferredBufferDuration property, depending on the set of active apps on the device and the order in which they were launched.

Observable.

- (NSTimeInterval) inputLatency
read, nonatomic, assign

Input latency (in seconds)

The currently-reported hardware input latency. See AEAudioControllerInputLatency.

- (NSTimeInterval) outputLatency
read, nonatomic, assign

Output latency (in seconds)

The currently-reported hardware output latency. See AEAudioControllerOutputLatency

- (BOOL) automaticLatencyManagement
readwrite, nonatomic, assign

Whether to automatically account for input/output latency.

If this property is set to YES (the default), the timestamps you see in the various callbacks will automatically account for input and output latency. If you set this property to NO and wish to account for latency yourself, you will need to use the inputLatency and outputLatency properties, or their corresponding C functions AEAudioControllerInputLatency and AEAudioControllerOutputLatency.

Default is YES.

- (BOOL) running
read, nonatomic, assign

Determine whether the audio engine is running.

This is affected by calling start and stop on the audio controller.

- (BOOL) playingThroughDeviceSpeaker
read, nonatomic, assign

Determine whether audio is currently being played through the device's speaker.

This property is observable

- (BOOL) recordingThroughDeviceMicrophone
read, nonatomic, assign

Determine whether audio is currently being recorded through the device's mic.

This property is observable

- (BOOL) audioInputAvailable
read, nonatomic, assign

Whether audio input is currently available.

Note: This property is observable

- (BOOL) inputEnabled
read, nonatomic, assign

Whether audio input is currently enabled.

Note: This property is observable

- (BOOL) outputEnabled
read, nonatomic, assign

Whether audio output is currently enabled.

Note: This property is observable

- (int) numberOfInputChannels
read, nonatomic, assign

The number of audio channels that the current audio input device provides.

Note that this will not necessarily be the same as the number of audio channels your app will receive, depending on the inputMode and inputChannelSelection properties. Use inputAudioDescription to obtain an AudioStreamBasicDescription representing the actual incoming audio.

Note: This property is observable

- (AudioStreamBasicDescription) inputAudioDescription
read, nonatomic, assign

The audio description defining the input audio format.

Note: This property is observable

See also inputMode and inputChannelSelection

- (AudioStreamBasicDescription) audioDescription
read, nonatomic, assign

The audio description that the audio controller was setup with.

- (AudioUnit) audioUnit
read, nonatomic, assign

The Remote IO audio unit used for input and output.

- (AUGraph) audioGraph
read, nonatomic, assign

The audio graph handle.

- (ABReceiverPort*) audiobusReceiverPort
readwrite, nonatomic, retain

Audiobus receiver port.

Set this property to an Audiobus receiver port to receive audio from this port instead of the system audio input.

Provided by category AEAudioController(AudiobusAdditions).


The documentation for this class was generated from the following file: