The Amazing Audio Engine provides quite a number of utilities and other bits and pieces designed to make writing audio apps easier.
The AEAudioFileLoaderOperation class provides an easy way to load audio files into memory. All audio formats that are supported by the Core Audio subsystem are supported, and audio is converted automatically into the audio format of your choice.
The class is an NSOperation subclass, which means that it can be run asynchronously using an NSOperationQueue. Alternatively, you can use it in a synchronous fashion by calling start directly:
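    // A sketch of synchronous loading; 'url' and 'audioDescription' are assumed
    // to be defined elsewhere
    AEAudioFileLoaderOperation *operation = [[AEAudioFileLoaderOperation alloc]
                                             initWithFileURL:url
                                             targetAudioDescription:audioDescription];
    [operation start];

    if ( operation.error ) {
        // Load failed! Clean up, report error, etc.
        return;
    }

    // The loaded audio, now entirely in memory
    AudioBufferList *audio = operation.bufferList;
    UInt32 lengthInFrames = operation.lengthInFrames;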
Note that this class loads the entire audio file into memory, and doesn't support streaming of very large audio files. For that, you will need to use the ExtAudioFile services directly.
The AEAudioFileWriter class allows you to easily write to any audio file format supported by the system.
To use it, instantiate it using initWithAudioDescription:, passing in the audio format you wish to use. Then, begin the operation by calling beginWritingToFileAtPath:fileType:error:, passing in the path to the file you'd like to record to, and the file type to use. Common file types include kAudioFileAIFFType, kAudioFileWAVEType, kAudioFileM4AType (using AAC audio encoding), and kAudioFileCAFType.
Once the write operation has started, you use the C functions AEAudioFileWriterAddAudio and AEAudioFileWriterAddAudioSynchronously to write audio to the file. Note that you should only use AEAudioFileWriterAddAudio when writing audio from the Core Audio thread, as this is done asynchronously in a way that does not hold up the thread.
When you are finished, call finishWriting to close the file.
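A typical write sequence might look like the following sketch, where audioDescription, path, bufferList and lengthInFrames are assumed to exist:

    AEAudioFileWriter *writer = [[AEAudioFileWriter alloc]
                                 initWithAudioDescription:audioDescription];

    NSError *error = nil;
    if ( ![writer beginWritingToFileAtPath:path fileType:kAudioFileM4AType error:&error] ) {
        // Handle error
        return;
    }

    // From the Core Audio thread (asynchronous; will not hold up the thread):
    AEAudioFileWriterAddAudio(writer, bufferList, lengthInFrames);

    // From any other thread:
    AEAudioFileWriterAddAudioSynchronously(writer, bufferList, lengthInFrames);

    [writer finishWriting];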
AudioBufferList is the basic unit of audio for Core Audio, representing a small time interval of audio. This structure contains one or more pointers to an area of memory holding the audio samples: for interleaved audio, there will be one buffer holding the interleaved samples for all channels, while for non-interleaved audio there will be one buffer per channel.
The Amazing Audio Engine provides a number of utility functions for dealing with audio buffer lists:
- An allocation function takes an AudioStreamBasicDescription and a number of frames to allocate, and will allocate and initialise an audio buffer list and the corresponding memory buffers appropriately.
- A corresponding deallocation function frees a buffer list and its memory buffers.
- A frame-count getter takes an AudioStreamBasicDescription and returns the number of frames contained within the audio buffer list, given the mDataByteSize values within.
- A frame-count setter adjusts the mDataByteSize values to correspond to the given number of frames.
- An offset function increments the mData pointers by the given number of frames, and decrements the mDataByteSize values accordingly.

Note: Do not use those functions that perform memory allocation or deallocation from within the Core Audio thread, as this may cause performance problems.
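For illustration, here is how a two-channel non-interleaved buffer list might be allocated by hand using plain Core Audio; the utility functions above take care of this for you:

    #import <AudioToolbox/AudioToolbox.h>

    int channels = 2;
    UInt32 bytesPerBuffer = 512 * sizeof(float);    // 512 frames of float samples

    // AudioBufferList declares a one-element mBuffers array, so allocate
    // extra space for the additional channels
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList)
                                         + (channels-1) * sizeof(AudioBuffer));
    bufferList->mNumberBuffers = channels;
    for ( int i=0; i<channels; i++ ) {
        bufferList->mBuffers[i].mNumberChannels = 1;   // one channel per buffer
        bufferList->mBuffers[i].mDataByteSize = bytesPerBuffer;
        bufferList->mBuffers[i].mData = calloc(1, bytesPerBuffer);
    }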
Additionally, the AEAudioBufferManager class lets you perform standard ARC/retain-release memory management with AudioBufferLists.
Core Audio uses the AudioStreamBasicDescription type for describing kinds of audio samples. The Amazing Audio Engine provides a number of utilities for working with these types.
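For example, a 16-bit stereo interleaved PCM format is described like this, using plain Core Audio:

    AudioStreamBasicDescription audioDescription;
    memset(&audioDescription, 0, sizeof(audioDescription));
    audioDescription.mFormatID          = kAudioFormatLinearPCM;
    audioDescription.mFormatFlags       = kAudioFormatFlagIsSignedInteger
                                          | kAudioFormatFlagIsPacked;
    audioDescription.mChannelsPerFrame  = 2;
    audioDescription.mBytesPerPacket    = sizeof(SInt16)*2;
    audioDescription.mFramesPerPacket   = 1;
    audioDescription.mBytesPerFrame     = sizeof(SInt16)*2;
    audioDescription.mBitsPerChannel    = 8 * sizeof(SInt16);
    audioDescription.mSampleRate        = 44100.0;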
Vector operations offer orders of magnitude improvements in processing efficiency over performing the same operation as a large number of scalar operations.
For example, consider calculating the absolute maximum value within an audio buffer.
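A representative scalar version might look like this (assuming non-interleaved float samples in audioBufferList, with frames frames to process):

    float max = 0;
    for ( int i=0; i<frames; i++ ) {
        // One address calculation, fabs call, comparison and possible
        // assignment per frame
        float value = fabs(((float*)audioBufferList->mBuffers[0].mData)[i]);
        if ( value > max ) max = value;
    }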
With frames samples in the buffer, this consists of frames address calculations, followed by frames calls to fabs, frames floating-point comparisons, at worst case frames assignments, and frames integer increments.
This can be replaced by a single vector operation, using the Accelerate framework:
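    // The same operation as a single vector call (maximum magnitude);
    // requires #import <Accelerate/Accelerate.h>
    float max = 0;
    vDSP_maxmgv((float*)audioBufferList->mBuffers[0].mData, 1, &max, frames);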
For those working with floating-point audio, this already works, but for those working in other audio formats, an extra conversion to floating-point is required.
If you are using only non-interleaved 16-bit signed integers, then this can be performed easily using vDSP_vflt16. Otherwise, The Amazing Audio Engine provides the AEFloatConverter class to perform this operation easily with any audio format:
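    // Sketch: create the converter once, with the source audio format
    AEFloatConverter *converter = [[AEFloatConverter alloc]
                                   initWithSourceFormat:audioDescription];

    // Then, for each incoming buffer (e.g. on the Core Audio thread);
    // 'floatBuffers' is assumed to be a pre-allocated array holding one
    // float buffer per channel
    AEFloatConverterToFloat(converter, bufferList, floatBuffers, frames);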
Thread synchronization is notoriously difficult at the best of times, but when the timing constraints introduced by the Core Audio realtime thread are taken into account, this becomes a very tricky problem indeed.
A common solution is the use of mutexes with try-locks, so that rather than blocking on a lock, the Core Audio thread will simply fail to acquire the lock, and will abort the operation. This can work, but always runs the risk of creating audio artefacts when it stops generating audio for a time interval, which is precisely the problem that we are trying to avoid by not blocking.
All this can be avoided with The Amazing Audio Engine's messaging feature.
This utility allows the main thread to send messages to the Core Audio thread, and vice versa, without any locking required.
To send a message to the Core Audio thread, use either performAsynchronousMessageExchangeWithBlock:responseBlock: or the synchronous performSynchronousMessageExchangeWithBlock: method:
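    // Sketch, assuming an AEAudioController instance '_audioController' and
    // a hypothetical '_state' variable used on the Core Audio thread
    [_audioController performAsynchronousMessageExchangeWithBlock:^{
        // Runs on the Core Audio thread, without any locking
        _state = 0;
    } responseBlock:^{
        // Runs on the main thread once the block above has completed
        NSLog(@"State updated");
    }];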
To send messages from the Core Audio thread back to the main thread, you need to define a C callback, which takes the form defined by AEMessageQueueMessageHandler, then call AEAudioControllerSendAsynchronousMessageToMainThread, passing a reference to any parameters, with the length of the parameters in bytes.
Whatever is passed via the 'userInfo' parameter of AEAudioControllerSendAsynchronousMessageToMainThread will be copied onto an internal buffer. A pointer to the copied item on the internal buffer will be passed to the callback you provide.
Note: This is an important distinction. The bytes pointed to by the 'userInfo' parameter value are passed by value, not by reference. To pass a pointer to an instance of an Objective-C class, you need to pass the address of the pointer to copy, using the "&" operator. In the sketches below, the callback myCallbackOnMainThread and the object pointer myObject are hypothetical stand-ins.
This:
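    // Correct: passes the address of the object pointer, so the sizeof(id)
    // bytes that make up the pointer itself are copied to the internal buffer
    AEAudioControllerSendAsynchronousMessageToMainThread(audioController,
                                                         myCallbackOnMainThread,
                                                         &myObject,
                                                         sizeof(id));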
Not this:
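    // Wrong: this treats the object itself as the parameter bytes, copying
    // sizeof(id) bytes of the object's memory rather than the pointer to it
    AEAudioControllerSendAsynchronousMessageToMainThread(audioController,
                                                         myCallbackOnMainThread,
                                                         (__bridge void*)myObject,
                                                         sizeof(id));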
To access an Objective-C object pointer from the main thread handler function, you can bridge a dereferenced void** to your object type, like this:
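    // Hypothetical handler: 'userInfo' points to a copy of the object
    // pointer, so dereference it as a void** before bridging (MyClass is
    // a stand-in for your own class)
    static void myCallbackOnMainThread(void *userInfo, int userInfoLength) {
        MyClass *object = (__bridge MyClass*)*(void**)userInfo;
        // ... use 'object' on the main thread ...
    }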
For certain applications, it's important that events take place at a precise time. NSTimer and the NSRunLoop scheduling methods simply can't do the job when it comes to millisecond-accurate timing, which is why The Amazing Audio Engine provides support for receiving time cues.
Audio receivers, channels and filters all receive and can act on audio timestamps, but there are some cases where it makes more sense to have a separate class handle the timing and synchronization.
In that case, you can implement the AEAudioTimingReceiver protocol and add your class as a timing receiver via addTimingReceiver:. The callback you provide will be called from two contexts: When input is received (AEAudioTimingContextInput), and when output is about to be generated (AEAudioTimingContextOutput). In both cases, the timing receivers will be notified before any of the audio receivers or channels are invoked, so that you can set app state that will affect the current time interval.
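As a rough sketch, assuming the protocol exposes its callback via a timingReceiverCallback property in the same style as the engine's other callback protocols, and with a hypothetical MyClockReceiver class:

    static void timingCallback(__unsafe_unretained id receiver,
                               __unsafe_unretained AEAudioController *audioController,
                               const AudioTimeStamp *time,
                               UInt32 const frames,
                               AEAudioTimingContext context) {
        if ( context == AEAudioTimingContextOutput ) {
            // Update clock/sequencer state before channels render this interval
        }
    }

    @interface MyClockReceiver : NSObject <AEAudioTimingReceiver>
    @end

    @implementation MyClockReceiver
    -(AEAudioControllerTimingCallback)timingReceiverCallback {
        return timingCallback;
    }
    @end

    // Then: [audioController addTimingReceiver:clockReceiver];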
AEBlockScheduler is a class you can use to schedule blocks for execution at a particular time. This implements the AEAudioTimingReceiver protocol, and provides an interface for scheduling blocks with sample-level accuracy.
To use it, instantiate AEBlockScheduler, add it as a timing receiver with addTimingReceiver:, then begin scheduling events using the scheduleBlock:atTime:timingContext:identifier: method:
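    // Sketch: schedule a block one second ahead, in the output context;
    // '_scheduler' is an AEBlockScheduler instance created as above
    [_scheduler scheduleBlock:^(const AudioTimeStamp *time, UInt32 offset) {
        // Called on the Core Audio thread at the scheduled time
    }
                       atTime:[AEBlockScheduler timestampWithSecondsFromNow:1.0]
                timingContext:AEAudioTimingContextOutput
                   identifier:@"my event"];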
The block will be passed the current time, and the number of frames offset between the current time and the scheduled time.
The alternate scheduling method, scheduleBlock:atTime:timingContext:identifier:mainThreadResponseBlock:, allows you to provide a block that will be called on the main thread after the schedule has completed.
There are a number of utilities you can use to construct and calculate timestamps, including now, timestampWithSecondsFromNow:, hostTicksFromSeconds: and secondsFromHostTicks:.
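For instance, a sketch, assuming these are class methods on AEBlockScheduler and that times are expressed in host ticks:

    // The current time, and a time two seconds ahead
    uint64_t now   = [AEBlockScheduler now];
    uint64_t later = [AEBlockScheduler timestampWithSecondsFromNow:2.0];

    // Converting between seconds and host ticks
    uint64_t ticks         = [AEBlockScheduler hostTicksFromSeconds:0.5];
    NSTimeInterval seconds = [AEBlockScheduler secondsFromHostTicks:ticks];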