Low-Latency Audio Playback on Android

For lowest latency on Android as of version 4.2.2, you should do the following, ordered from least to most obvious:

  1. Pick a device that supports FEATURE_AUDIO_PRO if possible, or FEATURE_AUDIO_LOW_LATENCY if not. ("Low latency" is 50ms one way; pro is <20ms round trip.)

  2. Use OpenSL. The Dalvik GC has a low amortized cost, but when it runs it takes more time than a low-latency audio thread can allow.

  3. Process audio in a buffer queue callback. The system runs buffer queue callbacks in a thread that has more favorable scheduling than normal user-mode threads.

  4. Make your buffer size a multiple of AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER). Otherwise your callback will occasionally get two calls per timeslice rather than one. Unless your CPU usage is really light, this will probably end up glitching. (On Android M, it is very important to use EXACTLY the system buffer size, due to a bug in the buffer handling code.)

  5. Use the sample rate provided by AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE). Otherwise your buffers take a detour through the system resampler.

  6. Never make a syscall or lock a synchronization object inside the buffer callback. If you must synchronize, use a lock-free structure. For best results, use a completely wait-free structure such as a single-reader single-writer ring buffer (a minimal sketch follows this list). Loads of developers get this wrong and end up with glitches that are unpredictable and hard to debug.

  7. Use vector instructions such as NEON, SSE, or whatever the equivalent instruction set is on your target processor.

  8. Test and measure your code. Track how long it takes to run, and remember that you need to know the worst-case performance, not the average, because the worst case is what causes the glitches. And be conservative. You already know that if it takes more time to process your audio than it does to play it, you'll never get low latency. But on Android this is even more important, because the CPU frequency fluctuates so much. You can use perhaps 60-70% of the CPU for audio, but keep in mind that this figure will change as the device gets hotter or cooler, or as the wifi or LTE radios start and stop, and so on.
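
To make points 3, 4, and 6 concrete, here is a minimal sketch (not a complete player) of a wait-free single-reader single-writer ring buffer feeding an OpenSL ES buffer queue callback. The buffer size of 240 frames is a placeholder: in a real app you would read PROPERTY_OUTPUT_FRAMES_PER_BUFFER and PROPERTY_OUTPUT_SAMPLE_RATE from AudioManager on the Java side and pass them down through JNI.

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>
    #include <atomic>
    #include <cstdint>
    #include <cstring>

    // Wait-free SPSC ring buffer: one producer thread writes, the audio
    // callback reads, and neither side ever blocks, locks, or makes a syscall.
    template <size_t kCapacity>  // must be a power of two
    class SpscRing {
    public:
        size_t read(int16_t* dst, size_t n) {
            const size_t w = write_.load(std::memory_order_acquire);
            const size_t r = read_.load(std::memory_order_relaxed);
            const size_t avail = w - r;
            if (n > avail) n = avail;           // never wait: take what's there
            for (size_t i = 0; i < n; ++i)
                dst[i] = buf_[(r + i) & (kCapacity - 1)];
            read_.store(r + n, std::memory_order_release);
            return n;
        }
        size_t write(const int16_t* src, size_t n) {
            const size_t r = read_.load(std::memory_order_acquire);
            const size_t w = write_.load(std::memory_order_relaxed);
            const size_t space = kCapacity - (w - r);
            if (n > space) n = space;
            for (size_t i = 0; i < n; ++i)
                buf_[(w + i) & (kCapacity - 1)] = src[i];
            write_.store(w + n, std::memory_order_release);
            return n;
        }
    private:
        int16_t buf_[kCapacity];
        std::atomic<size_t> read_{0};
        std::atomic<size_t> write_{0};
    };

    // Placeholder figure; use the exact PROPERTY_OUTPUT_FRAMES_PER_BUFFER
    // value (point 4) instead.
    static const size_t kFramesPerBuffer = 240;

    static SpscRing<8192> gRing;                // filled by a producer thread
    static int16_t gBufs[2][kFramesPerBuffer];  // double buffer, mono 16-bit
    static int gCur = 0;

    // Runs on the OpenSL ES fast-track thread (point 3): no locks, no malloc,
    // no JNI, no logging, no syscalls (point 6).
    static void bqCallback(SLAndroidSimpleBufferQueueItf bq, void* /*context*/) {
        int16_t* out = gBufs[gCur];
        gCur ^= 1;
        const size_t got = gRing.read(out, kFramesPerBuffer);
        if (got < kFramesPerBuffer)             // underrun: pad with silence
            std::memset(out + got, 0,
                        (kFramesPerBuffer - got) * sizeof(int16_t));
        (*bq)->Enqueue(bq, out, kFramesPerBuffer * sizeof(int16_t));
    }

The only work the callback does is copy, zero-fill on underrun, and Enqueue() the next buffer; starting, stopping, and reconfiguring the player all belong on an ordinary application thread.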

Low-latency audio is no longer a new feature for Android, but it still requires device-specific changes in the hardware, drivers, kernel, and framework to pull off. This means that there's a lot of variation in the latency you can expect from different devices, and given how many different price points Android phones sell at, there probably will always be differences. Look for FEATURE_AUDIO_PRO or FEATURE_AUDIO_LOW_LATENCY to identify devices that meet the latency criteria your app requires.
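
As a sketch of that check from native code (you could just as easily call PackageManager.hasSystemFeature() directly from Java), the string literals below are the documented values of PackageManager.FEATURE_AUDIO_PRO and FEATURE_AUDIO_LOW_LATENCY:

    #include <jni.h>

    // `context` is any android.content.Context passed down from Java.
    static bool hasFeature(JNIEnv* env, jobject context, const char* feature) {
        jclass ctxCls = env->GetObjectClass(context);
        jmethodID getPm = env->GetMethodID(
            ctxCls, "getPackageManager", "()Landroid/content/pm/PackageManager;");
        jobject pm = env->CallObjectMethod(context, getPm);
        jclass pmCls = env->GetObjectClass(pm);
        jmethodID hasSys = env->GetMethodID(
            pmCls, "hasSystemFeature", "(Ljava/lang/String;)Z");
        jstring jFeature = env->NewStringUTF(feature);
        jboolean ok = env->CallBooleanMethod(pm, hasSys, jFeature);
        env->DeleteLocalRef(jFeature);
        return ok == JNI_TRUE;
    }

    // Prefer pro audio, fall back to plain low latency:
    //   hasFeature(env, ctx, "android.hardware.audio.pro")
    //   hasFeature(env, ctx, "android.hardware.audio.low_latency")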

Android AudioTrack latency with playback

I'm assuming your goal is to come up with some interactive audio solution (that is, where sound is played in response to some user action), because in this scenario low latency really matters.

On Android, to achieve the lowest latency you need to use the OpenSL ES API, which is available to native (C/C++) code via the NDK. The only Java-side mechanism that can achieve low latency is the SoundPool class, but it has limitations on the kinds of sounds you can play.
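
For orientation, here is a minimal sketch of that native entry point: creating the OpenSL ES engine and output mix. Error handling is elided for brevity; every call returns an SLresult that real code should check.

    #include <SLES/OpenSLES.h>

    static SLObjectItf gEngineObj;
    static SLObjectItf gOutputMixObj;
    static SLEngineItf gEngine;

    void initOpenSL() {
        slCreateEngine(&gEngineObj, 0, NULL, 0, NULL, NULL);
        (*gEngineObj)->Realize(gEngineObj, SL_BOOLEAN_FALSE);   // synchronous
        (*gEngineObj)->GetInterface(gEngineObj, SL_IID_ENGINE, &gEngine);
        (*gEngine)->CreateOutputMix(gEngine, &gOutputMixObj, 0, NULL, NULL);
        (*gOutputMixObj)->Realize(gOutputMixObj, SL_BOOLEAN_FALSE);
    }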

For more information, see the page on high-performance audio, and also check out this SO answer: Low-latency audio playback on Android

Android: OpenSL lowest possible latency

It seems the device I am testing on, the Galaxy S5, does not claim android.hardware.audio.low_latency, which means OpenSL cannot use a fast track with it and playback will have high latency. This is really surprising because the device is fairly new, so it ought to support a feature like this. But despite my doing everything required for low latency in my app, it will still have high latency on the device I am testing on.

From the Android NDK OpenSL ES docs:
"These improvements are available via OpenSL ES for device implementations that claim feature "android.hardware.audio.low_latency". If the device doesn't claim this feature but supports API level 9 (Android platform version 2.3) or later, then you can still use the OpenSL ES APIs but the output latency may be higher. The lower output latency path is used only if the application requests a buffer count of 2 or more, and a buffer size and sample rate that are compatible with the device's native output configuration."

For anyone wondering whether your app is the problem: be sure to check that the device you are testing on supports this feature. Also check out the Performance section of (ndk)/docs/opensles/index.html and the links in the question.

Android: sound API (deterministic, low latency)

SoundPool is the lowest-latency interface on most devices, because the pool is stored in the audio process. All of the other audio paths require inter-process communication. OpenSL is the best choice if SoundPool doesn't meet your needs.

Why OpenSL? AudioTrack and OpenSL have similar latencies, with one important difference: AudioTrack buffer callbacks are serviced in Dalvik, while OpenSL callbacks are serviced in native threads. The current implementation of Dalvik isn't capable of servicing callbacks at extremely low latencies, because there is no way to suspend garbage collection during audio callbacks. This means that the minimum size for AudioTrack buffers has to be larger than the minimum size for OpenSL buffers to sustain glitch-free playback.

On most Android releases this distinction between AudioTrack and OpenSL made no practical difference. But with Jelly Bean, Android now has a low-latency audio path. The actual latency is still device dependent, but it can be considerably lower than before. For instance, http://code.google.com/p/music-synthesizer-for-android/ uses 384-frame buffers on the Galaxy Nexus for a total output latency of under 30ms. This requires the audio thread to service buffers approximately once every 8ms, which was not feasible on previous Android releases. It is still not feasible in a Dalvik thread.

This explanation assumes two things: first, that you are requesting the smallest possible buffers from OpenSL and doing your processing directly in the buffer queue callback. Second, that your device supports the low-latency path. On most current devices you will not see much difference between AudioTrack and OpenSL ES. But on devices that run Jelly Bean or later and support low-latency audio, OpenSL ES will give you the lowest-latency path.
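
As a quick sanity check of the Galaxy Nexus numbers above (assuming a 48 kHz native rate, the figure that makes 384 frames come out to 8 ms):

    // App-side buffering latency only; mixer and hardware delay come on top.
    double bufferMs(int frames, int rateHz) { return 1000.0 * frames / rateHz; }
    // bufferMs(384, 48000) == 8.0 -> the ~8 ms service deadline quoted above;
    // two buffers in flight contribute 16 ms toward the <30 ms total.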


