Computer Latency: What It Is and How to Reduce It


It’s the Achilles’ heel of computer-based recording...

     

    By Craig Anderton

     

Lurking deep within your computer is a killjoy for anyone who wants to play software synthesizers in real time, or play instruments (such as guitar) through processing plug-ins: latency, the delay your computer introduces between the moment you hit a note on a keyboard and the moment you hear it come out of the speakers.
     

    But look at it from the computer’s point of view. Even the most powerful processor can only do so many millions of calculations per second; when it’s busy scanning your keyboard, checking its ports, shuffling data in and out of RAM, and generally sweating its little silicon butt off, you can understand why it sometimes has a hard time keeping up.

     

To avoid running out of audio, the computer sticks some of the incoming audio in a buffer, which is like a savings account for your audio: when the computer is so busy elsewhere that it can’t deal with audio immediately, it makes a “withdrawal” from the buffer instead. The larger the buffer, the less likely the computer will run out of audio data when it needs it. But a larger buffer also means the audio sits in that holding area longer before the computer processes it, and that delay is the genesis of latency.
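The savings-account analogy can be sketched in a few lines of code. This is strictly a toy model, not how any real audio driver works; the buffer size, deposit, and withdrawal figures are made up for illustration:

```python
from collections import deque

# Toy model of an audio buffer: the input stream deposits samples,
# and the audio engine withdraws them in blocks when it gets CPU
# time. If the engine asks for more samples than have been
# deposited, the buffer underruns -- heard in practice as clicks,
# crackles, or dropouts.

def run(buffer_size, deposits, withdrawal):
    buf = deque(maxlen=buffer_size)
    for sample in range(deposits):   # deposits from the input stream
        buf.append(sample)
    if withdrawal > len(buf):        # engine wants more than we saved
        return "underrun (glitch!)"
    for _ in range(withdrawal):      # engine drains a block
        buf.popleft()
    return f"ok, {len(buf)} samples left in reserve"

print(run(buffer_size=256, deposits=256, withdrawal=128))
# ok, 128 samples left in reserve
print(run(buffer_size=64, deposits=64, withdrawal=128))
# underrun (glitch!)
```

A bigger buffer survives longer gaps in the computer's attention, but every sample in it had to be collected before playback, which is exactly the latency trade-off described above.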

     

    MINIMIZING LATENCY

    The first step in minimizing delay is, unfortunately, the most expensive one: a processor upgrade. Today’s multi-GHz processors are so fast they actually travel backward in time! Well, not really, but massive computational power is a Good Thing.

     

    The second step involves drivers, little pieces of code that provide communications between your computer and sound card (or USB/FireWire interface). Don’t let their size fool you — they are the data gatekeepers, and how efficiently they do their task greatly affects latency.

     

Steinberg devised the first low-latency driver mode for audio interfaces, based on their ASIO (Audio Stream Input/Output) drivers. These communicated much more directly with the audio hardware, bypassing various layers of both the Mac and Windows operating systems. At that time the Mac used Sound Manager, and Windows used something that seemed to change names every few weeks but was equally unsuited to musical needs. Cards that supported ASIO were essential for serious musical applications; ASIO led to ASIO 2.0, which was even better.

     

Eventually, Apple and Microsoft wised up. Microsoft brought forth the WDM driver model, which was light years ahead of their previous efforts. And starting with OS X, Apple gave us Core Audio, which tied in even more closely with low-level operating system elements (Fig. 1).

     


Fig. 1: The preferences from MOTU's Digital Performer, shown here testing an Avid interface. The buffer is being set to the interface's lowest available value of 128 samples.

     

    Microsoft offers other low-latency protocols, but on Windows, ASIO remains the de facto low-latency standard. Thanks to these driver improvements, it’s now possible to obtain latencies under 10 ms with a decent processor and an audio interface that supports low-latency drivers like ASIO, WDM, or Core Audio.

     

    THE DIFFERENT TYPES OF LATENCY

Be aware that when you see a latency figure, it may have nothing to do with reality. Latency may simply express the amount of reserve storage in the buffers, which will be a low figure. But there's also latency involved in converting analog to digital and back again (about 1.2 ms at 44.1 kHz), as well as latency caused by other factors in a computer-based system and its associated hardware. A more realistic figure is the total round-trip latency: the total delay from input to output. For example, the buffer latency may be 1.5 ms, but the real latency incorporates that figure plus the hardware latencies. These could add up to something like 5 ms of input latency and 4 ms of output latency, giving a total round-trip latency of around 9 ms (Fig. 2).
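The arithmetic behind that example is simple addition. In this sketch, the split between buffer and hardware delay is assumed purely to reproduce the illustrative 9 ms total above; real figures depend on your interface and drivers:

```python
# Illustrative latency budget mirroring the example in the text.
# The hardware figures below are assumptions, not measurements.
buffer_ms    = 1.5   # sample-buffer latency reported by the driver
input_hw_ms  = 3.5   # A/D conversion, interface transfer, etc. (assumed)
output_hw_ms = 2.5   # D/A conversion and output path (assumed)

input_latency  = buffer_ms + input_hw_ms    # 5.0 ms
output_latency = buffer_ms + output_hw_ms   # 4.0 ms
round_trip = input_latency + output_latency

print(f"Round-trip latency: {round_trip:.1f} ms")
# Round-trip latency: 9.0 ms
```

The point of the exercise: the buffer setting you dial in is only one line item in the total, so two interfaces with the same buffer size can still have very different round-trip delays.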

     


Fig. 2: The audio preferences from Cakewalk Sonar. The panel on the right sets the buffer size in Roland's VS-700 interface; in this case, it's 64 samples. On the left, Sonar displays this delay, as well as the input, output, and total round-trip latency.

     

Also note that although we’ve expressed latency in milliseconds, some manufacturers specify it in samples. This isn’t as intuitive, but it’s not hard to translate samples to milliseconds. This involves delving into some math, but if the following makes your head explode, don’t worry, and just remember the golden rule of latency: use the lowest setting that gives reliable audio operation. In other words, if the latency is expressed in milliseconds, use the lowest setting that works. If it’s specified in samples, you still use the lowest setting that works. Now, the math: with a 44.1 kHz sampling rate, there are 44,100 samples taken per second. So each sample is 1/44,100th of a second long, or about 0.023 ms. If the buffer latency is 256 samples, at 44.1 kHz that means a delay of 256 × 0.023 ms, or about 5.8 ms.
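That conversion is easy to wrap in a one-line helper. This is just the formula above expressed as code; the buffer sizes shown are common settings, not recommendations:

```python
def samples_to_ms(samples, sample_rate_hz=44_100):
    """Convert a buffer size in samples to milliseconds of delay."""
    # Each sample lasts 1/sample_rate seconds; multiply by 1000 for ms.
    return samples / sample_rate_hz * 1000

for buf in (64, 128, 256):
    print(f"{buf} samples -> {samples_to_ms(buf):.1f} ms")
# 64 samples -> 1.5 ms
# 128 samples -> 2.9 ms
# 256 samples -> 5.8 ms
```

Note that the same buffer size yields less delay at higher sample rates: 256 samples at 96 kHz is only about 2.7 ms, which is one reason high sample rates are sometimes used for tracking.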

     

A final complication is that the interface reports its latency to the computer, and that report is what the host software uses to calculate the latency figures it displays. However, this reporting is not always accurate. This isn't some kind of conspiracy, and the figure shouldn't be too far off, but the takeaway is to believe your ears. If one interface sounds like it's giving lower latency but the specs indicate otherwise, your ears are probably right.

     

    WHY "DIRECT MONITORING" ISN'T ALWAYS THE ANSWER

    You may have heard about an audio interface feature called “direct monitoring,” which supposedly reduces latency. And it does, but only for audio input signals (e.g., mic, hardware synth, guitar, etc.). It does this by sending the input signal directly to the audio output, essentially bypassing the computer. When you’re playing software synthesizers, or any audio through plug-ins (for example, guitar through guitar amp emulation plug-ins), turn direct monitoring off. What you want to hear is being generated inside the computer, so shunting the audio input to the output is not a solution.

     

    You’ll typically find direct monitoring settings in one of two places: An applet that comes with the sound card, or within a DAW program.

     

    HOW LOW CAN YOU GO?

It will always take a finite amount of time to convert analog to digital at the input, and digital to analog at the output. Unfortunately, though, ultra-low latency settings (or higher sampling rates, for that matter) make your computer work harder, so you’ll be limited as to how many software synthesizers and plug-ins can run before your computer goes compu-psycho. You’ll know your computer has gone too far when the audio starts to sputter, crackle, or mute. As latency will continue to be a part of our musical lives for the foreseeable future, before closing out let’s cover some tips on living with latency.

    • Set your sample buffers to the highest comfortable value. For me, 5 ms is sufficiently responsive, and makes the computer happier than choosing 2 or 3 ms.
    • When you're starting a project, you can usually set latency lower than when mixing, after you've inserted a bunch of plug-ins. If you want to lay down a guitar or soft synth part using plug-ins, try to do so early in the recording process.
    • Sometimes there are two latency adjustments: A Control Panel for the audio interface sets a minimum amount of latency, and the host can increase from this value if needed. Or, the host may “lock” to the control panel setting.
    • Seek out and download your audio interface’s latest drivers. Dedicated programmers are mainlining Pepsi and eating pizza as we speak so that we can have more efficient audio performance; don’t disappoint them.
    • If you have multiple soft synths playing back at once, use your program’s “freeze” function (if available) to disconnect some synths from the CPU. Or, render a soft synth’s output as a hard disk audio track (then remove the soft synth), which is far less taxing on our little microchip buddies. Hint: If you retain the MIDI track driving the soft synth, which places virtually no stress on your CPU, you can always edit the part later by re-inserting the soft synth.

     

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.





