Convolution-based reverb offers realism—but what are the tradeoffs?
By Craig Anderton
Acoustic spaces create the most natural reverb, but there’s only one “preset” per space, and try fitting a concert hall in your project studio. Granted, some people run mics and speakers into a tiled room (e.g., a bathroom) for a decent, tight-sounding reverb. But emulating the classic concrete-room sound heard on so many great recordings, let alone other acoustic environments, is not an easy task.
OLD SKOOL DSP
Before checking out convolution reverb, consider synthesized digital reverb, which has ruled the digital reverb world for several decades (Fig. 1).
Fig. 1: Ableton Live’s reverb is an example of an algorithmic type that synthesizes the sonic effects of being in a reverberant space.
These generally break the reverb effect into two processes. The first is “early reflections”: the initial sound that occurs when sound waves first bounce off various surfaces. Then comes the reverb “tail,” more of a wash of sound created by feeding those reflections into an engine that calculates and synthesizes a gazillion additional reflections, each with its own amplitude and frequency-response variations. Most algorithmic reverbs also offer a diffusion parameter, which determines whether the echoes are more blended or discrete.
Many, if not most, digital reverbs are not true stereo devices; they sum stereo inputs to mono, then synthesize a stereo space. Unlike “real world” reverb, where different sound sources in a space produce different reverb effects depending on their location, this approach subjects every sound source, regardless of location, to the same “one size fits all” reverb. Most of the time that’s fine, but for complex orchestral emulations, standard reverb algorithms lack precision.
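For the curious, the two-stage recipe above (early reflections feeding a synthesized tail) can be sketched in a few lines of Python. This is a toy Schroeder-style reverb, not any commercial algorithm; the delay times and feedback gains are illustrative values, and numpy is assumed.

```python
import numpy as np

def early_reflections(x, sr, taps=((0.011, 0.5), (0.019, 0.4), (0.027, 0.3))):
    """Add a few discrete echoes; taps are (delay in seconds, gain) pairs."""
    out = np.copy(x)
    for delay, gain in taps:
        d = int(delay * sr)
        out[d:] += gain * x[:len(x) - d]
    return out

def feedback_comb(x, sr, delay, feedback):
    """Recirculating delay line: the basic building block of a synthesized tail."""
    d = int(delay * sr)
    y = np.copy(x)
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]
    return y

def reverb(x, sr):
    """Early reflections feeding a bank of parallel combs (the 'tail')."""
    er = early_reflections(x, sr)
    combs = [(0.0297, 0.77), (0.0371, 0.73), (0.0411, 0.71), (0.0437, 0.68)]
    tail = sum(feedback_comb(er, sr, d, g) for d, g in combs) / len(combs)
    return 0.6 * x + 0.4 * tail  # simple dry/wet mix

sr = 8000                # low sample rate keeps the toy example fast
click = np.zeros(sr)     # one second of silence...
click[0] = 1.0           # ...with a single click as the test signal
wet = reverb(click, sr)
```

Feeding a click through the chain and listening to (or plotting) `wet` shows exactly the structure described above: a few discrete early echoes, then a dense, decaying wash.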
ENTER CONVOLUTION REVERB
Convolution reverbs are based on samples rather than synthesis, which produces a highly realistic sound. Technically speaking, convolution is a mathematical operation: convolving two signals in the time domain is equivalent to multiplying their spectra in the frequency domain. For reverb, one of those signals is the sound source itself, and the other is an impulse (more precisely, an impulse response), a recording that captures an acoustic space’s characteristics.
As an analogy, think of the impulse as a “mold” of a particular space into which the sound is “poured.” If the space is a concert hall, the sound takes on the characteristics of the concert hall. But anything can be used as an impulse. For example, convolving a synthesized guitar patch with an impulse recording of an acoustic guitar body creates a more realistic guitar sound. Impulses exist not just for famous concert halls, clubs, etc., but also for amplifiers, tunnels, resonant structures, spring reverbs, filters, and the like. I’ve even used drum loops as impulses—wild.
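To make the “multiply two spectra” idea concrete, here’s a minimal numpy sketch of fast convolution: transform the dry signal and the impulse with an FFT, multiply the spectra, and transform back. The four-sample click and three-tap “room” are made-up toy signals, not real impulses.

```python
import numpy as np

def convolve_reverb(dry, ir):
    """Apply an impulse response by spectral multiplication (fast convolution)."""
    n = len(dry) + len(ir) - 1      # length of the full convolution result
    D = np.fft.rfft(dry, n)         # spectrum of the source
    I = np.fft.rfft(ir, n)          # spectrum of the "mold"
    return np.fft.irfft(D * I, n)   # multiply, then back to the time domain

dry = np.array([1.0, 0.0, 0.0, 0.0])   # a click
ir = np.array([1.0, 0.5, 0.25])        # toy three-tap "room"
wet = convolve_reverb(dry, ir)         # same result as np.convolve(dry, ir)
```

The FFT route matters because direct time-domain convolution with a several-second impulse (hundreds of thousands of samples) would be far too slow for real-time use.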
The tradeoff has traditionally been the usual sampler-vs.-synthesizer issue: lack of parameter control. But just as some companies have figured out how to get “inside the sample,” convolution reverbs are getting more flexible as well. Waves broke through with the IR-1, which allowed tailoring the sound produced by the impulse (Fig. 2).
Fig. 2: Waves' IR-1 started the trend toward adding more editing capabilities to convolution reverbs.
Most modern convolution reverbs are quite editable, and as easy to use and understand as standard reverbs. You may notice that changing parameters feels a little slow due to all the calculations being performed, but this isn’t a big deal. Thanks to today’s faster processors, convolution reverbs have become commonplace—several virtual instruments, like Native Instruments’ Kontakt and MOTU’s Ethno Instrument, include convolution reverbs, as do several DAWs (Fig. 3).
Fig. 3: The Open Air reverb included with PreSonus Studio One Pro can open impulses from other sources, like the one from Sonar’s PerfectSpace convolution reverb.
As for the impulses, they’re created by recording a space’s reverberant response after “exciting” the room, either with a set of sweep tones designed for impulse recording or by firing a shot from a starter pistol. The goal is an excitation signal that covers the entire audible spectrum, so the recording captures the space’s full frequency response.
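Here’s a rough numpy sketch of how sweep-based capture works: play a sine sweep through the space, record the result, then “deconvolve” the recording by the sweep (divide their spectra) to recover the impulse response. The sample rate, sweep range, and the simulated two-tap “room” are all made-up values for illustration.

```python
import numpy as np

sr = 8000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
f0, f1 = 20.0, 3500.0
# Linear sine sweep from f0 to f1 Hz: the "excitation" signal
sweep = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2))

# Pretend the "room" is a known two-tap echo, and simulate the recording
true_ir = np.zeros(200)
true_ir[0], true_ir[150] = 1.0, 0.4
recording = np.convolve(sweep, true_ir)

# Deconvolve: divide spectra, with a small epsilon to avoid divide-by-zero
n = len(recording)
S = np.fft.rfft(sweep, n)
R = np.fft.rfft(recording, n)
ir_est = np.fft.irfft(R * np.conj(S) / (np.abs(S) ** 2 + 1e-8), n)[:200]
```

With a real room, `recording` would come from a microphone rather than `np.convolve`, but the deconvolution step is the same: it strips the sweep out of the recording, leaving just the space’s response.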
THE REST OF THE STORY
Convolution-based processing slurps CPU power, but fortunately, host software “freeze” functions (which render a track’s effects to hard disk, then disconnect the effects) tend to make that less of a drawback, even with slower CPUs.
Another issue is latency, which the convolution process adds. For reverb, it’s not too serious; think of it as free pre-delay. Early convolution reverbs had latencies in the hundreds of milliseconds, but given a fast enough CPU, many now come in under 5–10 ms. If the delay is problematic, you can bounce just the processed sound to a track, then slip it earlier in time to compensate.
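Slipping the bounced track is just arithmetic: convert the reported latency from milliseconds to samples, then slide the audio earlier by that amount. A minimal sketch, assuming numpy and a hypothetical latency figure:

```python
import numpy as np

def slip_earlier(track, latency_ms, sr):
    """Slide a bounced wet track earlier by the convolution latency.

    Drops the first latency_ms worth of samples and pads the end with
    silence, so the track lines up with the dry audio again.
    """
    n = round(latency_ms / 1000.0 * sr)  # latency in samples
    return np.concatenate([track[n:], np.zeros(n)])

# e.g., 10 ms at 44.1 kHz means sliding the track 441 samples earlier
wet = np.arange(8.0)                 # stand-in for bounced audio
aligned = slip_earlier(wet, 10, 200) # toy sample rate: 10 ms = 2 samples
```

Most DAWs do this for you via automatic plug-in delay compensation; the manual slip trick is mainly for hosts (or plug-in formats) that don’t report latency correctly.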
Finally, a convolution reverb is only as good as its impulses: if someone recorded a room impulse with a tinny-sounding mic, you’ll get a tinny-sounding room. You also want a reverb package that supplies plenty of impulses, so you can really explore convolution reverb’s power. Note that you can also download free impulses from www.noisevault.com, and some are quite good; the site also hosts discussions and news about convolution-based reverbs.
But whether you use a stand-alone convolution reverb, the one bundled into a host program, or even one included with an instrument, you’ll find convolution offers extremely convincing reverb emulations—and more.
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.