Creating Parallel Effects in DAW Software

Expand your sound with parallel effects

By Craig Anderton

    Back when hardware was king, creating parallel effects was pretty easy. You'd send an output into a Y-cable, split it to two different effects, and there you had it: Instant parallel processing, where one signal could take two different paths. One obvious use was creating stereo effects out of a mono source; for example, the parallel processing could consist of two chorus devices, or delays set to different delay times, each panned to opposite sides of the stereo field.
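One way to picture that Y-cable patch is as code: a single mono signal feeds two delays set to different times, with each delay panned hard to one side. Here's a minimal Python/NumPy sketch of the idea; the delay times and the 50/50 wet/dry mix are illustrative assumptions, not recommended settings.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def delay(x, time_s, mix=0.5):
    """Blend x with a copy of itself delayed by time_s seconds."""
    d = int(time_s * SR)
    delayed = np.concatenate([np.zeros(d), x])[: len(x)]
    return (1 - mix) * x + mix * delayed

mono = np.random.randn(SR)                 # stand-in for any mono source
left = delay(mono, 0.011)                  # path 1: short delay, panned hard left
right = delay(mono, 0.023)                 # path 2: longer delay, panned hard right
stereo = np.stack([left, right], axis=1)   # two parallel paths -> stereo image
```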

     

    Digital Audio Workstation (DAW) software is a different story. In almost all cases, the program will assume you want to put the effects in series, one right after another (Figs. 1 and 2).
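In code terms, a series insert chain simply means each effect processes the output of the effect before it, so the order matters. A tiny Python sketch (the effect functions are hypothetical placeholders, not any DAW's actual API):

```python
def series_chain(signal, effects):
    """Run a signal through inserts A, B, C... one right after another."""
    for fx in effects:
        signal = fx(signal)   # each insert feeds the next one in line
    return signal

# e.g. processed = series_chain(audio, [compressor, chorus, reverb])
```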

     


    Fig. 1: In Avid Pro Tools, each channel has five series inserts, A-E. The first channel has three effects inserted, the second channel has one effect inserted, while the third channel has no effects inserted.

     


Fig. 2: PreSonus Studio One Pro allows unlimited inserts; you can expand them to show a thumbnail of the settings, or collapse them to take up less space. Channels 1 and 3 show expanded effects, while channel 2 shows three collapsed effects.

     

There are some exceptions to serial effects inserts; Mackie's Tracktion lets you insert complex combinations of series and parallel effects within a track, and Ableton Live, starting with version 6, lets you create instrument racks with parallel effects (Fig. 3). But for most programs, you'll need to get a little creative.

     


    Fig. 3: Ableton Live makes it easy to create parallel effects chains. In this example, the parallel effects include a chain of series compression and saturation to create distortion (shown), as well as a parallel chain with delay and another with reverb.

     

    DO YOU COPY?

    One way to achieve parallel effects is to copy (clone) the track to which you want to apply the parallel effects, resulting in several parallel audio tracks. You then apply effects to these tracks as needed. For example, suppose you want to add a parallel effect to a piano track, where a noise gate lets through only the peaks; furthermore, this goes to a reverb that's panned far left. Meanwhile, a second noise gate sends a different set of peaks through a short delay, to a different reverb that's panned far right. You could do this with aux sends, but there's an alternative.

     

    For this example, we need three parallel tracks:

     

    • Straight piano only
    • Straight piano + noise gate + reverb1 (panned left)
    • Straight piano + noise gate + delay + reverb2 (panned right)

     

Copy the straight piano track two times for a total of three piano tracks. The first track is the "straight," unprocessed track. In the second track, insert the noise gate and reverb, then pan the track toward the left. For the third track, insert the noise gate, delay, and second reverb, then pan that track toward the right. (Of course, you could also slide the third track back a bit in time to create the delay, but sometimes it's a lot more convenient to just dial in a delay, particularly if you need to sync to tempo.)
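To make the routing concrete, here's a rough Python/NumPy sketch of those three tracks. The gate, delay, and "reverb" are deliberately crude stand-ins (the reverb is just a few decaying echoes), and the thresholds and times are assumptions for illustration only.

```python
import numpy as np

SR = 44100

def noise_gate(x, threshold):
    """Pass only samples whose level exceeds the threshold (the peaks)."""
    return np.where(np.abs(x) > threshold, x, 0.0)

def simple_delay(x, time_s):
    d = int(time_s * SR)
    return np.concatenate([np.zeros(d), x])[: len(x)]

def toy_reverb(x, time_s=0.05, decay=0.4, taps=6):
    """Very rough reverb substitute: a handful of decaying echoes."""
    out = x.copy()
    for i in range(1, taps + 1):
        out += (decay ** i) * simple_delay(x, i * time_s)
    return out

piano = np.random.randn(2 * SR) * 0.3            # stand-in for the piano track

track1 = piano                                   # straight, unprocessed
track2 = toy_reverb(noise_gate(piano, 0.6))      # gate -> reverb 1
track3 = toy_reverb(simple_delay(noise_gate(piano, 0.4), 0.12))  # gate -> delay -> reverb 2

# Track 1 stays centered; track 2 goes hard left, track 3 hard right.
left = 0.7 * track1 + track2
right = 0.7 * track1 + track3
stereo = np.stack([left, right], axis=1)
```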

     

    Because tracks in today's DAWs are aligned with sample accuracy (and assuming the effects paths have delay compensation), you won't hear any flamming, comb filtering, or other undesirable effects when you combine the tracks.
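A quick way to convince yourself of this: sum two sample-aligned copies of a signal and the level simply doubles, but offset one copy by even half a millisecond and you can notch frequencies right out of the result. A toy NumPy demonstration with a 1 kHz tone (the 22-sample offset is roughly half the tone's period at 44.1 kHz):

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 1000 * t)                         # 1 kHz test tone

late = np.concatenate([np.zeros(22), tone])[: len(tone)]    # copy arriving ~0.5 ms late

aligned = tone + tone                                       # sample-accurate clones
offset = tone + late                                        # misaligned clones

# Compare levels after the brief onset transient:
print(np.abs(aligned[1000:]).max())   # ~2.0: the level simply doubles
print(np.abs(offset[1000:]).max())    # near 0: 1 kHz nearly cancels (comb filtering)
```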

     

    "VIRTUAL MICS" WITH PARALLEL EQ

    Here's a real-world example of using parallel effects to create a wider stereo image (Fig. 4).

     

In some ways pianos are fun to record, because they generate sound over a wide area. Stick a couple of mics in the right places, and you'll end up with some great stereo imaging. But other instruments, such as classical guitar, accordion, percussion, etc., don't have a wide stereo image if you hear them from more than a few feet away—although up close, it can be a different story. If you're facing a guitarist, your right ear picks up some of the finger squeaks and string noise from the guitarist's fretting hand. Your left ear picks up some of the body's "bass boom"; although not as directional as the high-frequency finger noise, it still shifts the lower spectra somewhat to the left. Meanwhile, the main guitar sound fills the room, providing the acoustic equivalent of a "center channel."

     
Fig. 4: These three EQ curves (shown in Sonar), when panned as described and mixed for the proper balance, create a much larger image that belies the fact that the recording was done with a single mic.
     

    This all became very clear to me when recording a guitar/keyboard duo, where the keyboard had a nice spread but the guitar kept getting shoved to the center of the image. What to do? I tried using two mics on the guitar, but the phasing issues were unacceptable. Then I thought about what made the sound "wider" as you got closer, and a solution suggested itself. I've also used the following technique to stretch a piano and organ's image beyond what I could obtain simply by using two mics; in fact, this basic principle works for most sound sources where the bass doesn't need to be in the middle of the stereo image.

     

The first step in simulating the effect of being close to the guitar was to copy the original guitar track to two more tracks. The first clone provided the "squeak" component by including a highpass filter that cut off the low end starting around 1 kHz. This was panned toward the right. The second clone, for the "boom" channel, used a lowpass filter with a sharp cutoff from 400 Hz on up. This was panned to the left.

     

    Adding these two tracks to the main track pulled out some of the "finger squeaks" and "boom" components that were in the original sound, and positioned them in a more realistic stereo location. This also stretched the stereo image somewhat. And because these signals were extracted from one mic, there were none of the phasing problems associated with multiple mics.
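Here's a rough sketch of that signal flow in Python with SciPy filters. The 1 kHz and 400 Hz corner frequencies come from the description above; the filter orders, pan positions, and mix levels are assumptions, so treat them as starting points rather than a recipe.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
guitar = np.random.randn(2 * SR) * 0.2        # stand-in for the mono guitar track

# Highpass around 1 kHz for the "squeak" clone, sharp lowpass at 400 Hz
# for the "boom" clone.
hp = butter(4, 1000, btype="highpass", fs=SR, output="sos")
lp = butter(8, 400, btype="lowpass", fs=SR, output="sos")

squeak = sosfilt(hp, guitar)                  # finger-noise clone
boom = sosfilt(lp, guitar)                    # body-resonance clone

# Main track stays centered; the squeak leans right, the boom leans left.
left = 0.7 * guitar + 0.9 * boom + 0.1 * squeak
right = 0.7 * guitar + 0.1 * boom + 0.9 * squeak
stereo = np.stack([left, right], axis=1)
```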

     

As to mixing these three elements, the drastic amounts of highpass and lowpass filtering on the cloned channels brought their overall levels way down, even without touching the channel faders. If you isolate these tracks, it seems as if their impact would be non-existent due to the low level and restricted frequency range. But if you mix them in with the main channel, the entire sound comes to life. -HC-

     

     

     

     


     Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

     



