    How to Record Synthesizers and Electronic Instruments

    By Craig Anderton

    There's much more to recording an electronic instrument than just feeding the output to an empty track

     


     

    Recording an electronic instrument is simple, right? You just take the output, direct inject it into the mixing console (or insert a plug-in virtual instrument directly into a DAW’s mixer), and set a reasonable level. And yes, that approach works just fine…provided you want to sound just like everyone else who is doing precisely the same thing.

    But you wouldn’t mic a drum set by taking the first mic you found and pointing it in the general direction of the drummer, nor would you record an electric guitar by just plugging it into a mixer. A little extra effort spent on finding the optimum way to record an electronic instrument can make a tremendous difference in the overall “feel” of any track that incorporates synthesized sound.

    Granted, synths and drum machines don’t need miking. But other issues, such as an unnatural sound when mixed with acoustic instruments, background noise, lack of expressiveness, and timing inconsistencies, should be addressed to get the most out of your silicon-based musical buddies.

    That’s what this article is about, but first, a word of warning: rules were made to be broken. There is no “right” or “wrong” way to record, only ways that satisfy you to a greater or lesser degree. Sometimes doing the exact opposite of what’s expected gives the best results (and lands you on the charts!). So take the following as suggestions, not rules, that may be just what’s needed when you want to spice up an otherwise ordinary synth sound.

     

    THE SYNTHESIZER’S SECRET IDENTITY

    One crucial aspect of recording a synth is to define the desired results as completely as possible. Using synths to reinforce guitars on a heavy metal track is a completely different musical task compared to creating an all-synthesized 30-second spot. Sometimes you want synths to sound warm and organic, but if you’re doing techno, you’ll probably want a robot, machine-like vibe (with trance music, you might want to combine both possibilities).

    So, analyze your synth’s “sonic signature” - is it bright, dark, gritty, clean, warm, metallic, or something else altogether? Whereas some people attach value judgements to these different characteristics, veteran synthesists understand that different synthesizers have different general sound qualities, and choose the right sound for the right application. For example, analog synths - and virtual instruments that model analog synths (Fig. 1) - tend to use lowpass filters for shaping sounds that reduce high frequencies, producing a "warmer" sound.

     


    Fig. 1: Arturia's emulation of Bob Moog's Minimoog delivers a warm, analog sound in a virtual instrument - and does it so well that Bob Moog himself endorsed it.

     

    Digital samplers generally include lowpass filters too, but their “native” sound tends to be brighter. So if you’re using a synthesizer with acoustic instruments like guitar, voice, piano, etc. - which naturally don’t have a lot of high-frequency energy - you might find that an analog hardware synth, or virtual analog synth, more closely matches the characteristics of “real” acoustic and electric instruments and blends in better.

    Of course you could use equalization to tame an overly bright synth, but there’s a subtle difference between an instrument’s inherent characteristics and the modifications you can make to those characteristics. In a similar vein, a synth’s lo-fi options (or output passed through a lo-fi processor, like something that reduces bit resolution to 8 or 12 bits; see Fig. 2) may offer just enough “grunge” to fit in a little better with rock material.

     


    Fig. 2: Best Service's Drums Overkill virtual instrument works with Native Instruments' Kontakt Player. The Drums Overkill instrument itself has a lo-fi processor (outlined in red), and Kontakt's mixer allows inserting the same lo-fi processor as well, which is outlined in light blue.

     
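    The bit-reduction “grunge” described above is easy to sketch numerically. The following Python/NumPy fragment is purely illustrative (it isn’t the algorithm of any particular plug-in): it quantizes each sample of a float signal to an 8- or 12-bit grid, which produces the stair-step distortion characteristic of early samplers.

```python
import numpy as np

def bitcrush(signal, bits):
    """Quantize a float signal in [-1.0, 1.0] to the given bit depth,
    adding stair-step quantization "grunge"."""
    levels = 2 ** (bits - 1)            # e.g. 128 steps per polarity at 8 bits
    return np.round(signal * levels) / levels

sr = 44100                              # sample rate (assumed)
t = np.arange(sr) / sr
tone = 0.8 * np.sin(2 * np.pi * 220 * t)  # stand-in for a synth output

crushed = bitcrush(tone, 8)             # 8-bit version; try 12 for milder grit
```

The lower the bit depth, the larger the quantization steps and the more audible the added distortion, which is exactly the “fits in with rock material” effect the article describes.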

    For background music for commercial videos, I often pull out the “bright guys”—FM synths (like Native Instruments’ FM8) and other plug-ins with the filtering bypassed. These give more of an edge at lower volumes, and their “clean” qualities leave space for narration, effects, and other important sonic elements.

    So, start with as close an approximation as possible to the desired result. But even if you don’t have an arsenal of synths, keep your final goal in mind. There’s lots you can do to influence the overall timbre of a synthesizer and achieve that goal.

     

    SPACE: THE FINAL FRONT EAR

    We have two ears, and listen through air. The sound we hear is influenced by the weather, the distance to the sound source, whether we’ve listened to too much loud music on headphones, the shape of our ears, and many other factors. Hardware or virtual synths generate electrical signals that need never reach air until we hear the final mix, but there are compelling reasons to avoid always going direct with hardware, or staying “in the box” with virtual instruments.

    Compared to acoustic instruments, synth sounds are relatively static - especially since the rise of sample-playback machines. Yet our ears are accustomed to hearing evolving, complex acoustical waveforms that are very much unlike synth waveforms, and creating a simple acoustic environment for the synth is one way to end up with a more interesting, complex sound. This can also help synths blend in with tracks that include lots of miked instruments, because the latter usually include some degree of room ambience (even with fairly “dead” rooms).

    One technique to synthesize an acoustic environment involves using signal processors. Try sending the synthesizer output through a reverb unit set to the sound of a small, dark room with very few (if any) first reflection components (Fig. 3). This should be just enough to give the synthesized sound a bit of acoustic depth.

     


    Fig. 3: Adding a subtle, small room effect - in this case, using IK Multimedia's CSR room emulation - can help make an electronic instrument fit in better with other tracks.

     

    When the synth and other instruments go through a main hall reverb bus during mixdown, they’ll mesh together a lot better. Another trick is to add two or three very short delays (20-50 ms, no feedback) mixed fairly far down. A stereo delay unit works just fine. Delays this short can add “comb filtering” effects that alter the frequency response in the same way that a real room does.
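    The short-delay trick above can be sketched in a few lines of Python/NumPy. This is a conceptual illustration (the sample rate and delay times are just the article’s suggested ranges): each feedback-free delay mixed below the dry signal creates the comb-filtered frequency response that mimics room reflections.

```python
import numpy as np

sr = 44100  # sample rate (assumed)

def add_short_delays(x, delays_ms, wet=0.25):
    """Mix in two or three very short, feedback-free delays, well below
    the dry level. Delays in the 20-50 ms range comb-filter the response
    much as real room reflections do."""
    y = x.copy()
    for ms in delays_ms:
        d = int(sr * ms / 1000)         # delay time in samples
        delayed = np.zeros_like(x)
        delayed[d:] = x[:-d]            # shift the signal later by d samples
        y = y + wet * delayed           # no feedback, modest wet level
    return y

# example: a dry synth note picks up a subtle sense of space
t = np.arange(sr) / sr
synth_track = 0.5 * np.sin(2 * np.pi * 220 * t)
roomy = add_short_delays(synth_track, [20, 35, 50])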

    You may want to create a different type of acoustic environment than a room, such as a guitar amp for electric guitar patches. Amps generally add distortion, equalization, limiting, and speaker simulation. Feeding the synth through an amp sim (Waves GTR, Native Instruments Guitar Rig Pro, IK Multimedia AmpliTube, Line 6 POD Farm 2, Peavey ReValver, etc.) can give a sound with much more character. In fact, going through the sim might add too much character, in which case putting the effect in parallel, or in a bus that picks off some but not all of the synth sound, might give the ideal result.

    A second way to create an acoustic environment is to use the Real Thing, especially when recording a hardware synthesizer. A vintage tube guitar amp is a truly amazing signal processor, even when it’s not adding distortion; plug your synth into it and stick a mic in its face. The sound is very, very different compared to going direct.

    Virtual instruments can take advantage of this technique too; just pretend you’re re-amping a guitar track. Send the virtual instrument output directly to a hardware audio output on your computer’s audio interface (this assumes that your interface has multiple outputs), run it through your hardware processor of choice, then feed the hardware out into a spare audio interface input and record this signal in your DAW (Fig. 4). There will likely be some delay due to going from digital to analog then back to digital again, but you can always compensate for this by “nudging” the track a little bit earlier.

     


    Fig. 4: Many programs, such as Cakewalk Sonar (shown here), let you insert a hardware processor as if it were a software plug-in.

     

    Another way to add the feel of an acoustic space to a synth is to mix in a bit of miked sound of you playing the keys (sometimes a contact mic works best). This should be mixed very subtly in the background—just noticeable enough to give a low-level aural “cue.” You may be surprised at how much this adds a natural sound quality to synthesized keyboards.

     

    “BUILDING BLOCK” SYNTHESIS

    As noted earlier, different forms of synthesis have different strengths, so layering several synths can provide an interesting composite timbre. With hardware, this involves daisy-chaining multiple synths and setting them all to the same MIDI channel; with virtual instruments, you can send a MIDI track output to multiple synths, but if you can only drive one synth at a time, you can always clone the MIDI track and assign each clone to a different synth.

     As one example of why layering can be useful, every time you play a sample it will exhibit the same attack characteristics. Sure, you can do tricks like velocity-switching or sample start point changes, but a better approach is to layer something like an FM synth programmed to produce a more complex transient. FM synths don’t have the same “photographic” level of realism as samplers, but can produce wide timbral variations—particularly on a sound’s attack—that are keyed to velocity. I’ve used this to good advantage on harp and plucked string sounds, where the FM synth provides the pluck, and the sampler, the body of the sound. Grafting the two elements together produces a far more satisfying effect than either one by itself.

    There’s a caution, though. If the two sounds sustain for any length of time, the timbral difference may become too noticeable. Therefore, you might want to set a fairly short decay on the “attack” sound and a bit of an attack rise on the “sustain” sound so that it doesn’t overwhelm the attack component.
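    The envelope shapes suggested above - a short decay on the “attack” layer, a slight rise on the “sustain” layer - can be sketched numerically. In this Python/NumPy illustration the two sine waves are just stand-ins for the FM pluck and the sampled body, and the envelope times are illustrative values, not prescriptions:

```python
import numpy as np

sr = 44100                              # sample rate (assumed)
t = np.arange(int(sr * 0.5)) / sr       # half a second of audio

# "attack" layer: fast decay so its timbre doesn't linger into the sustain
attack_env = np.exp(-t / 0.03)          # ~30 ms decay (illustrative)
# "sustain" layer: slight attack rise so it doesn't mask the transient
body_env = np.minimum(t / 0.02, 1.0)    # ~20 ms rise (illustrative)

attack_layer = attack_env * np.sin(2 * np.pi * 880 * t)  # stand-in: FM pluck
body_layer = body_env * np.sin(2 * np.pi * 220 * t)      # stand-in: sampled body
combined = 0.5 * attack_layer + 0.5 * body_layer
```

The attack layer dominates for the first few tens of milliseconds and then gets out of the way, so the timbral mismatch between the two sources never becomes obvious.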

    And here’s a tip along the same lines for the terminally lazy: for an instantly bigger synthesized sound for acoustic instruments, layer another synth and call up a patch with a name similar to the one you’re already using (e.g., layer two different vibes or cello patches). This doesn’t always work, but it’s amazing how many times this will make a really cool sound (especially if one of the sounds is mixed fairly far back to provide support, rather than competing with the main sound).

    This technique also works fabulously with drum machines—just assign two or more different drum sounds to the same note. One great combination is a TR-808-type kick thud blended with a tight, dance-music thwack.

     

    TO BOUNCE, OR NOT TO BOUNCE?

    Some people wonder whether it’s best to run synth tracks as virtual instruments into the mix, or bounce them into audio tracks so you mix them as you would any other audio track. I highly recommend bouncing any synth tracks to audio, because virtual instrument settings may be harder to re-create at a later date should compatibility problems arise (e.g., the synth you used is no longer compatible with a newer operating system). Once something has been converted to audio, it’s there to stay.

    What’s more, an audio track will stress out your CPU less than a virtual instrument. This may be important with projects that have lots of tracks, or have a video track. Your DAW may also have a “freeze” function (Fig. 5), which is essentially the same thing as bouncing the instrument output to audio – but this leaves your instrument “on standby” should you want to edit it.

     


    Fig. 5: "Freezing" a track lets you treat a virtual instrument track as an audio track, which requires much less CPU power. In Ableton Live, frozen instruments are shown in an icy blue color.

     

    Even if you don’t use freeze and instead bounce an instrument to audio, at least save the synth patch you used and retain the MIDI track driving the instrument, in case you need to go back to the original track setup in the future and do some edits. Also note that many virtual instruments include effects such as chorusing, flanging, echo, reverb, distortion, equalization, etc. All things being equal, if you use these effects instead of separate plug-ins, then saving the synth preset saves any associated effects settings as well. Furthermore, bouncing the synth output to audio, or freezing the track, preserves the desired effects settings.

     

    AVOIDING DISTORTION WITH SYNTHESIZERS

    Synthesizers can have a huge dynamic range, to the point where peaks can create distortion (either internally within the synth, or when feeding an audio output). Proper synth programming can help keep this under control. Here are some tips:

    • Detuned oscillators, though they sound nice and fat, create strong peaks when the chorused waveform peaks occur at the same time. To solve this, drop one oscillator’s level about 30% to 50% below the other. The sound will remain animated, yet the peaks won’t be as drastic and will be less likely to cause distortion.
    • High-resonance filter settings are troublesome; hitting a note at the filter’s resonant frequency creates a radical peak. An easy fix is to follow the synth with a limiter or maximizer plug-in - some synths even have limiters built in (Fig. 6). Set the limiter controls for fast attack and moderate decay. As the limiter’s main function is to trap short peaks and transients, set the threshold fairly high, and use a very high compression ratio. This will leave most of the signal relatively unaffected, but peaks won’t exceed a safe, non-distorting level.

     


    Fig. 6: Cakewalk's Rapture virtual instrument has a limiter to cut excessive peaks down to size.

     

    • Remember that most synths have several level adjustments: mixes for individual oscillators, the envelope levels controlling DCAs, final output mixer, onboard signal processing levels, etc. For maximum dynamic range and minimum distortion, tweak these with the same care you would exercise when gain-staging a mixer.
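    The limiter behavior described in the bullets above - fast attack, high threshold, very high ratio - can be sketched as a naive per-sample gain computer. This Python/NumPy fragment is a conceptual illustration, not any particular plug-in’s algorithm; the threshold, ratio, and release values are stand-in numbers:

```python
import numpy as np

def simple_limiter(x, threshold=0.8, ratio=20.0, release=0.9995):
    """Naive peak limiter: an instant ("fast") attack clamps gain the
    moment a peak exceeds the threshold; a high ratio means peaks barely
    rise past it; gain then recovers slowly (release)."""
    gain = 1.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s) * gain
        if level > threshold:
            # limited output level for this over-threshold sample
            limited = threshold + (abs(s) - threshold) / ratio
            gain = min(gain, limited / abs(s))   # instant gain reduction
        else:
            gain = min(1.0, gain / release)      # slow gain recovery
        out[i] = s * gain
    return out
```

With the threshold set high and the ratio very high, most of the signal passes through untouched; only the short resonant peaks get trapped, which is exactly the “safe, non-distorting level” goal described above.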

     

    SYNTH PROGRAMMING TIPS FOR BETTER RECORDING

     

    • For a really wide stereo field without resorting to ambience processing, try using a synth’s “combi” mode (also called performance, multi, etc.) to combine several versions of the same program. Restrict the note range of each combi, then pan each range to a different place in the stereo field. For example, you could pan the lowest note range full left, the highest full right, and other ranges in between. Note that this won’t use up polyphony if the ranges don’t overlap.
    • LFO panning can produce an overly regular, boring sound with sustained sounds, but panning short, percussive sounds (e.g., claves, tambourine hits, cowbell, etc.) can work very well. Because the sound is short, you don’t hear it pan per se; instead, each time the sound appears, it will be in a slightly different place in the stereo field. If you have a rock-solid kick and snare, having a percussive part dancing around the stereo field can add considerable interest.
    • When analog tape was king, many engineers used it (knowingly or unknowingly) to perform soft limiting and generate some distortion on drum sounds by recording well into the red (overload) zone of their VU meters. With virtual instruments, try recording percussion and bass sounds with just a hint of distortion or saturation. This will give more punch, but depending on the signal source, you may not really notice the distortion because it clips only the extremely fast transients at the beginning of the sound.
    • To pull a synthesized sound out of a mix, add a little bit of a pitch transient using an oscillator’s pitch envelope. Here’s one of my favorite examples: Program a choir patch using two oscillators. Now apply a pitch envelope to one oscillator that falls down to proper pitch over about 50 ms, and a second pitch envelope to the second oscillator that rises to the proper pitch over about the same time period. Set the depth for a subtle effect. This creates a more interesting transient sound that draws the ear in, and makes the sound seem louder. Remove the pitch envelopes, and the voices appear to drop further back in the mix, even without a level change.
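    The two-oscillator pitch-transient trick in the last bullet can be sketched numerically. In this Python/NumPy illustration, the 50 ms glide time comes from the article; the one-semitone depth and 220 Hz pitch are assumed values chosen to make the effect visible:

```python
import numpy as np

sr = 44100                               # sample rate (assumed)
t = np.arange(int(sr * 0.5)) / sr        # half a second of audio

def gliding_pitch(f0, depth_semitones, glide_ms=50):
    """Pitch starts depth_semitones away from f0 and settles on f0
    after glide_ms - a one-shot pitch envelope."""
    env = np.clip(1.0 - t / (glide_ms / 1000.0), 0.0, 1.0)   # 1 -> 0
    return f0 * 2.0 ** (depth_semitones * env / 12.0)

f_fall = gliding_pitch(220.0, +1.0)      # oscillator 1 falls onto pitch
f_rise = gliding_pitch(220.0, -1.0)      # oscillator 2 rises onto pitch

# integrate instantaneous frequency to get phase, then mix the two
phase_fall = 2 * np.pi * np.cumsum(f_fall) / sr
phase_rise = 2 * np.pi * np.cumsum(f_rise) / sr
voice = 0.5 * np.sin(phase_fall) + 0.5 * np.sin(phase_rise)
```

For the first 50 ms the two oscillators sweep toward each other, creating the attention-grabbing transient; after that they sit exactly on pitch, so the sustained sound is unaffected.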

     

    MAKING TRACKS

    Remember, machines don’t kill music - people do. If your synths sound sterile on playback, roll up your sleeves and get to the source of the problem. Like the sound of most acoustic instruments, the human experience is fraught with complexity, imperfection, and magic. Introduce some of that spirit to your synth recordings, and they’ll ring truer to your heart, as well as to your music.

     

    Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.



