Everything posted by Anderton

  1. It's Not Just about Notes, but about Emotion by Craig Anderton Vocals are the emotional focus of most popular music, yet many self-produced songs don't pay enough attention to the voice's crucial importance. Part of this is due to the difficulty in being objective enough to produce your own vocals; luckily, I've been fortunate to work with some great producers over the years, and have picked up some points to remember when producing myself. So, let's look at a way to step back and put more EDR (Emotional Dynamic Range) into your vocals. WHAT IS EDR? Dynamics isn't just about level variations, but also emotional variations. No matter how well you know the words to a song, begin by printing out or writing a copy of the lyrics. This will become a road map that guides your delivery through the piece. Reviewing a song and showing where to add emphasis can help guide a vocal performance. Grab two different colored pens, and analyze the lyrics. Underline words or phrases that should be emphasized in one color (e.g., blue), and words that are crucial to the point of the song in the other color (e.g., red). For example, here are notes on the second verse for a song I recorded a couple years ago. In the first line, "hot" is an attention-getting word and rhymes with "got," so it receives emphasis. As the song concerns a relationship that revs up because of dancing and music, "music" is crucial to the point of the song and gets added emphasis. In line 2, "feel" and "heat" get emphasis, especially because "heat" refers back to "hot," and is foreshadowing to "Miami" in the fourth line. Line 3 doesn't get a huge emphasis, as it provides the "breather" before hitting the payoff line, which includes the title of the song ("The Miami Beat"). "Dancing" has major emphasis, "Miami beat" gets less because it re-appears several times in the tune . . . no point in wearing out its welcome. By going through a song line by line, you'll have a better idea of where/how to make the song tell a story, create a flow from beginning to end, and emphasize the most important elements. Also, going over the lyrics with a fine-tooth comb is good quality control to make sure every word counts. TYPES OF EMPHASIS Emphasis is not just about singing louder. Other ways to emphasize a word or phrase are: Bend pitch. Words with bent pitch will stand out compared to notes sung "straight." For example, in line 4 above, "dancing" slides around the pitch to add more emphasis. Clipped vs. sustained. Following a clipped series of notes with sustained sounds tends to raise the emotional level. Think of Sam and Dave's song "Soul Man": The verses are pretty clipped, but when they go into "I'm a soul man," they really draw out "soul man." The contrast with the more percussive singing in the verses is dramatic. Throat vs. lungs. Pushing air from the throat sounds very different compared to drawing air from the lungs. The breathier throat sound is good for setting up a fuller, louder, lung-driven sound. Abba's "Dancing Queen" highlights some of these techniques: the section of the song starting with "Friday night and the lights are low" is breathier and more clipped (although the ends of lines tend to be more sustained). As the song moves toward the "Dancing Queen" and "You can dance" climax, the notes are more sustained and less breathy. Timbre changes. Changing your voice's timbre draws attention to it (David Bowie uses this technique a lot). 
Doubling a vocal line can make a voice seem stronger, but I suggest placing the doubled vocal back in the mix compared to the main vocal—enough to support, not compete. Vibrato. Vibrato is often overused to add emphasis. You don't need to add much; think of Miles Davis, who almost never used vibrato, electing instead to use well-placed pitch-bending. (Okay, so he wasn't a singer...but he used his trumpet in a very vocal manner.) Generally, vibrato "fades out" just before the note ends, like pulling back the mod wheel on a synthesizer. This adds a sense of closure that completes a phrase. "Better" is not always better. Paradoxically, really good vocalists can find it difficult to hit a wide emotional dynamic range because they have the chops to sing at full steam all the time. This is particularly true with singers who come from a stage background, where they're used to singing for the back row. Lesser vocalists often make up for a lack of technical skill by craftier performances, and fully exploiting the tools they have. If you have a great voice, fine—but don't end up like the guitarist who can play a zillion notes a second, but ultimately has nothing to say. Pull back and let your performance "breathe." As vocals are the primary human-to-human connection in a great deal of popular music, reflect on every word, because every word is important. If some words simply don't work, it's better to rewrite the song than rely on vocal technique or artifice to carry you through. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. by Craig Anderton Determining an album's song order is never easy, partly because if you want to know for sure whether the order works or not, you need to listen to the entire project from start to finish. Only then do you realize there are some minor problems - like the first four songs all ending in fadeouts, or three consecutive songs featuring the same vocalist. So you try another order, and listen again... But there's a quicker way to come up with a possible song order: Use a spreadsheet to create a matrix that lists as many song parameters as possible - not just tempo and key - to help sort out what might give the best flow and coherence. Of course, different types of music require very different parameters, but the point of this article is to present a general approach - hopefully you can adapt this concept to your own music. CHOOSING THE PARAMETERS The more accurately you can quantify a song's characteristics, the easier it is to come up with a meaningful matrix. Fig. 1 shows how I used a spreadsheet program to create the matrix; let's discuss the parameter descriptions. Fig. 1: A spreadsheet can help you get an overview of your songs, making it easier to determine an album's song order. Note the use of color to differentiate the start and end of the CD's two "halves." Title, Tempo, and Key are self-explanatory. Attitude describes, however inadequately, the song's main emotional qualities. This parameter is needed mostly to avoid bunching up too many songs with the same kind of feel, but it also gives an idea of the basic emotional "road map." Main Lead describes what provides the main lead in the piece. Some of my tunes use actual vocals, some use vocal samples arranged to form a sort of lead line, while others have an instrumental lead (e.g., guitar). Guitar indicates the degree to which various tunes feature guitar (my primary instrument). For example, I didn't want all the songs that featured guitar solos to run together. Intro is how the song starts. I included this parameter because I once came up with a song order where two songs in a row started with sustained guitar fading in; separating the two worked much better. Out is how the song ends. For example, you don't want all the songs that fade out to occur one right after another. But also, by looking at each Out and its subsequent Intro, you can get a feel for how the songs hang together. Also note that the 1st and 7th songs are in blue, and the 5th and 11th songs in red. This is because I tend to think of a CD as having two distinct parts. This isn't just a throwback to the days of vinyl; by giving each half its own identity, I think it's a lot easier to listen to a CD all the way through, because the experience is more like listening to two shorter CDs back-to-back. On this CD, an "intermission" separates the two halves. This instrumental transition has no real tempo and consists primarily of long, dreamy lead guitar lines, so it's a good place to "reset" the rhythmic continuity and start over. The second half has a nice climb from 102 to 110 to 130, then a brief dip down to 125 before closing out at a more neutral 101.
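If you'd rather keep the matrix in code than in a spreadsheet, the same idea fits in a few lines of Python. The song data, parameter names, and rules below are hypothetical placeholders; the point is simply to flag obvious clashes (consecutive fadeouts, back-to-back songs with the same lead or the same kind of intro) before you commit to an order.

```python
# A toy version of the song-order matrix: each song is a row of parameters,
# and check_order() flags adjacent pairs that probably shouldn't sit together.
songs = [
    {"title": "Song A", "tempo": 102, "key": "Am", "lead": "vocal",  "intro": "drums",  "out": "fade"},
    {"title": "Song B", "tempo": 110, "key": "E",  "lead": "guitar", "intro": "guitar", "out": "cold"},
    {"title": "Song C", "tempo": 130, "key": "D",  "lead": "vocal",  "intro": "synth",  "out": "fade"},
    {"title": "Song D", "tempo": 125, "key": "G",  "lead": "vocal",  "intro": "drums",  "out": "cold"},
]

def check_order(order):
    """Return warnings about adjacent songs that share traits you may not want back to back."""
    warnings = []
    for first, second in zip(order, order[1:]):
        pair = f'{first["title"]} / {second["title"]}'
        if first["out"] == "fade" and second["out"] == "fade":
            warnings.append(f"{pair}: two fadeouts in a row")
        if first["lead"] == second["lead"]:
            warnings.append(f"{pair}: same main lead ({first['lead']})")
        if first["intro"] == second["intro"]:
            warnings.append(f"{pair}: both start with {second['intro']}")
    return warnings

for warning in check_order(songs):
    print("Check:", warning)
```

Shuffle the list (or loop over itertools.permutations for a short album) and keep the orders that come back with the fewest warnings; your ears still make the final call.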
TESTING THE ORDER Here are three useful tools for testing song orders. If you have a portable player like an iPod or smartphone, transfer the tunes to it and create various playlists. Listen to them and live with them for a while to determine which ones you like best. A similar idea is to burn a CD with all the tunes, and use a CD player that lets you program a particular song order. Create one huge sound file with all the cuts, then open this up in a digital audio editor capable of creating a playlist. Use the playlist to try out different orders. You can usually audition the playlist transitions, often with a user-settable pre- and post-roll time. Most CD-burning programs make it easy to arrange songs in a particular order, then play through them. Generally, it will also be easy to listen to the transitions between songs. And of course, once you get the order right, you can burn a CD. WHICH SPREADSHEET? It doesn't really matter what spreadsheet you use (the screenshot shows an old version of Microsoft's ubiquitous Excel; the OpenOffice spreadsheet works just fine too, and it's free). In fact, you don't really have to use a spreadsheet at all; a word processor will often do the job, or for that matter, paper and pencil. SETTING PRIORITIES This may seem like an overly clinical way to determine song order, but think of it as an idea-starter, not a dictator. At the very least, it will probably help indicate which pairs of songs work well together. The matrix also provides a point of departure, which is always easier than just starting with a "blank page." The final arbiter of a good order is your ears, but check out this approach and see if it's as helpful to you as it has been to me. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  3. Reverb is a crucial part of most vocal recordings, so make sure you get it right By Craig Anderton Reverb and vocals were made for each other; few recordings put the voice totally out front, with no ambience. However, there's much more to getting the right vocal reverb sound than just dialing up a preset and crossing your fingers. ONE REVERB OR MANY? Back in the stone age of recording, a recording had one reverb, and all signals were bused to it. Often the vocals sent more signal to it than some of the other instruments, but the result was a cohesive "group" sound. Later on, studios often used a specific reverb for vocals. Much of the motivation for doing this was to make the voice more distinctive, and if the studio had a plate reverb, that was often the reverb of choice because it tended to have a brighter, crisper sound than a traditional room reverb. This complemented the voice well, since voices tend not to have a lot of high-frequency energy. With the advent of digital reverb, some people went crazy—one reverb type on the voice, gated reverb on drums, some gauzy reverb on guitars, and maybe even one or two reverbs in an aux bus. The result is a sound that bears no resemblance to the real world. That in itself is not always a bad thing, but if taken to extremes your ears—which know what acoustical spaces sound like—recognize the sound as "phony." Unless you're going for a novelty effect, this can be a problem. If your digital reverb has a convincing plate algorithm, try that as a channel insert effect on vocals and use a good room or hall reverb in an aux bus for your other signals. To help create a smoother blend, send some of the vocal reverb to the main reverb (Fig. 1). Fig. 1: This mixer routing in Pro Tools shows a Universal Audio EMT140 plate reverb inserted in the vocal path, but with an additional send going to the main "hall" reverb that processes the other instruments. This will likely require dialing back the vocal reverb level a bit, as the main reverb will bring up the level somewhat. TO DIFFUSE, OR NOT TO DIFFUSE? A reverb's diffusion control increases the density of the echoes. Higher diffusion settings give a less "focused" sound, producing more of a "wash." This is helpful with percussive instruments, because percussive sounds create sharp echoes with digital reverb. Turning up diffusion gives a smoother sound. However, a voice isn't percussive, and high diffusion settings can produce an overly "thick" sound. This violates the First Rule of Vocal Reverb: The reverb should never "step on" the vocal. Instead, try low diffusion settings (Fig. 2). Fig. 2: Low diffusion settings, as shown here in the Waves Renaissance reverb, are often preferable for vocals compared to high diffusion settings. This produces a reverb sound that blends in with the vocals rather than sounding like a separate effect that lives apart from the voice. WHAT ABOUT EQ? Many reverbs have adjustable high- and low-frequency decays, or at least levels, with a crossover point between the two (Fig. 3). Fig. 3: The Breverb reverb from Overloud has separate decay times for the high and low bands. With voice, I tend to use a longer high decay than low decay. This gives a reverb splash to the "s" sounds and mouth artifacts, while reining in the low-frequency reverb components that have the potential to make the sound muddy. Remember, crispness with vocals is usually a good thing, because it increases intelligibility—as long as you didn't already add massive amounts of high-frequency EQ to the vocal itself.
Experimentation is key to finding the right crossover point, because of differences between male and female voices, tonality, range, etc. Start around 1kHz and move upward from there until you dial in the right sound. REALLY, THERE’S NOTHING LIKE AN ACOUSTIC SPACE Sure, digital reverb algorithms have made tremendous progress in the past few years. Nonetheless, there’s nothing quite like a real acoustic space to give an ambient quality that remains elusive to pin down in the digital domain. But this doesn’t mean you need a concert hall to get a good reverb sound. Even relatively small spaces, if they’re reflective enough, will do the job. Simply send an aux bus out to a speaker in your bathroom (remove any towels or soft surfaces, and pull shower curtains back), then put a mic in the bathroom and bring its output back into a mixer input. Send some of your vocal channel’s digital reverb output through an aux bus into this space, and add just enough of the acoustical reverb to provide the equivalent of “sonic caulking” to the digital reverb sound. The room will add early reflections that will be far more complex and interesting than all but the very best digital reverbs can deliver—and you might be very surprised just how much this can “sweeten” up your sound. And if you’re in an experimental frame of mind, consider adding some feedback to the room reverb: Send some of the room reverb return back into the send output feeding the speaker. Be very careful, though, and keep the monitors at extremely low levels as you work on the sound—you don’t want a major feedback blast! Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
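One related trick that isn't in the article above but follows the same logic: if your reverb plug-in doesn't offer separate high and low decay times, you can approximate "less low-frequency reverb" by high-passing the send that feeds the reverb, so the lows never excite the tail in the first place. Here's a minimal sketch, assuming Python with NumPy/SciPy; the corner frequency is just a starting point, and "vocal" stands in for your real track.

```python
# Roll off lows from the signal feeding the reverb so the tail stays out of the mud.
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_send(vocal, fs, corner_hz=250.0):
    """Return the signal you'd route to the reverb: the vocal with lows rolled off."""
    sos = butter(2, corner_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, vocal)

fs = 44100
vocal = np.random.randn(fs)                      # stand-in for real vocal audio
reverb_send = highpass_send(vocal, fs, corner_hz=250.0)
```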
  5. Six tips for your six string sim friends by Craig Anderton No one denies the convenience of amp sims; the controversy is always about sound quality. Fortunately, often a few choice edits are all it takes to change an amp sim sound from "okay" to "great." 1. Your strings, pick, and pickups all affect how an amp sim responds to your axe, so check the input level. If a patch is a fizzy, buzzy mess, reduce the incoming signal—either by turning down the amp sim's input control, or by reducing your audio interface's input level control. Also, sometimes moving your pickups even just a little bit further away from the strings will make a huge sonic improvement. As little as a few millimeters is often all you need for a much sweeter sound, because the extra distance reduces transients and pick noise; these often become a non-harmonic mess when processed through the sim's distortion. 2. The sim or interface input isn't the only place to check levels. Because you often want distorted guitar sounds, you might miss unintentional—and nasty—distortion caused by overloading an amp or effect stage within the amp sim. Many amp sims make the level-setting process fail-safe by offering a Learn option for at least the input, but others take this further (e.g., Native Instruments' Guitar Rig has a learn option to avoid internal overload within an amp—this is extremely helpful). Guitar Rig has a "learn" function to set levels at the input and output, as well as within the amp itself. In any event, if there's no "learn"-based assistance, be conservative when sending signals to an amp, then try turning the levels up. Often, you'll hear a very obvious transition point between intended distortion and internal distortion. 3. Trim the highs. You know how pulling back on your tone control can give a "rounder" sound when feeding something with distortion? Your tone control might not react the same way when running into a computer interface, so include an EQ processor before a distortion effect or distorted amp, and filter the highs. Inserting a de-esser (or a dynamics processor that includes a de-essing preset) before an amp sim can also improve the "sweetness" dramatically—adjust it so that playing hard reduces high frequencies. In Sonar X2, Waves' Renaissance De-Esser is before Softube's Metal Room amp sim. 4. Add a parametric EQ after an amp, and notch out annoying frequencies. There's an in-depth article on this technique, including plenty of audio examples, at http://www.harmonycentral.com/t5/Gear-Articles/How-to-Make-Amp-Sims-Sound-More-Analog/ba-p/34643372. Sometimes a couple of well-placed, steep notches can reduce "fizz" and make for a smoother sound. 5. Try out different cabinet and "virtual miking" options, as these can have a huge effect on the sound. However, the results may not be consistent—a mic choice that sounds perfect with one amp might not work with a different one. Try this: First audition the various amp/cabinet models and choose the one you think sounds best. Next, try all the mics (and mic positions, if you can change them) and choose your favorite. Then go back and run through the cabinets one more time; if a different cabinet now sounds better because you changed the virtual mic, keep that cabinet and run through the mic and mic placement options again.
Keep repeating this cycle until the sound is optimum. 6. Add some delay to the straight sound. You don't listen to a guitar by sticking your ear a couple of inches from a cabinet—you hear the sound in the context of an acoustical space, and a little delay can create a more complex amp sound with more depth and even a little bit of ambience. Some amp sims, like Line 6's POD Farm, include an "air," "room," or "ambience" parameter that provides the same kind of effect. POD Farm 2's Room parameter, which lets you place the cabinet closer to or further away from the listener, adds ambience. You don't want an obvious echo, so stay in the range of 15-25 ms (adjust for the best sound while listening in mono so you can catch any cancellation issues). If you don't have a delay that goes down to such short delay times, a chorus might do the job—set the initial delay to around 20 ms, and turn off any modulation. Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
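To hear what tip 6 is doing, here's a minimal sketch of a single short delay mixed behind the dry signal, plus a mono fold-down for checking comb-filter cancellations. It assumes Python with NumPy, and the sine wave is just a stand-in for a rendered amp-sim track.

```python
import numpy as np

def add_ambience(dry, fs, delay_ms=20.0, level=0.3):
    """Return a stereo pair: dry signal on both sides, a quiet delayed copy on one."""
    n = int(fs * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(n), dry[:-n]]) * level
    left = dry + delayed                     # delayed copy appears only on the left
    right = dry
    return np.vstack([left, right])

fs = 44100
t = np.arange(fs) / fs
guitar = np.sin(2 * np.pi * 196.0 * t)       # placeholder for your amp-sim output

stereo = add_ambience(guitar, fs, delay_ms=20.0, level=0.3)
mono = stereo.mean(axis=0)                   # audition this to catch cancellation issues
```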
  6. A Cable Is Not Just a Piece of Wire . . . By Craig Anderton If a guitar player hears something that an engineer says is impossible, lay your bets on the guitarist. For example, some guitarists can hear differences between different cords. Although some would ridicule that idea—wire is wire, right?—different cords can affect your sound, and in some cases, the difference can be drastic. What's more, there's a solid, repeatable, technically valid reason why this is so. However, cords that sound very different with one amp may sound identical with a different amp, or when using different pickups. No wonder guitarists verge on the superstitious about using a particular pickup, cord, and amp. But you needn't be subjected to this kind of uncertainty if you learn why these differences occur, and how to compensate for them. THE CORDAL TRINITY Even before your axe hits its first effect or amp input, much of its sound is already locked in due to three factors: pickup output impedance (we assume you're using standard pickups, not active types), cable capacitance, and amplifier input impedance. We'll start with cable capacitance, as that's a fairly easy concept to understand. In fact, cable capacitance is really nothing more than a second tone control applied across your pickup. A standard tone control places a capacitor from your "hot" signal line to ground. A capacitor is a frequency-sensitive component that passes high frequencies more readily than low frequencies. Placing the capacitor across the signal line shunts high frequencies to ground, which reduces the treble. However, the capacitor blocks lower frequencies, so they are not shunted to ground and instead shuffle along to the output. (For the technically minded, a capacitor consists of two conductors separated by an insulator—a definition which just happens to describe shielded cable as well.) Any cable exhibits some capacitance—not nearly as much as a tone control, but enough to be significant in some situations. However, whether this has a major effect or not depends on the two other factors (guitar output impedance and amp input impedance) mentioned earlier. AMP INPUT IMPEDANCE When sending a signal to an amplifier, some of the signal gets lost along the way—sort of like having a leak in a pipe that's transferring water from one place to another. Whether this leak is a pinhole or gaping chasm depends on the amp's input impedance. With stock guitar pickups, lower input impedances load down the guitar and produce a "duller" sound (interestingly, tubes have an inherently high input impedance, which might account for one aspect of the tube's enduring popularity with guitarists). Impedance affects not only level, but the tone control action as well. The capacitor itself is only one piece of the tone control puzzle, because it's influenced by the amp's input impedance. The higher the impedance, the greater the effect of the tone control. This is why a tone control can seem very effective with some amps and not with others. Although a high amp input impedance keeps the level up and provides smooth tone control action (the downside is that high impedances are more susceptible to picking up noise, RF, and other types of interference), it also accentuates the effects of cable capacitance. A cable that robs highs when used with a high input impedance amp can have no audible effect with a low input impedance amp. THE FINAL PIECE OF THE PUZZLE Our final interactive component of this whole mess is the guitar's output impedance.
This impedance is equivalent to sticking a resistor in series with the guitar that lowers volume somewhat. Almost all stock pickups have a relatively high output impedance, while active pickups have a low output impedance. As with amp input impedance, this interacts with your cable to alter the sound. Any cable capacitance will be accented if the guitar has a high output impedance, and have less effect if the output impedance is low. There's one other consideration: the guitar output impedance and amp input impedance interact. Generally, you want a very high amplifier input impedance if you're using stock pickups, as this minimizes loss (in particular, high frequency loss). However, active pickups with low output impedances are relatively immune to an amp's input impedance.
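To put rough numbers on how these three factors interact, here's a quick back-of-the-envelope calculation. It treats the pickup as a plain resistive source, which (as noted at the end of this article) ignores reactance; a real passive pickup is inductive, so you actually get a resonant peak rather than a gentle rolloff, but the trend is the same: more source impedance and more cable capacitance means less treble. All of the component values below are hypothetical, ballpark figures.

```python
# Source impedance, amp input impedance, and cable capacitance form a simple
# RC low-pass filter: f = 1 / (2*pi*R*C), with R = source impedance in
# parallel with the amp's input impedance.
import math

def cutoff_hz(source_ohms, amp_input_ohms, cable_pf):
    """-3 dB point of the low-pass formed by the source, the amp input, and the cable."""
    r = (source_ohms * amp_input_ohms) / (source_ohms + amp_input_ohms)
    c = cable_pf * 1e-12
    return 1.0 / (2.0 * math.pi * r * c)

# Hypothetical numbers: roughly 30 pF per foot of cable, a 1 megohm tube-amp input,
# and either a full-up volume control (source looks like about 10k) or a control
# rolled partway down (the source can look like 100k or more).
for cable_ft in (10, 20, 60):            # 60 ft stands in for a coil cord's worth of capacitance
    for source in (10_000, 100_000):
        fc = cutoff_hz(source, 1_000_000, 30 * cable_ft)
        print(f"{cable_ft:>3} ft cable, {source // 1000}k source: rolloff starts near {fc / 1000:.1f} kHz")
```

Run it and you'll see why a long, high-capacitance cord plus a rolled-down volume control sounds dull, while the same cord barely matters when the source impedance is low.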
THE BOTTOM LINE So what does all this mean? Here are a few guidelines. Low guitar output impedance + low amp input impedance. Cable capacitance won't make much difference, and the capacitor used with a standard tone control may not appear to have much of an effect. Increasing the tone control's capacitor value will give a more pronounced high frequency cut. (Note: if you replace stock pickups with active pickups, keep this in mind if the tone control doesn't seem as effective as it had been.) Bottom line: you can use just about any cord, and it won't make much difference. Low guitar output impedance + high amp input impedance. With the guitar's volume control up full, the guitar output connects directly to the amp input, so the same basic comments as above (low guitar output Z with low amp input Z) apply. However, turning down the volume control isolates the guitar output from the amp input. At this point, cable capacitance has more of an effect, especially if the control is a high-resistance type (greater than 250k). High guitar output impedance + low amp input impedance. Just say no. This maims your guitar's level and high frequency response, and is not recommended. High guitar output impedance + high amp input impedance. This is the common '50s/'60s setup with a passive guitar and tube amp. In this case, cable capacitance can have a major effect. In particular, coil cords have a lot more capacitance than standard cords, and can make a huge sonic difference. However, the amp provides minimum loading on the guitar, which, with a quality cord, helps to preserve high end "sheen" and overall level. Taking all the above into account, if you want a more consistent guitar setup that sounds pretty much the same regardless of what cable you use (and is also relatively immune to amplifier loading), consider replacing your stock pickups with active types. Alternately, you can add an impedance converter ("buffer board") right after the guitar output (or for that matter, any effect such as a compressor, distortion box, etc. that has a high input impedance and low output impedance). This will isolate your guitar from any negative effects of high-capacitance cables or low impedance amp inputs. If you're committed to using a stock guitar and high impedance amp, there are still a few things you can do to preserve your sound: Keep the guitar cord as short as possible. The longer the cable, the greater the accumulated cable capacitance. Cable specs will include a figure for capacitance (usually specified in "picofarads per foot"). If you make your own cables, choose cable with the lowest pF per foot, consistent with cable strength. (Paradoxically, strong, macho cables often have more capacitance, whereas lightweight cables have less.) Avoid coil cords, and keep your volume control as high up as possible. Don't believe the hype about "audiophile cords." They may make a difference; they may not. If you don't hear any difference with your setup, then save your money and go with something less expensive. Before closing, I should mention that this article does simplify matters somewhat because there's also the issue of reactance, and that too interacts with the guitar cable capacitance. However, I feel that the issues covered here are primarily what influence the sound, so let's leave how reactance factors into this for a later day. Remember, if your axe doesn't sound quite right, don't immediately reach for the amp: There's a lot going on even before your signal hits the amp's input jack. And if a guitarist swears that one cord sounds different from another, that could very well be the case—however, now you know why that is, and what to do about it. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  7. Vocoders Used to Be Expensive and Super-Complex - But No More by Craig Anderton Heard any robot voices lately? Of course you have, because vocoded vocals are all over the place, from commercials to dance tracks. Vocoders have been on hits before, like Styx's "Mr. Roboto" and Lipps Inc.'s "Funky Town," but today they're just as likely to be woven into the fabric of a song (Daft Punk, Air) as to be applied as a novelty effect. So, let's take a look at vocoder basics, and how to make them work for you. VOCODER BASICS Vocoders are best known for giving robot voice sounds, but they have plenty of other uses. A vocoder, whether hardware or virtual, has two inputs: instrument (the "carrier" input), and mic (the "modulation" input). As you talk into the mic, the vocoder analyzes the frequency bands where there's energy, and opens up corresponding filters that process the carrier input. This impresses your speech characteristics onto the carrier's signal. Clockwise from top: Reason BV512, Waves Morphoder, Ableton Live Vocoder, Apple Logic Evoc 20 Some programs, including Cubase, Logic, Sonar, Reason, and Ableton Live, bundle in vocoders. However, until recently, the ability to sidechain a second input to provide the modulator (or carrier) was difficult to implement. Two common workarounds are to include a sound generator within the plug-in and use the input for the mic, which is the approach taken by Waves' Morphoder; or to insert the plug-in in an existing audio track, and use what's on the track as the carrier.
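If you're curious what that band-by-band analysis looks like in its rawest form, here's a toy channel vocoder sketch. It's not how any particular plug-in is implemented; it just demonstrates the classic idea of splitting the modulator and carrier into matching filter bands and using the modulator's level in each band to control the carrier's level in that band. It assumes Python with NumPy/SciPy, and the "voice" and "synth" signals are stand-ins for real audio.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_sos(lo, hi, fs):
    return butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")

def envelope(x, fs, smooth_hz=50.0):
    """Crude envelope follower: rectify, then low-pass."""
    sos = butter(2, smooth_hz, btype="lowpass", fs=fs, output="sos")
    return np.maximum(sosfilt(sos, np.abs(x)), 0.0)

def vocode(modulator, carrier, fs, n_bands=16, f_lo=100.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # logarithmically spaced bands
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = band_sos(lo, hi, fs)
        out += sosfilt(sos, carrier) * envelope(sosfilt(sos, modulator), fs)
    return out / np.max(np.abs(out))                # normalize

# Stand-ins for real audio: pulsing noise as the "voice" and a sawtooth carrier.
fs = 44100
t = np.arange(fs * 2) / fs
voice = np.random.randn(len(t)) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
saw = 2 * (t * 110 % 1.0) - 1.0                      # harmonically rich 110 Hz carrier
robot = vocode(voice, saw, fs)
```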
VOCODER APPLICATIONS Talking instruments. To create convincing "talking instrument" effects, use a carrier signal rich in harmonics, with a complex, sustained waveform. Remember, even though a vocoder is loaded with filters, if nothing's happening in the range of a given filter, then that filter will not affect the sound. Vocoding an instrument such as flute gives very poor results; a guitar will produce acceptable vocoding, but a distorted guitar or big string pad will work best. Synthesizers generate complex sounds that are excellent candidates for vocoding. Choir effects. To obtain a convincing choir effect, call up a voice-like program (e.g., a pulse waveform with some lowpass filtering and moderate resonance, or sampled choirs) with a polyphonic keyboard, and use this for the carrier. Saying "la-la," "ooooh," "ahhh," and similar sounds into the mic input, while playing fairly complex chords on the synthesizer, imparts these vocal characteristics to the keyboard sound. Adding a chorus unit to the overall output can give an even stronger choir effect. Backup vocals. Having more than one singer in a song adds variety, but if you don't have another singer at a session to create "call-and-response" type harmonies, a vocoder might be able to do the job. Use a similar setup to the one described above for choir effects, but instead of playing chords and saying "ooohs" and "ahhhhs" to create choirs, play simpler melody or harmony lines and speak the words for the back-up vocal. Singing the words (instead of speaking them) and mixing in some of the original mic sound creates a richer effect. Cross-synthesis. No law says you have to use voice with a vocoder. For a really cool effect, use a sustained sound like a pad for the carrier, and drums for the modulator. The drums will impart a rhythmic, pulsing effect to the pad. Crowd sounds. Create the sound of a chanting crowd (think political rally) by using white noise as the carrier. This multiplies your voice into what sounds like dozens of voices. This technique also works for making nasty horror movie sounds, because the voice adds an organic quality, while the white noise contributes an otherworldly, ghostly component. Don't forget to tweak. Some vocoders let you change the number of filters (bands) used for analysis; more filters (e.g., 16 and above) give higher intelligibility, whereas fewer filters create a more "impressionistic" sound. Also, many speech components that contribute to intelligibility are in the upper midrange and higher frequencies, yet few instruments have significant amounts of energy in these parts of the frequency spectrum. Some vocoders include a provision to inject white noise (a primary component of unpitched speech sounds) into the instrument signal to allow "S" and similar sounds to appear at the output. Different vocoders handle this situation in different ways. The days when vocoders were noisy, complicated, expensive, and difficult-to-adjust hardware boxes are over. If you haven't experimented with a software vocoder lately, you just might be in for a very pleasant surprise. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  8. Yes, you really can use multiple audio interfaces simultaneously with a single computer by Craig Anderton You have a go-to interface that's great, but then one day you run out of mic inputs. Too bad your computer can't address more than one interface at a time . . . Or can it? Actually, both Macintosh and Windows computers can let you use more than one interface at a time, if you know the rules. For Windows, although with rare exceptions you can't aggregate ASIO devices, you can aggregate interfaces that work with WDM/KS, WASAPI, or WaveRT drivers. Just select one of these drivers in your host software, and all the I/O will appear as available inputs and outputs in your application (Fig. 1). Fig. 1: Sonar X1 is set to WDM/KS, so all the I/O from a Roland Octa-Capture and DigiTech's iPB-10 effects processor become available. With the Mac, you can aggregate Core Audio interfaces. Open Audio MIDI Setup (located in Applications/Utilities), and choose Show Audio Window. Click the little + sign in the lower left corner; an Aggregate Device box appears. Double-click it to change its name ("Apollo+MBobMini" in Fig. 2). You'll see a list of available I/O. Check the interfaces you want to aggregate, then check "Resample" for the secondary interface or interfaces (Fig. 2); this tells the computer to treat your primary, or unchecked, interface as the clock source. Now all input and output options will be available in your host program. Fig. 2: Universal Audio's Apollo is being supplemented by an Avid Mbox Mini. If you encounter any problems, just go to the Audio MIDI Setup program's Help, and search for Aggregation. Choose Combining Audio Devices, and follow the directions. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
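If you ever want to double-check what your system actually exposes after setting this up, a two-line script can list every audio device the OS reports. This assumes the third-party python-sounddevice package (pip install sounddevice); your DAW's audio preferences show the same device and channel lists.

```python
import sounddevice as sd

print(sd.query_devices())      # every input/output device the OS exposes, with channel counts
print(sd.default.device)       # the current default (input, output) device indices
```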
  9. There's much more to recording an electronic instrument than just feeding the output to an empty track by Craig Anderton Recording an electronic instrument is simple, right? You just take the output, direct inject it into the mixing console (or insert a plug-in virtual instrument directly into a DAW's mixer), and set a reasonable level. And yes, that approach works just fine…provided you want to sound just like everyone else who is doing precisely the same thing. But you wouldn't mic a drum set by taking the first mic you found and pointing it in the general direction of the drummer, nor would you record an electric guitar by just plugging it into a mixer. A little extra effort spent on finding the optimum way to record an electronic instrument can make a tremendous difference in the overall "feel" of any track that incorporates synthesized sound. Granted, synths and drum machines don't need miking, but there are other considerations - such as an unnatural sound when mixed with acoustic instruments, background noise, lack of expressiveness, and timing inconsistencies - that should be addressed to get the most out of your silicon-based musical buddies. That's what this article is about, but first, a word of warning: rules were made to be broken. There is no "right" or "wrong" way to record, only ways that satisfy you to a greater or lesser degree. Sometimes doing the exact opposite of what's expected gives the best results (and lands you on the charts!). So take the following as suggestions, not rules, that may be just what's needed when you want to spice up an otherwise ordinary synth sound. THE SYNTHESIZER'S SECRET IDENTITY One crucial aspect of recording a synth is to define the desired results as completely as possible. Using synths to reinforce guitars on a heavy metal track is a completely different musical task compared to creating an all-synthesized 30-second spot. Sometimes you want synths to sound warm and organic, but if you're doing techno, you'll probably want a robotic, machine-like vibe (with trance music, you might want to combine both possibilities). So, analyze your synth's "sonic signature" - is it bright, dark, gritty, clean, warm, metallic, or something else altogether? Whereas some people attach value judgements to these different characteristics, veteran synthesists understand that different synthesizers have different general sound qualities, and choose the right sound for the right application. For example, analog synths - and virtual instruments that model analog synths (Fig. 1) - tend to use lowpass filters for shaping sounds; these reduce high frequencies, producing a "warmer" sound. Fig. 1: Arturia's emulation of Bob Moog's Minimoog delivers a warm, analog sound in a virtual instrument - and does it so well that Bob Moog himself endorsed it. Digital samplers generally include lowpass filters too, but their "native" sound tends to be brighter. So if you're using a synthesizer with acoustic instruments like guitar, voice, piano, etc. - which naturally don't have a lot of high-frequency energy - you might find that an analog hardware synth, or a virtual analog synth, more closely matches the characteristics of "real" acoustic and electric instruments and blends in better. Of course you could use equalization to tame an overly bright synth, but there's a subtle difference between an instrument's inherent characteristics and the modifications you can make to those characteristics.
In a similar vein, a synth’s lo-fi options (or output passed through a lo-fi processor, like something that reduces bit resolution to 8 or 12 bits; see Fig. 2) may offer just enough “grunge” to fit in a little better with rock material. Fig. 2: Best Service's Drums Overkill virtual instrument works with Native Instruments' Kontakt Player. The Drums Overkill instrument itself has a lo-fi processor (outlined in red), and Kontakt's mixer allows inserting the same lo-fi processor as well, which is outlined in light blue. For background music for commercial videos, I often pull out the “bright guys”—FM synths (like Native Instruments’ FM8) and other plug-ins with the filtering bypassed. These give more of an edge at lower volumes, and their “clean” qualities leave space for narration, effects, and other important sonic elements. So, start with as close an approximation as possible to the desired result. But even if you don’t have an arsenal of synths, keep your final goal in mind. There’s lots you can do to influence the overall timbre of a synthesizer and achieve that goal. SPACE: THE FINAL FRONT EAR We have two ears, and listen through air. The sound we hear is influenced by the weather, the distance to the sound source, whether we’ve listened to too much loud music on headphones, the shape of our ears, and many other factors. Hardware or virtual synths generate electrical signals that need never reach air until we hear the final mix, but there are compelling reasons to avoid always going direct with hardware, or staying “in the box” with virtual instruments. Compared to acoustic instruments, synth sounds are relatively static - especially since the rise of sample-playback machines. Yet our ears are accustomed to hearing evolving, complex acoustical waveforms that are very much unlike synth waveforms, and creating a simple acoustic environment for the synth is one way to end up with a more interesting, complex sound. This can also help synths blend in with tracks that include lots of miked instruments, because the latter usually include some degree of room ambience (even with fairly “dead” rooms). One technique to synthesize an acoustic environment involves using signal processors. Try sending the synthesizer output through a reverb unit set to the sound of a small, dark room with very few (if any) first reflection components (Fig. 3). This should be just enough to give the synthesized sound a bit of acoustic depth. Fig. 3: Adding a subtle, small room effect - in this case, using IK Multimedia's CSR room emulation - can help make an electronic instrument fit in better with other tracks. When the synth and other instruments go through a main hall reverb bus during mixdown, they’ll mesh together a lot better. Another trick is to add two or three very short delays (20-50 ms, no feedback) mixed fairly far down. A stereo delay unit works just fine. Delays this short can add “comb filtering” effects that alter the frequency response in the same way that a real room does. You may want to create a different type of acoustic environment than a room, such as a guitar amp for electric guitar patches. Amps generally add distortion, equalization, limiting, and speaker simulation. Feeding the synth through an amp sim (Waves GTR, Native Instruments Guitar Rig Pro, IK Multimedia AmpliTube, Line 6 POD Farm 2, Peavey ReValver, etc.) 
can give a sound with much more character; in fact going through the sim might add too much character, in which case putting the effect in parallel, or in a bus that picks off some but not all of the synth sound, might give the ideal result. A second way to create an acoustic environment is to use the Real Thing, especially when recording a hardware synthesizer. A vintage tube guitar amp is a truly amazing signal processor, even when it’s not adding distortion; plug your synth into it and stick a mic in its face. The sound is very, very different compared to going direct. Virtual instruments can take advantage of this technique too; just pretend you’re re-amping a guitar track. Send the virtual instrument output directly to a hardware audio output on your computer’s audio interface (this assumes that your interface has multiple outputs), run it through your hardware processor of choice, then feed the hardware out into a spare audio interface input and record this signal in your DAW (Fig. 4). There will likely be some delay due to going from digital to analog then back to digital again, but you can always compensate for this by “nudging” the track a little bit earlier. Fig. 4: Many programs, such as Cakewalk Sonar (shown here), let you insert a hardware processor as if it was a software plug-in. Another way to add the feel of an acoustic space to a synth is to mix in a bit of miked sound of you playing the keys (sometimes a contact mic works best). This should be mixed very subtly in the background—just noticeable enough to give a low-level aural “cue.” You may be surprised at how much this adds a natural sound quality to synthesized keyboards. “BUILDING BLOCK” SYNTHESIS As noted earlier, different forms of synthesis have different strengths, so layering several synths can provide an interesting composite timbre. With hardware, this involves daisy-chaining multiple synths and setting them all to the same MIDI channel; with virtual instruments, you can send a MIDI track output to multiple synths but if you can only drive one synth at a time, you can always clone the MIDI track and assign each clone to a different synth. As one example of why layering can be useful, every time you play a sample it will exhibit the same attack characteristics. Sure, you can do tricks like velocity-switching or sample start point changes, but a better approach is to layer something like an FM synth programmed to produce a more complex transient. FM synths don’t have the same “photographic” level of realism as samplers, but can produce wide timbral variations—particularly on a sound’s attack—that are keyed to velocity. I’ve used this to good advantage on harp and plucked string sounds, where the FM synth provides the pluck, and the sampler, the body of the sound. Grafting the two elements together produces a far more satisfying effect than either one by itself. There’s a caution, though. If the two sounds sustain for any length of time, the timbral difference may become too noticeable. Therefore, you might want to set a fairly short decay on the “attack” sound and a bit of an attack rise on the “sustain” sound so that it doesn’t overwhelm the attack component. And here’s a tip along the same lines for the terminally lazy: For an instantly bigger synthesized sound for acoustic instruments, layer another synth and call up a like-named patch compared to what you’re using (e.g., layer two different vibes or cello patches). 
This doesn’t always work, but it’s amazing how many times this will make a really cool sound (especially if one of the sounds is mixed fairly far back to provide support, rather than competing with the main sound). This technique also works fabulously with drum machines—just assign two or more different drum sounds to the same note. One great combination is a TR-808-type kick thud blended with a tight, dance-music thwack. TO BOUNCE, OR NOT TO BOUNCE? Some people wonder whether it’s best to run synth tracks as virtual instruments into the mix, or bounce them into audio tracks so you mix them as you would any other audio track. I highly recommend bouncing any synth tracks to audio, because virtual instrument settings may be harder to re-create at a later date should compatibility problems arise (e.g., the synth you used is no longer compatible with a newer operating system). Once something’s has been converted to audio, it’s there to stay. What’s more, an audio track will stress out your CPU less than a virtual instrument. This may be important with projects that have lots of tracks, or have a video track. Your DAW may also have a “freeze” function (Fig. 5), which is essentially the same thing as bouncing the instrument output to audio – but this leaves your instrument “on standby” should you want to edit it. Fig. 5: "Freezing" a track lets you treat a virtual instrument track as an audio track, which requires much less CPU power. In Ableton Live, frozen instruments are shown in an icy blue color. Even if you don’t use freeze and bounce an instrument to audio, at least save the synth patch you used and retain the MIDI track driving the instrument in case you need to go back to the original track setup in the future, and do some edits. Also note that many virtual instruments include effects such as chorusing, flanging, echo, reverb, distortion, equalization, etc. All things being equal, if you use these effects instead of separate plug-ins, then saving the synth preset saves any associated effects settings as well. Furthermore, bouncing the synth output to audio, or freezing the track, preserves the desired effects settings. AVOIDING DISTORTION WITH SYNTHESIZERS Synthesizers can have a huge dynamic range, to the point where peaks can create distortion (either internally within the synth, or when feeding an audio output). Proper synth programming can help keep this under control. Here are some tips: Detuned oscillators, though they sound nice and fat, create strong peaks when the chorused waveform peaks occur at the same time. To solve this, drop one oscillator’s level about 30\\\%-50\\\% below the other. The sound will remain animated, yet the peaks won’t be as drastic and will be less likely to cause distortion. High-resonance filter settings are troublesome; hitting a note at the filter’s resonant frequency creates a radical peak. An easy fix is to follow the synth with a limiter or maximizer plug-in - some synths even have limiters built in (Fig. 6). Set the limiter controls for fast attack and moderate decay. As the limiter’s main function is to trap short peaks and transients, set the threshold fairly high, and use a very high compression ratio. This will leave most of the signal relatively unaffected, but peaks won’t exceed a safe, non-distorting level. Fig. 6: Cakewalk's Rapture virtual instrument has a limiter to cut excessive peaks down to size. 
Remember that most synths have several level adjustments: mixes for individual oscillators, the envelope levels controlling DCAs, the final output mixer, onboard signal processing levels, etc. For maximum dynamic range and minimum distortion, tweak these with the same care you would exercise when gain-staging a mixer. SYNTH PROGRAMMING TIPS FOR BETTER RECORDING For a really wide stereo field without resorting to ambience processing, try using a synth's "combi" mode (also called performance, multi, etc.) to combine several versions of the same program. Restrict the note range of each combi, then pan each range to a different place in the stereo field. For example, you could pan the lowest note range full left, the highest full right, and other ranges in between. Note that this won't use up polyphony if the ranges don't overlap. LFO panning can produce an overly regular, boring sound with sustained sounds, but panning short, percussive sounds (e.g., claves, tambourine hits, cowbell, etc.) can work very well. Because the sound is short, you don't hear it pan per se; instead, each time the sound appears, it will be in a slightly different place in the stereo field. If you have a rock-solid kick and snare, having a percussive part dancing around the stereo field can add considerable interest. When analog tape was king, many engineers used it (knowingly or unknowingly) to perform soft limiting and generate some distortion on drum sounds by recording well into the red (overload) zone of their VU meters. With virtual instruments, try recording percussion and bass sounds with just a hint of distortion or saturation. This will give more punch, but depending on the signal source, you may not really notice the distortion because it clips only the extremely fast transients at the beginning of the sound. To pull a synthesized sound out of a mix, add a little bit of a pitch transient using an oscillator's pitch envelope. Here's one of my favorite examples: Program a choir patch using two oscillators. Now apply a pitch envelope to one oscillator that falls down to the proper pitch over about 50 ms, and a second pitch envelope to the second oscillator that rises to the proper pitch over about the same time period. Set the depth for a subtle effect. This creates a more interesting transient that draws the ear in, and makes the sound seem louder. Remove the pitch envelopes, and the voices appear to drop further back in the mix, even without a level change. MAKING TRACKS Remember, machines don't kill music - people do. If your synths sound sterile on playback, roll up your sleeves and get to the source of the problem. Like most acoustic instruments, human performances are fraught with complexity, imperfection, and magic. Introduce some of that spirit to your synth recordings, and they'll ring truer to your heart, as well as to your music. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  10. Sometimes simple changes can lead to big results

by Craig Anderton

Want a bigger sound from your guitar? Consider using medium-gauge strings (.011 top instead of .010). Sure, you won’t be able to fly around on them as fast as you can on lighter-gauge strings – at first. But over time, you’ll not only get used to them, you’ll find that your guitar has a bigger sound, less of a tendency to “fret out,” and might even stay in tune a little better. Just remember to adjust the intonation to take the changed string diameter into account.

In fact, why stop there? You can create a custom set designed for your playing style. You might want a thicker gauge for your lower strings so they really ring out, and lighter strings on the top for bending (remember, you can angle the pickup, adjust the pole pieces, or both to compensate for differences in volume). You can even do something like use only one string for the first and second courses of a 12-string (as opposed to doubling them) so you can get big chords, but also bend strings easily.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  11. Don't just fix the hiss—noise gates have plenty of other uses

By Craig Anderton

Noise gates aren't as relevant as they were back in the analog days, when hiss was an uninvited intruder on anything you recorded. But noise gates can do some really cool special effects that have nothing to do with reducing hiss. This article shows how to make them a lot more interesting, and throws in a bunch of fun audio examples, too. But first, let's do some noise gate basics for the uninitiated.

NOISE GATE BASICS

A noise gate mutes its output when the input signal is at a low level, but lets higher-level signals pass through. Following are the typical adjustable parameters found in a noise gate, whether analog, digital, or plug-in (Fig. 1).

Fig. 1: The MasterWorks stereo noise gate from MOTU's Digital Performer.

Threshold: If the input level to the gate passes below the threshold, the gate "closes" and mutes the output. Once the signal exceeds the threshold, the gate opens again.

Range: Determines the difference in level between the gate on and gate off. If set to infinity, when the gate is closed, no signal goes through the gate at all.

Attack: This determines how long it takes for the gate to go from full off to full on once the input exceeds the threshold.

Hold: This leaves the gate open for a fixed amount of time after it's triggered. If the signal passes below the threshold before the hold time is up, the gate will still remain open.

Decay: This sets the time required for the gate to go from full on to full off once the signal falls below the threshold. Since decaying signals often criss-cross the threshold as they decay, increasing the decay time prevents "chattering."

Key source (also called key input or sidechain input): Normally, the gate opens and closes based on the input signal's amplitude. The key input allows patching in a different control signal for the gating action (for example, using a kick drum as the key signal to turn a sound on and off in time with the kick's rhythm). Note that in this example, the key input includes a filter so you can isolate a particular frequency range for triggering the gate. For example, if you want to trigger bass with a kick drum, you could reduce the highs to focus in on the kick sound only. (For a bare-bones code sketch of these parameters in action, see the end of this article.)

All right, let's get into applications.

SELECTIVE REVERB

I was using a premixed drum loop from the Discrete Drums Series 2 library, but in one particular part of the song, I wished that the snare—and only the snare—had some reverb. Although Series 2 is a multitrack library, I didn't want to go back and build up the drum loop from scratch. So why not just extract the snare drum sound, put some reverb on that, and mix it in with the drums?

Referring to Fig. 2, I copied the drum loop in Track 1 to a second track in Sonar X1 (if you were doing this in hardware, you'd split it into two mixer channel inputs). In the second track, there's EQ inserted to roll off all the low end, which took most of the kick out of the signal, as well as the high end, to reduce the level of cymbal crashes.

Fig. 2: Copying the drum loop allows for processing the copied track to augment the original one (click to enlarge).

The next step was to insert a noise gate in Track 2, and raise the gate threshold so that only the snare peaks made it through. These peaks fed the reverb, which dumped into the master bus along with the original drums. The end result: Reverb on the snare only, added in with the rest of the drums. Here's an audio example that demonstrates the above.
The first four measures are the loop only, the next four measures are the extracted/reverbed sound only, and the last four measures combine the two. See? It is possible! PSEUDO PEAK EXPANDER A similar trick works for situations where a drum part is not dynamic enough. Split the signal to an additional mixer channel, and set its gate so that only the peaks come through. Adjust this module's level so that the peaks add to the original sound, thus providing a boost only on peaks. This also works with any kind of percussive transient—pick noise on guitar, for example, or percussive B3-type transients. GROTESQUE DISTORTION Setting the attack and decay time to the minimum amount possible (typically 0 ms with attack, and anywhere from 0 to 10 ms with decay) can cause the gate to open and close so fast that it actually triggers on individual cycles of the input signal, causing distortion. This technique works especially well with highly compressed drums; in this context, the threshold now becomes an "ugly" control. At low thresholds, you'll hear an occasional buzz. Higher thresholds can give mean, nasty, spiky sounds. (Note: if there is a minimum decay time, this will set a limit on the highest frequencies that will be affected. For example, with 5 ms of decay, anything above about 200 Hz will not be affected on a per-cycle basis.) Here's an audio example of a defenseless drum loop being distorted by this technique. ATTACK DELAY UNIT Adding some attack time in the 100-250 ms range allows a signal to "swell" to its maximum level. For instance, if you pause briefly between notes, when a new note exceeds the threshold it will fade in over the specified amount of attack time. This can alter the attack characteristics of percussive instruments like piano and guitar, or add "brass-like" attacks to organ sounds. Another interesting use is with vocals, to reduce breath inhale noises. As the singer breathes in, the inhale fades in. This makes the breath sound less prominent, but doesn't cut it out completely (which can sound unnatural). However, for this application the decay time setting is also important. With a long decay, the gate may remain open during the space between notes, which prevents triggering a new attack when a new note plays. Conversely, too short a decay can result in the "chattering" effect described earlier. So, use the shortest possible decay time, consistent with a smooth sound. KICK DRUM "HUM DRUM" Here's a trick for hardware noise gates. Suppose you want to augment an existing kick drum sound with a monster rap kick, like that famous TR-808 rap sound. Here's a sneaky way to do it: Set a sine wave test tone oscillator somewhere between 40 and 60Hz, and plug it into a mixer channel module containing the noise gate. Patch the kick drum into the gate's key input and set the threshold relatively high, so that the kick exceeds the threshold for only a very short amount of time. Set the noise gate decay for the desired amount of oscillator decay. Hopefully your gate decay can go up to about 2 seconds, but even 1 second can do the job. Now whenever the kick drum hits, it opens up the gate for a fraction of a second and lets through the sine wave; the decay time then provides the desired fadeout. REAL-TIME MANIPULATION This real-time performance tip can sound very cool with hip-hop, techno, and other types of music that rely on variations within drum loops. 
With most loops, the snare and kick will reach the highest levels, with (typically) hi-hat below that and percussion (maracas, shakers, tambourine, etc.) mixed in the background. Tweaking the noise gate threshold in real time causes selected parts of the loop to drop out. For example, with the threshold at minimum, you hear the entire loop. Move the threshold up, and the percussion disappears. Move it up further, and the hi-hat drops out. Raise it even higher, and the snare and kick lose their decays and become ultra-percussive. Check out this audio example, with a loop being processed by a noise gate in real time. For this application, you want no attack time and a fairly short decay (about 50 ms). This can add really cool dynamics to a drum loop.

So who says noise gates have to be boring? Check out some of these techniques, and you'll have a whole different take on the little critters.
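To tie the gate parameters together, here is a bare-bones mono gate in Python/numpy (a sketch under simplified assumptions, not production code; real gates add hold, look-ahead, and smarter detection). It shows how threshold, attack, decay, and range interact.

import numpy as np

def noise_gate(x, sr, threshold=0.05, attack_ms=1.0, decay_ms=50.0, range_db=-80.0):
    """Minimal mono noise gate: opens above threshold, closes below.
    x is a float numpy array; sr is the sample rate in Hz."""
    attack_step = 1.0 / max(1, int(attack_ms * 0.001 * sr))   # gain ramp per sample while opening
    decay_step  = 1.0 / max(1, int(decay_ms * 0.001 * sr))    # gain ramp per sample while closing
    floor = 10 ** (range_db / 20.0)        # "range": how far down the closed gate sits
    env, gain = 0.0, 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), env * 0.999)     # crude peak follower so zero crossings don't chatter
        if env > threshold:
            gain = min(1.0, gain + attack_step)
        else:
            gain = max(0.0, gain - decay_step)
        y[i] = s * (floor + (1.0 - floor) * gain)
    return y

For the real-time manipulation trick above, you would simply sweep the threshold argument while the loop plays; for the selective reverb trick, you would raise it until only the snare peaks survive and feed the result to a reverb.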
  12. Sometimes Little Improvements Add Up To Big Improvements

By Craig Anderton

The whole is equal to the sum of its parts…as anyone who ever used analog tape will attest. Who can forget that feeling of hearing yet another contribution to the noise floor whenever you brought up a fader, as one more track of tape hiss worked its way to the output?

With digital recording, tape hiss isn’t an issue any more. But our standards are now more stringent, too. We expect 24-bit resolution, and noise floors that hit theoretical minimums. As a result, every little extra dB of noise, distortion, or coloration adds up, especially if you’re into using lots of tracks. A cheapo mic pre’s hiss might not make a big difference if it’s used only to capture a track of the lead singer in the punk band Snot Puppies of Doom, but if you’re using it to record twelve tracks of acoustic instruments, you will hear a difference.

I’ve often stated that all that matters in music is the emotional impact, but still, it’s even better when that emotional impact is married to pristine sound quality. So, let’s get out the "audio magnifying glass" (even though they don’t work for mixing, headphones are great when you need to really pay attention to details on a track), and clean up our tracks … one dB at a time.

PREVENTING THE NOISE PROBLEM

Even in today’s digital world, there’s hiss from converters, guitar amps, preamps, direct boxes, instrument outputs, and more. The individual contribution of one track may not be much, but when low-level signals aren’t masked by noise, you’ll hear a much more "open" sound and improved soundstage. (And if you don’t think extremely low levels of noise make that much of a difference, consider dithering—it’s very low level, but has a significant effect on our perception of sound.)

The first way to reduce noise is prevention. Maybe it’s worth spending the bucks on a better mic pre if it’s going to shave a few dB off your noise figure. And what about your direct box? If it’s active, it might be time for an upgrade there as well. If it’s not active but transformer-based instead, then that’s an issue in itself, as the transformer may pick up hum (first line of defense: re-orient it). Here are some additional tips:

Gain-staging (the process of setting levels as a signal travels from one stage to the next stage, so that one stage neither overloads the next stage nor feeds it too little signal) is vital to minimizing noise, as you want to send the maximum level short of distortion to the next stage. But be careful. Personally, I’d rather lose a few dB of noise figure than experience distortion caused by an unintentional overload.

Crackles can be even more problematic than hiss. Use contact cleaner on your patch cord plugs, jack contacts, and controls. Tiny crackles can be masked during the recording process by everything else that’s making noises, but may show up under scrutiny during playback. In a worst-case situation, the surfaces of dissimilar metals may have actually started to crystallize. Not only can that generate noise, but these crystals are all potential miniature crystal radios, which can turn RFI into audio that gets pumped into the connection. Not good.

Make sure any unnecessary mixer channels are muted when you record. Every unmuted channel is another potential source of noise.

Unless you have a high-end sound card like the Lynx line, avoid sending any analog signals into your computer. Use digital I/O and a separate, remote converter.
Although most people use LCD monitors these days, if there's a CRT on while you’re recording, don’t forget that it’s pumping out a high-frequency signal (around 15kHz). This can get into your mics. Turn it off while recording.

When recording electric guitar, pickups are prone to picking up hum and other interference. Try various guitar positions until you find the one that generates the minimum amount of noise. If you have a Line 6 Variax, consider yourself fortunate—it won’t pick up hum, because it uses a piezo pickup.

No matter how hard you try, though, some noise is going to make it into your recorded tracks. That’s when it’s time to bring out the heavy artillery: noise removal, noise gating, and noise reduction.

DEALING WITH NOISE AFTER THE FACT

With a typical hard disk-based DAW, you have three main ways to get rid of constant noise (hiss and some types of hum): noise gating, noise removal, and noise reduction.

Noise gating is the crudest method of removing noise. As a refresher, a noise gate has a particular threshold level. Signals above this level pass through unimpeded to the gate output. Signals below this threshold (e.g., hiss, low-level hum, etc.) cause the gate to switch off, so it doesn’t pass any audio and mutes the output. Early noise gates were subject to a variety of problems, like "chattering" (i.e., as a signal decayed, its output level would criss-cross over the threshold, thus switching the gate on and off rapidly). Newer gates (Fig. 1) have controls that can specify attack time so that the gate ramps up instead of slamming on, decay time controls so the gate shuts off more smoothly, and a "look-ahead" function so you can set a bit of attack time yet not cut off initial transients.

Fig. 1: The Gate section of Cubase’s VST Dynamics module (the compressor is toward the right) includes all traditional functions, but also offers gating based on frequency so that only particular frequencies open the gate. This makes it useful as a special effect as well as for reducing noise. In this case, the kick is being isolated and gated.

Noise gates are effective with very low-level signals and tracks with defined "blocks" of sound with noise in between, but the noise remains when signal is present—it’s just masked. (For more about noise gates, check out the article "Noise Gates Don't Have to Be Boring.")

Noise removal is essentially a manual version of noise gating (Fig. 2). It’s a far more tedious process, but can lead to better results with "problem" material.

Fig. 2: The upper vocal track (shown in Cakewalk Sonar) has had the noise between phrases removed manually, with fades added; the lower track hasn't been processed yet.

With noise removal, you cut out the quiet spaces between the audio you want to keep, adding fades as desired to fade in or out of the silence, thus making any transitions less noticeable. However, doing this for all the tracks in a tune can be pretty time-consuming; in most cases, noise gating will do an equally satisfactory job.

Noise reduction subtracts the noise from a track, rather than simply masking it. Because noise reduction is a complex process, you’ll usually need a stand-alone application like Adobe Audition (Fig. 3), Steinberg Wavelab, Sony Sound Forge, or iZotope RX2, or dedicated plug-ins such as those from Waves.

Fig. 3: Sound Forge's Noise Reduction tools have been around for years, but remain both effective and easy to use.
With stand-alone programs, you’ll likely have to export the track in your DAW as a separate audio file, process it in the noise reduction program, then import it back into your project. Also, you'll generally need a sample of the noise you’re trying to remove (called a "noise print," in the same sense as a fingerprint). It need only be a few hundred milliseconds long, but should consist solely of the signal you’re trying to remove, and nothing else. Once you have this sample, the program can mathematically subtract it from the waveform, thus leaving a de-noised waveform. (A bare-bones sketch of this subtraction process appears at the end of this article.) However, some noise reduction algorithms don’t need a noise print; instead, they use filtering to remove high frequencies when only hiss is present. This is related to how a noise gate works, except that it’s a more evolved way to remove noise, as (hopefully) only the frequencies containing noise are affected.

"Surgical" removal makes it possible to remove specific artifacts, like a finger squeak on a guitar string, or a cough in the middle of a live performance. The main way to do this is with a spectral view that shows not only amplitude and time, but also frequency. This makes it easy to pick out something like a squeak or cough from the music, then remove it (Fig. 4).

Fig. 4: Adobe Audition's spectral view and "Spot Healing Brush Tool" make it easy to remove extraneous sounds. Here, a cough has been isolated and selected for removal. Audition does elaborate background copying and crossfading to "fill in" the space caused by the removal.

While this all sounds good in theory—and 90% of the time, it’s good in practice too—there are a few cautions:

Noise reduction works best on signals that don’t have a lot of noise. Trying to take out large chunks of noise will inevitably remove some of the audio you want to keep.

Use the minimum amount of noise reduction needed to achieve the desired result. 6 to 10dB is usually pretty safe. Larger values may work, but may also add some artifacts to the audio. Let your ears be the judge; like distortion, I find audible artifacts more objectionable than a little bit of noise.

You can sometimes save presets of particular noise prints, for example, of a preamp you always use. This lets you apply noise reduction to signals even if you can’t find a section with noise only.

In some cases you may obtain better results by running the noise reduction twice with light noise removal rather than once with more extensive removal.

So is all this effort worth it? I think you’ll be pretty surprised when you hear what happens to a mix when the noise contributed by each track is gone. Granted, it’s not the biggest difference in the world, and we’re talking about something that happens at a very low level. But minimizing even low-level noise can lead to a major improvement in the final sound … like removing the dust from a fine piece of art.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
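For the curious, here is a toy version of noise-print-based reduction in Python/numpy (simple spectral subtraction; my sketch, not how Audition, Sound Forge, or RX actually work internally). It learns an average noise spectrum from a noise-only clip, then subtracts it frame by frame.

import numpy as np

def denoise(signal, noise_print, frame=2048, hop=512, reduction=1.0):
    """Toy spectral subtraction. noise_print should be a few hundred
    milliseconds of noise-only audio (longer than one frame)."""
    win = np.hanning(frame)
    # Average magnitude spectrum of the noise print = the "fingerprint"
    noise_frames = [np.abs(np.fft.rfft(noise_print[i:i + frame] * win))
                    for i in range(0, len(noise_print) - frame, hop)]
    noise_mag = np.mean(noise_frames, axis=0)

    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for i in range(0, len(signal) - frame, hop):
        spec = np.fft.rfft(signal[i:i + frame] * win)
        # Subtract the noise magnitude, clamp at zero, keep the original phase
        mag = np.maximum(np.abs(spec) - reduction * noise_mag, 0.0)
        out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame) * win
        norm[i:i + frame] += win ** 2
    return out / np.maximum(norm, 1e-8)

As with the real tools, pushing the reduction argument too high starts to eat the program material and adds artifacts, so modest settings work best.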
  13. Sort out the confusion, and find out what’s best for you By Craig Anderton By now, everyone was going to be loading soft synths into their laptops, and taking them to the gig instead of keyboards. Oh, and we were also supposed to travel around with personal jet packs and of course, flying cars. Well, the future doesn’t always turn out as expected, does it? Hardware keyboards are actually having somewhat of a renaissance. Keyboards are a mature field, and there are a huge number of options that offer significant value, whether you’re looking for an inexpensive arranger keyboard like the Casio WK-7500, a full-blown workstation like Yamaha’s Motif XF series, something more economical like the Korg Krome or Yamaha MOX, or even a top-of-the-line, state-of-the-art keyboard like the Korg Kronos X or Roland Jupiter-80. Or, maybe you want a separate tone module and keyboard controller . . . Casio WK-7500 (top), Korg Kronos (middle), and Yamaha MOX (bottom). But all these options can be overwhelming—how do you choose the model that’s right for your needs? That’s what this article is all about, so let’s get started. SELF-CONTAINED OR MODULAR? Before getting into how a keyboard sounds, consider the possible configurations. Self-contained models have a keyboard and all associated tone-generation circuitry in a single enclosure. They are likely to include controls that allow changing the sound in real time, which can add expressiveness to your playing. Note the control surface toward the upper right on Yamaha's Motif XF. Although the Motif XF is a complete workstation with an onboard sequencer, its control surface can also control DAWs like Cubase. They’re also easy to set up. Furthermore, self-contained devices called workstations include multitrack sequencers, which record what you play and can build up complete arrangements from the onboard sounds. This is a great feature for songwriters. Stage pianos are another type of self-contained model, which tend to contain more “bread and butter” sounds like strings, pianos, bass, etc. than more exotic synth timbres. The Nord Stage 2 stage piano is a fine example of this type of keyboard. A modular setup has a separate master keyboard (like M-Audio’s Axiom and Axiom Pro series) and one or more tone generator modules that connect via a MIDI cable. M-Audio's Axiom Pro line of controllers doesn't make sounds, but controls hardware tone modules and software synthesizers. The keyboard’s MIDI output provides a digitally-encoded version of what you play on the keyboard. The tone module’s MIDI input responds to this data, and plays the appropriate notes. To change your sound, keep your master controller and simply replace (or supplement) the existing tone module. Modular setups are also ideal for using alternate controllers, such as a guitar or bass outfitted with a MIDI-compatible pickup (e.g., Roland GK series), or a strap-on remote keyboard you wear like a guitar (as made by Roland). If you’re not a keyboardist, why pay for a keyboard you won’t use? Many tone generators mount in a standard 19-inch wide equipment rack. If you like to tweak controls, check out “tabletop” units (like Dave Smith Instruments Tetra or the Access Virus TI Snow). These have no keyboard but offer readily accessible controls; set them up next to your keyboard, and you’ll have plenty of options for real-time sound tweaking. 
Note that modular setups require more MIDI savvy to be used to their fullest potential, whereas self-contained synths are “plug-and-play.” Dave Smith's Tetra is a compact, tabletop synthesizer; Akai's MPC Studio Slimline is in the middle of their MPC line. Within these categories, there are sub-genres. One self-contained device, the “groove box” (exemplified by the Akai MPC series) is designed primarily for dance, rap, hip-hop, and other groove-based music. While somewhat more like an overachieving drum machine than a keyboard, a groove box has many elements in common with synthesizers. KEYBOARD ACTION Weighted keyboards feel more like a real piano, as the keys offer a slight resistance to your playing. Some electronic pianos even reduce the resistance as notes get higher—just like a real piano. However, weighted keyboards cost more than unweighted keyboards (and not surprisingly, weigh more too!); and some players prefer unweighted models, because it takes less effort to work the keys. Most synths sport at least a 61-note (five-octave) keyboard, but some extend to 88 keys. Conversely, some analog “lead” synths may be limited to a three-octave keyboard. Fortunately synths can transpose, so a smaller keyboard range can shift up or down as needed. There are even mini-keyboards designed more for portable applications made by Alesis, M-Audio, Arturia, and many others. These typically have a one, two, or three-octave keyboard with pitch bend, modulation wheel, and programmable controls suitable for varying synth parameters. Although designed mostly for use with computers, for bass or lead parts these can be well-suited to driving a tone generator module. TYPES OF SYNTHESIS Companies enjoy assigning proprietary names to synthesis processes, like “Advanced Whoopdedoo Synthesis” or whatever—but they’re just names. Here are the main types of synthesis engines. Analog synthesis. This is the “old school” synthesis that dates back to the early Moog and Buchla synthesizers. Nowadays, except for some high-end products like the Moog Music Voyager series, it's more likely that an “analog” synth actually uses digital technology to emulate analog sounds, like the Roland Gaia synthesizer. Although analog synthesizers can imitate acoustic or electric instruments, their specialty is making “impressionistic,” synthetic sounds. FM synthesis. This uses all-digital technology and has a clearer, brighter, less “warm” sound than analog synths. FM synthesis fell out of favor in the ‘90s after years of overuse, but is making a comeback in software synthesizers (like Native Instruments’ FM8) because it still provides useful, unique timbres. Sample-based synthesis. Samples are recordings of actual instruments that are assigned to notes, which you then trigger with the keyboard. While samples can create realistic instrument sounds, it can take more work to wring expressiveness out of what is essentially a “snapshot” of a sound. Pluck a string ten times; each time the sound will be subtly different. A sample gives the same sound every time. Several advancements help make samples more expressive, such as using samples played with different dynamics. Striking a key softly triggers the “soft” sample; hitting harder triggers the “loud” sample. The more dynamics-sensitive samples per key, the more satisfying the sound. Another option is to have multiple samples of the same note, and hitting successive notes chooses different samples. Another improvement is adding synth-type circuitry to modify the samples. 
For example, a lowpass filter can reduce a sound’s high frequencies. If the filter frequency responds to the dynamics of your playing, then the sound can get brighter with loud notes, which is characteristic of many acoustic instruments. Modeling. This creates a computer algorithm of an instrument, so technically it’s synthesizing a sound. However, the physics of brass, plucked strings, etc. are well known and if the model includes enough variables, it’s possible to obtain sounds that are as realistic as sampling but aren’t limited to being “sonic snapshots.” Most virtual analog synthesizers use modeling. All together now. Some keyboards combine multiple synth engines, typically sampling and modeling. Korg’s Kronos takes this to an extreme, with nine different synthesis engines. AND NOW, OUR MAIN FEATURE(S) A keyboard’s spec sheet contains a huge number of terms. Here are explanations of some of the most important ones. On-board sequencer A sequencer records your keypresses and controller motions, thus allowing you to record and play back compositions. For songwriting, this is great, and often gets ideas down faster than a conventional recording setup. The two most important characteristics are number of tracks (typically 8 to 32), and the number of events the sequencer can store. Note that an “event” can be a single note, so a figure like 10,000 events might seem like a lot. But moving a modulation wheel or lever from minimum to maximum might generate a hundred or more events. The more events a sequencer can store, the better. Polyphony This defines the number of voices that can sound simultaneously (the reason we don’t say “notes” is because technically, a voice may play back more than one note at a time, e.g., a parallel fifth). 64, 128, 256, and even more voices are common. This might seem strange—after all, you have only ten fingers. But with a piano sound, notes sustain in the background, which uses up voices. Also, if driven by a multitrack sequencer, more polyphony allows fuller arrangements by allowing more notes for each track. Multi-timbral operation This expresses the number of different sounds that a keyboard can generate simultaneously, and is an important spec for keyboards with on-board sequencers, or that you plan to drive with an external MIDI sequencer (e.g., a computer-based program). Most multi-timbral keyboards can do 16 different sounds simultaneously—one for each of the standard 16 MIDI channels. Polyphony and multi-timbral operation are complementary: to play back lots of simultaneous sounds, you need lots of voices available for them. Sample ROM Sample-based synths store their samples in non-volatile ROM chips. Generally, more ROM capacity means either more sounds to choose from, or better quality versions of a lesser number of sounds. Back in the day, four-megabyte sound ROMs used to be considered big—compare that to the Motif XF, which has over 700MB of sounds. Sample import Several sample formats have evolved: the WAV file format for Windows, AIFF for the Mac, and sample formats specific to particular manufacturers (Akai’s format, while ancient, remains viable). The more formats a sampler (or synth with sample expansion) can recognize, the better but these days, most manufacturers are standardizing on WAV format files. Real-time controls Almost all synths have a pitch bend wheel and modulation wheel or lever (the latter might add vibrato, change tone, or other functions, depending on how the sound is programmed). 
To this basic roster others might add ribbon controllers (slide your finger along a ribbon strip to change a parameter value), data sliders, footpedal options, a joystick, etc. But many synthesizers take this concept one step further by including assignable faders, switches, and knobs that can (with suitable templates) control parameters in popular DAWs. Probably the best example is the integration between Yamaha synthesizers and Steinberg’s Cubase, as Steinberg is a division of Yamaha and there seems to be a lot of communication going on between the two divisions. Storage Options for storing sounds and sequences vary. Many synths now include USB ports so storage can be done to thumb drives, or even hard drives that connect to USB. Yamaha’s Motif XF series has the option to add up to 2GB of onboard, flash memory for storing your own sample sets in non-volatile memory. Hard disk or RAM recording If the keyboard has a hard drive, and can sample, sometimes you can record tracks of vocals, guitars, etc., just like a computer-based hard disk recording system. This is also possible with some synths that are RAM-based. Now we’re talking serious production – a keyboard like this blurs the line between musical instrument and recording studio. Onboard effects Most keyboards include at least rudimentary effects like delay and reverb, but some go much further, including multiple effects that can be used as insert, send, and master effects—just like a mixer. How effects interact with the program or sequencer varies. Usually, you can store a particular effect or set of effects with a particular program. But suppose you have a sequence with multiple instruments, or a multi-timbral setup. Insert effects process individual tracks. Some keyboards also have master effects, which alter any audio, from any source, that appears at the output. Tone controls are good candidates for a master effect so you can, for example, brighten up the high end a bit or make the bass rumble. Send effects (also called Aux effects) can add a particular effect to multiple channels of your choice, so they’re somewhere between insert and master effects in terms of how they process the sound. Interactive algorithms The most sophisticated implementation of this concept is called KARMA, and is available for Korg and now, Yamaha keyboards. It’s hard to explain, but basically, the keyboard analyzes your playing and adds enhancements where appropriate. For example, a bass line might acquire pitch bend and portamento in selected places, or acoustic guitar parts may have “strums” added in for a more realistic sound. Other keyboards, like the Jupiter-80, perform their own type of enhancements (Roland calls the technology “SuperNatural”) that are also intended to enhance expressiveness. This type of “artificial intelligence” makes a difference in how inspiring an instrument can be, as it becomes more of a partner in the music-making process. Roland's SuperNatural technology incorporated in their Jupiter-80 adds exceptional expressiveness. Sample slicing This feature is found mostly in groove boxes, but is also incorporated in some keyboards, such as the Motif. The goal is to allow digital audio to follow tempo if the sequencer tempo changes. This works by slicing samples into smaller pieces, typically at prominent attacks or percussive transients. The sequencer triggers these pieces individually, so if the tempo slows down, the triggers occur further apart and the slices play back further apart to follow the beat. 
Conversely, with faster tempos, the slices trigger closer together.

Arpeggiator

An arpeggiator triggers notes sequentially in a pattern (sometimes arpeggiators are polyphonic, and can trigger several parallel patterns). For example, suppose you’re holding down a C major chord with the notes C4-E4-G4-C5. In “up” mode, these might play as C4-E4-G4-C5-C4-E4-G4-C5 etc. In down mode, it would do the reverse, playing C5-G4-E4-C4-C5-G4-E4-C4 etc. Other modes might be up/down, random, or extended, where the notes you hold down repeat over several octaves. Arpeggiators are used a lot in dance and “new age” music, and to add flourishes in just about any type of music. (A short sketch of this pattern logic appears at the end of this article.)

Expandability

Given the dizzying rate of technological progress, expandability is key to preserving your investment. Here are some of the possibilities.

Expansion card slots. Sample-based synths have a fixed complement of sounds. Adding cards expands this palette. Cards are typically genre- or instrument-specific (e.g., dance music, ethnic instruments, hip-hop, pianos, etc.).

USB or FireWire port. With all recent Mac and Windows machines sporting USB ports, these connections are used for everything from file transfers between keyboard and computer to providing all the functions of a stand-alone MIDI interface, so a program running on the computer, such as a sequencer, can communicate directly with the keyboard. Sometimes they even provide audio interface functions, especially if the keyboard has an external input.

Expandable sample memory. More sample memory lets you store larger numbers of longer samples before you run out of room. Expansion usually consists of inserting common, relatively inexpensive memory chips used in desktop computers.

Audio input. This can be used for recording your own samples, or tracks into a sequencer, and can also provide signals for the synthesizer to process.

Companion software. To simplify creating your own sounds, some keyboards come with editor software. This puts parameters on-screen and lets you edit them, which is often a faster and more direct approach than going through menu screens on the keyboard itself. What’s more, some software lets you treat the keyboard as a VST or AU plug-in within your DAW.

Korg's M3 was one of the first keyboards to include a sophisticated editor to make it easy to create sounds, but it could also be used as a plug-in within your DAW.

THE KEY ISSUE

Let’s close with the most important issue in choosing a synth: whether you have good chemistry with it. One of my favorite “vintage” synths is Peavey’s DPM-3, which went out of production almost two decades ago. It has a measly 16 voices, virtually no expansion options, and a user-hostile sequencer. But I get sounds with it that no other keyboard can produce, and it’s the “secret ingredient” in many of my tunes. If you fall in love with a keyboard, trust your instincts.

Remember, these are musical instruments, not just technological marvels. Some of the best synth parts ever recorded were played on a single-voice, non-multitimbral, non-expandable Minimoog. If you’re trying out a keyboard and it doesn’t inspire you, move on—even if it has the most amazing spec sheet you’ve ever seen.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany).
He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
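As a footnote to the arpeggiator description in the feature list above, here is a tiny Python sketch of the basic pattern logic (my illustration, not tied to any particular keyboard).

def arpeggiate(held_notes, mode="up", octaves=1, steps=16):
    """Return a list of MIDI note numbers for a simple arpeggiator pattern."""
    notes = sorted(held_notes)
    pool = [n + 12 * o for o in range(octaves) for n in notes]  # extend over extra octaves
    if mode == "down":
        pool = pool[::-1]
    elif mode == "updown":
        pool = pool + pool[-2:0:-1]        # up then back down, without repeating the ends
    return [pool[i % len(pool)] for i in range(steps)]

# Holding C4-E4-G4-C5 (MIDI 60, 64, 67, 72) in "up" mode:
print(arpeggiate([60, 64, 67, 72], mode="up", steps=8))
# -> [60, 64, 67, 72, 60, 64, 67, 72]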
  14. When it comes to recording, let’s get physical

By Craig Anderton

Until digital recording appeared, every function in analog gear had an associated control: Whether you were tweaking levels, changing the amount of EQ gain, or switching a channel to a particular bus, a physical device controlled that function. Digital technology changed that, because functions were no longer tied to physical circuits, but virtualized as a string of numbers.

This gave several advantages: Controls are more expensive than numbers, so virtualizing multiple parameters and controlling them with fewer controls lowered costs. Virtualization also saved space, because mixers no longer had to have one control per function; they could use a small collection of channel strips—say, eight—that could bank-switch to control eight channels at a time. But you don’t get something for nothing, and virtualization broke the physical connection between gear and the person operating the gear. While people debate the importance of that physical connection, to me there’s no question that having a direct, physical link between a sound you’re trying to create and the method of creating that sound is vital—for several reasons.

THE ZEN OF CONTROLLERS

If you’re a guitar player, here’s a test: Quick—play an A#7 chord. Okay, now list the notes that make up the chord, lowest pitch to highest. Chances are you grabbed the A#7 instantly, because your fingers—your “muscle memory”—knew exactly where to go. But you probably had to think, even if only for a second, to name all the notes making up the chord.

Muscle memory is like the DMA (Direct Memory Access) process in computers, where an operation can pull data directly from memory without having to go through the CPU. This saves time, and lets the CPU concentrate on other tasks where it truly is needed. So it is with controllers: When you learn one well enough so that your fingers know where to go and you don’t have to parse a screen, look for a particular control, click it with your mouse, then adjust it, the recording process becomes faster and more efficient.

IMPROVING DAW WORKFLOW

Would you rather hit a physical button labeled “Record” when it was time to record, or move your mouse around onscreen until you find the transport button and click on it? Yeah, I thought so.

The mouse/keyboard combination was never designed for recording music, but for data entry. For starters, the keyboard is switches-only—no faders. The role of changing a value over a range falls to the mouse, but a mouse can do only one thing at a time—and when recording, you often want to do something like fade one instrument down while you fade up another. Sure, there are workarounds: You can group channels and offset them, or set up one channel to increase while the other decreases, and bind them to a single mouse motion. But who wants to do that kind of housekeeping when you’re trying to be creative? Wouldn’t you rather just have a bunch of faders in front of you, and control the parameters directly?

Another important consideration is that your ears do not exist in a vacuum; people refer to how we hear as the “ear/brain combination,” and with good reason. Your brain needs to process whatever enters your ears, so the simple act of critical listening requires concentration. Do you really want to squander your brain’s resources trying to figure out workarounds to tasks that would be easy to do if you only had physical control? No, you don’t. But . . .
PROBLEM 1: JUST BECAUSE SOMETHING HAS KNOBS DOESN’T GUARANTEE BETTER WORKFLOW Some controllers try to squeeze too much functionality into too few controls, and you might actually be better off assigning lots of functions to keyboard shortcuts, learning those shortcuts, then using a mouse to change values. I once used a controller for editing synth parameters (the controller was not intended specifically for synths, which was part of the problem), and it was a nightmare: I’d have to remember that, say, pulse width resided somewhere on page 6, then remember which knob (which of course didn’t have a label) controlled that parameter. It was easier just to grab a parameter with a mouse, and tweak. On the other hand, a system like Native Instruments’ Kore is designed specifically for controlling plug-ins, and arranges parameters in a logical fashion. As a result, it’s always easy to find the most important parameters, like level or filter cutoff. PROBLEM 2: IT GETS WORSE BEFORE IT GETS BETTER So do you just get a controller, plug it in, and attain instant software/hardware nirvana? No. You have to learn hardware controllers, or you’ll get few benefits. If you haven’t been using a controller, you’ve probably developed certain physical moves that work for you. Once you start using a controller, those all go out the window, and you have to start from scratch. If you’re used to, say, hitting a spacebar to begin playback, it takes some mental acclimation to switch over to a dedicated transport control button. Which begs the question: So why use the transport control, anyway? Well, odds are the transport controls will have not just play but stop, record, rewind, etc. Once you become familiar with the layout, you’ll be able to bounce around from one transport function to another far more easily than you would with a QWERTY keyboard set up with keyboard shortcuts. Think of a hardware controller as a musical instrument. Like an instrument, you need to build up some “muscle memory” before you can use it efficiently. I believe that the best way to learn a controller is to go “cold turkey”: Forget you have a mouse and QWERTY keyboard, and use the controller as often as possible. Over time, using it will become second nature, and you’ll wonder how you got along without it. But realistically, that process could take days or even months; think of spending this time as an investment that will pay off later. DIFFERENT CONTROLLER TYPES There are not just many different controllers, but different controller product “families.” The following will help you sort out the options, and choose a controller that will aid your workflow rather than hinder it. Custom controllers. These are designed to fit specific programs or software like a glove; examples include Ableton's Push controller, Roland’s V-Studio series (including the 700, 100, and 20 controllers), Steinberg’s Cubase-friendly series of CMC controllers, and the like. The text labels are usually program-specific, the knobs and switches have (hopefully) been laid out ergonomically, and the integration between hardware and software is as tight as Tower of Power’s rhythm section. If a control surface was made for a certain piece of software, it’s likely that will be the optimum hardware/software combination. 
Ableton's Push controller is an ideal match for Live 9.

Softube's Console 1 is a different type of animal—it has software that emulates an analog channel strip and inserts in a DAW, with a hardware controller that provides a traditional, analog-style one-function-per-control paradigm. The control surface itself provides visual feedback, but if you want more detail, you can also see the parameters on-screen.

Softube's Console 1

General-purpose DAW controllers. While designed to be as general-purpose as possible, these usually include templates for specific programs. They typically include hardware functions that are assumed to be “givens,” like tape transport-style navigation controls, channel level faders, channel pan pots, solo and mute, etc. A controller with tons of knobs/switches and good templates can give very fluid operation. Good examples of this are the Mackie Control Universal Pro (which has become a standard—many programs are designed to work with a Mackie Control, and many hardware controllers can emulate the way a Mackie Control works), Avid Euphonix Artist series controllers (shown in the opening of this article), and the Behringer BCF2000.

Mackie Control Universal Pro

There are also “single fader” hardware controllers (e.g., PreSonus FaderPort and Frontier Design Group AlphaTrack) which, while compact and inexpensive, take care of many of the most important control functions you’ll use.

Digital mixers. For recording, a digital mixer can make a great hands-on controller if both it and your audio interface have a multi-channel digital audio port (e.g., ADAT optical “light pipe”). You route signals out digitally from the DAW, into the mixer, then back into two DAW tracks for recording the stereo mix. Rather than using the digital mixer to control functions within the program, it actually replaces some of those functions (particularly panning, fader-riding, EQ, and channel dynamics). As a bonus, some digital mixers include a layer that converts the faders into MIDI controllers suitable for controlling virtual synths, effects boxes, etc.

Synthesizers/master keyboards. Many keyboards, like the Yamaha Motif series and Korg Kronos, as well as master controllers from M-Audio, Novation, CME, and others, build in control surface support. But even those without explicit control functions can sometimes serve as useful controllers, thanks to their wheels, data slider(s), footswitch, sustain switch, note numbers, and so on. As some sequencers allow controlling functions via MIDI notes, the keyboard can provide those while the knobs control parameters such as level, EQ, etc.

Arturia's KeyLab 49 is part of a family of three keyboard controllers that also serve as control surfaces.

Really inexpensive controllers. Korg's nanoKONTROL2 is a lot of controller for the money; it's basic, with volume, pan, mute, solo, and transport controls, but it's also Mackie-compatible. But if you're on an even tighter budget, remember that old drum machine sitting in the corner that hasn’t been used in the last decade? Dust it off, find out what MIDI notes the pads generate, and use those notes to control transport functions—maybe even arm record, or mute particular track(s). A drum machine can make a compact little remote if, for example, you like recording guitar far away from the computer monitor.

The “recession special” controller. Most programs offer a way to customize QWERTY keyboard commands, and some can even create macros.
While these options aren’t as elegant as using dedicated hardware controllers, tying common functions to key commands can save time and improve work flow. Overall, the hardware controllers designed for specific software programs will almost certainly be your best bet, followed by those with templates for your favorite software. But there are exceptions: While Yamaha’s Motif XS and XF series keyboards can’t compete with something like a Mackie Control, they serve as fine custom controllers for Cubase AI—which might be ideal if Cubase is your fave DAW. Now, let’s look at some specific issues involving control surfaces. MIDI CONTROL BASICS Most hardware control surfaces use MIDI as their control protocol. Controlling DAWs, soft synths, processors, etc. is very similar to the process of using automation in sequencing programs: In the studio, physical control motions are recorded as MIDI-based automation data, which upon playback, control mixer parameters, soft synths, and signal processors. If you’re not familiar with continuous controller messages, they’re part of the MIDI spec and alter parameters that respond to continuous control (level, panning, EQ frequency, filter cutoff, etc.). Switch controller messages have two states, and cover functions like mute on/off. There are 128 numbered controllers per MIDI channel. Some are recommended for specific functions (e.g., controller #7 affects master volume), while others are general-purpose controllers. Controller data is quantized into 128 steps, which gives reasonably refined control for most parameters. But for something like a highly resonant filter, you might hear a distinct change as a parameter changes from one value to another. Some devices interpolate values for a smoother response. MAPPING CONTROLS TO PARAMETERS With MIDI control, the process of assigning hardware controllers to software parameters is called mapping. There are four common methods: Novation's low-cost Nocturn controller features their Automap protocol, which identifies plug-in parameters, then maps them automatically. In this screen shot, the controls are being mapped to Solid State Logic's Drumstrip processor for drums. “Transparent” mapping. This happens with controllers dedicated to specific programs or protocols: They’re already set up and ready to go, so you don’t have to do any mapping yourself. Templates. This is the next easiest option. The software being controlled will have default controller settings (e.g., controller 7 affects volume, 10 controls panning, 72 edits filter cutoff, etc.), and loading a template into the hardware controller maps the controls to particular parameters. MIDI learn. This is almost as easy, but requires some setup effort. At the software, you select a parameter and enable “MIDI learn” (typically by clicking on a knob or switch—ctrl-click on the Mac, right-click with Windows). Twiddle the knob you want to have control the parameter; the software recognizes what’s sent and maps it. Fixed assignments. In this case, either the controller generates a fixed set of controllers, and you need to edit the target program to accept this particular set of controllers; or, the target software will have specific assignments it wants to see, and you need to program your controller to send these controllers. THE “STAIR-STEPPING” ISSUE Rotating a “virtual front panel” knob in a soft synth may have higher resolution than controlling it externally via MIDI, which is limited to 128 steps of resolution. 
In practical terms, this means a filter sweep that sounds totally smooth when done within the instrument may sound “stair-stepped” when controlled with an external hardware controller. While there’s no universal workaround, some synthesizers have a “slew” or “lag” control that rounds off the square edges caused by transitioning from one level to another.

RECONCILING PHYSICAL AND VIRTUAL CONTROLS

Controllers with motorized faders offer the advantage of having the physical control always track what the corresponding virtual control is doing. But with any controller that doesn’t use motorized faders, one of the big issues is punching in when a track already contains control data. If the physical position of the knob matches the value of the existing data, no problem: Punch in, grab the knob, and go. But what happens if the parameter is set to its minimum value, and the knob controlling it is full up? There are several ways to handle this.

Instant jump. Turn the knob, and the parameter jumps immediately to the knob’s value. This can be disconcerting if there’s a sudden and unintended change—particularly live, where you don’t have a chance to re-do the take!

Match-then-change. Nothing happens when you move the physical knob until its value matches the existing parameter value. Once they match, the hardware control takes over. For example, suppose a parameter is at half its maximum value, but the knob controlling the parameter is set to minimum. As you turn up the knob, nothing happens until the knob matches the parameter value. Then as you continue to move the knob, the parameter value follows along. This provides a smooth transition, but there may be a lag between the time you start to change the knob and when it matches the parameter value. (A short sketch of this logic appears at the end of this article.)

Add/subtract. This technique requires continuous knobs (i.e., data encoder knobs that have no beginning or end, but rotate continuously). When you call up a preset, regardless of the knob position, turning it clockwise adds to the preset value, while turning it counter-clockwise subtracts from the value.

Motorized faders. This requires bi-directional communication between the control surface and software, as the faders move in response to existing automation values—so there’s always a correspondence between physical control settings and parameter values. This is the best of both worlds: Just grab the fader and punch. The transition will be both smooth and instantaneous.

Parameter nulling. This is becoming less common as motorized faders become more economical. With nulling, there are indicators (typically LEDs) that show whether a controller’s value is above or below the existing value. Once the indicators show that the values match (e.g., both LEDs light at the same time), punching in will give a smooth transition.

IS THERE A CONTROLLER IN YOUR FUTURE?

Many musicians have been raised with computers, and are perfectly comfortable using a mouse for mixing. However, it’s often the case that when you sit such a person down in front of a controller, and they start learning how to actually use it, they can’t go back to the mouse. In some ways, we’re talking about the same kind of difference as there is between a serial and parallel interface: The mouse can only control one parameter at a time, whereas a control surface lets you move groups of controls, essentially turning your mix from a data-entry task into a performance. And I can certainly tell you which one I prefer!

Craig Anderton is Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
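As a footnote to the "match-then-change" behavior described in the article above, here is a small Python sketch of the idea, often called soft takeover (my illustration; the function and variable names are made up, and real implementations vary by DAW and controller).

def soft_takeover(current_value, incoming_cc, state):
    """Ignore hardware moves until the knob lands on, or sweeps across,
    the stored parameter value, then let the knob take over.
    `state` remembers the last CC seen and whether the control has latched."""
    last = state.get("last_cc")
    if not state.get("latched", False):
        if last is not None and min(last, incoming_cc) <= current_value <= max(last, incoming_cc):
            state["latched"] = True          # the knob swept across the parameter value
        elif incoming_cc == current_value:
            state["latched"] = True          # the knob landed exactly on it
    state["last_cc"] = incoming_cc
    return incoming_cc if state["latched"] else current_value

# Example: parameter sits at 64, knob starts at 0 and is turned up slowly
state = {}
for cc in (0, 20, 40, 63, 66, 80):
    print(cc, "->", soft_takeover(64, cc, state))
# The parameter stays at 64 until the knob passes 64, then follows the knob

The "instant jump" and "add/subtract" schemes described above would simply replace the latching test with "always follow the knob" or "offset the stored value by the encoder's relative motion," respectively.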
  15. Improve your mixes by avoiding these seven mixing taboos

By Craig Anderton

If you listen to a lot of mixes coming out of home and project studios, after a while you notice a definite dividing line between the people who know what they’re doing, and the people who commit one or more of the Seven Deadly Sins of Mixing. You don’t want to be a mixing sinner, do you? Of course not! So, check out these tips.

1. The Disorienting Room Space

This comes from using too many reverbs: A silky plate on the voice, a big room on the snare, shorter delays on guitar . . . concert hall, or concert hell? Even if the listener can’t identify the problem, they’ll know that something doesn’t sound quite right, because we’ve all logged a lifetime of hearing sounds in acoustical spaces, so we inherently know what sounds “right.”

Solution: Choose one reverb as your main reverb that defines the characteristics of your imaginary “room.” Insert this in an aux bus. If you do use additional reverb on, say, voice, use this second reverb as a channel insert effect but don’t rely on it for all your vocal reverb; make up the difference by sending the vocal to the reverb aux bus to add in a bit of the common room reverb. The end result will sound much more realistic.

2. Failure to Mute

All those little pops, snorks, hisses, and hums can interfere with a mix’s transparency. Even a few glitches here and there add up when multiplied over several tracks.

Solution: Automate mutes for when vocalists aren’t singing, during the spaces between lead guitar solos, and the like. Automating mutes independently of fader-style level automation lets you use each for what it does best. Your DAW may even have some kind of DSP option that, like a noise gate, strips away all signals below a certain level and deletes these regions from your track (Fig. 1).

Fig. 1: Sonar’s “Remove Silence” DSP has been applied to the vocal track along the bottom of the window.

3. "Pre-Mastering" a Mix

You want your mix to “pop” a little more, so you throw a limiter into your stereo bus, along with some EQ, a high-frequency exciter, a stereo widener, and maybe even more . . . thus guaranteeing that your mastering engineer can’t do the best possible job with a fantastic set of mastering processors (Fig. 2).

Fig. 2: I was given this file to master, but what could possibly be done with a file that had already been compressed into oblivion?

Solution: Unless you really know what you’re doing, resist the temptation to “master” your mix before it goes to the mastering engineer. If you want to listen with processors inserted to get an idea of what the mix will sound like when compressed, go ahead—but hit the bypass switch before you mix down to stereo (or surround, if that’s your thing).

4. Not Giving the Lead Instrument Enough Attention

This tends to be more of a problem with those who mix their own music, because they fall in love with their parts and want them all to be heard. But the listener is going to focus on the lead part, and pay attention to the rest of the tracks mostly in the context of supporting the lead.

Solution: Take a cue from your listeners, and mix the supporting parts so they frame the lead rather than compete with it.

5. Too Much Mud

A lot of instruments have energy in the lower midrange, which tends to build up during mixdown. As a result, the lows and highs seem less prominent, and the mix sounds muddy.

Solution: Try a gentle, relatively low-bandwidth cut of a dB or two around 300-500Hz on those instruments that contribute the most lower midrange energy (Fig. 3).
Or, try the famous “smile” curve that accentuates lows and highs, which by definition causes the midrange to be less prominent. Fig. 3: Reducing some lower midrange energy in one or more tracks (in this case, using SSL’s X-EQ equalizer) can help toward creating a less muddy, more defined low end. 6. Dynamics Control Issues We’ve already mentioned why you don’t want to compress the entire mix, but pay attention to how individual tracks are compressed as well. Generally, a miked bass amp track needs a lot of compression to make up for variations in amp/cabinet frequency response; compression smoothes out those anomalies. You also want vocals to stand out in the mix and sound intimate, so they’re good candidates for compression as well. Solution: Be careful not to apply too much compression, but too little compression can be a problem, too. Try increasing the compression (i.e., lower threshold and/or higher ratio) until you can “hear” the effect, then back off until you don’t hear the compression any more. The optimum position is often within these two extremes: Enough to make a difference, but not enough to be heard as an “effect.” 7. Mixing in an Acoustically Untreated Room If you’re not getting an accurate read on your sound, then you can’t mix it properly. And it won’t sound right on other systems, either. Solution: Even a little treatment, like bass traps, “clouds” that sit above the mix position, and placing near-field speakers properly so you’re hearing primarily their direct sound rather than any reflected sound can help. Also consider using really good headphones as a reality check. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  16. It's a dirty job to go from high-res audio to 44/16, but someone's got to do it by Craig Anderton The ultimate form of digital audio used to have a 16-bit word length and 44.1 kHz sampling rate. Early systems even did their internal processing at 16/44.1, which was a problem—every time you did an operation (such as change levels, or apply EQ), the result was always rounded off to 16 bits. If you did enough operations, these roundoff errors would accumulate, creating a sort of "fuzziness" in the sound. The next step forward was increasing the internal resolution of digital audio systems. If a mathematical operation created an "overflow" result that required more than 16 bits, no problem: 24, 32, 64, and even 128-bit internal processing became commonplace (Fig. 1). As long as the audio stayed within the system, running out of resolution wasn't an issue. Fig. 1: Cakewalk Sonar allows choosing 64-bit resolution for the audio engine. These days, your hard disk recorder most likely records and plays back at 24, 32, or 64 bits, and the rest of your gear (digital mixer, digital synth, etc.) probably has fairly high internal resolution as well. But currently, although there are some high-resolution audio formats, your mix usually ends up either online in MP3, AAC, or FLAC format, or in what's still the world's most common physical delivery medium: a 16-bit, 44.1kHz CD. What happens to those "extra" bits? Before the advent of dithering, they were simply discarded (just imagine how those poor bits felt, especially after being called the "least significant bits" all their lives). This meant that, for example, decay tails below the 16-bit limit just stopped abruptly. Maybe you've heard a "buzzing" sort of sound at the end of a fade out or reverb tail; that's the sound of extra bits being ruthlessly "downsized." DITHERING TO THE RESCUE Dithering is a concept that, in its most basic form, adds noise to the very lowest-level signals, thus using the data in those least significant bits to influence the sound of the more significant bits. It's almost as if, even though the least significant bits are gone, their spirit lives on in the sound of the recording. Cutting off bits is called truncation, and some proponents of dithering believe that dithering somehow sidesteps the truncation process. But that's a misconception. Dithered or not, when a 24-bit signal ends up on a 16-bit CD, eight bits are truncated and never heard from again. Nonetheless, there's a difference between flat-out truncation and truncation with dithering. SOME AUDIO EXAMPLES Let's listen to the difference between a dithered and non-dithered piece of audio. To obtain these examples, I normalized a snippet of a Beethoven symphony down to an extremely low level using a 16-bit audio engine (not 32-bit floating point or something else that would preserve the fidelity, even at low levels) so the effect of dithering vs. non-dithering would be obvious. I then applied dithering to one of the examples, then normalized both of them back up to an audible level. Dither Beethoven.mp3 is the file without dithering. Dither Gaussian Beet.mp3 is the same file, but with dithering added. Yes, you'll hear a lot of noise, but note how the audio sounds dramatically better. THE TROUBLE WITH TRUNCATION The reason why you hear a buzzing at the end of fades with truncated signals is that the least significant bit, which tries to follow the audio signal, switches back and forth between 0 and 1.
This buzzing is called quantization noise, because the noise occurs during the process of quantizing the audio into discrete steps. In a 24-bit recording, the lower 8 bits beyond 16 bits account for 256 different possible levels between the "on" and "off" condition; but once the recording has been truncated, the resolution is no longer there to reproduce those changes. Bear in mind, though, that these are very low-level signals. For that punk rock-industrial-dance mix where all the meters are in the red, you probably don't need even 16 bits of resolution. But when you're trying to record the ambient reverb tail of an acoustic space, you need good low-level resolution. HOW DITHERING WORKS Let's assume a 24-bit recorded signal so we can work with a practical example. The dithering process adds random noise to the lowest eight bits of the 24-bit signal. This noise is different for the two channels in order not to degrade stereo separation. It may seem odd that adding noise can improve the sound, but one analogy is the bias signal used in analog tape. Analog tape is linear (distortionless) only over a very narrow range. We all know that distortion occurs if you hit tape too hard, but signals below a certain level can also sound horribly distorted. The bias signal adds a constant supersonic signal (so we don't hear it) whose level sits at the lower threshold of the linear region. Any low-level signals get added to the bias signal, which boosts them into the linear region, where they can be heard without distortion. Adding noise to the lower eight bits increases their amplitude and pushes some of the information contained in those bits into the higher bits. Therefore, the lowest part of the dynamic range no longer correlates directly to the original signal, but to a combination of the noise source and information present in the lowest eight bits. This reduces the quantization noise, providing in its place a smoother type of hiss modulated by the lower-level information. The most obvious audible benefit is that fades become smoother and more realistic, but there's also more sonic detail. Although adding noise may seem like a bad idea, psycho-acoustics is on our side. Because any noise added by the dithering process has a constant level and frequency content, our ears have an easy time picking out the content (signal) from the noise. We've lived with noise long enough that a little bit hanging around at -90dB or so is tolerable, particularly if it allows us to hear a subjectively extended dynamic range. However, there are different types of dithering noise, which exhibit varying degrees of audibility. The dither may be wideband, thus trading off the lowest possible distortion for slightly higher perceived noise. A narrower band of noise will sound quieter, but lets some extremely low-level distortion remain. SHAPE THAT NOISE! To render dithering even less problematic, noise shaping distributes noise across the spectrum so that the bulk of it lies where the ear is least sensitive (i.e., the higher frequencies). Some noise shaping curves are extremely complex: they're not just a straight line, but also dip down in regions of maximum sensitivity (typically the midrange). Mastering programs like iZotope Ozone (Fig. 2) and even some DAWs offer multiple "flavors" of dithering. Fig. 2: iZotope's Ozone mastering plug-in has a dithering section with multiple types of dithering, noise shaping options, the ability to choose bit depths from 8 to 24 bits, and a choice of dither amount.
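If you want to hear (and measure) the difference on your own material without a mastering plug-in, here's a rough numerical sketch of the idea. It uses generic TPDF (triangular) dither rather than any of the proprietary flavors mentioned here, and assumes Python with numpy; the test signal is a very quiet sine wave, the kind of low-level material where dithering matters.

import numpy as np

# Compare plain requantization to 16 bits with TPDF-dithered requantization.
rate = 44100
t = np.arange(rate) / rate
signal = 10 ** (-80 / 20) * np.sin(2 * np.pi * 1000 * t)   # roughly -80dBFS sine

step = 1.0 / 32768.0                        # one 16-bit quantization step

plain = np.round(signal / step) * step      # straight requantization, no dither

# TPDF dither: the difference of two uniform random values spans +/- one step
dither = (np.random.rand(len(signal)) - np.random.rand(len(signal))) * step
dithered = np.round((signal + dither) / step) * step

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

# The undithered error tracks the signal (audible as buzzy distortion);
# the dithered error behaves like steady, benign hiss.
print("requantization error, no dither:  ", round(rms_db(plain - signal), 1), "dBFS")
print("requantization error, TPDF dither:", round(rms_db(dithered - signal), 1), "dBFS")

The dithered error usually measures a hair higher, but it sounds like constant hiss rather than distortion that follows the music, which is exactly what the Beethoven examples above demonstrate.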
Again, this recalls the analogy of analog tape's bias signal, which is usually around 100kHz to keep it out of the audible range. We can't get away with those kinds of frequencies in a system that samples at 44.1kHz or even 96kHz, but several noise-shaping algorithms push the signal as high as possible, short of hitting the Nyquist frequency (i.e., half the sample rate, which is the highest frequency that can be recorded and played back at a given sample rate). Different manufacturers use different noise-shaping algorithms; judging these is a little like wine-tasting. Sometimes you'll have a choice of dithering and noise-shaping algorithms so you can choose the combination that works best for specific types of program material. Not all these algorithms are created equal, nor do they sound equal. DITHERING RULES The First Law of dithering is to dither only when converting a higher-resolution source format to one with lower resolution. Typically, this is from your high-resolution master or mix to the 16-bit, mixed-for-CD format. For example, if you are given an already dithered 16-bit file to edit on a high-resolution waveform editor, that 16-bit file already contains dithered data, and the higher-resolution editor should preserve it. When it's time to mix the edited version back down to 16 bits, simply transfer over the existing file without dithering. Another possible problem occurs if you give a mastering or duplication facility two dithered 16-bit files that are meant to be crossfaded. Crossfading the dithered sections could lead to artifacts; you're better off crossfading the two, then dithering the combination. Also, check any programs you use to see if dithering is enabled by default, or enabled accidentally and saved as a preference. In general, you want to leave dithering off, and enable it only as needed. Or consider Steinberg Wavelab, which has an Apogee-designed UV22 plug-in that inserts after the final level control (you always want dithering to be the very last processor in the signal chain, and be fed with a constant signal). Suppose you inserted another plug-in, like the Waves L3 Ultramaximizer (which not only includes dithering but defaults to being enabled when inserted), prior to the UV22. Unless you disable dithering in the L3 Ultramaximizer plug-in (Fig. 3), you'll be "doubling up" on dithering, which you don't want to do. Fig. 3: If you use Wavelab's internal dithering, make sure that any other master effects plug-ins you add don't have dithering enabled (in this screen shot, the Waves dithering has been turned off). However, also note that Wavelab lets you assign plug-ins to always be pre- or post- the final control (or both, if you want them available in either slot). Go to the Options menu, and choose Plug-In Organization. Check where you want the plug-ins available (Fig. 4). Fig. 4: The Waves IDR, which is a dithering algorithm, should be inserted only post the final level control. However, the Maximizer processors, which are used as effects or for final processing and dithering, are checked so they're available in both locations. The best way to experience the benefits of dithering is to crank up some really low-level audio and compare different dithering and noise-shaping algorithms. If your music has any natural dynamics in it, proper dithering can indeed give a sweeter, smoother sound free of digital quantization distortion when you downsize to 16 bits. Craig Anderton is Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  17. Compressors are Essential Recording Tools - Here's How They Work By Craig Anderton Compressors are some of the most used, and most misunderstood, signal processors. While people use compression in an attempt to make a recording "punchier," it often ends up dulling the sound instead because the controls aren't set optimally. Besides, compression was supposed to become an antique when the digital age, with its wide dynamic range, appeared. Yet the compressor is more popular than ever, with more variations on the basic concept than ever before. Let's look at what's available, pros and cons of the different types, and applications. THE BIG SQUEEZE Compression was originally invented to shoehorn the dynamics of live music (which can exceed 100 dB) into the restricted dynamic range of radio and TV broadcasts (around 40-50 dB), vinyl (50-60 dB), and analog tape (40dB to 105 dB, depending on type, speed, and type of noise reduction used). As shown in Fig. 1, this process lowers signal peaks while leaving lower levels unchanged, then boosts the overall level to bring the signal peaks back up to maximum. (Bringing up the level brings up any noise as well, but you can't have everything.) Fig. 1: The first, black section shows the original audio. The middle, green section shows the same audio after compression; the third, blue section shows the same audio after compression and turning up the output control. Note how softer parts of the first section have much higher levels in the third section, yet the peak values are the same. Even though digital media have a decent dynamic range, people are accustomed to compressed sound. Compression has been standard practice to help soft signals overcome the ambient noise in typical listening environments; furthermore, analog tape has an inherent, natural compression that engineers have used (consciously or not) for well over half a century. There are other reasons for compression. With digital encoding, higher levels have less distortion than lower levels—the opposite of analog technology. So, when recording into digital systems (tape or hard disk), compression can shift most of the signal to a higher overall average level to maximize resolution. Compression can create greater apparent loudness (commercials on TV sound so much louder than the programs because of compression). Furthermore, given a choice between two roughly equivalent signal sources, people will often prefer the louder one. And of course, compression can smooth out a sound—from increasing piano sustain to compensating for a singer's poor mic technique. COMPRESSOR BASICS Compression is often misapplied because of the way we hear. Our ear/brain combination can differentiate among very fine pitch changes, but not very fine amplitude changes. So, there is a tendency to overcompress until you can "hear the effect," giving an unnatural sound. Until you've trained your ears to recognize subtle amounts of compression, keep an eye on the compressor's gain reduction meter, which shows how much the signal is being compressed. You may be surprised to find that even with 6dB of compression, you don't hear much apparent difference—but bypass the sucker, and you'll hear a change. Compressors, whether software- or hardware-based, have these general controls (Fig. 2): Fig. 2: The compressor bundled with Ableton Live has a comprehensive set of controls. Threshold sets the level at which compression begins. Above this level, the output increases at a lesser rate than the corresponding input change.
As a result, with lower thresholds, more of the signal gets compressed. Ratio defines how much the output signal changes for a given input signal change. For example, with 2:1 compression, a 2dB increase at the input yields a 1dB increase at the output. With 4:1 compression, a 16dB increase at the input gives a 4dB increase at the output. With "infinite" compression, the output remains constant no matter how much you pump up the input. Bottom line: Higher ratios increase the effect of the compression. Fig. 3 shows how input, output, ratio, and threshold relate. Fig. 3: The threshold is set at -8. If the input increases by 8dB (e.g., from -8 to 0), the output only increases by 2dB (from -8 to -6). This indicates a compression ratio of 4:1. Attack determines how long it takes for the compression to take effect once the compressor senses an input level change. Longer attack times let through more of a signal's natural dynamics, but those signals are not being compressed. In the days of analog recording, the tape would absorb any overload caused by sudden transients. With digital technology, those transients clip as soon as they exceed 0 VU. Some compressors include a "saturation" option that mimics the way tape works, while others "soft-clip" the signal to avoid overloading subsequent stages. Yet another option is to include a limiter section in the compressor, so that any transients are "clamped" to, say, 0dB. Decay (also called Release) sets the time required for the compressor to give up its grip on the signal once the input passes below the threshold. Short decay settings are great for special effects, like those psychedelic '60s drum sounds where hitting the cymbal would create a giant sucking sound on the whole kit. Longer settings work well with program material, as the level changes are more gradual and produce a less noticeable effect. Note that many compressors have an "automatic" option for the Attack and/or Decay parameters. This analyzes the signal at any given moment and optimizes attack and decay on-the-fly. It's not only helpful for those who haven't quite mastered how to set the Attack and Decay parameters, but often speeds up the adjustment process for veteran compressor users. Output control. As we're squashing peaks, we're actually reducing the overall peak level. This opens up some headroom, so increasing the output level compensates for any volume drop. The usual way to adjust the output control is to turn this control up until the compressed signal's peak levels match the bypassed signal's peak levels. Some compressors include an "auto-gain" or "auto makeup" feature that increases the output gain automatically. Metering. Compressors often have an input meter, output meter for matching levels between the input and output, and most importantly, a gain reduction meter. (In Fig. 2, the orange bar to the left of the output meter is showing the amount of gain reduction.) If the meter indicates a lot of gain reduction, you're probably adding too much compression. The input meter in Fig. 2 shows the threshold with a small arrow, so you can see at a glance how much of the input signal is above the threshold. ADDITIONAL FEATURES You'll find the above functions on many compressors. The following features tend to be somewhat less common, but you'll still find them on plenty of products.
Sidechain jacks are available on many hardware compressors, and some virtual compressors include this feature as well (sidechaining became formalized in the VST 3 specification, but it was possible to do in prior VST versions). A sidechain option lets you insert filters in the compressor's detection path (the sidechain) to restrict compression to a specific frequency range. For example, if you insert a high pass filter, the compression responds only to high frequencies—perfect for "de-essing" vocals. The hard knee/soft knee option controls how rapidly the compression kicks in. With a soft knee response, when the input exceeds the threshold, the compression ratio is less at first, then increases up to the specified ratio as the input increases. With a hard knee curve, as soon as the input signal crosses the threshold, it's subject to the full amount of compression. Sometimes this is a variable control from hard to soft, and sometimes it's a toggle choice between the two. Bottom line: use hard knee when you want to clamp levels down tight, and soft when you want a gentler, less audible compression effect. The link switch in stereo compressors switches the mode of operation from dual mono to stereo. Linking the two channels together allows changes in one channel to affect the other channel, which is necessary to preserve the stereo image. Lookahead. A compressor cannot, by definition, react instantly to a signal because it has to measure the signal before it can decide how much to reduce the gain. As a result, the lookahead feature delays the audio path somewhat so the compressor can "look ahead" and see what kind of signal it will be processing, and therefore, react in time when the actual signal hits. Response or Envelope. The compressor can react to a signal based on its peak or average level, but its compression curve can follow different characteristics as well—a standard linear response, or one that more closely resembles the response of vintage, opto-isolator-based compressors. COMPRESSOR TYPES: THUMBNAIL DESCRIPTIONS Compressors are available in hardware (usually a rack mount design or for guitarists, a "stomp box") and as software plug-ins for existing digital audio-based programs. Following is a description of various compressor types. "Old faithful." Whether rack-mount or software-based, typical features include two channels with gain reduction amount meters that show how much your signal is being compressed, and most of the controls mentioned above (Fig. 4). Fig. 4: Native Instruments' Vintage Compressor bundle includes three different compressors modeled after vintage units. Multiband compressors. These divide the audio spectrum into multiple bands, with each one compressed individually (Fig. 5). This allows for a less "effected" sound (for example, low frequencies don't end up compressing high frequencies), and some models let you compress only the frequency ranges that need to be compressed. Fig. 5: Universal Audio's Precision Multiband is a multiband compressor, expander, and gate. Vintage and specialty compressors. Some swear that only the compressor in an SSL console will do the job. Others find the ultimate squeeze to be a big bucks tube compressor. And some guitarists can't live without their vintage Dan Armstrong Orange Squeezer, considered by many to be the finest guitar sustainer ever made. Fact is, all compressors have a distinctive sound, and what might work for one sound source might not work for another.
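Behind all of these variations, though, the static gain math is the same as described under Threshold and Ratio above. Here's a minimal hard-knee sketch in Python (levels in dB, no attack or release modeled) that reproduces the numbers from Fig. 3; it's generic textbook math, not any specific product's algorithm.

# Static, hard-knee compressor gain computation (all values in dB).
def compressed_level(in_db, threshold_db=-8.0, ratio=4.0, makeup_db=0.0):
    if in_db <= threshold_db:
        out_db = in_db                                          # below threshold: untouched
    else:
        out_db = threshold_db + (in_db - threshold_db) / ratio  # above threshold: reduced slope
    return out_db + makeup_db

for level_db in (-20, -8, 0):
    out_db = compressed_level(level_db)
    print(f"in {level_db:>4}dB -> out {out_db:>6.1f}dB (gain reduction {level_db - out_db:.1f}dB)")

Feeding in 0dB returns -6dB, a 6dB gain reduction, just as in Fig. 3; raising makeup_db is the "output control" that brings the squashed peaks back up.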
If you don't have that cool, tube-based compressor from the '50s of which engineers are enamored, don't lose too much sleep over it: Many software plug-ins emulate vintage gear with an astonishing degree of accuracy (Fig. 6). Fig. 6: Cakewalk's PC2A, a compressor/limiter for Sonar's ProChannel module, emulates vintage compression characteristics. Whatever kind of audio work you do, there's a compressor somewhere in your future. Just don't overcompress—in fact, avoid using compression as a "fix" for bad mic technique or dead strings on a guitar. I wouldn't go as far as those who diss all kinds of compression, but it is an effect that needs to be used subtly to do its best. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  18. Make better-sounding "in the box" mixes, thanks to these timely tips By Craig Anderton Lately, there’s been considerable controversy about mixing “inside the box” (ITB)—the process where all your processing, fader moves, and automation are done in the digital domain, inside your computer. In theory, ITB shouldn’t have any problems. But some insist that using analog summing junctions (or a “real” console) for mixing delivers superior sound quality. What’s the truth? I believe that analog and digital, being different technologies, do have different characteristic sounds—so it’s not surprising some people might prefer one over the other. However, while I don’t buy the extreme view that ITB mixing sounds just plain bad, doing a good ITB mix involves some techniques that aren’t relevant with analog. Such as . . . THE TWO KINDS OF RESOLUTION Realize that recording resolution and audio engine resolution are different. Recording resolutions higher than 24 bits are fictional, due to the limitations of A/D technology. But your sequencer’s audio engine needs far greater resolution. This is because a 24-bit piece of audio might sound fine by itself. But when you apply a change to the signal (level, normalization, EQ, anything), multiplying or dividing that 24-bit data will likely produce a result that can’t be expressed with only 24 bits. Unless there’s enough resolution to handle these calculations, roundoffs occur—and they’re cumulative, which can possibly lead to an unpleasant sort of “fuzziness.” As a result, your audio engine’s resolution should always be considerably higher than that of your recording resolution. Fig. 1: You can enable a 64-bit, double-precision audio engine in Cakewalk Sonar. If disabled, any 64-bit files are converted to 32 bits on playback. Today’s sequencers use 32-bit floating point and higher resolutions (all the way up to 64-bit; see Fig. 1), but many earlier sequencers did not. If you’re mixing ITB with a sequencer that’s a few years old, upgrading may include an improved audio engine. Note that engine resolution is independent of your operating system; for example, you can use a 64-bit audio engine with a 32-bit operating system, or a 32-bit audio engine with a 64-bit operating system. GAIN-STAGING MATTERS Because modern audio engines have so much headroom, it’s almost impossible to get distortion just by mixing channels together. Still, many engineers recommend keeping the master fader close to 0 and adjusting gain within individual channels to prevent overloads at the master out, rather than keeping the channel faders high and reducing the master gain to bring the levels down. Part of this is because the master output will eventually feed actual hardware, which is susceptible to overload and therefore, distortion; but some also feel that it’s possible to “stress” audio engines, which adversely affects the sound. ADD “VIRTUAL COUPLING CAPACITORS” Analog consoles (and analog gear in general) rarely had response down to DC, due to the use of coupling capacitors to avoid transferring DC offsets from one stage to the next. But digital technology can create and reproduce subsonic signals, which has the potential to take up bandwidth and reduce headroom—I’ve measured some audio interfaces that go down to 5Hz, which is beyond the range that most speakers can reproduce anyway. Fig. 2: Waves' LinEQ Lowband plug-in, used here with Acoustica's Mixcraft 6, is designed specifically to trim frequencies with a very sharp cutoff. 
You can emulate the effect of coupling capacitors by inserting a steep low-cut filter in each channel (or at least at the overall output, but each channel is better). Set the filter frequency as high as possible, consistent with retaining a full bass sound (see Fig. 2). For example, a guitar note doesn’t go much below 90Hz, so you can set a sharp cutoff starting at 60Hz; this will tighten up the sound by getting rid of possible low-frequency sounds that have nothing to do with guitar. REMOVE DC OFFSET Remove DC offset from your tracks before you start to mix; some DAWs have “remove DC offset” as part of their DSP menus (Fig. 3). As with subsonics, DC offset reduces headroom. For the full story on DC offset, see the article DC Offset: The Case of the Missing Headroom. Fig. 3: Studio One Pro from PreSonus includes the Mixtool plug-in, which can be inserted in a track and set to block DC offset. DON’T ALWAYS SLAM THE METERS TO ZERO This applies to both recording and mixing. Digital metering does not necessarily show the true peak signal level, as it measures the samples themselves; interpolation may result in higher values than those of the samples themselves, leading to what’s called Inter-Sample Distortion. So, leave a few dB of breathing room for the cleanest sound. (This is less of an issue with higher sample rates, so you might consider “spending” the extra bandwidth to go for 96kHz—and you might also hear better sound quality with plug-ins, particularly distortion-oriented ones like amp simulators.) Solid State Logic offers a free meter plug-in that shows Inter-Sample Distortion, and PreSonus’s Studio One Pro 2 DAW has meters that can be switched to indicate Inter-Sample Distortion. For more information about differences between analog and digital metering, check out the article Everything You Wanted to Know about Digital Metering. NOT ALL PLUG-INS ARE CREATED EQUAL Those EQ plug-ins that come with your host sequencer may be convenient, and these days, probably sound pretty good too. However, specialty plug-ins made for mastering may sound better—although they’ll take a bigger hit from your CPU—and give a smoother, more “analog” sound. CHOOSE YOUR DITHER ALGORITHMS CAREFULLY Although some people think dithering doesn’t matter (and frankly, for music with a limited dynamic range, it pretty much doesn’t), dithering can indeed help some digital mixes sound better (see Fig. 4). However, there are two issues that will affect how you apply dithering. Your host may dither automatically, which you don’t want if your mix will be mastered later with a mastering program; and, you’ll often have a choice of dithering algorithms. Their sonic differences may not be obvious, but they can have an almost subconscious influence. Fig. 4: Steinberg Cubase includes Apogee's UV22 High Resolution dithering plug-in; here it's inserted in the final master bus. To evaluate the sound, take a high quality recording of material like a piano chord that decays to nothingness, copy the track, and apply different types of dithering. Cut off the note attack until it’s decayed to an extremely low level, where the dithering comes into play. Normalize each example, and you’ll hear the difference quite clearly. Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  19. A dirty computer can lead to everything from decreased performance to component failures—but be careful about how you clean it by Craig Anderton Dust, dirt, and hair aren’t good for computers—especially if the dust lodges between the fins of heat sinks, thus reducing their ability to dissipate heat. Here’s the right way to clean a computer. 1. If possible, keep your computer off the floor and away from doors and windows. If less dust gets in, there’s less dust to remove. 2. Opening up a computer entails risks. Of course it should be unplugged, but even then, if you drop something into it by mistake or clean it improperly, your wonderful productivity tool can become a doorstop. So—proceed at your own risk. 3. Never use a vacuum cleaner. Except for special vacuum cleaners designed for cleaning electronics, vacuums can create static charges capable of destroying components. 4. Go to a local office supply store, and buy a can of compressed air designed specifically for cleaning electronic gear. Take the computer outside, and spray air into it from a reasonable distance—don’t blast the components—and do so in short bursts. Avoid directing the spray toward hard drives and optical drives. 5. Fans and heat sinks tend to accumulate the most dust. Short bursts on heat sinks from several inches away will do the job, but for fans, hold the fan stationary as you spray it to make sure it doesn’t spin faster than the rated number of RPMs. 6. Bring the computer back inside, but before reassembling it—and only if you’re confident in your maintenance skills—partially remove any connectors and plug them back in to wipe the contacts. You don’t have to take components all the way out; for example, you can push on the little arms at the side of RAM chips to raise the RAM 1/16th of an inch or so, then push it back down again. Do the same with cards and power supply connectors. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  20. Who's stealing your headroom? It may be the archenemy of good audio - DC Offset By Craig Anderton It was a dark and stormy night. I was rudely awakened at 3 AM by the ringing of a phone, pounding my brain like a jackhammer that spent way too much time chowing down at Starbucks. The voice on the other end was Pinky the engineer, and he sounded as panicked as a banana slug in a salt mine. "Anderton, some headroom's missing. Vanished. I can't master one track as hot as the others on the Kiss of Death CD. Checked out the usual suspects, but they're all clean. You gotta help." Like an escort service at a Las Vegas trade show, my brain went into overdrive. Pinky knew his stuff...how to gain-stage, when not to compress, how to master. If headroom was stolen right out from under his nose, it had to be someone stealthy. Someone you didn't notice unless you had your waveform Y-axis magnification up. Someone like...DC Offset. Okay, so despite my best efforts to add a little interest, DC offset isn't a particularly sexy topic. But it can be the culprit behind problems such as lowered headroom, mastering oddities, pops and clicks, effects that don't process properly, and other gremlins. DC OFFSET IN THE ANALOG ERA We'll jump into the DC offset story during the '70s, when op amps became popular. These analog integrated circuits pack a tremendous amount of gain in a small, inexpensive package with (typically) two inputs and one output. Theoretically, in its quiescent state (no input signal), the ins and out are at exactly 0.00000 volts. But due to imperfections within the op amp itself, sometimes there can be several millivolts of DC present at one of the inputs. Normally this wouldn't matter, but if the op amp is providing a gain of 1000 (60dB), a typical 5 mV input offset signal would get amplified up to 5000mV (5 volts). If the offset appeared at the inverting (out of phase) input, then the output would have a DC offset of –5.0 volts. A 5mV offset at the non-inverting input would cause a +5.0 volt DC offset. There are two main reasons why this is a problem. Reduced dynamic range and headroom. An op amp's power supply is bipolar (i.e., there are positive and negative supply voltages with respect to ground). Suppose the op amp's maximum undistorted voltage swing is ±15V. If the output is already sitting at, say, +5V, the maximum voltage swing is now +10/-20V. However, as most audio signals are usually symmetrical around ground and you don't want either side to clip, the maximum voltage swing is really down to ±10V—a 33% loss of available headroom. Problems with DC-coupled circuits. In a DC-coupled circuit (sometimes preferred by audiophiles due to superior low frequency response), any DC gets passed along to the next stage. Suppose the op amp mentioned earlier with a +5V output offset now feeds a DC-coupled circuit with a gain of 5. That +5V offset becomes a +25V offset—definitely not acceptable! ANALOG SOLUTIONS With capacitor-coupled analog circuits, any DC offset supposedly won't pass from one stage to the next because the capacitor that couples the two stages together can pass AC but not DC. Still, any DC offset limits dynamic range in the stage in which it occurs. (However, if the coupling capacitor is leaky or otherwise defective, some DC may make it through anyway.) There are traditionally two ways to deal with op amp offsets: use premium op amps that have been laser-trimmed to provide minimum offset, or include a trimpot that injects a voltage equal and opposite to the inherent input offset.
In other words, with no signal present, you measure the op amp output voltage while adjusting the trimpot until the voltage is exactly zero. Some op amps even provide pins for offset control so you don't have to hook directly into one of the inputs. (Note: As trimpot settings can drift over time, if you have analog gear with op amps, sometimes it's worth having a tech check for offsets and re-adjust the trimpot setting if needed.) DIGITAL DC OFFSET In digital-land, there are two main ways DC offset can get into a signal: recording an analog signal with a DC offset into a DC-coupled system or, more commonly, inaccuracies in the A/D converter or conversion subsystem that produce a slight output offset voltage. As with analog circuits, a processor that provides lots of gain (like a distortion plug-in) can turn a small amount of offset into something major. In either case, offset appears as a signal baseline that doesn't match up with the "true" 0 volt baseline (Fig. 1). Fig. 1: With these two drum hits, the first one has a significant amount of DC offset. The second has been corrected to get rid of DC offset, and as more headroom is available, it can now be normalized for more level if desired. Digital technology has also brought about a new type of offset issue that's technically more of a subsonic problem than "genuine" DC offset, but nonetheless causes some of the same negative effects. As one example, once I transposed a sliding oscillator tone so far down it added what looked like a slowly-varying DC offset to the signal, which drastically limited the headroom (Fig. 2). Fig. 2: The top signal is the original normalized version, while the lower one has been processed by a steep low-cut filter at 20Hz, then re-normalized. Note how the level for the lower waveform is much "hotter." In addition to reduced headroom, there are two other major problems associated with DC offset in digitally-based systems. When transitioning between two pieces of digital audio, one with an offset and one without (or with a different amount of offset), there will be a pop or click at the transition point. Effects or processes requiring a signal that's symmetrical about ground will not work as effectively. For example, a distortion plug-in that clips positive and negative peaks will clip them unevenly if there's a DC offset. More seriously, a noise gate or "strip silence" function will need a higher (or lower) threshold than normal so it clears not just the noise, but the noise plus the offset value. DIGITAL SOLUTIONS There are three main ways to solve DC offset problems with software-based digital audio editing programs. First, most pro-level digital audio editing software includes a DC offset correction function, generally found under a "processing" menu along with functions like change gain, reverse, flip phase, etc. This function analyzes the signal, and adds or subtracts the required amount of correction to make sure that 0 really is 0. Many sequencing programs also include DC offset correction as part of a set of editing options (Fig. 3). Fig. 3: Like many programs, Sonar's audio processing includes the option to remove DC offset from audio clips. Second, apply a steep high-pass filter that cuts off everything below 20Hz or so. (Even with a comparatively gentle 12dB/octave filter, a signal at 0.5Hz will still be down more than 60dB). In practice, it's not a bad idea anyway to nuke the subsonic part of the spectrum, as some processing can interact with a signal to produce modulation in the below 20Hz zone.
Your speakers can't reproduce signals this low and they just use up bandwidth, so nuke 'em. Third, select a 2—10 millisecond or so region at the beginning and end of the file or segment with the offset, and apply a fade-in and fade-out. This creates an envelope that starts and ends at zero. It won't get rid of the DC offset component within the file (so you still have the restricted headroom problem), but at least you won't hear a pop at transitions. CASE CLOSED Granted, DC offset usually isn't a killer problem, like a hard disk crash. In fact, there's usually not enough to worry about. But every now and then, DC offset will rear its ugly head in a way that you do notice. And now, you know what to do about it. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  21. When it’s time to mix a recording, you need a strategy By Craig Anderton Mixing is not only an art, it’s the crucial step that turns a collection of tracks into a finished piece of music. A good mix can bring out the best in your music—it spotlights a composition’s most important elements, adds a few surprises to excite the listener, and sounds good on anything from a portable MP3 player with nasty earbuds to an audiophile’s dream setup. Theoretically, mixing should be easy: you just adjust the knobs until everything sounds great. But this doesn’t happen by accident. Mixing is as difficult to master as playing a musical instrument, so let’s take a look at what goes into the mixing process. POINTS OF REFERENCE Start by analyzing well-mixed recordings by top-notch engineers and producers such as Bruce Swedien, Roger Nichols, Shelly Yakus, Steve Albini, Bob Clearmountain, and others. Don’t focus on the music, just the mix. Notice how—even with a "wall of sound"—you can pick out every instrument because each element of the music has its own space. Also note that the frequency response balance will be uniform throughout the audio spectrum, with enough highs to sound sparkly but not screechy, sufficient bass to give a satisfying bottom end without turning the mix into mud, and a midrange that adds presence and definition. One of the best mixing tools is a CD player and a really well-mixed reference CD. Patch the CD player into your mixer, and A-B your mix to the reference CD periodically. If your mix sounds substantially duller, harsher, or less interesting, listen carefully and try to isolate the source of any differences. A reference CD also provides a guideline to the correct relative levels of drums, vocals, etc. Match the CD’s level to the overall level of your mix by matching the peak levels of both signals. If your mix sounds a lot quieter even though its peaks match the reference CD’s peak levels, that probably means that the reference has been compressed or limited a fair amount to restrict the dynamic range. Compression is something that can always be done at the mastering stage—in fact, it probably should be, because a good mastering suite will have top-of-the-line compressors and someone who is an ace at applying them. PROPER MONITORING LEVELS Loud, extended mixing sessions are tough on the ears. Mixing at low levels keeps your ears "fresher" and minimizes ear fatigue; loud mixes may get your juices flowing, but they make it more difficult to hear subtle level variations. Many project studios have noise constraints, so mixing through headphones might seem like a good idea. Although headphones are excellent for catching details that you might not hear over speakers, they are not necessarily good for general mixing because they magnify some details out of proportion. It’s better to use headphones for reality checks. THE ARRANGEMENT Scrutinize the arrangement prior to mixing. Solo project studio arrangements are particularly prone to "clutter" because as you lay down the early tracks, there’s a tendency to overplay to fill up all the empty space. As the arrangement progresses, there’s not a lot of room for overdubs. Remember: the fewer the number of notes, the greater the impact of each note. As Sun Ra once said, "Space is the place." MIXING: THE 12-STEP PROGRAM Although there aren’t any rules to recording or mixing, until you develop your own mixing "style" it’s helpful to at least have a point of departure. So, here’s what has worked for me. 
You "build" a mix over time by making a variety of adjustments. There are (at least!) twelve major steps involved in creating a mix, but what makes mixing so difficult is that these steps interact. Change the equalization, and you also change the level because you’re boosting or cutting some element of the sound. In fact, you can think of a mix as an "audio combination lock" since when all the elements hit the right combination, you end up with a good mix. Let’s look at these twelve steps, but remember, this is just one person’s way of mixing—you might discover a totally different approach that works better for you. Step 1: Mental Preparation Mixing can be tedious, so set up an efficient workspace. If you don’t have a really good office chair with lumbar support, consider a trip to the local office supply store. Keep paper and a log book handy for taking notes, dim the lighting a little bit so that your ears become more sensitive than your eyes, and in general, psych yourself up for an interesting journey. Take periodic breaks (every 45-60 minutes or so) to "rest" your ears and gain a fresher outlook on your return. This may seem like a luxury if you’re paying for studio time, but even a couple minutes of down time can restore your objectivity and, paradoxically, complete a mix much faster. Step 2: Review The Tracks Listen at low volume to scope out what’s on the multitrack; write down track information, and use removable stick-on labels or erasable markers to indicate which sounds correspond to which mixer channels. Group sounds logically, such as having all the drum parts on consecutive channels. Step 3: Put On Headphones and Fix Glitches Fixing glitches is a "left brain" activity, as opposed to the "right brain" creativity involved in doing a mix. Switching back and forth between these two modes can hamper creativity, so do as much cleaning up as possible—erase glitches, bad notes, and the like—before you get involved in the mix. Listen on headphones to catch details, and solo each track. If you’re sequencing virtual tracks, this is the time to thin out excessive controller information, check for duplicate notes, and avoid overlapping notes on single-note lines (such as bass and horn parts). Fig. 1: Sony's Sound Forge can clean up a mix by "de-noising" tracks. Also consider using a digital audio editor to do some digital editing and noise reduction (although you may need to export these for editing, then re-import the edited version into your project). Fig. 1 shows a file being "de-noised" in Sony's Sound Forge prior to being re-imported. Low-level artifacts may not seem that audible, but multiply them by a couple dozen tracks and they can definitely muddy things up. Step 4: Optimize Any Sequenced MIDI Sound Generators With sequenced virtual tracks, optimize the various sound generators. For example, for more brightness, try increasing the lowpass filter cutoff instead of adding equalization at the console. Step 5: Set Up a Relative Level Balance Between the Tracks Avoid adding any processing yet; concentrate on the overall sound of the tracks—don’t become distracted by left-brain-oriented detail work. With a good mix, the tracks sound good by themselves, but sound their best when interacting with the other tracks. Try setting levels in mono at first, because if the instruments sound distinct and separate in mono, they’ll open up even more in stereo. Also, you may not notice parts that "fight" with others if you start off in stereo. 
Step 6: Adjust Equalization (EQ) EQ can help dramatize differences between instruments and create a more balanced overall sound. Fig. 2 shows the EQ in Cubase; in this case, it's being applied to a clean electric guitar sound. There's a slight lower midrange dip to avoid competing with other sounds in the region, and a lift around 3.7kHz to give more definition. Fig. 2: Proper use of EQ is essential to nailing a great mix. Work on the most important song elements first (vocals, drums, and bass), and once these all "lock" together, deal with the more supportive parts. The audio spectrum has only so much space; ideally, each instrument will stake out its own "turf" in the audio spectrum and when combined together, will fill up the spectrum in a satisfying way. (Of course, this is primarily a function of the tune's arrangement, but you can think of EQ as being part of the arrangement.) One of the reasons for working on drums early in the mix is that a drum kit covers the audio spectrum pretty thoroughly, from the low thunk of the kick drum to the sizzle of the cymbal. Once that's set up, you'll have a better idea of how to integrate the other instruments. EQ added to one track may affect other tracks. For example, boosting a piano part's midrange might interfere with vocals, guitar, or other midrange instruments. Sometimes boosting a frequency for one instrument implies cutting the same region in another instrument; to have vocals stand out more, try notching the vocal frequencies on other instruments instead of just boosting EQ on the voice. Think of the song as a spectrum, and decide where you want the various parts to sit. I sometimes use a spectrum analyzer when mixing, not because ears don't work well enough for the task, but because the analyzer provides invaluable ear training and shows exactly which instruments take up which parts of the audio spectrum. This can often alert you to an abnormal buildup of audio energy in a particular region. If you really need a sound to "break through" a mix, try a slight boost in the 1 to 3kHz region. Don't do this with all the instruments, though; the idea is to use boosts (or cuts) to differentiate one instrument from another. To place a sound further back in the mix, sometimes engaging the high cut filter will do the job—you may not even need to use the main EQ. Also, applying the low cut filter on instruments that veer toward the bass range, like guitar and piano, can help trim their low end to open up more space for the all-important bass and kick drum. Step 7: Add Any Essential Signal Processing "Essential" doesn't mean "sweetening," but processing that is an integral part of the sound (such as an echo that falls on the beat and therefore changes the rhythmic characteristics of a part, distortion that alters the timbre in a radical way, vocoding, etc.). Step 8: Create a Stereo Soundstage Now place your instruments within the stereo field. Your approach might be traditional (i.e., the goal is to re-create the feel of a live performance) or something radical. Pan mono instruments to a particular location, but avoid panning signals to the extreme left or right. For some reason they just don't sound quite as substantial as signals that are a little bit off from the extremes. Fig. 3 shows the Console view from Sonar. Note that all the panpots are centered, as recommended in step 5, prior to creating a stereo soundstage. Fig.
3: When you start a mix, setting all the panpots to the center (mono) can pinpoint sounds that interfere with each other; you might not notice this if you start off with stereo placement. As bass frequencies are less directional than highs, place the kick drum and bass toward the center. Take balance into account; for example, if you've panned the hi-hat (which has a lot of high frequencies) to the right, pan a tambourine, shaker, or other high-frequency sound somewhat to the left. The same concept applies to midrange instruments as well. Signal processing can create a stereo image from a mono signal. One method uses time delay processing, such as stereo chorusing or short delays. For example, if a signal is panned to the left, feed some of this signal through a short delay and send its output to another channel panned to the right. However, it's vital to check the signal in mono at some point, as mixing the delayed and straight signals may cause phase cancellations that aren't apparent when listening in stereo. Stereo placement can significantly affect how we perceive a sound. Consider a doubled vocal line, where a singer sings a part and then doubles it as closely as possible. Try putting both voices in opposite channels; then put both voices together in the center. The center position gives a somewhat smoother sound, which is good for weaker vocalists. The opposite-channel vocals give a more defined, distinct sound that can really help spotlight a good singer. Step 9: Make Any Final Changes to the Arrangement Minimize the number of competing parts to keep the listener focused on the tune, and avoid "clutter." You may be extremely proud of some clever effect you added, but if it doesn't serve the song, get rid of it. Conversely, if you find that a song needs some extra element, this is your final opportunity to add an overdub or two. Never fall in love with your work until it's done; maintain as much objectivity as you can. You can also use mixing to modify an arrangement by selectively dropping out and adding specific tracks. This type of mixing is the foundation for a lot of dance music, where you have looped tracks that play continuously, and the mixer sculpts the arrangement by muting parts and doing major level changes. Step 10: Audio Architecture Now that we have our tracks set up in stereo, let's put them in an acoustical space. Start by adding reverberation and delay to give the normally flat soundstage some acoustic depth. Generally, you'll want an overall reverb to create a particular type of space (club, concert hall, auditorium, etc.) but you may also want to use a second reverb to add effects, such as a gated reverb on toms. But beware of situations where you have to drench a sound with reverb to have it sound good. If a part is questionable enough that it needs a lot of reverb, redo the part. Step 11: Tweak, Tweak, and Re-Tweak Now that the mix is on its way, it's time for fine-tuning. If you use automated mixing, start programming your mixing moves. Remember that all of the above steps interact, so go back and forth between EQ, levels, stereo placement, and effects. Listen as critically as possible; if you don't fix something that bothers you, it will forever haunt you every time you hear the mix. While it's important to mix until you're satisfied, it's equally important not to beat a mix to death. Quincy Jones once offered the opinion that recording with synthesizers and sequencing was like "painting a 747 with Q-Tips."
A mix is a performance, and if you overdo it, you’ll lose the spontaneity that can add excitement. You can also lose that "vibe" if you get too detailed with any automation moves. A mix that isn’t perfect but conveys passion will always be more fun to listen to than one that’s perfect to the point of sterility. As insurance, don’t always erase your old mixes—when you listen back to them the next day, you might find that an earlier mix was the "keeper." In fact, you may not even be able to tell too much difference between your mixes. A veteran record producer once told me about mixing literally dozens of takes of the same song, because he kept hearing small changes which seemed really important at the time. A couple of weeks later he went over the mixes, and couldn’t tell any difference between most of the versions. Be careful not to waste time making changes that no one, even you, will care about a couple days later. Step 12: Check Your Mix Over Different Systems Before you sign off on a mix, check it over a variety of speakers and headphones, in stereo and mono, and at different levels. The frequency response of the human ear changes with level (we hear less highs and lows at lower levels), so if you listen only at lower levels, mixes may sound bass-heavy or too bright at normal levels. Go for an average that sounds good on all systems. With a home studio, you have the luxury of leaving a mix and coming back to it the next day when you’re fresh, after you’ve had a chance to listen over several different systems to decide if any tweaks need to be made. One common trick is to run off some reference CDs and see what they sound like in your car. Road noise will mask any subtleties, and give you a good idea of what elements "jump out" of the mix. I also recommend booking some time at a pro studio to hear your mixes. If the mix sounds good under all these situations, your mission is accomplished. Craig Anderton is Executive Editor of Electronic Musician magazine and Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  22. Is Windows saying your drive is protected, or that you can't override security attributes, and you're stuck in "read-only" land? File this hot tip for future reference and save yourself frustration.

By Craig Anderton

From time to time, for reasons unknown to mere mortals, Windows will arbitrarily prevent you from writing to a hard drive because it says the drive is write-protected, or displays some other such error message. This seems to happen mostly to external USB drives (like the ones you use for storing audio, samples, etc.), and the solution isn't obvious: no matter what you do in the drive's "Security" tab, nothing seems to work. Fortunately, there's an easy fix (this assumes you have administrator privileges); the complete command sequence is also summarized at the end of this article.

1. Click the Start button, and type CMD.EXE in the search box. A command prompt opens.
2. Type diskpart, then press Enter.
3. Type list volume, then press Enter. You'll see a list of drives, each with a number.
4. Type select volume # (replace # with the number of the problem drive; in the example that prompted this tip, it was volume 7), then press Enter.
5. Type attributes disk clear readonly, then press Enter.
6. When you see "Disk attributes cleared successfully," you're done. Close the command prompt, and you can once again write with impunity to your formerly locked drive.

Craig Anderton is Executive Editor of Electronic Musician magazine and Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
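For quick reference, the whole session looks something like this, assuming the locked drive shows up as volume 7 as in the example above (substitute your own volume number):

diskpart
DISKPART> list volume
DISKPART> select volume 7
DISKPART> attributes disk clear readonly
Disk attributes cleared successfully.
DISKPART> exit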
  23. Why be normal? Use your footpedal to control parameters other than volume and wah

By Craig Anderton

A lot of guitar hardware multieffects, like the Line 6 POD HD500, Roland ME-70, DigiTech iPB-10 and RP1000, Vox ToneLab ST, and Zoom G3X (Fig. 1), have a footpedal you can assign to various parameters.

Fig. 1: Many multieffects, like Zoom's G3X, have built-in pedals.

However, even units without a built-in pedal sometimes have an expression pedal jack, so you can still use a pedal with the effects. If you're into amp sims, you're covered there too: Native Instruments' Rig Kontrol has a footpedal you can assign to any amp sim's parameters, and IK Multimedia's StealthPedal (Fig. 2) also works as a controller for amp sim software, not just IK's own AmpliTube.

Fig. 2: IK's StealthPedal isn't only a controller, but includes jacks for plugging in a second expression pedal, as well as a dual footswitch.

In most multieffects, volume and wah are the no-brainer, default pedal assignments. However, there are a whole lot of other parameters that are well-suited to pedal control, and assigning them can add real-time expressiveness to your playing, and variety to your sound.

ASSIGNING PEDALS TO PARAMETERS

Some multieffects make this process easy: They have patches pre-programmed to work with their pedals. But sometimes the choices are fairly ordinary, and besides, the manufacturer's idea of what you want to do may not be the same as what you want to do. So, it pays to spend a little time digging into the manual so you can figure out how to assign the pedal to any parameter you want.

Effects with a computer interface are usually the easiest for making assignments, and they're certainly the easiest to show in an article, thanks to screen shots. For example, with DigiTech's iPB-10, you can use the iPad interface to assign the expression pedal to a particular parameter. In Fig. 3, the pedal has been assigned to the Screamer effect's Drive parameter.

Fig. 3: The iPB-10 pedal now controls the Screamer effect's Drive parameter. Note that you can set a minimum and maximum value for the pedal range; in this case, it's 8 and 58 respectively.

This example shows the POD HD500 Edit program, set to the Controllers page. Here, the EXP-1 (main expression pedal) controller has been assigned to delay Feedback (Fig. 4).

Fig. 4: It's easy to assign the HD500's pedal to various parameters using the POD HD500 Edit program. Note that like the iPB-10, you can set minimum and maximum values for the pedal range.

Most amp sims have a "Learn" option. For example, with Guitar Rig, you can control any parameter by right-clicking on it and selecting "Learn" (Fig. 5).

Fig. 5: The Chorus/Flanger speed control is about to "learn" the controller to which it should respond, like a pedal that generates MIDI controller data.

With learn enabled, when you move a MIDI controller (like the StealthPedal mentioned previously), Guitar Rig will "learn" that the chosen parameter should respond to that particular controller's motion. Often these assignments are stored with a preset, so the pedal might control one parameter in one preset, and a different parameter in another. (If you're curious how a pedal's minimum/maximum range setting maps to parameter values, there's a short numeric sketch following this article.)

THE TOP 10 PEDAL TARGETS

Now that we've covered how to assign a controller to parameters, let's check out which parameters are worth controlling. Some parameters are a natural for foot control; here are ten that can make a big difference to your sound.

Distortion drive

This one's great with guitar. Most of the time, to go from a rhythm to a lead setting you step on a switch, and there's an instant change.
Controlling distortion drive with a pedal lets you go from a dirty rhythm sound to an intense lead sound over a period of time. For example, suppose you're playing eighth-note chords for two measures before going into a lead. Increasing distortion drive over those two measures builds up the intensity, and slamming the pedal full down gives a crunchy, overdriven lead.

Chorus speed

If you don't like the periodic whoosh-whoosh-whoosh of chorus effects, assign the pedal so that it controls chorus speed. Moving the pedal slowly, and over not too wide a range, creates subtle speed variations that impart a more randomized chorus effect. This avoids having the chorus speed clash with the tempo.

Echo feedback

Long, languid echoes are great for accenting individual notes, but get in the way during staccato passages. Controlling the amount of echo feedback lets you push the number of echoes to the max when you want really spacey sounds, then pull back on the echoes when you want a tighter, more specific sound. Setting echo feedback to minimum gives a single slapback echo instead of a wash of echoes.

Echo mix

Here's a related technique where the echo effect uses a constant amount of feedback, but the pedal sets the balance of straight and echoed sounds. The main differences compared to the previous effect are that when you pull back all the way on the pedal, you get the straight signal only, with no slapback echo; and you can't vary the number of echoes, only the relative volume of the echoes.

Graphic EQ boost

Pick one of the midrange bands between 1 and 4 kHz to control. Adjust the scaling so that pushing the pedal all the way down boosts that range, and pulling the pedal all the way back cuts the range. For solos, boost for more presence; during vocals, cut to give the vocals more "space" in the frequency spectrum.

Reverb decay time

To give a "splash" of reverb to an individual note, push the pedal down to increase the reverb decay time just before you play the note. Play the note, and it will have a long reverb tail. Then pull back on the pedal, and subsequent notes will have the original, shorter reverb setting. This works particularly well when you want to accent a drum hit.

Pitch transposer pitch

For guitarists, this is like having a "whammy bar" on a pedal. The effectiveness depends on the quality of the pitch transposition effect, but the basic idea is to set the effect for pitch-transposed sound only. Program the pedal so that when it's full back, you hear the standard instrument pitch, and when it's full down, the pitch is an octave lower. This isn't an effect you'd use every day, but it can certainly raise a few eyebrows in the audience as the instrument's pitch slips and slides all over the place. By the way, if the non-transposed sound quality is unacceptable, mix in some of the straight sound (even though this dilutes the effect somewhat).

Pitch transposer mix

This is a less radical version of the above. Program the transposer for the desired amount of transposition (octaves, fifths, and fourths work well), and set the pedal so that full down brings in the transposed line, and full back mixes it out. Now you can bring in a harmony line as desired to beef up the sound. Octave-lower transpositions work well for guitar/bass unison effects, whereas intervals like fourths and fifths work best for spicing up single-note solos.

Parametric EQ frequency

The object here is to create a wah pedal effect, although with a multieffects you have the option of sweeping a much wider range if desired.
Set up the parametric for a considerable amount of boost (start with 10 dB) and a narrow bandwidth, and initially sweep the filter frequency over a range of about 600 Hz to 1.8 kHz. Extend this range if you want a wider wah effect. Increasing the amount of boost increases the prominence of the wah effect, while narrowing the bandwidth creates a more intense, "whistling" wah sweep.

Increasing the output of anything (e.g., input gain, preamp, etc.) before a compressor

This allows you to control your instrument's dynamic range; pulling back on the pedal gives a less compressed (wide dynamic range) signal, while pushing down compresses the signal. This restricts the dynamic range and gives a higher average signal level, which makes the sound "jump out." Also note that when you push down on the pedal, the dynamics will change so that softer playing comes up in volume. This can make a guitar seem more sensitive, as well as increase sustain and make the distortion sound smoother.

And there you have the top ten pedal targets. There are plenty of other options just waiting to be discovered, so put your pedal to the metal, and realize more of the potential in your favorite multieffects or amp sim.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
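As promised above, here's a rough look at what a pedal's min/max range setting amounts to numerically. This is a generic Python sketch, not code from any of the units mentioned; it simply assumes the pedal arrives as a standard MIDI continuous controller (0-127) and borrows the 8-to-58 Drive range from the iPB-10 example.

# Sketch of min/max pedal scaling, using the 8-58 range from the iPB-10 example above.
# The 0-127 range is standard for MIDI continuous controllers; nothing here is
# specific to any particular unit's firmware.
def pedal_to_parameter(cc_value, param_min=8, param_max=58):
    """Map a MIDI CC value (0-127) onto a parameter range."""
    cc_value = max(0, min(127, cc_value))          # clamp to the legal CC range
    return param_min + (param_max - param_min) * cc_value / 127

# Heel down, halfway, toe down:
for cc in (0, 64, 127):
    print(cc, "->", round(pedal_to_parameter(cc), 1))
# Prints roughly: 0 -> 8.0, 64 -> 33.2, 127 -> 58.0

Most units appear to apply something like this simple linear mapping when you set a pedal's minimum and maximum values, though some add their own response curves on top.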
  25. Create all kinds of vintage phase shifter sounds with multi-stage parametric EQ

By Craig Anderton

Phase shifters work by creating frequency response peaks and notches with all-pass filter circuits, then sweeping those notches across the frequency spectrum. The sweeping action creates that characteristic "whooshing" sound associated with phasers. However, you don't always need a phase shifter, as it's possible to emulate these effects through parametric equalization, grouping, and automation. In this example, we'll show how to apply this technique using Cakewalk Sonar, and record the "phase shifter" sweeps as automation.

Enable the ProChannel EQ's two middle bands, then turn Q to around 8.0 and Level to minimum to create notches. If you're using an older, pre-ProChannel version of Sonar, the Sonitus:fx Equalizer can create the same effects; similarly, if you're using a different DAW, most include parametric equalization that's suitable for this application.

Also note that you're not limited to two notches. Three or four notches give a more intense phasing effect, and it's worth experimenting with different Q settings as well. Some phase shifters used positive feedback to create a sharper, more "whistling" sound; to emulate that sound, use three or four narrow peaks instead of notches. You can even combine peaks and notches (e.g., peak, notch, peak, notch, each set an octave higher than the previous) to create novel, but "phase shifter-like," effects.

Set the Freq controls so they're about two octaves apart, like 500 Hz and 2 kHz. Right-click on each Freq control and assign it to a group (e.g., group X); when grouped, moving one control causes the other grouped controls to track it. Then right-click on each Freq control and select Automation Write Enable. Start playback, move the Frequency controls for the desired phaser "motion," and Sonar will write automation data to the track.

Note that Sonar, as well as Cubase, lets you draw periodic modulation envelopes (triangle, sine, sawtooth, etc.), which makes periodic sweeping effects very easy to implement. (For a generic, outside-the-DAW illustration of what swept notches do, see the short sketch following this article.)

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
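If you'd like to hear what swept notches do outside of a DAW, here's a rough Python sketch using scipy. It isn't anything from Sonar or the ProChannel EQ; it just creates two notch filters two octaves apart (with roughly the Q of 8 suggested above) and slowly sweeps them across a bed of white noise.

# Sketch: emulate a phaser by sweeping two notches spaced two octaves apart
# (500 Hz to 2 kHz and up) across white noise. Generic scipy illustration only.
import numpy as np
from scipy.signal import iirnotch, lfilter

sr = 44100
noise = np.random.randn(sr * 4) * 0.1        # 4 seconds of quiet white noise
out = np.zeros_like(noise)
block = 1024

for start in range(0, len(noise) - block, block):
    # Sweep the lower notch from ~500 Hz up to ~2 kHz and back (triangle shape)
    pos = abs((start / len(noise)) * 2 - 1)   # goes 1 -> 0 -> 1 over the file
    f_low = 500 * (2 ** ((1 - pos) * 2))      # 500 Hz up to 2 kHz
    f_high = f_low * 4                        # always two octaves above the lower notch
    chunk = noise[start:start + block]
    for f in (f_low, f_high):
        b, a = iirnotch(f, Q=8.0, fs=sr)
        chunk = lfilter(b, a, chunk)
    out[start:start + block] = chunk
# Simplified: filter state isn't carried between blocks, so expect minor artifacts.
# Write 'out' to a WAV file (e.g., with the soundfile library) to audition the sweep.

Listening to the result gives the slow whoosh the article describes; automating two grouped EQ notches in your DAW produces the same basic effect on your actual tracks.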