Everything posted by Anderton

  1. You may not be as "locked in" to a loop's sound as you thought you were

by Craig Anderton

Yes, loops are convenient because you can use them "out of the box." But why be normal? Or for that matter, why put up with a loop that's close to what you want, but not perfect? There are quite a few tricks you can use to make loops groovier, courtesy of digital audio editing . . . so let's start.

SELECTIVE REMOVAL

A common DJ technique is to take parts out of a mix—for example, remove the kick when it's time to chill out the feel a bit, or take out the low-level percussion when you need a sparser, lighter sound. Of course, you can't do that with a complete loop—or can you?

To remove a kick, use your digital audio editor to cut the bass response using a highpass filter with a very steep slope (e.g., 48dB/octave; see Fig. 1). If you don't have a filter with a steep enough slope to remove the kick, or at least minimize it, try using a low-frequency shelf (a 100Hz corner frequency makes a good starting point). Apply the EQ process several times to steepen the slope, or alternately, insert several EQs in series (Fig. 1); a sketch of this cascading idea appears below. Repeat as necessary until you've marginalized the kick.

Fig. 1: Inserting several EQs with shallow slopes in series can make for a much steeper slope.
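If you'd rather experiment outside your editor, here's a minimal Python sketch of the cascading idea using SciPy. The file name and filter settings are placeholder assumptions; any editor's EQ will do the same job.

```python
# Cascade several gentle highpass filters to get one steep slope.
# Assumes a mono, 16-bit WAV loop named "loop.wav"; adjust to taste.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, data = wavfile.read("loop.wav")
audio = data.astype(np.float64) / 32768.0

# One 2nd-order (12 dB/oct) highpass at 100 Hz...
sos = butter(2, 100, btype="highpass", fs=rate, output="sos")

# ...applied four times in series for roughly 48 dB/oct.
for _ in range(4):
    audio = sosfilt(sos, audio)

wavfile.write("loop_nokick.wav", rate, (audio * 32767).astype(np.int16))
```

Note that stacking identical gentle filters also pulls the level down a bit more around the corner frequency than a single steep filter would, so fine-tune the cutoff by ear.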
Another removal technique involves using a noise gate to remove low-level signals. This can also "tighten" and "sharpen" a loop, as only the percussive peaks remain. You can apply this destructively to the loop or, perhaps better yet, use a noise gate plug-in for real-time processing and vary the gate threshold in real time.

An even more interesting option is available courtesy of Adobe Audition's Frequency Space Editing option or Roland's R-Mix software. Either of these lets you cut or process specific slices of frequencies at specific levels; you can even do things like surgically remove a kick drum from a loop (Fig. 2). Really.

Fig. 2: Roland's R-Mix allows processing specific portions of the frequency spectrum. In this screen shot, the kick drum range has been isolated inside the red frame, and is about to be removed.

In fact, here are two audio examples that show what you can do. The first one is a loop with the kick intact; the second has the kick removed.

BEAT EMPHASIS

For one tune I'd found a loop that was perfect—except for an overly busy kick, as I needed more of a slammin' "four-on-the-floor" vibe. The solution was remarkably simple: boost the bass region on every beat. Boosting about the first 25% of the beat (in other words, the same duration as a 16th note falling on the beat) seemed about right. This emphasized the kick when it fell exactly on the beat, but didn't affect the other kick hits, which simply receded a bit more into the background.

You can even use this technique to "synthesize" the feel of a kick drum in loops that don't have one. For example, I had a conga loop that would have been ideal for a break, but it lacked a kick. Mixing a kick into the loop didn't sound right—I needed more contrast between the "break" and what had come before, and the kick created a sort of sameness. So, I just boosted the bass frequencies on each beat. This added a cool bass "thump" that kept the beat going, without actually sounding like a kick.

This technique isn't restricted to bass frequencies; you can use something similar when you want to emphasize beats 2 and 4. But this time, boost the upper midrange a bit (at around 1 - 4kHz, depending on the application) to bring out instruments like snare. Again, emphasizing the first 25% of the beat seems to work best. However, be careful not to boost too much—often 2 or 3dB of emphasis is all that's needed. Also, boosting could create distortion unless you drop the overall level prior to boosting.

KEEPING IT CLEAN

With any kind of boosting, make sure the region you boost begins and ends on a zero crossing. If you're not familiar with the concept, a waveform typically crosses over a point of zero amplitude (the zero crossing point) as it transitions from negative to positive, or vice-versa. When splicing waveforms together, or processing specific regions, the region boundaries should fall on zero crossings. Otherwise, there may be an abrupt level change that causes a pop or click. While splicing or processing on zero crossing boundaries won't guarantee a click-free signal, failing to do so virtually guarantees you'll hear some kind of artifact.

Although it's sometimes possible to zoom way in and use a pencil tool or equivalent to "draw out" clicks that result from processing—especially if you have a lot of time on your hands, and significant amounts of patience—a click removal algorithm intended for de-clicking vinyl can often do the job better and faster. It's difficult to give general guidelines, because different types of de-clicking algorithms work very differently. The best approach is to experiment (make liberal use of the undo command!) until the click disappears.

WHY BE NORMAL?

Sure, you can just use canned loops. But why not put your own stamp on them? Try some of these techniques, and your music won't sound like everyone else's.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. Bouncing can go beyond track "freezing"

By Craig Anderton

In the days of four-track tape recorders, one of the tricks that made decent multitracking possible was bouncing, where you'd mix three of the tracks down into the fourth. You could then erase the three original tracks, and record three more in their place. Noise? Distortion? Yes, but it was all we had.

Bouncing's legacy lives on in track "freezing," which essentially bounces the audio from a soft synth into a hard disk audio track instead of playing back directly from the instrument. This saves CPU power, because it takes less juice to play a hard disk track than to perform a zillion real-time calculations to approximate the sound of, say, a vintage Minimoog. But bouncing has other uses, too—and that's what this article is about.

HOW TO BOUNCE

Hard disk recording programs implement bouncing in different ways, but the basic principle is the same. Note that if the track being bounced includes plug-in processors, their sound will be part of the bounce unless you can specify otherwise; and if you bounce multiple tracks, they'll likely mix together.

1. Mute any track or bus that's not supposed to be bounced. Note: if you're "bouncing" a soft synth track to turn it into audio, you may need to bounce the MIDI track driving the soft synth as well as the soft synth audio output.
2. Select the section you want to bounce. Generally, the more you've selected to bounce, the more time it takes to calculate and execute the bounce. There's no need to bounce the entire track if you need only a section.
3. Play back what you've selected and observe the track's meters (or bus meters, if that's where the bounce is coming from). Check that there's no clipping; otherwise, trim levels as necessary prior to bouncing.
4. Read any documentation to determine the available bounce functions. Generally, you'll have two options: Bouncing creates a new hard disk audio track, or it exports the track to an audio file, which you can then import into the program.
5. Initiate the bounce.
6. Play back the bounced track to make sure there aren't any glitches, overloads, etc.

BOUNCING APPLICATIONS

Now that you know how to bounce, here are two very useful applications.

Back up soft synth parts. When backing up a project, rendering a soft synth to an audio file via bouncing (Fig. 1) provides a "safety net" if you need to call up a project in the future, and for some reason (e.g., lack of compatibility with a newer operating system) you can't load the soft synth. If you've inserted a plug-in processor after the soft synth, consider bouncing two versions—one with the effect, so you can reproduce the sound as planned, and one without in case you want to change the effect later.

Fig. 1: A track is being bounced in Cakewalk Sonar, which gives the option to turn several parameters on or off during the bouncing process. Here, Track FX are enabled so that any effects are part of the bounced sound; however, you might want to create a second backup track without effects—just in case.

The "backwards tape" effect. To resurrect this classic effect, duplicate the track to which you want to add reverse processing, then reverse this copy (look for reverse under a program's DSP menu). Next, bounce the reversed track through reverb (no dry sound, only processed) to another track. Delete the copied/reversed track; it's not needed any more. Finally, reverse the reverb track that was bounced, and make sure it lines up with the original track.
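For the curious, those steps collapse to a few lines offline. Here's a minimal Python sketch, assuming a mono source file and a mono reverb impulse response standing in for the reverb plug-in (both file names are placeholders):

```python
# "Backwards tape" reverb: reverse, add 100% wet reverb, reverse again.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("vocal.wav")        # mono source track
ir_rate, ir = wavfile.read("reverb_ir.wav")  # mono reverb impulse response
assert rate == ir_rate, "sample rates must match"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

reversed_dry = dry[::-1]                     # step 1: reverse the copy
wet = fftconvolve(reversed_dry, ir)          # step 2: wet-only reverb bounce
pre_verb = wet[::-1]                         # step 3: reverse the reverb

pre_verb /= np.max(np.abs(pre_verb))         # normalize to avoid clipping
wavfile.write("preverb.wav", rate, (pre_verb * 32767).astype(np.int16))
```

Mix preverb.wav under the original track, sliding it in time so the reversed tail swells into each phrase.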
Don't forget that you can get creative with the reverb track—pitch shift it, slide it forward or backward in time to line up correctly (or incorrectly), and so on.

Create a stereo master. Bounce everything down to two tracks, and voilà—there's your final mix. So why not just export to an AIFF or WAV file? You can, and eventually will. But there's an advantage to this approach. Suppose you listen back to the track, and decide the piano needs to come up a tiny bit in one section. Rather than start over from scratch or mess with automation, just set the piano level as desired, select the region where you want the piano, and bounce just that section into the track with the final mix. The splice points should be sample-accurate, so you should hear no click or transition as the old mix transitions into or out of the new section, unless level changes occur in the middle of a note.
  3. Before you play with your new gear, make sure you keep a record of its vital stats

by Craig Anderton

When you buy a piece of gear, of course the first thing you want to do is have fun with it! But think about the future: At some point, it's going to need repairs, or you might want to sell it, or it might (and I sure hope it doesn't) get stolen. As a result, it's a good idea to plan ahead and do the following.

1. Buy some kind of storage system for saving all the various things that come packed with the gear. This includes rack ears you might use someday if you rack mount it, the owner's manual or a CD-ROM containing any documentation, any supplementary "read me" pieces of paper, that audio or MIDI adapter you don't think you'll use but will need someday, and the like. For storage, I use stackable sets of plastic drawers you can buy inexpensively just about anywhere; for gear that comes only with paper and no bulky accessories, I have files in a filing cabinet packed with manuals and such. A more modern solution for downloadable files is to keep a "manual bookshelf" on your iPad.

2. Register your purchase. Sometimes it's a hassle to do this, but it's important to establish a record for warranty work. For software, it can mean the difference between paying for an upgrade and getting one for free, because a new version came out within a short period of time after you purchased the program. I always check the "Keep me notified of updates" box if available; sure, you'll get some commercial offers and such, but you'll also be among the first to find out that an update is available.

3. Record any serial numbers, authorization codes, etc. Also record your user name and password for the company's web site; with software, that's often what you need to access downloads and upgrades. Also record when and where you purchased the gear, and how much you paid. I keep all this information on my computer, and copy it to a USB stick periodically as backup. (See the sketch after this list for one simple way to structure such a record.)

4. For software, retain all firmware and software updates. If you ever have to re-install a program, it may not be possible to upgrade from, say, Version 1 to Version 3—you may need to go through Version 2 first. I keep all upgrades on a data drive in my computer, and backed up to an external hard drive.

With all this info at your fingertips, if you ever go to sell the gear, you'll be very glad you had these records. What's more, if any problems crop up with your gear, you'll be well-prepared to deal with them.
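There's no official format for the record described in item 3; as one hypothetical example, a small script that appends one entry per piece of gear to a text file does the job (every name and value below is made up):

```python
# One record per piece of gear; copy the file to a USB stick as backup.
# All values here are made-up placeholders.
import json

gear_record = {
    "item": "Hypothetical Synth Model X",
    "serial_number": "SN-0000000",
    "authorization_code": "XXXX-XXXX-XXXX",
    "vendor_site_username": "myname",
    "purchased_from": "Local Music Store",
    "purchase_date": "2012-01-15",
    "price_paid_usd": 499.00,
    "notes": "Registered for updates; rack ears in drawer 3",
}

with open("gear_records.json", "a") as f:
    f.write(json.dumps(gear_record, indent=2) + "\n")
```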
  4. It's not easy to do a great mastering job - but the sooner you start, the sooner you'll get better at it

by Craig Anderton

The art of mastering is the process of taking your mixes, adding any final polish (e.g., altering the tone, controlling dynamics, making sure levels are consistent), and in the case of an album, assembling the various cuts so they create a cohesive listening experience. You may even do things like shorten intros or solos, add reverb, or make other more drastic changes—whatever it takes to produce a great-sounding recording.

However, the requirement for years of expertise hasn't changed, and you still need good ears—and a serious skill set—to do mastering well. But the way to acquire years of expertise is to roll up your sleeves, start mastering, and learn the ins and outs of how the process works. If your mixes end up sounding better after you've mastered them, you're on your way. We'll get into specific mastering techniques in subsequent issues of the HC Confidential Newsletter, but before you even boot up your computer, it's important to create a proper environment for mastering. Let's look at what you need to think about as you get ready to start.

THE IMPORTANCE OF YOUR ROOM

The single most important piece of gear for mastering is not a plug-in, rack processor, or software, but an acoustically-treated room. It doesn't matter how good your speakers are, or what kind of a system you're running, if you can't accurately evaluate the sound because of room problems. Conversely, good room acoustics can reveal problems in other elements of the signal chain.

Acoustics is one of the main reasons why so many mixes that come out of project studios aren't "transportable": You mix something that sounds fine in your studio, but doesn't sound right anywhere else. That's because you've mixed and then mastered to compensate for the deficiencies in your room, and other rooms will have different deficiencies. If you have an accurate response in your room, then the deviations won't be as great when you play back your music in other rooms.

Fig. 1: This illustration shows a standing-wave condition, where a wave reflects back from a wall out of phase, thus canceling the original waveform. At other frequencies, the reflection can just as easily reinforce the original waveform. These frequency response anomalies affect how you hear the music as you mix.

Acoustic treatment is a topic that could take up a book, and in fact, there's a good one by Mitch Gallagher titled Acoustic Design For The Home Studio (Thomson Course Technology, ISBN-10: 159863285X, ISBN-13: 978-1598632859). While you can certainly improve matters yourself, if you have the budget, it's worth calling a professional. When a friend and musical collaborator of mine, Spencer Brewer, was building a studio, he wisely allocated a significant portion of his budget to hiring a professional studio designer who was well-versed in acoustic treatment. Over the years, gear has come and gear has gone, but his fine-sounding room has remained the constant for all his work.

There are many sources of information on acoustics other than the above-mentioned book. Web sites for companies like Real Traps, Auralex, Primacoustic, and others often have a wealth of information and ideas on how best to tune a room. I do have a couple pieces of advice, though.
If you don't have an acoustically-treated room and don't think you need to do anything, here's a real ear-opener: Set up an audio level meter (e.g., the kind made by Radio Shack for monitoring workplace noise levels; catalog numbers are 33-2055 and 33-4050). Sit with it in the middle of your room, run a sine wave test tone oscillator through the speakers, and watch the meter. Unless you have great monitors and an acoustically tuned room, that meter will fluctuate like a leaf in a tornado. Speakers by themselves do not have perfectly flat responses, but they look like a ruler compared to the average untreated room.

You don't even need a level meter to conduct this test: Play a steady tone around 5 kHz or so, then move your head around. You'll hear obvious volume fluctuations. (If you can't hear the 5 kHz tone, then perhaps it's time to look for a different line of work!) These variations occur because as sound bounces around off walls, the reflections become part of the overall sound, creating cancellations and additions. If you don't have a test oscillator handy, the short sketch below will generate suitable test tones.
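As a convenience (this assumes Python with NumPy and SciPy installed, and writes a WAV you can loop in any player), here's one way to generate that 5 kHz test tone, or any other frequency you want to walk around the room with:

```python
# Generate a steady sine test tone for checking room response by ear.
import numpy as np
from scipy.io import wavfile

RATE = 44100        # CD-quality sample rate
FREQ = 5000         # test frequency in Hz; try 100 Hz for bass modes too
SECONDS = 10

t = np.arange(RATE * SECONDS) / RATE
tone = 0.5 * np.sin(2 * np.pi * FREQ * t)   # about -6 dBFS

wavfile.write("test_tone_5k.wav", RATE, (tone * 32767).astype(np.int16))
```

Keep the playback level moderate; a sustained sine wave at high volume is rough on both tweeters and ears.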
Another example of how acoustics affects sound is when you place a speaker against a wall, which seems to increase bass. Here's why: Any sounds emanating from the rear of the speaker, or leaking from the front (bass frequencies are very non-directional), bounce off the wall. Because a bass note's wavelength is so long, the reflection will tend to reinforce the main wave. This is a greatly simplified explanation, but it gets the principle across.

Fig. 2: Placing a speaker with its back against the wall often gives an apparent increase in bass; placing it in a corner accentuates the bass even more.

As the walls, floors, and ceilings all interact with speakers, it's important that any speakers be placed symmetrically within a room. Otherwise, if (for example) one speaker is 3 feet from a wall and the other 10 feet from a wall, any reflections will be wildly different and will affect the response.

Some people try to compensate for room anomalies by inserting a graphic equalizer just before their power amp and "tuning" the equalization to adjust for those anomalies. While this sounds good in theory, if you deviate at all from the "sweet spot" where the measurement microphone was, the frequency response will be off. Also, heavily equalizing a poor acoustical space simply gives you a heavily equalized poor acoustical space. Like noise reduction, which works best on signals that don't have a lot of noise, room tuning works best on rooms that don't have serious response problems, because you've already addressed any underlying problems.

THE MONITOR FACTOR

After the room (and your ears, of course), speakers are the most important element in mastering—again because you have to trust what you're hearing. Traditional studios have large monitors mounted at a considerable distance (6 to 10 ft. or so) from the mixer, with the front flush to the wall, and an acoustically-treated control room to minimize response variations. The "sweet spot"—the place where room acoustics are most favorable—is designed to be where the engineer sits at the console.

In smaller project studios, near-field monitors have become the standard way to monitor. With this technique, small speakers sit around 3 to 6 feet from the mixer's ears, with the head and speakers forming a triangle.

Fig. 3: When using near-field monitors, the speakers should point toward the ears and be at ear level. If slightly above ear level, they should point downward toward the ears.

Near-field monitors reduce (but do not eliminate) the impact of room acoustics on the overall sound, as the speakers' direct sound is far greater than the reflections coming off the room surfaces. As a side benefit, because of their proximity to your ears, near-field monitors do not have to produce a lot of power. This also relaxes the requirements for the amps feeding them.

However, placement in the room is still an issue. If the speakers are placed too close to the walls, there will be a bass build-up. High frequencies are not as affected, because they are more directional. If the speakers are free-standing and placed away from the wall, back reflections from the speakers bouncing off the wall could cause cancellations and additions for the reasons mentioned earlier. You're pretty safe if the speakers are more than 6 ft. away from the wall in a fairly large listening space (this places the first frequency null point below the normally audible range), but not everyone has that much room. My solution, crude as it is, has been to mount the speakers a bit away from the wall on the same table holding the mixer, and pad the walls behind the speakers with as much sound-deadening material as possible.

Nor are room reflections the only problem; if the speakers are placed on top of a console, reflections from the console itself can cause inaccuracies. To get around this, in my studio the near-fields sit to the side of the mixer, and are slightly elevated. This makes as direct a path as possible from speaker to eardrum.

ABOUT NEAR-FIELD MONITORS

There are lots of near-field monitors available, in a variety of sizes and at numerous price points. Most are two-way designs, with (typically) a 6" or 8" woofer and a smaller tweeter. While a three-way design that adds a separate midrange driver might seem like a good idea, adding another crossover and speaker can complicate matters. A well-designed two-way system will beat a so-so three-way system.

There are two main monitor types, active and passive. Passive monitors consist of only the speakers and crossovers, and require outboard amplifiers. Active monitors incorporate any power amplification needed to drive the speakers from a line level signal. I generally prefer powered monitors, because the engineers have (hopefully!) tweaked the power amp and speaker into a smooth, efficient team. Issues such as speaker cable resistance become moot, and protection can be built into the amp to prevent blowouts. Powered monitors are often bi-amped (i.e., there's a separate amp for the woofer and tweeter), which minimizes intermodulation distortion and allows tailoring the crossover points and frequency response for the speakers being used.

However, there's of course nothing wrong with hooking up passive monitors (which are less expensive than active equivalents) to your own amps. Just make sure your amp has adequate headroom. Any clipping that occurs in the amp generates lots of high-frequency harmonics (ask any guitarist who uses distortion), and sustained clipping can burn out tweeters.

One important point is that monitors have improved dramatically over the years, yet prices have spiraled downward; it's now possible to get a truly fine set of speakers for well under a thousand dollars.

IS THERE A "BEST" MONITOR?

On the net, you'll see endless discussions on which near-fields are best.
Although it's a cliché that you should audition several speakers and choose the model you like best, I believe you can't choose the perfect speaker, because such a thing doesn't exist. Instead, you choose the one that's as neutral and accurate as humanly possible. While some people advise choosing a speaker that colors the sound the way you prefer, that's the approach to take with the hi-fi speakers in your living room, not with mastering tools.

Choosing a speaker is an art. I've been fortunate enough to hear my music over some hugely expensive, very-close-to-perfect systems in mastering labs and high-end studios, so I know exactly what it should sound like. My criterion for choosing a speaker is simple: Whatever makes my "test" CD sound the most like it did over the high-end speakers wins. If you haven't had the same kind of listening experiences, book 30 minutes or so at a really good studio and bring along one of your favorite CDs (you can probably get a price break because you're not asking to use much of the facilities). Listen to the CD and get to know what it should sound like, then compare any speakers you audition to that standard. For example, if the piano on your mix sounds a little understated on the expensive speakers, choose speakers where the piano is equally understated.

One caution: If you're A-B comparing two sets of speakers and one set is slightly louder than the other (even a fraction of a dB can make a difference), you'll likely choose the louder one as sounding better. Make sure the speaker levels are matched as closely as possible in order to make a valid comparison; the sketch below shows one way to compute a matching gain.
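Level matching is usually done with pink noise and an SPL meter, but if you're comparing recordings made of each speaker (or comparing files), a quick RMS comparison tells you how much to trim. A rough sketch, assuming two mono 16-bit WAV recordings made under otherwise identical conditions:

```python
# Compare average (RMS) levels of two recordings and report the offset,
# so you can trim one monitor's level to match the other before A-B'ing.
import numpy as np
from scipy.io import wavfile

def rms_db(path):
    rate, data = wavfile.read(path)
    samples = data.astype(np.float64) / 32768.0   # assume 16-bit mono
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)) + 1e-12)

a = rms_db("speaker_a.wav")
b = rms_db("speaker_b.wav")
print(f"Speaker A: {a:.2f} dBFS, Speaker B: {b:.2f} dBFS")
print(f"Trim speaker {'A' if a > b else 'B'} by {abs(a - b):.2f} dB to match")
```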
LEARNING YOUR SPEAKER AND ROOM

Ultimately, because your own listening situation is likely to be at least slightly imperfect, you need to "learn" your system's response. For example, suppose you master something in your studio that sounds fine, but in a high-end studio with accurate monitoring, the sound is bass-heavy. That means your monitoring environment is shy on the bass, so you boosted the bass to compensate (this is a common problem in project studios with small rooms). When mastering in the future, you'll know to mix the bass lighter than normal in order to have it come out okay. Compare midrange and treble as well. If vocals jump out of your system but lay back in others, then your speakers might be "midrangey." Again, compensate by mixing midrange-heavy parts back a little bit.

HEADPHONES, HI-FI SPEAKERS, AND SATELLITE SYSTEMS

Musicians on a budget often wonder about mixing over headphones, as $100 will buy you a great set of headphones, but not much in the way of speakers. Although mixing exclusively on headphones is not a good idea, I highly recommend keeping a good set of headphones around as a reality check (not the open-air type that sits on your ear, but the kind that totally surrounds your ear). Sometimes you can get a more accurate bass reading using headphones than you can with near-fields. Careful, though: It's easy to blast your ears with headphones and not know it. Watch those volume levels (and be real careful about accidentally setting up a feedback loop—a loud enough squeal could cause permanent hearing damage).

As to hi-fi speakers, here's a brief story. For almost 15 years, I mixed over a set of trusted bookshelf speakers in my home studio. These were some of the least sexy-sounding and most boring speakers in the world. But they were neutral and flat, and more importantly, I had "learned" them in the process of taking my mixes to many pro studios for tweaking or mastering. In fact, when listening over expensive speakers, the sound was almost always exactly what I expected, with one exception: Signals below about 50 Hz simply vanished on my speakers. Therefore, with instruments like orchestral kick drums, I had to mix visually by checking the meters, then verify the mix at another facility. Thankfully, I've since upgraded to "real" monitors!

So while I don't recommend it, you can use hi-fi speakers if you absolutely must, assuming they're relatively flat and unbiased (watch out; some consumer-oriented speakers "hype" the high and low ends). However, they often aren't meant to take a lot of power, so be careful not to blow them out. One other tip: Follow the manufacturer's instructions about whether speakers should be mounted horizontally or vertically; it does make a difference.

Lately, "satellite" systems have appeared where the near-fields are physically very small—in fact, too small to produce adequate bass (some would argue that no 6" or 8" speaker can really produce adequate bass, but sometimes we need to reconcile finances and space with the laws of physics). To compensate, a third element, the "subwoofer," adds a fairly large speaker, and is crossed over at a very low frequency so that it reproduces only bass notes. This speaker usually mounts on the floor, against a wall; in some respects placement isn't too critical, because bass frequencies are relatively non-directional.

Can you use satellite-based systems to make your computer audio sound great? Yes. If your living space is tight, is this a good way to make your hi-fi setup less intrusive? Yes. Would you mix your major label project over them? Well, I wouldn't. Perhaps you could learn these systems over time as well, but I personally have difficulty with the disembodied bass when it comes to critical mixes.

TESTING ON MULTIPLE DELIVERY SYSTEMS

Finally, no matter how good your speakers and acoustics, before signing off on a mastering job run off a "proofing" CD or two and listen through anything you can—car stereo speakers, hi-fi bookshelf speakers, big-bucks studio speakers, boom boxes, headphones, etc. This gives an idea of how well the song will translate over a variety of systems. If all is well, great—mission accomplished. But if the CD sounds overly bright on, say, five out of eight systems, consider pulling back on the brightness just a bit.
  5. Make your drum loops come alive

By Craig Anderton

Drum loops: Boring. Repetitive. Yawn. The cliché is that drum loops are boring and repetitive. This isn't really surprising, because many times, they are. But you don't have to succumb to dumb drums—there are lots of ways to make drum loops anything but a yawner.

ADD INDIVIDUAL DRUM HITS

Drum loop libraries often include individual drum hits. So, you can set up another track adjacent to the track containing the loop, and drag in some additional snare or kick hits. The occasional off-beat hit can liven up a part by adding an element of surprise, or increasing emphasis as needed.

REMOVE, THEN REPLACE

Programs like Adobe Audition and WaveLab can cut specific frequency and amplitude ranges, as monitored in a spectral view. Use this function to remove the kick part from a loop while retaining the other drum sounds, then overdub a kick part with more variations and interest. I've also been able to remove some percussion sounds, like triangle and clave. This technique is not a panacea; it pretty much demands a dry loop, as reverb is such a diffuse sound that it's hard to pin down and remove. Otherwise, this type of editing can be extremely effective.

USE REAL CYMBALS

One technique is to use drum loops that don't have cymbals. Then mic some cymbals, set up to do an overdub, and play the cymbal part yourself. Not only will the cymbals provide a richness that's difficult for a sample to match, you can vary the cymbal part to keep the loops from sounding identical from one pass to the next.

USE MULTITRACK DRUM LOOP LIBRARIES

Multitrack drum libraries, such as the Discrete Drums line carried by Sonoma Wireworks, require a little more work than standard drum libraries—but the results are well worth it. One of the biggest advantages is that because individual drums are on separate tracks, it's easy to add dynamics to just one sound. You can also make timbral changes, such as pulling back a bit on the snare's treble during quiet parts, then increasing it a shade when you want the part to cut a little more. Another option involves altering the room mic levels to complement the song: To make the sound bigger, bring up the room mic tracks a bit; reduce them for a more intimate sound.

Furthermore, you can use a program like Drumagog to replace particular drum sounds, such as the kick or snare. Drumagog works by detecting when a drum hit occurs, then generating a trigger to play a different drum sound. Assuming separate source tracks, replacing sounds is usually easy.

Finally, you can shift the track timing: Lag the snare track a bit behind the beat to create a looser, laid-back vibe, or push the snare a bit ahead for a more insistent "feel."

LOOP VARIATIONS WITH CHOPPING

Chopping a loop into pieces and rearranging them can work wonders; the sketch after this section shows the idea in code. For example, cut a 16th note from the loop's beginning, then paste it in for the two 16th notes that precede the loop. While you're at it, draw in a level curve so the pasted hits build up to the loop itself (Fig. 1). The end result is a seductive lead-in.

Fig. 1: The loop beginning (highlighted in black) has been copied and pasted twice just before the loop, providing a cool lead-in.

You can also chop internally to the loop; for example, swap the second and third beats to add some variation. Or, "intensify" a part by chopping an eighth-note hit in half, throwing away the second half, and repeating the first half twice (Fig. 2). In this example, you get two 16th-note hits instead of a single 8th-note hit.

Fig. 2: Cut up a loop, then rearrange the pieces to add variety and interest (the cut and copied pieces are highlighted in yellow for clarity).
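Any audio editor can do this with cut and paste, but the arithmetic is easy to see in code. Here's a minimal sketch that swaps beats 2 and 3 of a loop, assuming a mono 4/4 loop whose length is exactly one bar (file name and tempo handling are simplified assumptions):

```python
# Swap beats 2 and 3 of a one-bar, 4/4 drum loop to create a variation.
import numpy as np
from scipy.io import wavfile

rate, loop = wavfile.read("drumloop.wav")   # assume mono, exactly 1 bar
beat = len(loop) // 4                       # samples per beat

b1, b2, b3, b4 = (loop[i * beat:(i + 1) * beat] for i in range(4))
variation = np.concatenate([b1, b3, b2, b4])  # beats 2 and 3 swapped

wavfile.write("drumloop_swapped.wav", rate, variation)
```

In practice you'd also nudge each slice point to the nearest zero crossing, as discussed in the loop-editing article above, to avoid clicks at the splices.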
CHANGE THE TEMPO

If you're using REX or Acid-compatible loops (and their "stretch markers" are placed properly), you're in luck, because they'll follow reasonable tempo changes. Real musicians simply do not maintain a rock-steady tempo—not necessarily because they can't, but because they manipulate the "groove" to add emotional impact. Pulling back the tempo a bit can help emphasize the vocals in a sensitive verse, while speeding up a little provides the rhythmic equivalent of modulating upward by a semitone.

THE VIRTUES OF AUTOMATION

Dynamically varying the drum loop levels and timbre via host automation can help restore some of the dynamics that are taken away by repeating a loop over and over again. Even better, assign some of these parameters to a hardware control surface so you can manipulate the dynamics in real time, and do a "performance."

It does take some extra effort to make a loop really shine, but when you hear how much these techniques can add, you'll make that effort.
  6. Your computer is often the lifeblood of your recording world, not to mention your personal life - and an ounce of protection is worth a ton of cure

by Craig Anderton

Somewhere in the world, there are thunder and lightning storms happening right now. Is your computer ready to deal with the damage that can occur from a nearby strike?

The most important element in computer protection is an uninterruptible power supply (UPS), which isolates your computer from the AC line, keeps your computer on long enough for an orderly shutdown in the case of a power outage, and maintains a constant supply voltage. Don't cheap out and get a "surge suppressor" barrier strip; it's (arguably) better than nothing, but the $100 or so for a true uninterruptible power supply will pay for itself many times over if it saves your computer—and your valuable data—from a meltdown.

However, it's also important to note that surges can come through cable, DSL, and phone lines that terminate at your computer. So, choose an uninterruptible power supply that also lets you run phone or cable lines through it, and incorporates a surge suppressor for those lines.

Finally, when I lived in Florida and would get bombarded regularly with lightning, I made a special IEC line cord where I cut off the AC prongs, but left the ground line connected. If I was going to be away from the computer for a long time, I'd swap out the standard AC cord for the custom one so that nothing from the AC line could get into the computer, but the chassis was still grounded. I'm not sure if this helped, but I'm pretty sure it didn't hurt.
  7. Breathe More Life into Static Samples

by Craig Anderton

Despite the analog/retro craze and the proliferation of plug-ins, sample playback synths such as the Korg M3, Yamaha Motif, Roland Fantom series, and their kind remain popular. Furthermore, plug-ins such as IK Multimedia's SampleTank and Digidesign's Xpand! virtualize this type of instrument, making sample playback synths (along with the many soft samplers that have appeared in recent years) popular on the desktop too.

But these instruments can do much more than just play back samples; with a little tweaking, you can obtain far more expressive patches. Sometimes even a simple parameter change or two is all you really need to customize a sound to fit your needs. Like what, you say? Well, like...

LFO WAVEFORM CROSSFADES

Using an LFO to crossfade between two waves (each must be followed by its own DCA) provides a less static, more animated sound if you choose related waveforms (e.g., two different organ sounds, 5% and 50% pulse waves, two different basses, etc.). Fig. 1 shows this technique applied to Cakewalk's z3ta+. Two oscillators have two different waves (Vintage Square 1 and Triangle), but the real action happens in the Modulation Matrix toward the lower left.

Fig. 1: Using an LFO to crossfade between two different waveforms can give a more animated sound.

Note how LFO1 drives both Osc1 Level and Osc2 Level, but does so with two different curves. One is positive bipolar linear (B-LIN+), the other negative bipolar linear (B-LIN-); as one oscillator gets louder, the other gets softer. An LFO setting of 0.5 - 1.5Hz seems about right. Note that you may need to tweak the oscillator levels a bit so that there's no noticeable level variation between the two. The sketch below shows the same crossfade in code form.
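To make the math concrete, here's a minimal offline sketch of the same idea, with a slow sine LFO crossfading two related waveforms (the note frequency and the 1 Hz LFO rate are arbitrary assumptions):

```python
# LFO-driven crossfade between two related waveforms.
import numpy as np
from scipy.io import wavfile
from scipy.signal import square

RATE = 44100
t = np.arange(RATE * 4) / RATE                 # four seconds

osc1 = square(2 * np.pi * 110 * t, duty=0.05)  # 5% pulse wave
osc2 = square(2 * np.pi * 110 * t, duty=0.50)  # 50% pulse (square) wave

lfo = 0.5 * (1 + np.sin(2 * np.pi * 1.0 * t))  # 1 Hz LFO, range 0..1
mix = lfo * osc1 + (1 - lfo) * osc2            # opposite-polarity gain curves

wavfile.write("lfo_crossfade.wav", RATE, (0.5 * mix * 32767).astype(np.int16))
```

The two gain curves sum to 1 at every instant, which is the code equivalent of the B-LIN+ and B-LIN- pair described above.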
A PEAK EXPERIENCE

Synths can often generate strong peaks, and unless you tame them, they may create havoc when recording. Proper synth programming can help; for example, even though detuned (chorused) oscillators sound fat, there's a substantial output boost when the chorused waveform peaks occur simultaneously. To reduce this, drop one oscillator's level about 30% - 50% compared to the other. The sound will remain fat, yet the peaks won't be as drastic. High-resonance filter settings are also a problem if you hit a note at the filter's resonant frequency. Try adding a limiter at the output to cut peaks down to size (use as fast an attack as possible).

VELOCITY PANNING

Having an LFO pan an instrument sound back and forth is usually pretty gimmicky (although it can work with short percussive sounds, as you don't hear them long enough to detect an audible sweep). However, one panning technique can sound quite natural: Modulate panning with velocity. When you first hit a note, its stereo position will depend on the velocity; as it sustains, it will retain its location in the stereo field until replayed.

SINE AND TRIANGLE WAVES: SONIC HELPERS

Although some think sine and triangle waves are the most boring waveforms in the world, they actually have many uses. For a fuller acoustic guitar or piano sound, layer a sine wave along with the lower notes. To attenuate the sine wave at higher notes, modulate the wave's amplitude negatively according to keyboard note position (i.e., the higher you play on the keyboard, the lower the level). Also keep the overall level low—just enough to provide a subtle psycho-acoustic boost.

In fact, sine and triangle waves can add more depth to almost any sample, because digitally-generated waveforms can have more presence than digitally-recorded sounds. For example, harp samples may lack a bit of "you are there" presence due to mic limitations, room acoustics, etc. Layer a triangle wave with the harp (adjust the triangle's amplitude envelope so that it "tracks" the harp); the triangle wave provides presence, while the sample provides detail and realism. Initially set the triangle wave to the lowest possible level, then bring it up slowly to taste. Keep it subtle—we're talking background reinforcement, not something obvious.

Here's another triangle trick: To add some male voices to an ethereal female choir, layer a triangle wave tuned an octave lower. This gives a powerful bottom end that sounds like guys singing along. To maintain the ethereal quality in the upper registers, consider modulating the triangle wave amplitude according to keyboard position so that the triangle wave is not apparent on higher notes.

MORE RESPONSIVE PARAMETERS

"Doubling" modulation routings can make a parameter more responsive. For example, most keyboards have a global pressure control, adjustable for heavy, light, or moderate action. I usually choose moderate, but occasionally need a patch to have a lighter, more responsive feel. Assigning pressure twice to the same parameter (such as overall level or filter cutoff; most parameters can accept more than one modulation source) increases the sensitivity for just that parameter. The controllers will sum together, thus creating more change for a given amount of pressure. This same trick works for velocity.

STRONGER ATTACKS

To strengthen an instrument's attack, take advantage of the fact that bass sounds (slap bass, synth bass, plucked acoustic bass, etc.) tend to have fairly complex attacks. Transpose the bass wave up an octave, and layer it behind the primary sound. You'll probably want to add a fairly rapid decay to the bass so that its sustain doesn't become a major part of the sound.

BETTER STRINGS THROUGH LAYERING

String synthesizers of the '70s, based on sawtooth or pulse waves, created rich, syrupy string sounds that weren't super-lifelike, but nonetheless sounded pretty cool. Sampled strings may sound more realistic, but often lack the smoothness of analog simulations. For the best of both worlds, dial up a sawtooth or pulse wave, and adjust its envelope for as realistic a string sound as possible. Now layer it behind a string section sample, and the synthesized waveform will "fill in the cracks" in the digital waveform.

PITCHED PERCUSSIVE TRANSIENTS

Percussion instruments, when played across a keyboard, acquire a sense of pitch. Layering these with conventional melodic samples can yield hybrid sounds that are melodic, but have complex and interesting transients. Cowbell is one of my favorite samples for this application. Claves, triangle dropped down an octave, struck metal, and just about any other pitchable percussion can also give good results.

The above suggestions are just the tip of the iceberg. Sample playback synths can be a rich source of sounds that exceed your expectations, but you have to get in there and do some parameter tweaking. Go ahead and mess around—you have nothing to lose but sounds that are like everybody else's.
  8. Let's get dirty...

By Craig Anderton

Yes, it's all the rage: Lo-fi music, where you mess up sound not because you lack experience or can't afford good gear, but because you want to mess it up - and mess it up good. In this article, we'll consider when bad things happen to good basses, and why that can be fun.

MAKING THE CONNECTION

I like lo-fi sometimes, but there are limits - I still want the bass, along with the kick, to be the driving low-frequency force behind a tune. However, a lot of lo-fi boxes, while delivering cool effects, take away the rich low end we need to preserve. Lo-fi works best for me when placed "on top" of the main bass sound, like chocolate syrup on a sundae. This requires a parallel effects connection so that the effect is added to the bass (Fig. 1).

Fig. 1: Splitting a bass signal preserves the low end, while letting you add effects "on top of" this main sound.

With hard disk recording and plug-in effects, you often end up adding effects on mixdown. The simplest solution here is to copy your main bass track to another track, and use plug-ins to process the copied track. If you're using a mixer (hardware or software), you'll probably want to pan the two sounds to center, unless you're using stereo bass and stereo effects.

There's one caution: Some lo-fi effects might affect phase enough to thin out the bass sound. Flip the phase switch of the channel with the effects, and listen carefully in mono. If the sound is fuller, leave the phase flipped. If it's thinner, go back to standard phase.

LO-FI OPTIONS

Here are some of my favorite effects for turning basses into instruments of mass destruction.

Distortion. You can always obtain distortion by overloading an amp, but distortion boxes and plug-ins are often more flexible. For hard disk recording applications, guitar and bass amp simulators like Native Instruments' Guitar Rig, IK Multimedia's Ampeg SVX and AmpliTube 2, Waves GTR, Universal Audio's Nigel, and iZotope's Trash are ideal for this task. And of course, when it comes to hardware, there are a zillion options for distortion. The biggest problem with distortion is that it generates a ton of harmonics, which can tilt your instrument's spectrum too far into the treble zone, producing a thin sound. I recommend following distortion with a high-cut filter so you have some control over the high-end/low-end balance.

Ring modulator. A ring modulator has two inputs. You plug your bass into one, and some other signal source (anything from a steady tone - the usual choice - to drums or program material) into the other input. The ring modulator's output then generates two tones: the sum of the input frequencies, and the difference. For example, if you're playing A = 110 Hz on the bass and feed a 500 Hz tone into the other input, the output will consist of two tones: 610 Hz and 390 Hz. Because these are mathematically (not harmonically) related, the resulting tone is "clangorous" and has characteristics of a gong, bell, or similar inharmonic percussive instrument. (The sketch below shows just how little code this effect needs.)

Ring modulators are good for having crazy sounds going on in the background of your main line. They add a sort of goofy, non-pitched effect that unsettles the tonal center of whatever you're playing. Hardware ring modulators aren't easy to find; probably the best known is the Moogerfooger Ring Modulator. But software plug-ins are plentiful, including free ones. Go to the net and search on "ring modulators" and "plug-ins," and you'll find plenty of options to try out.
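Ring modulation is simple enough to express in a few lines. In this minimal sketch, multiplying a bass recording by a 500 Hz sine carrier produces exactly the sum-and-difference tones described above (the input file name is a placeholder assumption):

```python
# Ring modulation: multiply the signal by a sine carrier, producing
# sum and difference frequencies while suppressing the originals.
import numpy as np
from scipy.io import wavfile

rate, bass = wavfile.read("bass.wav")        # assume mono, 16-bit
bass = bass.astype(np.float64) / 32768.0

CARRIER_HZ = 500                             # 110 Hz in becomes 610 + 390 Hz out
t = np.arange(len(bass)) / rate
ringmod = bass * np.sin(2 * np.pi * CARRIER_HZ * t)

wavfile.write("bass_ringmod.wav", rate, (ringmod * 32767).astype(np.int16))
```

Mix the result in parallel with the dry bass, per the connection scheme above, so the clangorous overtones ride on top of an intact low end.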
Fig. 2: Ableton Live's Redux effect gives nasty bit decimation effects, and throws in "downsampling" for good measure.

Bit decimation. I don't know of any hardware box that performs this function, but bit reduction is a fairly common plug-in type for digital audio (Fig. 2). The concept is to reduce the number of bits used to represent a signal. For example, 16 bits gives over 65,000 steps of amplitude resolution - good enough to encode a signal with excellent fidelity. Cut that down to 4 bits, and you have only 16 steps of resolution. This turns nice, round waveforms into weird stair-step shapes that generate lots of strange harmonics. There's also a certain "graininess" to the sound, and a kind of bizarre, ringing effect.

Sample rate conversion. High sample rates give better fidelity; conversely, really low sample rates give worse fidelity. The Redux plug-in shown in Fig. 2 has a "Downsampling" option that works similarly by arbitrarily removing samples. For example, if it's set to "1," every sample at the input passes through to the output; if set to "4," three out of every four samples are discarded on their way to the output. (The sketch below implements both tricks.)
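Both effects are easy to approximate offline. A minimal sketch follows; the file name is a placeholder, and Redux itself surely differs in detail. Holding each kept sample over the gap is one common way plug-ins fake a lower sample rate.

```python
# Bit decimation and crude downsampling, lo-fi style.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("bass.wav")        # assume mono, 16-bit
x = data.astype(np.float64) / 32768.0

BITS = 4                                     # 16 steps of resolution
DOWNSAMPLE = 4                               # keep 1 of every 4 samples

# Bit reduction: quantize to 2**BITS amplitude steps.
steps = 2 ** (BITS - 1)
crushed = np.round(x * steps) / steps

# Downsampling without filtering: hold each kept sample across the gap.
crushed = np.repeat(crushed[::DOWNSAMPLE], DOWNSAMPLE)[:len(x)]

wavfile.write("bass_crushed.wav", rate, (crushed * 32767).astype(np.int16))
```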
Fig. 3: DigiTech's BP200 bass processor includes a useful octave divider effect.

Pitch shifting. Technologically speaking, pitch shifting is a hard effect to create; with budget effects, the sound quality is usually not all that great. But with newer effects (Fig. 3), by setting the pitch shift to one octave lower and playing high up on the neck, you can get a "growl" that definitely has its uses. The sound may end up being somewhat diffused, but when you need a huge bass sound, this could be the ticket. As to software, there are plenty of pitch-shifting options, but not all are real-time. Your best bet is to use the octave divider effects found in amp simulator software.

Demented plug-ins. Software plug-ins have inspired a wide range of nastifiers. Some of these add vinyl scratches and pops to sounds, some are designed to emulate overdriven analog tape, and some have no real hardware equivalents (one of my favorites is Native Instruments' Spektral Delay). The more complex the plug-in, the greater the odds that you can push the controls into creating crude, lewd, and rude effects.

WHY ON EARTH...

...would anyone want to make ugly sounds? Well, maybe you recently joined Slipknot, or maybe you just have a sense of humor. Or maybe you're tired of excessive attention to detail and want something more raw and rough. In any event, relax your standards from time to time - you may discover some unusual sounds that end up being "keepers."

  9. Exploring the Art of Filthy Signal Mutation

by Craig Anderton

I like music with a distinctly electronic edge, but I also want a human "feel." Trying to resolve these seemingly contradictory ideals has led to some fun experimentation, and one of the more recent "happy accidents" was finding out what happens when you apply heavy signal processing to multitracked drums played by a human drummer. I ended up with a sound that slid into electronic tracks as easily as a debit card slides into an ATM, yet with a totally human feel.

This came about because Discrete Drums, who make rock-oriented sample libraries of multitracked drums (tracks are kick, snare, stereo toms, stereo room mic tracks, and stereo room ambience), received requests for a more extreme library for hip-hop/dance music. I had already started using their CDs for this purpose, and when I played some examples of loops I had done, they asked whether I'd like to do a remixed sample CD with stereo loops. Thus, the "Turbulent Filth Monsters" project was born, which eventually became a sample library (originally distributed by M-Audio, and now by Sonoma Wire Works).

Although I used the Discrete Drums sample library CDs and computer-based plug-ins, the following techniques also apply to hardware processors used in conjunction with drum machines that have individual outs, or multitracked drums recorded on a multitrack recorder (or sample CD tracks bounced over to a multitrack). Try some of these techniques, and you'll create drum sounds that are as unique as a fingerprint - even if they came from a sample CD.

EFFECTS AUTOMATION AND REAL TIME CONTROL

Editing parameters in real time lets you "play" an effect along with the beat. This is a good thing. However, it's unlikely that you'll be able to vary several parameters at once while mixing the track down to a loop, so you'll want to record these changes as automation.

Hardware signal processors can often accept MIDI controllers for automation. If so, you can sync a sequencer to whatever is playing the tracks. Then, deploy a MIDI control surface (like the Mackie Control, Novation Nocturn, etc.) to record control data into the sequencer. Once in the sequencer, edit the controller data if needed. If the processor cannot accept control signals, then you'll need to make these changes in real time. If you can do this as you mix, fine. Otherwise, bounce the processed signal to another track so it contains the changes you want.

Software plug-ins for DAWs are a whole other matter, as there are several possible automation scenarios:

1. Use a MIDI control surface to alter parameters, while recording the data to a MIDI track (hopefully this will drive the effect on playback).
2. Twiddle the plug-in's virtual knobs in real time, and record those changes within the host program.
3. Use non-real-time automation envelopes.
4. Record data that takes the form of envelopes, which you can then edit.
5. Use no automation at all. In this case, you can send the output through a mixer and bounce it to another track while varying the parameter. This can require a little after-the-fact trimming to compensate for latency issues (i.e., delay caused by going through the mixer and then returning back into the computer).

For example, with VST automation (Fig. 1), a plug-in will have Read and Write Automation buttons.

Fig. 1: Click on the Write Automation button with a VST plug-in, and when you play or record, tweaking controls will write automation into your project.
If you click on the Write Automation button, any changes you make to automatable parameters will be written into your project. This happens regardless of whether the DAW is in record or playback mode.

PARALLEL EFFECTS

In many cases, you want any effects to be in parallel with the main drum sound. For example, if you put ring modulation or wah-wah on a kick drum, you'll lose the essential "thud" that fills out the bottom. With a hard disk recorder, parallel effects are easy to do: Copy the track and add the effects to the copy (Fig. 2).

Fig. 2: Ring Thing, a free download from DLM, is processing a copy of the drum track. The processed track is mixed in with the original drum track at a lower level.

With a hardware mixer, it's also not hard to do parallel processing, because you can split the channel to be processed into two mixer inputs, and insert the effect into one of the input channel strips.

THESE ARE A FEW OF MY FAVORITE FX

Okay, we're set up for real-time control and are playing back some drum tracks. Here are some of my favorite nasty drum processors.

Ring Modulator. A ring modulator has two inputs, for a carrier and a modulator. The output provides the sum and difference of the two signals while suppressing the originals. For example, if you feed in a 400 Hz carrier and a 1 kHz modulator, the output will consist of a 600 Hz and a 1.4 kHz tone mixed together. Most plug-in ring modulators dedicate the carrier input to an oscillator that's part of the plug-in, with the track providing the modulator input. A hardware ring modulator - if you can find one - may include a built-in carrier waveform, or have two "open" inputs where you can plug in anything you want.

The ring modulator produces a "clangorous," metallic, inharmonic sound (sounds good already, eh?). I like to use it mostly as a parallel effect on toms and kick; snare signals, or room sounds, are complex enough that adding further complexity usually doesn't help. Having a steady carrier tone can get pretty annoying (although it has its uses for electro-type music), so I like to vary the frequency in real time. Envelope followers and LFOs - particularly tempo-synced LFOs - are good choices, although you can always tweak the frequency manually. With higher frequencies, the sound becomes kind of toy-like; lower frequencies can give more power if you zero in on the right frequency range.

Envelope-Controlled Filter. This is another favorite for individual drum sounds; a code sketch of the idea follows below. Again, you'll probably want to run this in parallel unless you seek a thinner sound. High resonance settings make the sound more "dinky," whereas low resonance can give more "thud" and depth. For hardware, you'll likely need a stomp box, where envelope-controlled filters are plentiful (the Boss stomp boxes remain a favorite, although if you can find an old Mutron III or Funk Machine, those work too). For plug-ins, many guitar amp sims have something suitable (e.g., the Wah Wah module in Waves GTR Solo; see Fig. 3).

Fig. 3: This preset for Waves GTR Solo adds funkified wah effects to drum tracks. The Delay adds synced echoes, the Amp module adds some grit, and the Compressor at the output keeps levels under control.
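Here's a deliberately simple (and slow, since it loops per sample) block-based sketch of an envelope-controlled filter, assuming a mono drum file; real envelope filters differ in attack/release shaping and resonance behavior:

```python
# Envelope-controlled filter: a lowpass whose cutoff tracks the drum's level.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt, sosfilt_zi

rate, data = wavfile.read("toms.wav")        # assume mono, 16-bit
x = data.astype(np.float64) / 32768.0

# Envelope follower: rectify, then apply a peak-hold exponential release.
env = np.abs(x)
alpha = np.exp(-1.0 / (0.02 * rate))         # ~20 ms release
for i in range(1, len(env)):
    env[i] = max(env[i], alpha * env[i - 1])

# Sweep the cutoff between 200 Hz and 2 kHz, block by block.
BLOCK = 64
y = np.zeros_like(x)
zi = None
for start in range(0, len(x), BLOCK):
    stop = min(start + BLOCK, len(x))
    cutoff = 200 + 1800 * min(env[start], 1.0)
    sos = butter(2, cutoff, btype="lowpass", fs=rate, output="sos")
    if zi is None:
        zi = sosfilt_zi(sos) * x[start]
    y[start:stop], zi = sosfilt(sos, x[start:stop], zi=zi)

wavfile.write("toms_filtered.wav", rate, (y * 32767).astype(np.int16))
```

Mix the result in parallel with the dry drums, as recommended above, to keep the "thud" intact.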
This "muddies up" the sound in an extremely rude way, yet the clean sounds running in parallel keep it from becoming a hopeless mess. Distortion doesn't do much for snares, which are already pretty dirty anyway. But it can increase the snare's apparent decay by bringing up the low-level decay at the end. Guitar amp distortion seems particularly useful because of the reduced high end, which keeps the sound from getting too "buzzy," and low end rolloff, which avoids muddiness. Guitar amp plug-ins really shine here as well; I particularly like iZotope's Trash (Fig. 4), as it's a multiband (up to four bands) distortion unit. Fig. 4: In this preset, iZotope's Trash is set up to deliver three bands of distortion. This means you can go heavy on, say, lower midrange distortion, while sprinkling only a tiny bit of dirt on the high end. It's also good for mixed loops because multiband operation prevents excessive intermodulation distortion. Feedback. And you thought this technique was just for guitarists...actually, there are a couple of ways to make drums feed back. For hardware, one technique is to send an aux bus out to a graphic equalizer, bring the graphic EQ's output back into a channel, and turn up that channel's aux send so some signal goes back into the EQ. Playing with individual sliders can cause feedback in the selected frequency range, but this requires a really light touch - it's easy to get speaker-busting runaway feedback. Adding a limiter in series with the EQ is a good idea. My favorite feedback technique uses the Ohm Force Predatohm plug-in, which was already shown in Fig. 1. This is a multiband distortion/compression plug-in with feedback frequency and amount controls. But the killer feature is that all parameters are automatable. You can tweak the amount control rhythmically to give a taste of feedback before it retreats. Similarly, you can alter the frequency with amount set fairly high. As the frequency sweeps through a range where there's lots of audio energy, feedback will kick in - but as it sweeps past this point, the feedback disappears. LET'S NOT FORGET THE TRULY WEIRD A vocoder (Fig. 5) is a great processor for drums, as there are several possible ways to use it. Fig. 5: The Vocoder in Ableton Live. In this example, drums are modulating a guitar's power chord. You have several choices of carriers for the vocoder (circled in green), including internal noise, the modulator (so the modulator signal feeds both the modulator and carrier ins), or pitch tracking, where the carrier is a monophonic oscillator that tracks the modulator signal's pitch. One approach is to use the room ambience as the carrier, and a submix of the kick, snare, and toms as the modulator. As the drums hit, they bring in sections of the ambience, which, if you've been paying attention so far, is probably being run through some weird effect of its own. Another trick I've used is to bring in an ambience track from a different drum part and modulate that instead. You can also use the drums to "drumcode" something like a bunch of sawtooth waves, a guitar power chord, whatever. These sounds then lose their identities and become an extension of the drums. Both hardware and software vocoders are fairly common.
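If you're curious what's inside that effect, here's a minimal channel-vocoder sketch in Python - a bare-bones illustration assuming numpy and scipy are available, not a stand-in for a real vocoder plug-in. The modulator's band-by-band amplitude envelope is imposed on the matching bands of the carrier:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def envelope(x, sr, cutoff=30.0):
        # Rectify, then lowpass, to track a band's amplitude over time
        sos = butter(2, cutoff, btype='lowpass', fs=sr, output='sos')
        return np.maximum(sosfilt(sos, np.abs(x)), 0.0)

    def vocode(modulator, carrier, sr, bands=16):
        # Log-spaced band edges from 100 Hz to just under Nyquist
        edges = np.geomspace(100.0, 0.45 * sr, bands + 1)
        out = np.zeros(len(carrier))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype='bandpass', fs=sr, output='sos')
            drum_band = sosfilt(sos, modulator)   # analyze the drums...
            carrier_band = sosfilt(sos, carrier)  # ...and the same band of the carrier
            out += carrier_band * envelope(drum_band, sr)  # impose the drum envelope
        return out / (np.max(np.abs(out)) + 1e-12)  # normalize

    # Crude demo: noise-burst "drums" drumcoding a 110 Hz sawtooth
    sr = 44100
    t = np.arange(2 * sr) / sr
    drums = np.random.randn(len(t)) * (np.sin(2 * np.pi * 2.0 * t) > 0.99)
    saw = 2.0 * ((110.0 * t) % 1.0) - 1.0
    out = vocode(drums, saw, sr)

Feed the room-ambience track in as the carrier and a kick/snare/tom submix as the modulator, and you have the first trick described above.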
Generally the most whacked-out processors come in plug-in form, such as the GRM Tools series, the entire Ohm Force line (their Hematohm frequency shifter is awesome with drums), Waves' tasty modulation effects like the Enigma and MondoMod, PSP's Vintage Warmer (a superb general-purpose distortion device), and too many others to mention here - go online, and download some demos. Also, let's not forget some of those old friends that can learn new tricks, like flanger, chorus, pitch shifters, and delay - extreme amounts of modulation or swept delays can go beyond their stereotyped functions. Emagic's Logic is also rich in plug-ins, many of which can be subverted into creating filthy effects. The possibilities they open up are so mind-boggling I get tingly all over just thinking about it. SO WHAT'S THE PAYOFF? Drum loops played by a superb human drummer, with all those wonderful little timing nuances that are the reason drum machines have not taken over the world, will give your tracks a "feel" that you just can't get with drum machines. But if you add on really creative processing, the sounds will be so electronified that they'll fit in perfectly with more radical elements - synths, highly processed vocals, and technoid guitar effects. So, get creative - you'll have a good time doing it, and your recordings won't sound like a million others. What good are all these great new toys if you don't exploit them? Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
10. Expand your sound with parallel effects By Craig Anderton Back when hardware was king, creating parallel effects was pretty easy. You'd send an output into a Y-cable, split it to two different effects, and there you had it: Instant parallel processing, where one signal could take two different paths. One obvious use was creating stereo effects out of a mono source; for example, the parallel processing could consist of two chorus devices, or delays set to different delay times, each panned to opposite sides of the stereo field. Digital Audio Workstation (DAW) software is a different story. In almost all cases, the program will assume you want to put the effects in series, one right after another (Figs. 1 and 2). Fig. 1: In Avid Pro Tools, each channel has five series inserts, A-E. The first channel has three effects inserted, the second channel has one effect inserted, while the third channel has no effects inserted. Fig. 2: PreSonus Studio One Pro allows unlimited inserts, but you can also expand them to show a thumbnail of the settings, or collapse to take up less space. Channels 1 and 3 show expanded effects, while channel 2 shows three collapsed effects. There are some exceptions to serial inserts of effects; Mackie's Tracktion lets you insert complex combinations of series and parallel effects within a track, and Ableton Live, starting with version 6, lets you create instrument racks with parallel effects (Fig. 3). But for most programs, you'll need to get a little creative. Fig. 3: Ableton Live makes it easy to create parallel effects chains. In this example, the parallel effects include a chain of series compression and saturation to create distortion (shown), as well as a parallel chain with delay and another with reverb. DO YOU COPY? One way to achieve parallel effects is to copy (clone) the track to which you want to apply the parallel effects, resulting in several parallel audio tracks. You then apply effects to these tracks as needed. For example, suppose you want to add a parallel effect to a piano track, where a noise gate lets through only the peaks; furthermore, this goes to a reverb that's panned far left. Meanwhile, a second noise gate sends a different set of peaks through a short delay, to a different reverb that's panned far right. You could do this with aux sends, but there's an alternative. For this example, we need three parallel tracks:

- Straight piano only
- Straight piano + noise gate + reverb1 (panned left)
- Straight piano + noise gate + delay + reverb2 (panned right)

Copy the straight piano track two times for a total of three piano tracks. The first track is the "straight," unprocessed track. In the second track, insert the noise gate and reverb, then pan the track toward the left. For the third track, insert the noise gate, delay, and second reverb, then pan that track toward the right. (Of course, you could also slide the third track behind a bit in time to create the delay, but sometimes it's a lot more convenient to just dial in a delay, particularly if you need to sync to tempo.) Because tracks in today's DAWs are aligned with sample accuracy (and assuming the effects paths have delay compensation), you won't hear any flamming, comb filtering, or other undesirable effects when you combine the tracks. "VIRTUAL MICS" WITH PARALLEL EQ Here's a real-world example of using parallel effects to create a wider stereo image (Fig. 4). In some ways pianos are fun to record, because they generate sound over a wide area.
Stick a couple mics in the right places, and you'll end up with some great stereo imaging. But other instruments, such as classical guitar, accordion, percussion, etc., don't have a wide stereo image if you hear them from more than a few feet away—although up close, it can be a different story. If you're facing a guitarist, your right ear picks up some of the finger squeaks and string noise from the guitarist's fretting hand. Your left ear picks up some of the body's "bass boom"; although not as directional as the high-frequency finger noise, it still shifts the lower spectra somewhat to the left. Meanwhile, the main guitar sound fills the room, providing the acoustic equivalent of a "center channel." Fig. 4: These three EQ curves (shown in Sonar), when panned as described and mixed for the proper balance, create a much larger image that belies the fact that the recording was done with a single mic. This all became very clear to me when recording a guitar/keyboard duo, where the keyboard had a nice spread but the guitar kept getting shoved to the center of the image. What to do? I tried using two mics on the guitar, but the phasing issues were unacceptable. Then I thought about what made the sound "wider" as you got closer, and a solution suggested itself. I've also used the following technique to stretch a piano and organ's image beyond what I could obtain simply by using two mics; in fact, this basic principle works for most sound sources where the bass doesn't need to be in the middle of the stereo image. The first step in simulating the effect of being close to the guitar was to copy the original guitar track to two more tracks. The first clone provided the "squeak" component by including a highpass filter that cut off the low end starting around 1kHz. This was panned toward the right. The second clone for the "boom" channel used a lowpass filter with a sharp cutoff from 400Hz on up. This was panned to the left. Adding these two tracks to the main track pulled out some of the "finger squeaks" and "boom" components that were in the original sound, and positioned them in a more realistic stereo location. This also stretched the stereo image somewhat. And because these signals were extracted from one mic, there were none of the phasing problems associated with multiple mics. As to mixing these three elements, the drastic amounts of high and lowpass filtering on the cloned channels brought their overall levels way down, even without touching the channel fader. If you isolate these tracks, it seems as if their impact would be non-existent due to the low level and restricted frequency range. But if you mix them in with the main channel, the entire sound comes to life.
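For the DSP-minded, the whole trick boils down to a few lines. Here's a minimal sketch in Python - a hypothetical helper assuming numpy and scipy are available; the filter orders and the 0.5/0.4 mix gains are arbitrary starting points, so season to taste:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def virtual_mics(mono, sr, squeak_hz=1000.0, boom_hz=400.0):
        # Clone 1: the "squeak" component - highpass around 1kHz, panned right
        hp = butter(8, squeak_hz, btype='highpass', fs=sr, output='sos')
        squeak = sosfilt(hp, mono)
        # Clone 2: the "boom" component - sharp lowpass at 400Hz, panned left
        lp = butter(8, boom_hz, btype='lowpass', fs=sr, output='sos')
        boom = sosfilt(lp, mono)
        # The unprocessed track stays in the center; the clones are
        # mixed in underneath it, panned hard left and right
        left = 0.5 * mono + 0.4 * boom
        right = 0.5 * mono + 0.4 * squeak
        return np.column_stack([left, right])

Because both clones are filtered copies of the same sample-aligned track, summing them with the original widens the image without the phasing problems a second mic would introduce. -HC- Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.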
11. Understanding Digital Reverb Will Allow You to Optimize Your Reverberant Space for the Best Possible Sound By Craig Anderton There's nothing like the sound of real reverb, such as what you hear in a cathedral or symphonic hall. That's because reverb is made up of a virtually infinite number of waves bouncing around within a space, with ever-changing decay times and frequency responses. For a digital reverb to synthesize this level of complexity is a daunting task, but the quality and realism of digital reverb continues to improve. Today's digital reverbs come in two flavors: convolution and synthetic (also called algorithmic). A convolution reverb is sort of like the digital reverb equivalent of a sampling keyboard, as it's based on capturing a sonic "fingerprint" of a space (called an impulse response) and applying that fingerprint to a sound. Convolution reverbs are excellent at re-creating the sound of a specific acoustical space. Synthetic reverbs model a space via reverberation algorithms. These algorithms basically set up "what if" situations: what would a reverb tail sound like if it were in a certain type of room of a certain size, with a certain percentage of reflective surfaces, and so on. You can change the digital reverb sound merely by plugging in some different numbers—for example, by deciding the room is 50 feet square instead of 200 feet square. Even though digital synthetic reverbs don't sound exactly like an acoustic space, they do offer some powerful advantages. First, an acoustic space has one "preset"; a digital reverb offers several. Second, digital reverb is highly customizable. Not only can you use this ability to create a more realistic ambience, you can create some unrealistic—but provocative—ambiences as well. However, the only way to unlock the true power of digital reverb is to understand how its parameters affect the sound. Sure, you can just call up a preset and hope for the best. But if you want world-class reverb, you need to tweak it for the best possible match to the source material. By the way, although we'll concentrate on the parameters found in synthetic reverbs, many convolution reverbs have similar parameters. DIGITAL REVERB PARAMETERS The digital reverb effect has two main elements: The early reflections (also called initial reflections) consist of the first group of echoes that occur when sound waves hit walls, ceilings, etc. (The time before these sound waves actually hit anything is called pre-delay.) These reflections tend to be more defined and sound more like "echo" than "reverb." The decay, which is the sound created by these waves as they continue to bounce around a space. This "wash" of sound is what most people associate with digital reverb. Following are the types of parameters you'll find on higher-end digital reverbs. Lower-cost models will likely have a subset of these. Room size. This affects whether the paths the waves take while bouncing around in the virtual room are long or short. If the digital reverb sound has flutter (a periodic warbling effect that sounds very unrealistic), vary this parameter in conjunction with decay time (described next) for a smoother sound. Decay time. This determines how long it takes for the reflections to run out of energy. Remember that long reverb times may sound impressive on instruments when soloed, but rarely work in an ensemble context (unless the arrangement is very sparse). Decay time and room size tend to have certain "magic" settings that work well together.
Preset digital reverbs lock in these settings so you can't make a mistake. For example, it can sound "wrong" to have a large room size and short decay time, or vice versa. Having said that, though, sometimes those "wrong" settings can produce some cool effects, particularly with synthetic music where the goal isn't necessarily to create the most realistic sound. Damping. If sounds bounce around in a hall with hard surfaces, the reverb's decay tails will be bright and more defined. With softer surfaces (e.g., wood instead of concrete, or a hall packed with people), the reverb tails will lose high frequencies as they bounce around, producing a warmer sound with less "edge." A processor has a tougher time making accurate calculations for high frequency sounds, so if your reverb produces an artificial-sounding high end, just concede that fact and introduce some damping to create a warmer sound. High and low frequency attenuation. These parameters restrict the frequencies going into the reverb. If your digital reverb sounds metallic, try reducing the highs starting at 4-8kHz. Remember, many of the great-sounding plate reverbs didn't have much response over 5kHz, so don't fret too much about a digital reverb effect that can't do great high frequency sizzle. Having too many lows going through digital reverb can produce a muddy, indistinct sound that takes focus away from the kick and bass. Try attenuating from 100-200Hz on down for a tighter low end. Early reflections diffusion (sometimes just called diffusion). This is one of the most critical digital reverb controls for creating an effect that properly matches the source material. Increasing diffusion pushes the early reflections closer together, which thickens the sound. Reducing diffusion produces a sound that tends more toward individual echoes. For percussive instruments, you generally want lots of diffusion to avoid the "marbles bouncing on a steel plate" effect caused by too many discrete echoes. However, for vocals and other sustained sounds, reduced diffusion can give a beautiful reverberant effect that doesn't overpower the source. With too much diffusion, the voice may lose clarity. Note that there may be a second diffusion control for the reverb decay. With less versatile digital reverbs, both diffusion parameters may be combined into a single control. Early reflections pre-delay. It takes a few milliseconds before sounds hit the room surfaces and start to produce reflections. This parameter, usually variable from 0 to 100ms or so, simulates this effect. Increase the pre-delay time to give the feeling of a bigger space; for example, if you've dialed in a large room size, you'll probably want to employ a reasonable amount of pre-delay. Reverb density. Lower densities give more space between the digital reverb's first reflections and subsequent reflections. Higher densities place these closer together. Generally, as with diffusion, I prefer higher densities on percussive content, and lower densities for vocals and sustained sounds. Early reflections level. This sets the early reflections level compared to the overall digital reverb decay. The object here is to balance them so that the early reflections are neither obvious, discrete echoes, nor masked by the decay. Lowering the early reflections level also places the listener further back in the room, and more toward the middle. High frequency decay and low frequency decay. Some digital reverbs have separate decay times for high and low frequencies.
These frequencies may be fixed, or there may be an additional crossover parameter that sets the dividing line between the lows and highs. These controls have a huge effect on the overall reverb character. Increasing the low frequency decay creates a bigger, more "massive" sound. Increasing high frequency decay gives a more "ethereal" type of effect. An extended high frequency decay, which is generally not found in nature, can sound great on vocals as it adds more reverb to sibilants and fricatives, while minimizing reverb on plosives and lower vocal ranges. This avoids a "muddy" reverberant effect and doesn't compete with the vocals. ONE REVERB OR MANY? I tend not to use a lot of digital reverb, and when I do, it's to simulate an acoustic space. Although some producers like putting different digital reverbs on different tracks, I prefer to insert reverb in an aux bus, and use different send amounts to place the sound source in the reverberant space (more send places the sound further back; less send places it more up front). For this type of "program material" application, I'll use fairly high diffusion coupled with a decent amount of high frequency damping. The only exceptions to this are when I want an "effect" on drums, like gated reverb, or need a separate reverb for the voice. Voices often benefit from a bright, plate-like effect with less diffusion and damping. In general, I'll send some vocal into the room reverb and some into the "plate," then balance the two so that the vocal reverb blends well with the room sound. REALITY CHECK The most difficult task for a digital reverb is to create realistic first reflections. If you have a nearby space with hard surfaces like a tile bathroom, basement with hard concrete surfaces, or even just a room with a tiled floor, place a speaker in the room and feed it with an aux bus output. Then add a microphone in the space to pick up the reflections. Blend in the real first reflections with the decay from a digital reverb, and the result often sounds a lot more like a real reverb chamber. DOUBLE YOUR (REVERB) PLEASURE I've yet to find a way to make a bad digital reverb plug-in sound good, but you can make a good digital reverb plug-in sound even better: "Double up" two instances of reverb (each on their own aux bus), set the parameters slightly differently to create a more "surrounding" stereo image instead of a point source, then pan one digital reverb somewhat more to the left and the other more to the right. You can even do this with two different reverbs. The difference may be subtle, but it can definitely improve the sound. Curious what this sounds like? Click here to download the sound of one digital reverb, and click here to download the sound of two reverbs combined together. The difference is very subtle (it's best to listen with headphones), but as with most tweaks involving audio, these differences add up over the course of many tracks in a multitracked production. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
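To hear why two slightly different parameter sets de-correlate the left and right sides, it can help to build a toy reverb and detune it yourself. Here's a bare-bones Schroeder-style sketch in Python - an illustration assuming numpy, using classic textbook comb/allpass delay times, and in no way a production reverb:

    import numpy as np

    def comb(x, delay, fb=0.75):
        # Feedback comb: one echo every `delay` samples, decaying by `fb`
        y = np.copy(x)
        for i in range(delay, len(x)):
            y[i] += fb * y[i - delay]
        return y

    def allpass(x, delay, g=0.5):
        # Allpass diffuser: smears echoes without coloring the long-term spectrum
        y = np.zeros_like(x)
        for i in range(len(x)):
            xd = x[i - delay] if i >= delay else 0.0
            yd = y[i - delay] if i >= delay else 0.0
            y[i] = -g * x[i] + xd + g * yd
        return y

    def schroeder(x, sr, combs=(0.0297, 0.0371, 0.0411, 0.0437), decay=0.75):
        # Four parallel combs set the decay; two allpasses add diffusion
        wet = sum(comb(x, int(sr * t), decay) for t in combs)
        wet = allpass(wet, int(sr * 0.005))
        wet = allpass(wet, int(sr * 0.0017))
        return wet / (np.max(np.abs(wet)) + 1e-12)

    # "Doubling up": the same dry track through two slightly detuned
    # instances, one panned left, one panned right (dry is a mono array):
    # left  = schroeder(dry, sr)
    # right = schroeder(dry, sr, combs=(0.0313, 0.0359, 0.0401, 0.0467))

Curious what this sounds like? Click here to download the sound of one digital reverb, and click here to download the sound of two reverbs combined together. The difference is very subtle (it's best to listen with headphones), but as with most tweaks involving audio, these differences add up over the course of many tracks in a multitracked production. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.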
12. These Techniques Are the Key to Truly Groovacious Beats By Craig Anderton You don't have to do hip-hop, rap, or dance music to need to program beats—even singer/songwriters want something better than a metronome to play against. But there are beats that just sit there lifeless, making you wish you hadn't gotten into an argument with your drummer and caused him to quit the band, and there are beats that jump out of the speakers, grab you by your butt, and get you moving. So what's the "it" factor that makes the difference? I've been programming beats for years, have worked with people who are beatmasters, and closely observed dance floor posteriors during this time to see which beats have the highest Booty-Movement Factor (BMF). Now, you too can benefit from this selfless research. THE MOST IMPORTANT RULE OF BEAT-MAKING If you stop reading after this paragraph, at least you'll have this article's most important take-away: Always program your beats while other instruments are playing. You don't need much; playing against a bassline, percussive synth part, and maybe a pad will do the job. Playing along with other instruments keeps your beats from getting off on some mutant tangent, and lets them play well with others. The bass part is especially important. Program the bass first, and if it's a line that makes you want to move (or even better, makes you want to hum it too), the drums will fall together perfectly. If you're into the dubstep thing, the bass wobble is of course crucial . . . but that's why sync-to-tempo was invented. If the wobble locks with the drums, the sound hits like a laser instead of diffusing like a flashlight. When creating beats, be honest with yourself. If you don't start moving around like a gerbil in heat when your drum loop plays back, the people on the dance floor won't either. Don't waste time fixing something that doesn't work. Start over from scratch, and remember you're there to have fun. If you're not having fun, your listeners won't either. PLOP-FLOAT-TEASE Most loops are one, two, or four measures. Each kind has a different personality, so get to know your beats, and use them for what they do best. One measure: Aside from daytime television, there are few things more boring than a one-measure loop repeated by itself over and over and over again. So, a one-measure loop's mission in life is to provide a background for other beats, percussion parts, or goofy sounds, so you can build up layers that work together. The best one-measure loops are plain and normal. Clever syncopations, if played over and over, are like a house guest who just won't leave. A simple kick/snare/closed hat combo rules. For one-measure beats, simple = good. Two measures: Two-measure loops are cool because they're like aerobics—one measure breathes in, the next breathes out. The structure I use for two-measure beats is something I call "plop-float-tease." This is important, so look at Fig. 1, which shows this pattern programmed in Reason's ReDrum module. Then click to hear an example of a beat that uses this approach. Fig. 1: A two-measure loop, programmed in Reason's ReDrum module. "Plop" means a heavy downbeat. Make the velocity on the kick drum a little higher, increase the kick treble a bit so it hits harder, layer a low tom hit with the kick...anything that makes the sound plop. You want people to feel, not just hear, the downbeat. "Float" is the middle section.
This is more like the one-measure concept; you want something that's fairly neutral and keeps the beat progressing, without calling a lot of attention to itself. "Tease" disrupts that normal flow and sets you up for the next plop. This can be some tom hits, removing the kick and hats for a couple of beats while you slip in something else, a breakbeat, whatever. When you apply beatus interruptus, the next time the beginning of the loop hits, its strong downbeat re-syncs the dancers' butts/brains. Four measures: These are good for "canned" beats and sample libraries for the rhythmically-challenged, because if they're programmed well, they can stand on their own without a lot of extra layering. For these, I have a favorite structure that combines some of the one- and two-measure concepts. Measure 1 is simple (but has a plop at the downbeat), measure 2 is simple but with a tiny tease leading into measure 3 (which usually repeats measure 1), then measure 4 adds a major tease on the last couple beats to make a fill. IT DON'T MEAN A THING IF IT AIN'T GOT THAT SWING Swing lengthens the first note of an equal-valued pair of notes, and shortens the second one to compensate. Swing is crucial for a high BMF, especially for hip-hop type tempos that are 100 BPM or less. Eurotechno robot stuff usually doesn't work well with swing—don't even bother. Also, if you're using other loops with your swung drum loop, they probably don't have swing added, so the parts will start arguing. Fortunately, many DAWs let you apply the groove from one part to another (Fig. 2), and getting everything to swing together can yield near-magical results. Fig. 2: Ableton Live has a library of swing and groove options, and makes it easy to apply them - just drag the groove you want onto a clip (or use the "hot swap" feature to step through the grooves for quick auditioning). In any event, at 85-90 BPM injecting swing is like taking Vitamin Beat. Even a little bit, like 55% swing, will make a difference.
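Incidentally, if your sequencer lacks groove templates, the math behind swing is simple enough to roll yourself. Here's a minimal sketch in Python of a hypothetical helper that swings MIDI-style tick positions (assuming 480 PPQ; the 55% figure matches the suggestion above):

    def apply_swing(ticks, step=240, swing=0.55):
        # step = ticks per swing subdivision (240 = a 16th note at 480 PPQ).
        # Straight timing puts the offbeat halfway through each pair of
        # subdivisions; swing moves it `swing` of the way through instead.
        out = []
        for t in ticks:
            pair_start = (t // (2 * step)) * (2 * step)
            pos = t - pair_start
            if abs(pos - step) < step // 2:  # this note is an offbeat
                t = pair_start + int(2 * step * swing)
            out.append(t)
        return out

    # A straight 16th-note hi-hat bar, then the same bar at 55% swing:
    hats = list(range(0, 480 * 4, 240))
    print(apply_swing(hats))  # offbeats land at 264, 744, ... instead of 240, 720, ...

THE ART OF THE CYMBAL Cymbals are musical one-night stands, because you want them to show up, party, and leave. So make loops without cymbals, then add one-shot (single event, non-looped) cymbals on a separate track. If you find a really good cymbal sample, copy it to create one tuned lower and one tuned higher. That way you'll have two variations on a good-sounding cymbal, which beats hunting through a bunch of samples to find variations that sound decent. Or you can avoid samples altogether; I have some of my father's actual acoustic cymbals (he was a fine jazz drummer), and sometimes I'll mic those and record them with the track rather than use samples. The improvement is so substantial it sometimes fools people into thinking all the drums are "real." However, although a lot of cymbal samples just don't cut it, I recently discovered some new faves when I reviewed, of all things, Yamaha's Motif 10th Anniversary Pack. It comes bundled with Zildjian's Gen16 Intelligent Percussion Digital Vault Z-Pack, which has 14 beautifully-multisampled Zildjian cymbals, hosted by a customized version of FXpansion's BFD Eco (Fig. 3). Fig. 3: Zildjian's Gen16 "digital vault" cymbals sound wonderful, and articulate very well. This isn't a "lite" version of BFD Eco, and it will load other BFD Eco libraries.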
One of my favorite tricks with these is to use Sonar's Velocity MIDI plug-in to restrict velocities to a particular window of multisamples; for example, scaling all velocities to between 100 and 127 gives great crashes, but you can choose lower-velocity windows for subtler, more ethereal sounds if you're of the chill persuasion. PERCUSSION TIPS Don't get fancy with the kick, snare, and hats. You need a rock-solid foundation so dancers can feel the groove. But you also need some ear candy on the top, which is percussion's job description. You can go pretty nuts with congas, shakers, tambourines, and similar instruments as long as there's a solid foundation. They help propel the beat, and are a way to sneak in double-time and triplet parts. These elements are important because they raise the overall energy a notch. This is necessary for the "plop" and "tease" phases mentioned earlier. You want that extra energy to kick off a loop, or get people all excited before the next downbeat. Double-time percussion can also provide those all-important variations that occur during the 4th, 8th, and 12th beats of a four-measure section. But when it comes to adding percussion, behave yourself: Keep the levels sane. Percussion instruments have a lot of treble, especially tambourine, and they'll make your ears bleed unless you keep the volume mixed relatively low. You can always boost the treble if you need to kick these up a bit, but if they're too bright and you need to cut the treble, you'll muddy the snare, kick, hats, etc. Use velocity a lot to vary dynamics. Level variations keep the parts from getting annoying. And be aware that drum/percussion samples often include multiple variations—if you're doing congas, for example, there will be at least two main conga samples, and maybe a slap hit. Use 'em all. AND NOW, A WORD FROM BIG AL Albert Einstein once said that E=mc², which means that if you get enough mass moving fast enough, it becomes energy. That's the whole point of beats. Get those bodies moving, and you'll create a lot of energy. More energy = more dancing = more sweating = more people going to the bar for drinks = more money for the club owner = job security for you. Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
13. Here's How to Squeeze Your Main Squeeze By Craig Anderton Dynamics processing with studio-oriented processors? Been there, done that. But have you re-visited it lately in a guitar context? Dynamics control for vocals or program material is very different compared to guitar. Much of this is because there are many ways to use dynamics processing for guitar (or bass). So, let's take a look at the different ways to apply dynamics, with examples of suggested settings. For an introduction to compression, check out the article Compressors Demystified. If you're already up to speed, let's give a few basics on how to set up studio processors with guitar (however, note that these same basic techniques work with plug-in software compressors as well as hardware). THE INTERFACE SPACE "Stomp box" dynamics processors, while designed specifically for guitar, are more limited than rack-mount studio hardware - but the latter have level issues with guitar. Interfacing involves one of four approaches:

- Use the instrument input. If the processor has an "instrument" input, you're golden. Plug the guitar directly into the processor, then run it into the mixer, amp modeler, guitar amp (assuming you can adjust the output level to avoid total overload), or whatever. Look for an instrument input impedance above 100 kilohms, and preferably above 220 kilohms, to avoid dulling high frequencies and reducing level. But too high an impedance (in the 5-10 megohm range) reaches a point of diminishing returns, because now the input may be too sensitive and prone to noise pickup. A 1 megohm impedance is a good compromise setting.
- Use a preamp or suitable direct box. Adding a preamp or direct box (assuming it has an appropriately high input impedance) before the processor will preserve the guitar signal's fidelity and allow for best level matching. If you're driving a guitar amp, you may be able to use the dynamics processor's output control to add some extra overdrive, but don't go overboard (or do, if you like really nasty sounds!).
- Insert into your guitar amp's effects loop. If you want to record with your guitar amp but are using a line-level processor, patch it into the guitar amp's effects loop. The loop should be able to provide line levels for the send (goes into the processor's input) and return (comes from the processor's output).
- If you're using a hardware mixer, insert the dynamics processor into your mixer's channel inserts. This will also match levels properly, although you'll still have to figure out how to interface the guitar with the mixer. The choices are the same as above: If the mixer has an instrument input, great. If not, use a preamp, direct box, etc. between the guitar and mixer.

THE TECHNOLOGY HYPE Tube vs. solid state. Optical vs. VCA. Peak vs. RMS detection. Manual vs. automatic attack/decay settings. Dynamics processors inspire endless debates, but the truth is (as usual) in the ear of the beholder. Nonetheless, there are situations where these characteristics matter, as noted in subsequent sections. Now that you're set up, consider the "Big Three" most common ways to use dynamics processing (Fig. 1). We'll be referencing control settings to gain reduction indications on the gain reduction meter - a crucial visual feedback element in any dynamics processor. We'll also assume that the input signal you're feeding into your processor uses the full input range (i.e., the peak levels are just short of distortion). Fig. 1: Refer to these four images to see how various processes affect the waveform.
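Incidentally, it can help to see those controls as arithmetic before diving in. Below is a minimal feed-forward compressor sketch in Python - a crude illustration assuming numpy, not a model of any particular unit. The three setups that follow are just different settings of these same few parameters; the defaults here happen to match the sustain recipe:

    import numpy as np

    def compress(x, sr, threshold_db=-20.0, ratio=10.0,
                 attack_ms=1.0, release_ms=200.0):
        # Attack/release become one-pole smoothing coefficients
        att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = 0.0
        gain = np.ones(len(x))
        for i, s in enumerate(x):
            level = abs(s)
            # Envelope follower: fast when level rises, slow when it falls
            coeff = att if level > env else rel
            env = coeff * env + (1.0 - coeff) * level
            env_db = 20.0 * np.log10(max(env, 1e-9))
            # Above threshold, output rises only 1/ratio dB per input dB
            over = env_db - threshold_db
            gr_db = over * (1.0 / ratio - 1.0) if over > 0.0 else 0.0
            gain[i] = 10.0 ** (gr_db / 20.0)
        return x * gain  # plot (1 - gain) to "watch" the gain reduction meter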
#1: SUSTAIN The object of sustain is to bring up low levels as the string decays. In Fig. 1, note how the second waveform from the top has a squashed attack, and much higher amplitude decay, compared to the uncompressed waveform. Here are some suggested settings. Gain reduction meter: The meter should show a large amount of gain reduction (e.g., 10-16dB), and the gain should remain fairly reduced as the string decays. Threshold: Set this to a low value, like -20dB. That will allow compression to remain in effect, even at low signal levels. Ratio: Start with 10:1, and move up from there. This is an instance where large ratios are a good idea. Attack: Set a short attack time so that if the note is toward the end of its decay and you hit another note, there won't be a big pop or peak at the new note attack. With analog compressors, you'll never get a true 0 attack time - you need digital look-ahead for that. Fortunately, the transient may be so short that you can clip it without noticing any significant distortion. Release: This should be fairly long, like 200ms or so. Watch the gain reduction meter - play a note, then mute it sharply. The gain reduction meter should drift back to 0 gain reduction over about a second, not "snap" back quickly to 0. Opto vs. VCA: I'd suggest VCA to minimize attack time. However, if there's an opto option, you may like the way it colors the sound. #2: BIGGER SOUND In this case you don't want to "hear" the compressor doing its thing, but just give the guitar a level boost while sounding as uncolored as possible. The third waveform down in Fig. 1 has the same basic dynamics as the uncompressed signal, but with a little less attack amplitude and a slightly "lifted" decay. Gain reduction meter: For the most authentic sound, don't reduce gain more than -3 to -6dB. The gain reduction meter motion should also be fairly "tight," without a lot of drifting. Threshold: Set to a value around -6dB, which should be enough to have an effect without sounding "compressed." Ratio: Lower ratios will sound more transparent. Even ratios below 2:1 (e.g., 1.5:1) can be useful. In any event, it's doubtful you'll want to go much above 4:1. Attack: As you're not imposing huge amounts of compression, adding a little attack time (around 10-40ms) will allow a more percussive, thus more natural-sounding, attack. If you hear "popping" instead, either reduce the attack time, raise the threshold, reduce the ratio, or try a combination of all three. Release: Try 50ms or less. You want a smooth, but rapid, drift back to no gain reduction after you stop playing. Opto vs. VCA: Try using an Opto setting, as this can give a nice "character" to the sound. #3: CONTROLLING TRANSIENTS The classic example is slap bass, where there's a huge initial transient followed by a much lower level. If you set levels to accommodate the transient, the sustain will be too low; set levels for the sustain, and the transient will likely produce a nasty pop. Here's what to do for maximum transient control; pull back from these settings if the effect is too drastic. In Fig. 1, the bottom waveform uses transient control. Note the greatly reduced attack, which allows bringing up the entire waveform's level without clipping. But also note that the decay's shape is essentially the same as the uncompressed signal. Gain reduction meter: This should snap to the maximum amount of gain reduction, then snap back to 0 fairly rapidly after the transient is over. Threshold: Set this to a high value, like -3 to -6dB.
You want to affect just the initial transient. Ratio: Use a high ratio - over 10:1 - if the transient is strong and needs taming. Higher ratios will push the gain reduction meter further into the reduced gain zone. Attack: If possible, set this to zero as you want to clamp the transient as rapidly as possible. Release: This should be fairly short (20-50ms). The gain reduction meter should return rapidly to 0 gain reduction after the transient is over. Opto vs. VCA: Definitely VCA; you want the fastest possible attack. HEY, WHAT ABOUT NOISE? Many dynamics processors also include dynamic expansion (basically the inverse of a compressor, where gain drops off rapidly below a certain level) or noise gating. In general, I prefer dynamic expansion for its smooth decay characteristics. However, some gates include attack and decay controls, making it easier to simulate the effect of using an expander. With most processors, the easiest way to adjust the amount of noise reduction is to hit a string or chord, then let it decay to the lowest level you want to keep. Quickly turn the noise reduction threshold control until you can see that expansion is active, and you should be pretty close to the right setting. DOUBLE YOUR PLEASURE Patching two compressors in series, with both set for small amounts of compression, can give a significant amount of compression, yet sound less obvious than a single compressor set for the same total amount. The first stage essentially "pre-conditions" the signal so that the second compressor doesn't have to work so hard. If you have a stereo compressor that can be set to dual mono operation, you can patch the two individual compression channels in series. With plug-ins, you can just insert two in series in a track. The drawback is that unlike standard compression, where you have to adjust only one set of controls, an à la carte approach requires adjusting both sets of compressor controls. While this might seem like a disadvantage, most of the time you'll set them to similar settings anyway. WINDOW SHOPPING To get an idea of what's out there in compressor-land, click here and you'll see the choices are huge, ranging from under a hundred dollars to thousands (and thousands!) of dollars. But realistically, for the type of application we're describing here, you don't need anything too fancy - it's not like you're using the compressor to re-master vintage recordings for audiophile releases. Besides, these days technology is at a level where even fairly inexpensive devices can deliver excellent results. In any event, all the above tips are just guidelines. Experiment with your dynamics processor, and you may find yet another way to exploit these perhaps unglamorous, but extremely useful, devices. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
14. Let's get dirty... By Craig Anderton Yes, it's all the rage: Lo-fi music, where you mess up sound not because you lack experience or can't afford good gear, but because you want to mess it up - and mess it up good. In this article, we'll consider when bad things happen to good basses, and why that can be fun. MAKING THE CONNECTION I like lo-fi sometimes, but there are limits - I still want the bass, along with the kick, to be the driving low-frequency force behind a tune. However, a lot of lo-fi boxes, while delivering cool effects, take away the rich low end we need to preserve. Lo-fi works best for me when placed "on top" of the main bass sound, like chocolate syrup on a sundae. This requires a parallel effects connection so that the effect is added to the bass (Fig. 1). Fig. 1: Splitting a bass signal preserves the low end, while letting you add effects "on top of" this main sound. With hard disk/plug-in effects-based recording, you often end up adding effects on mixdown. The simplest solution here is to copy your main bass track to another track, and use plug-ins to process the copied track. If you're using a mixer (hardware or software), you'll probably want to pan the two sounds to center, unless you use stereo bass and stereo effects. There's one caution: Some lo-fi effects might affect phase enough to thin out the bass sound. Flip the phase switch of the channel with the effects, and listen carefully in mono. If the sound is fuller, leave the phase flipped. If it's thinner, go back to standard phase. LO-FI OPTIONS Here are some of my favorite effects for turning basses into instruments of mass destruction. Distortion. You can always obtain distortion by overloading an amp, but distortion boxes and plug-ins are often more flexible. For hard disk recording applications, guitar and bass amp simulators like Native Instruments' Guitar Rig, IK Multimedia's Ampeg SVX and AmpliTube 2, Waves GTR, Universal Audio's Nigel, and iZotope's Trash are ideal for this task. And of course, when it comes to hardware, there are a zillion options for distortion. The biggest problem with distortion is that it generates a ton of harmonics, which can tilt your instrument's spectrum too much into the treble zone, thus producing a thin sound. I recommend following distortion with a high-cut filter so you have some control over the high-end/low-end balance. Ring modulator. A ring modulator has two inputs. You plug your bass into one, and some other signal source (anything from a steady tone - the usual choice - to drums or program material) into the other input. The ring modulator output then generates two tones: the sum of the input frequencies, and the difference. For example, if you're playing A = 110 Hz on the bass and feed a 500 Hz tone into the other input, the output will consist of two tones: 610 Hz and 390 Hz. Because these are mathematically (not harmonically) related, the resulting tone is "clangorous" and has characteristics of a gong, bell, or similar inharmonic percussive instrument. Ring modulators are good for adding crazy sounds in the background of your main line. They add a sort of goofy, non-pitched effect that destabilizes the tonal center of whatever you're playing. Hardware ring modulators aren't easy to find; probably the best is the Moogerfooger Ring Modulator. But software plug-ins are plentiful, including free ones. Go to the net and search on "ring modulators" and "plug-ins," and you'll find plenty of options to try out.
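Ring modulation is also easy to demonstrate in a few lines of Python (a sketch assuming numpy is available) - multiplying the two inputs is the whole effect, and the spectrum shows only the sum and difference tones from the example above:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    bass = np.sin(2 * np.pi * 110.0 * t)     # A = 110 Hz "bass" note
    carrier = np.sin(2 * np.pi * 500.0 * t)  # steady 500 Hz tone

    # Ring modulation is simply multiplication: the product of two sines
    # contains only the sum (610 Hz) and difference (390 Hz) frequencies.
    ring = bass * carrier

    # Verify: the two biggest spectral peaks land at 390 Hz and 610 Hz
    spectrum = np.abs(np.fft.rfft(ring))
    freqs = np.fft.rfftfreq(len(ring), 1.0 / sr)
    print(sorted(freqs[np.argsort(spectrum)[-2:]]))  # [390.0, 610.0]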
Fig. 2: Ableton Live's Redux effect gives nasty bit decimation effects, and throws in "downsampling" for good measure. Bit decimation. I don't know of any hardware box that performs this function, but bit reduction is a fairly common plug-in type for digital audio (Fig. 2). The concept is to reduce the number of bits used to represent a signal. For example, 16 bits gives over 65,000 steps of amplitude resolution - good enough to encode a signal with excellent fidelity. Cut that down to 4 bits, and you have only 16 steps of resolution. This turns nice, round waveforms into weird stair-step shapes that generate lots of strange harmonics. There's also a certain "graininess" to the sound, and a kind of bizarre, ringing effect. Sample rate conversion. High sample rates give better fidelity, but conversely, really low sample rates give worse fidelity. The Redux plug-in shown in Fig. 2 has a "Downsampling" option that works similarly by arbitrarily removing samples. For example, if it's set to "1," every sample at the input passes through to the output; if set to "4," three out of every four samples are discarded on their way to the output.
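Both effects are almost embarrassingly simple to express in code. Here's a sketch in Python (assuming numpy; the function names are hypothetical, and this is a rough approximation of what Redux-style processing does, not a clone of any plug-in):

    import numpy as np

    def bit_decimate(x, bits=4):
        # Quantize a +/-1.0 signal to roughly 2**bits amplitude steps,
        # turning smooth waveforms into harmonic-rich staircases
        scale = 2 ** (bits - 1)
        return np.round(x * scale) / scale

    def downsample_hold(x, factor=4):
        # Keep every `factor`-th sample and hold it until the next one -
        # a crude stand-in for discarding samples on the way to the output
        idx = (np.arange(len(x)) // factor) * factor
        return x[idx]

Fig. 3: DigiTech's BP200 bass processor includes a useful octave divider effect. Pitch shifting. Technologically speaking, pitch shifting is a hard effect to create; with budget effects, the sound quality is usually not all that great. But with newer effects (Fig. 3), by setting the pitch shift to one octave lower and playing high up on the neck, you can get a "growl" that definitely has its uses. The sound may end up being somewhat diffused, but when you need a huge bass sound, this could be the ticket. As to software, there are plenty of pitch-shifting options, but not all are real-time. Your best bet is to use octave divider effects found in amp simulator software. Demented plug-ins. Software plug-ins have inspired a wide range of nastifiers. Some of these add vinyl scratches and pops to sounds, some are designed to emulate overdriven analog tape, and some have no real hardware equivalents (one of my favorites is Native Instruments' Spektral Delay). The more complex the plug-in, the greater the odds that you can push the controls into creating crude, lewd, and rude effects. WHY ON EARTH... ...would anyone want to make ugly sounds? Well, maybe you recently joined Slipknot, or maybe you just have a sense of humor. Or maybe you're tired of excessive attention to detail and want something more raw and rough. In any event, relax your standards from time to time - you may discover some unusual sounds that end up being "keepers." Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.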
15. Whether for Recording or Live Performance, the Right EQ is Crucial by Craig Anderton Equalization is one of the most important and powerful tools in the recording enthusiast’s arsenal, yet too many people adjust equalization with their eyes – not their ears. For example, one time after doing a mix, I noticed the client writing down all the EQ settings I had made. When I asked why, he said it was because he liked the EQ and wanted to use the same settings on these instruments in future mixes. Wrong! EQ is a part of the mixing process; just as levels, panning, and reverb are different for each mix, EQ should be custom-tailored for each mix as well. But to do that, you need to understand how to find the magic EQ frequencies for particular types of musical material, as well as what tool to use for what application. There are three main applications for EQ:

- Problem-solving
- Emphasizing or de-emphasizing an instrument in a mix
- Altering a sound’s personality

Each application requires specialized techniques and approaches. PROBLEM-SOLVING EQ can fix a variety of problems, and the tool of choice is usually a parametric equalizer, which consists of a limited number (typically 1-8) of frequency bands (Fig. 1). For each band, you can change not just the degree of boost or cut, but also the frequency at which this boosting or cutting occurs, as well as how wide a range of frequencies is affected – from very sharp to very broad. (“Pseudo-parametric” equalizers omit the bandwidth control, and its absence can seriously hamper the experienced EQ aficionado. Arguably, manufacturers err on the side of too narrow a fixed bandwidth, which makes it difficult to make subtle changes.) Fig. 1: The PSP Audioware MasterQ is a high-quality equalizer plug-in for digital audio workstations with seven frequency-altering stages. Going from left to right, these are a highpass filter, low frequency shelving EQ, three parametric stages, high frequency shelving EQ, and a lowpass filter. As one example of how you’d use a parametric EQ, slicing a sharp notch at 60Hz (50Hz in Europe) can knock hum out of a signal; trimming the high frequencies can remove hiss. Another common problem is an instrument with a resonance or peak that interferes with other instruments, or causes level-setting difficulties. Following is a procedure that takes care of this situation. Several years ago I produced an album by the late classical guitarist Linda Cohen (Angel Alley, which was re-released on CD). She had a beautiful instrument with a full, rich sound that projected very well on stage, thanks to a strong body resonance in the lower midrange that caused a major level peak. However, recording was a different matter from playing live. If levels were set so the peaky, low frequency notes didn’t overload the recording medium, the higher guitar notes sounded weak by comparison. Although compression/limiting was always an option, it altered the guitar’s attack; while this effect might have gotten lost in an ensemble, it stuck out with a solo instrument. A more natural-sounding answer was to use EQ to apply a frequency cut equal and opposite to the natural boost, thus leveling out the response. But there’s a trick to finding problem frequencies so you can alter them; the following works like a charm.

1. Turn down the monitor volume – the sound might get nasty and distorted during the following steps.
2. Set the EQ for lots of boost (10-12dB) and fairly narrow bandwidth (around a quarter-octave or so).
3. As the instrument plays, slowly sweep the frequency control. Any peaks will jump out due to the boosting and narrow bandwidth. Some peaks may even distort.
4. Find the loudest peak and cut the amplitude until the peak falls into balance with the rest of the instrument sound. You may need to widen the bandwidth a bit if the peak is broad, or use narrow bandwidth for single-frequency problems such as hum.

This technique of boost/find the peak/cut can help remove midrange “honking,” strident resonances in wind instruments, and much more. Of course, sometimes you want to preserve these resonances so the instrument stands out, but many times applying EQ to reduce peaks allows instruments to sit more gracefully in the track. Digital workstation EQ, as found in hard disk recording systems, can be particularly effective due to its precision. In one of my more unusual projects, I needed to remove boat motor noise from some whale samples. Motor noise is not broadband, but exists at multiple frequencies. Applying several extremely sharp and narrow notches at different frequencies took out each component of the noise, one layer at a time, until the motor noise was completely gone. This type of problem-solving also underscores a key principle of EQ: it’s often better to cut than boost. Boosting uses up headroom; cutting opens up headroom. In the example of solving the classical guitar resonance problem, cutting the peak allowed for bringing up the overall gain to record a much higher overall level. EMPHASIZING INSTRUMENTS The same technique of finding and cutting specific frequencies can also eliminate “fighting” between competing instruments. For example, while mixing a Spencer Brewer track for Narada Records, there were two woodwind parts with resonant peaks around the same frequency. When playing in ensemble they would load up that part of the frequency spectrum, which also made them difficult to differentiate. Here’s a workaround:

1. Find, then reduce, the peak on one of the instruments to create a more even sound.
2. Note the amount of cut and bandwidth that was applied to reduce the peak.
3. Using a second stage of EQ, apply a roughly equal and opposite boost at either a slightly higher or slightly lower frequency than the natural peak.

Both instruments will now sound well-articulated, and because each peaks in a different part of the spectrum, they will tend not to step on each other. NEW SONIC PERSONALITIES EQ can also change a sound’s character – for example, turn a brash rock piano sound into something more classical. This type of application requires relatively gentle EQ, possibly at several different points in the audio spectrum. Musicians often summarize an instrument’s character with various subjective terms. The following correlates these terms to various parts of the frequency spectrum (this is, of course, a very subjective interpretation):

- 200Hz and under: bottom
- 200-500Hz: warmth
- 500-1500Hz: definition
- 1500-4000Hz: articulation, presence
- 4000-10,000Hz: brightness
- 10,000-20,000Hz: sheen, air

For example, to add warmth, try applying a gentle boost (3dB or so) somewhere in the 200-500Hz range. However, as in the previous case, remember that if possible, cutting is preferable to boosting – for example, if you need more brightness and bottom, try cutting the midrange rather than boosting the high and low ends (Fig. 2). Fig. 2: The midrange is reduced somewhat around 500Hz, thus making the high and low ends of the spectrum more prominent.
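Incidentally, the boost/sweep/cut hunt can be roughed out offline as well. Here's a sketch in Python of a hypothetical helper (assuming numpy and scipy are available, and in no way a substitute for your ears) that finds the loudest narrow peak in a band and cuts it:

    import numpy as np
    from scipy.signal import iirnotch, lfilter

    def cut_loudest_peak(x, sr, lo=100.0, hi=2000.0, Q=8.0):
        # Numerically "sweep": find the strongest spectral peak in the band
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
        band = (freqs >= lo) & (freqs <= hi)
        peak_hz = freqs[band][np.argmax(spectrum[band])]
        # A notch is an extreme cut; for a gentler dip, a parametric
        # peaking filter with a few dB of cut at peak_hz is more musical
        b, a = iirnotch(peak_hz, Q, fs=sr)
        return lfilter(b, a, x), peak_hz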
OTHER EQ TIPS Problem-solving and character-altering EQ should be applied early on in the mixing process, as they will influence how the mix develops. But wait to apply most EQ until the process of setting levels begins; remember, EQ is all about changing levels – albeit in specific frequency ranges. Any EQ changes you make will alter the overall instrumental balance. Another reason for waiting a bit is that instruments EQ’ed in isolation to sound great may not sound all that wonderful when combined. If every track is equalized to leap out at you, there’s no room left for a track to “breathe.” Also, you will probably want to alter EQ on some instruments so that they take on more supportive roles. For example, during vocals consider cutting the midrange a bit on supporting instruments (e.g., rhythm guitar) to open up more space in the audio spectrum for vocals. Finally, remember that EQ often works best when applied subtly. Even one or two dB of change can make a significant difference. However, inexperienced engineers often do something such as increase the bass too much, which makes the sound too muddy, so they increase the treble, and now the midrange sounds weak, so that gets turned up...you get the idea. One of your best “reality checks” is an equalizer’s bypass switch. Use it often to make sure you haven’t lost control of the original sound! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
16. Take Your Guitar Recording to the Next Level by "Re-Amping" Your Guitar Track

By Craig Anderton

Although discontinued, DigiTech's Genesis3 remains a versatile, cost-effective studio processor. In fact, it's one of a select group of signal processors that makes it easy to "re-amp" your guitar track with a hard disk recording system's plug-ins.

What's re-amping? Guitarists usually like to record through their favorite amp and effects. However, suppose on mixdown you realize that the chorusing was waaaaay too intense, or the stomp box's reverb effect isn't as good as the one produced by your high-end rack reverb. If you recorded only the processed sound, you're stuck. The concept behind re-amping is to let the guitarist play through a comfortable setup, and record the results -- while also recording the guitar's signal before it goes to the effects and amp. Then, on mixdown, you can feed the straight signal through plug-ins (or send it to an audio output that feeds a miked amp). Either replace the original sound, or blend the two.

FIRST, MAKE A DATE WITH AN UPDATE

The Genesis3 firmware has been updated several times since it was first introduced, and V1.1 made re-amping much easier. You can download the latest version of the cross-platform GenEdit editing software (V1.61A) from www.digitech.com/software; this version includes the latest firmware update (V1.4) and an easy-to-use installer. You can also download the firmware separately if desired, but the editor is well worth the download time -- it simplifies programming, and accesses some parameters unavailable from the front panel. The GenEdit software is the secret to getting the most out of DigiTech's Genesis3 guitar processor; it has three pages for parameter adjustments, and a browser for factory and user programs.

Updating is simple; hook up your computer's MIDI in and out to the Genesis3, and run the updater. The instructions are clear, and the process worked flawlessly. Major props to DigiTech for keeping on top of upgrades, and providing truly useful editing software.

RE-AMPING CONNECTIONS

This tip exploits the "Dry Track" feature, which determines the signal that feeds the S/PDIF out. There are three options:

Off: Dry Track sends the total sound (including amp models, effects, etc.).
1: The S/PDIF out taps the signal prior to the time-based effects, but after the amp models and noise gate.
2: The S/PDIF out sends the straight guitar signal only, with no effects. This is what we want.

To access the Dry Track function:

1. Press Amp Save and Store simultaneously.
2. Press Edit (or Tap-It) until the display shows DRYTRK.
3. Turn the data wheel and select option 2.

Now make your connections. You'll need an S/PDIF input along with analog ins to send to your hard disk recorder (with an analog-based studio, you can probably find something that converts an S/PDIF input signal to an analog out, like an audio interface's converters). With my digital mixer, I feed the Genesis3 S/PDIF out to the S/PDIF in (channels 15 and 16), while the analog outs go to analog ins 13 and 14. Each stereo pair feeds a separate bus for recording, so this technique requires four guitar tracks (unless you know in advance you want only the straight signal, or record two mono signals).

RECORDING

Because the straight signal goes into your recorder directly via digital, sound quality is preserved. Meanwhile, the analog outs provide your main guitar sound (or feed an amp that you record into the recorder instead of the Genesis3 outs).
During mixdown, you can now take the straight signal and apply guitar amp plug-ins like Native Instruments' Guitar Rig, Waves G|T|R, IK Multimedia's AmpliTube, or Peavey's ReValver, and tweak the sound as desired. Even some feedback effects are preserved, because if there was interaction with the amp while the guitarist was playing that made the strings sustain, that sustained sound will exist in the straight track.

I've also done tricks like recording Dry Track Option 1 (post-model/no effects sound) along with the analog outs. This allows changing the balance between the effects and the modeled guitar amp sound during mixdown. All in all, the Dry Track feature is very cool -- check it out.
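If you want to hear the basic mixdown idea without any plug-ins at all, here's a minimal Python sketch (numpy/scipy assumed). The file names are hypothetical, and the crude tanh-drive-plus-lowpass "amp" is a toy stand-in, not any of the commercial products named above; it just demonstrates the blend-straight-with-processed concept.

```python
# Minimal "re-amp at mixdown" sketch: run the dry DI track through a toy
# amp model (tanh drive + cabinet-style lowpass) and blend it with the
# originally recorded amp track. Assumes both files are mono, same length.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs, di = wavfile.read("guitar_di.wav")         # hypothetical straight track
_, amp = wavfile.read("guitar_amp.wav")        # hypothetical amp track
di = di.astype(np.float64); amp = amp.astype(np.float64)

drive = 8.0
shaped = np.tanh(drive * di / np.max(np.abs(di)))        # crude overdrive
cab = butter(4, 5000.0, "lowpass", fs=fs, output="sos")  # fake cab rolloff
reamped = sosfilt(cab, shaped)

# Blend the "re-amped" version with the original amp sound, 50/50
mix = 0.5 * reamped / np.max(np.abs(reamped)) + 0.5 * amp / np.max(np.abs(amp))
wavfile.write("guitar_reamped_blend.wav", fs, mix.astype(np.float32))
```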
17. ReCycle can do a lot more than create REX files…like signal processing.

By Craig Anderton

Propellerheads' ReCycle is a cross-platform software tool for creating REX files, which can stretch tempo and pitch independently. The process works by cutting a digital audio file into multiple slices, usually with the cut points at steep attack transients. These slices are triggered by MIDI; as the tempo of the host DAW or sequencer slows down, the slices are triggered further apart, thus slowing down the phrase. Faster tempos trigger the slices closer together. (Pitch transposition is a separate process that, while not as effective as time-stretching, is satisfactory for relatively small transposition ranges.)

The REX time-stretching process is optimized for sounds like drums, percussion, and other instruments with sharp, defined transients. But for groove-oriented music, ReCycle can also be a wonderful processor for imparting "synchro-sonic" (beat-related) characteristics to sustained sounds. I used this technique a lot with guitar power chord samples on the Technoid Guitars sample CD so they could pulse with the beat instead of just sustain -- think of it as 21st century, synchronized tremolo. However, this technique also works with vocals, bass, string parts, and just about anything that sustains. We'll start with creative signal chopping, then move on to some other ReCycle processing tricks.

RHYTHMIC CHOPPING

Suppose there's some audio in your DAW program, like a sustained chord, phrase with multiple chords, vocals, or the like, and you want to create a 16th note tremolo effect that chops the sound. Define the region of audio you want to process (make sure the boundaries fall precisely on beats, and note the total duration in measures + beats). Then, export the region as AIFF or WAV audio, 16 or 24 bits. Open the file in ReCycle, then:

1. Go View > Show Grid. Enter the duration in bars.
2. A grid appears, with sub-divisions for 16th notes.
3. Go Process > Add Slices to Grid. This adds slices at every 16th note.
4. Click on the "Preview Toggle" icon.
5. Click "Play." You'll hear the original sound with small gaps, and possibly clicks, at the slices. If you like this sound, fine. But let's click on the Envelope icon, then refine the sound further with the envelope options.
6. Use the envelope Decay control to edit the slice decay time (try around 300 ms to start). When you click on "Play," you'll hear a decaying sound every 16th note.
7. Experiment with the Attack control. A 1.0 ms setting minimizes clicking at the slice points; longer settings give an "attack delay" effect, where each slice fades up to maximum.
8. If you plan to stretch the audio to a different tempo, slow it to the minimum tempo you expect to use, and adjust the Stretch control to obtain the best possible sound quality. Also check it at the highest anticipated tempo, and choose a Stretch setting that gives a good overall compromise sound.

Done! If you save this file, it will be in REX format, suitable for use in supporting programs like Reason, Cubase, Logic, Sonar, etc. However, you don't have to save it as a REX file; you can export as a WAV, AIFF, or SD II file, then simply bring it back into the program from which it came. To do this:

1. Go Process > Transmit As One Sample before saving. Otherwise, each slice will be saved as an individual file.
2. Go File > Export, and choose the file type.
3. Click on Save.

ADVANCED CHOPPING

Slices need not be on rigid boundaries, nor constrained to the grid.
So, you can easily create syncopated patterns, or place the emphasis on certain beats -- like having the longest slice on the downbeat. Just place the marker where you want a slice to start, or use the hide function (click on the Hide icon, which looks like an X) to turn off a marker if you want a longer slice.

There's even a way to create gaps and stuttering effects, because ReCycle lets you mute any number of slices. Select the Pencil tool, then click on the marker that begins a slice to be muted (or Shift-click for multiple markers). Click on the Silence Selected icon or go Process > Silence Selected. This mutes the slice following each selected marker. If you export the file, there will be silence wherever the slice was muted.

This shows a file before and after processing. The top file is the original audio; note where the markers were placed in ReCycle. The lighter section represents a slice that was muted. The lower file shows the results of exporting as an AIFF file, with a fairly short Decay setting. The red lines were added to emphasize how the audio waveform lines up with the ReCycle markers. Note how the original sustained chord is now a series of rhythmic pulses.

ANOTHER GAPPING OPTION

You can also create gap/stutter effects very easily, albeit not with the same kind of rhythmic precision as the previously described methods, just by editing the Gate Sensitivity parameter. This applies a gating effect to individual slices that turns on at the start of the slice, and mutes the signal when the level falls below the Sensitivity threshold. Actually, the process is a little more complex than this, because even slices with all levels below the threshold may have a little signal present at the start of the slice. But don't worry about the fine points; just play the file, adjust the Sensitivity, and if it makes groovacious sounds, you're set.

SLICE JUMBLING

Although you'll usually want to save a file as a single entity, if you uncheck Process > Transmit as One Sample, each slice will be saved as an individual file. These can then be re-assembled in your DAW. For the most foolproof results when re-assembling, slice at equal intervals (like 8th or 16th notes) before saving, although the adventurous are welcome to experiment. These slices are numbered, so it's easy to bring them back into your DAW in the original order (for best results, choose an appropriate snap value; some snap functions allow snapping to event boundaries, which makes it simple to "butt splice" the various slices together). But why be normal? Change the order of the slices within your DAW, remove slices, duplicate slices, snap them to different beats, etc.

THE OTHER KIND OF NORMALIZATION

ReCycle can normalize a file (usually done before slicing, but it can be done afterward as well). However, go Process > Normalize and you'll see two options: normalize Each Slice or the Whole File. If you normalize all individual slices, this can bring up soft parts and alter dynamics in interesting ways.

A HOME FOR TRANSIENTS

Finally, don't overlook the Transient Shaper as another useful processor. Superficially it resembles a compressor, but it works on a different principle. Rather than burn up a lot of words here describing it, I suggest learning about it by checking out the supplied presets, and tweaking them to see how the controls affect the sound. Note that using these processors may change the gain, possibly causing clipping.
Periodically check the meters in the lower right corner of the main sample window, and adjust the Gain control for the highest signal level short of distortion.

Yes, ReCycle can do a lot more than create REX files -- especially for beat-oriented music. Mess around with it, and the results may surprise you.
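To tie the chopping workflow together, here's a hedged Python sketch of the same idea done offline: slice on a 16th-note grid, apply a short attack and ~300 ms decay envelope per slice, and optionally normalize each slice. The file name and tempo are hypothetical, and this is a toy imitation of the workflow, not ReCycle's actual DSP.

```python
# ReCycle-style 16th-note chop in numpy: grid slices, per-slice attack/decay
# envelope, optional per-slice normalization. Assumes a mono WAV file.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("sustained_chord.wav")    # hypothetical mono file
x = x.astype(np.float64)

bpm = 120.0                                    # hypothetical project tempo
slice_len = int(fs * 60.0 / bpm / 4.0)         # one 16th note
attack = int(0.001 * fs)                       # 1 ms fade-in (avoids clicks)
decay_tau = 0.300                              # ~300 ms decay, as suggested

env = np.exp(-np.arange(slice_len) / (decay_tau * fs))
env[:attack] *= np.linspace(0.0, 1.0, attack)  # attack ramp

y = np.zeros_like(x)
for start in range(0, len(x) - slice_len, slice_len):
    s = x[start:start + slice_len] * env
    peak = np.max(np.abs(s))
    if peak > 0:                               # "normalize each slice"
        s = s / peak * np.max(np.abs(x))
    y[start:start + slice_len] = s

wavfile.write("chopped.wav", fs, (y / np.max(np.abs(y))).astype(np.float32))
```

Muting a slice for stutter effects is just a matter of skipping it in the loop; per-slice normalization, as in the article, brings up soft parts and changes the dynamics in interesting ways.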
18. Good Things Come in Small Packages . . . and So Do Surprises

By Craig Anderton

I like little boxes. I like little boxes that do lots of things even more. When Dave Smith introduced Evolver, a lot of keyboard players salivated over the prospect of a compact, inexpensive, great-sounding monophonic synth with an artful blend of analog and digital technology. Almost lost in the shuffle was the fact that it had guitar- and studio-friendly stereo inputs, with pretty potent processing power. It may look like a synthesizer, and it is. But it's also a very hip signal processor. So does it replace a traditional multieffects? No; for example, there's no reverb. But if you do recording and play guitar -- or even better, record and double on keyboards and guitar -- this box addresses all of those needs, and then some.

PROCESSING MODULES

The main attraction for processing is two analog lowpass resonant filters (not digital emulations), switchable between 2- and 4-pole operation. There's one for each channel. Additional per-channel digital highpass filters are good for trimming out some boominess from a guitar signal; there's also distortion, which can go pre- or post-filter. Of particular interest to guitarists are two control sources: envelope follower and peak detector. There are three envelopes (triggerable from the input signal), tempo-syncable LFOs that respond to incoming MIDI tempo information, tunable feedback loops for each channel, and a three-tap monaural delay. Delay can be tempo-synced, with the feedback path routable through the filter; and of course, you can use the internal sequencer to control parameters like filter cutoff.

To get the most out of Evolver, you'll need to jack into the matrix . . . of parameters, that is. Select the Row containing a parameter to be edited, find the parameter's Column, then turn the Column's associated knob to change the parameter value. Once you get into the unit, this is actually a pretty painless process. But if you prefer, computer editors are available from the Dave Smith web site, along with templates for the Peavey PC-1600 (which turns out to be a surprisingly effective way to edit).

PREPARE FOR GUITAR . . .

Odds are your guitar is mono, so plug into the Left jack, as that's where the envelope follower and peak hold modules derive their trigger. The "guitar" factory programs (Bank 3, 20-29) have the Ext In parameter (Row 8, Column 7) preset to L, or Left input only. This setting is saved for each patch, so if you create your own programs or overwrite existing programs with guitar setups, make sure to set this parameter to L. Also, you may need to adjust the Input Gain (second Main row, Column 3) depending on the patch and your instrument's output level.

. . . BUT PRETEND IT'S A SYNTH

The guitar programs give only a taste of Evolver's talents: For wicked guitar fun, choose a synth patch that messes with filtering, and adapt it to guitar. Here's an example of how to adapt a synth patch, using Factory Patch 19 in Bank 1. (Make sure your guitar is plugged into the left input before proceeding.)

1. Turn down all oscillator outputs. The level controls are located at Rows 1 and 2, Columns 4 and 8.
2. Turn up the Ext In volume (Row 8, Column 6) so the guitar gets mixed into the signal path instead of the oscillators. Increase the Input Gain control if needed.
3. Hit the Start/Stop switch to start the sequencer. You'll hear pulsating, filtered effects; adjust the tempo to suit.
4. Now let's double-time it with some echo: go to Row 5, Column 4 and set Time to St2.
5. Turn up the delay Level in Column 5 to around 50 or so.
6. Row 3 also has some useful parameters: play around with Columns 3, 4, and 7 (Attack, Decay, and Resonance, respectively).

This single example just scratches the surface; my own experiments with this instrument continue to, uh, evolve. But if you thought the guitar patches represent Evolver's total contribution to guitar processing, you have some pleasant surprises in store.
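The envelope-follower-to-filter-cutoff routing that makes these patches fun is essentially an auto-wah. For the curious, here's a rough offline Python sketch of that control idea (numpy/scipy assumed; file name, cutoff range, and time constants are hypothetical). It is emphatically not a model of the Evolver's analog filters; recomputing the filter per block and carrying state across is itself an approximation.

```python
# Toy "envelope follower modulates lowpass cutoff" sketch (the auto-wah
# idea behind patching a follower to a filter). Purely illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt, sosfilt_zi

fs, x = wavfile.read("guitar_di.wav")          # hypothetical mono file
x = x.astype(np.float64); x /= np.max(np.abs(x))

block = 256
attack_s, release_s = 0.005, 0.100             # follower time constants
a_up = np.exp(-block / (attack_s * fs))
a_dn = np.exp(-block / (release_s * fs))

env, zi = 0.0, None
y = np.zeros_like(x)
for i in range(0, len(x) - block, block):
    seg = x[i:i + block]
    peak = np.max(np.abs(seg))
    coeff = a_up if peak > env else a_dn
    env = coeff * env + (1.0 - coeff) * peak   # smoothed envelope
    cutoff = 200.0 + 4000.0 * env              # louder = brighter
    sos = butter(2, cutoff, "lowpass", fs=fs, output="sos")
    if zi is None:
        zi = sosfilt_zi(sos) * seg[0]
    y[i:i + block], zi = sosfilt(sos, seg, zi=zi)

wavfile.write("autowah.wav", fs, y.astype(np.float32))
```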
19. It's Like Viagra for Live Performance

by Craig Anderton

Jennifer Hudson did it while singing the national anthem at the Super Bowl. Kiss does it. Even classical musicians playing at the President's inaugural do it. Sometimes it seems everyone uses backing tracks to augment their live sound. So why not you?

Yes, it's sorta cheating. But somewhere between something innocuous like playing to a drum machine, and lip-synching to a pre-recorded vocal rather than singing yourself, there's a "sweet spot" where you can enhance what is essentially a live performance. A trio might sequence bass lines, for example, or a drummer might add pre-recorded ethnic percussion. However, you want something bullet-proof, easy to change on the fly if the audience's mood changes, and simple.

I SYNC, THEREFORE I AM

If a drummer's playing acoustic drums and a sequencer's doing bass parts, the drummer will have to follow the sequencer. But what happens if there's no bass to follow at the beginning of a song, or it drops out? The solution is in-ear monitors (besides, monitor wedges are so 20th century!). Assuming whatever's playing the backing part(s) has more than one output available, one channel can be an accented metronome that feeds only the in-ear monitors, while the other channel contains the backing track. If there are only two outputs, the backing track will have to be mono, but that doesn't matter too much for live performance.

BACKING TRACK OPTIONS

The simplest backup is something that plays in the background (e.g., drum machine, pre-recorded backing track on CD, iPod, MP3 player, etc.), and you play to it. RAM-based MP3 players are super-reliable. They don't care about vibration, don't need maintenance, and have no start-up time. However, you can get CD players with enough anti-skip memory to handle tough club environments (just don't forget to clean your CD player's lens if you play smoky clubs). Another advantage of a simple stereo playback device is potential redundancy: Bringing another CD/MP3 player for backup is cheap and easy to swap out.

The biggest drawback is musical rigidity. Want to take another eight bars in the solo? Forget it. A few drum machines give you some latitude (even the venerable Alesis SR-16 can switch between patterns and extend them), but with most players, what you put in is what you get out. To change song orders, just use track forward/backward to find the desired track. But the backup track player will always have to start off the song, or you'll need to hit Play at just the right time to bring it in.

But these days, it's also possible to use machines designed specifically to play backing tracks - like the Boss JS-10 eBand (Fig. 1). This can play back WAV or MP3 files from an SD card (32GB will give you around 50 hours of playing time - perfect for Grateful Dead tribute bands). You can also create song files specific to the JS-10.

THE LAPTOP FACTOR

As many of the parts you'll use for backing tracks probably started in a computer sequencer, it makes sense to use it for your backing tracks. This is also the most flexible option; for example, if you sequence your backing track using Ableton Live (or most other hosts), you can change loop points on-the-fly and have a section repeat if you want to extend a solo (Fig. 2). Cool. It's also easy to mute or solo tracks for additional changes.

Fig. 2: Move Live's loop locators (the looped portion is shown in red for clarity) on the fly to repeat a portion of music.

As to reliability, though, computers can be scary.
Few laptops are built to rock and roll specs, although there are exceptions. Connectors are flimsy, too; at least build a breakout box with connectors that patch into your computer, then plug the cables that go to the outside world into the breakout box. Secure your laptop (and the breakout box) to your work surface. Tape down any cables so no one can snag them. On the plus side, the onboard battery will carry you through if the power is iffy, or if someone trips over the AC cord while passing out drunk. Not, of course, that something like that could ever happen at a live performance...

THE iPAD OPTION

For less rigorous needs, an iPad will take care of you. In fact, the SyncInside app ($8.99 from the App Store; see Fig. 3) lets you hook up a USB interface using the camera connector kit, and can output stereo tracks as well as a click through headphones (assuming your interface is up to the task).

Fig. 3: The SyncInside iPad app was designed specifically for playing backing tracks in live performance situations.

OneTrack is another iOS app for playing backing tracks, but it works with iPhone and iPod touch as well as an iPad. iOS solutions can also be convenient because nothing's better for live performance than redundancy. If you have an iPhone and an iPad, then an app like OneTrack can live in both places - if one device dies, you're still good to go.

THE SEQUENCER SOLUTION

A reliable, very flexible solution is the built-in sequencer in keyboard workstations (e.g., Roland Fantom, Yamaha Motif, Korg Kronos, etc.). If you're already playing keyboard, hitting a Play button is no big deal. You may also be able to break a song into smaller sequences, creating a "playlist" you can trigger on the fly to adapt to changes in the audience's mood; and with a multitrack sequence, you have the flexibility to mute and mix the various tracks if you want to get fancy (Fig. 4). What's more, as most workstation keyboards have separate outs, sending out a separate click to headphones will probably be pretty simple.

Fig. 4: Yamaha's workstations have sophisticated sequencing options, as evidenced in this screen from the Motif XS.

Another option is arranger keyboards. Casio's WK-6500 isn't an arranger keyboard in the strictest sense, as it's also a pretty complete synthesizer workstation (Fig. 5).

Fig. 5: If you're looking for a keyboard-based backing track solution, arranger keyboards, and keyboards with auto-accompaniment like the Casio WK-6500, will often give you what you want.

However, it does include auto-accompaniment features and drum patterns with fills, ends, and so on. And with a 76-key keyboard, you can enhance your backing tracks with real playing. How's that for a concept? (The price is right, too - typically under $300.)

THE IMPORTANCE OF AN EXIT STRATEGY

With live backing tracks, always have an exit strategy. I once had a live act based around some, uh, unreliable gear, so I patched an MP3 player with several funny pieces of audio recorded on it into my mixer. (One piece was a "language lesson," set to music, that involved a word we can't mention here; another had a segment from the "How to Speak Hip" comedy album.) If something needed reloading, rebooting, or troubleshooting, I'd hit Play on the player. Believe me, anything beats dead air!
20. Compressors are Essential Recording Tools - Here's How They Work

By Craig Anderton

Compressors are some of the most used, and most misunderstood, signal processors. While people use compression in an attempt to make a recording "punchier," it often ends up dulling the sound instead because the controls aren't set optimally. Besides, compression was supposed to become an antique when the digital age, with its wide dynamic range, appeared. Yet the compressor is more popular than ever, with more variations on the basic concept than ever before. Let's look at what's available, pros and cons of the different types, and applications.

THE BIG SQUEEZE

Compression was originally invented to shoehorn the dynamics of live music (which can exceed 100 dB) into the restricted dynamic range of radio and TV broadcasts (around 40-50 dB), vinyl (50-60 dB), and analog tape (40dB to 105 dB, depending on type, speed, and type of noise reduction used). As shown in Fig. 1, this process lowers signal peaks while leaving lower levels unchanged, then boosts the overall level to bring the signal peaks back up to maximum. (Bringing up the level also brings up any noise as well, but you can't have everything.)

Fig. 1: The first, black section shows the original audio. The middle, green section shows the same audio after compression; the third, blue section shows the same audio after compression and turning up the output control. Note how softer parts of the first section have much higher levels in the third section, yet the peak values are the same.

Even though digital media have a decent dynamic range, people are accustomed to compressed sound. Compression has been standard practice to help soft signals overcome the ambient noise in typical listening environments; furthermore, analog tape has an inherent, natural compression that engineers have used (consciously or not) for well over half a century.

There are other reasons for compression. With digital encoding, higher levels have less distortion than lower levels—the opposite of analog technology. So, when recording into digital systems (tape or hard disk), compression can shift most of the signal to a higher overall average level to maximize resolution. Compression can create greater apparent loudness (commercials on TV sound so much louder than the programs because of compression). Furthermore, given a choice between two roughly equivalent signal sources, people will often prefer the louder one. And of course, compression can smooth out a sound—from increasing piano sustain to compensating for a singer's poor mic technique.

COMPRESSOR BASICS

Compression is often misapplied because of the way we hear. Our ear/brain combination can differentiate among very fine pitch changes, but not amplitude. So, there is a tendency to overcompress until you can "hear the effect," giving an unnatural sound. Until you've trained your ears to recognize subtle amounts of compression, keep an eye on the compressor's gain reduction meter, which shows how much the signal is being compressed. You may be surprised to find that even with 6dB of compression, you don't hear much apparent difference—but bypass the sucker, and you'll hear a change.

Compressors, whether software- or hardware-based, have these general controls (Fig. 2):

Fig. 2: The compressor bundled with Ableton Live has a comprehensive set of controls.

Threshold sets the level at which compression begins. Above this level, the output increases at a lesser rate than the corresponding input change.
As a result, with lower thresholds, more of the signal gets compressed.

Ratio defines how much the output signal changes for a given input signal change. For example, with 2:1 compression, a 2dB increase at the input yields a 1dB increase at the output. With 4:1 compression, a 16dB increase at the input gives a 4dB increase at the output. With "infinite" compression, the output remains constant no matter how much you pump up the input. Bottom line: Higher ratios increase the effect of the compression. Fig. 3 shows how input, output, ratio, and threshold relate.

Fig. 3: The threshold is set at -8. If the input increases by 8dB (e.g., from -8 to 0), the output only increases by 2dB (from -8 to -6). This indicates a compression ratio of 4:1.

Attack determines how long it takes for the compression to take effect once the compressor senses an input level change. Longer attack times let through more of a signal's natural dynamics, but those signals are not being compressed. In the days of analog recording, the tape would absorb any overload caused by sudden transients. With digital technology, those transients clip as soon as they exceed 0 VU. Some compressors include a "saturation" option that mimics the way tape works, while others "soft-clip" the signal to avoid overloading subsequent stages. Yet another option is to include a limiter section in the compressor, so that any transients are "clamped" to, say, 0dB.

Decay (also called Release) sets the time required for the compressor to give up its grip on the signal once the input passes below the threshold. Short decay settings are great for special effects, like those psychedelic '60s drum sounds where hitting the cymbal would create a giant sucking sound on the whole kit. Longer settings work well with program material, as the level changes are more gradual and produce a less noticeable effect.

Note that many compressors have an "automatic" option for the Attack and/or Decay parameters. This analyzes the signal at any given moment and optimizes attack and decay on-the-fly. It's not only helpful for those who haven't quite mastered how to set the Attack and Decay parameters, but often speeds up the adjustment process for veteran compressor users.

Output control. As we're squashing peaks, we're actually reducing the overall peak level. This opens up some headroom, so increasing the output level compensates for any volume drop. The usual way to adjust the output control is to turn this control up until the compressed signal's peak levels match the bypassed signal's peak levels. Some compressors include an "auto-gain" or "auto makeup" feature that increases the output gain automatically.

Metering. Compressors often have an input meter, an output meter for matching levels between the input and output, and most importantly, a gain reduction meter. (In Fig. 2, the orange bar to the left of the output meter is showing the amount of gain reduction.) If the meter indicates a lot of gain reduction, you're probably adding too much compression. The input meter in Fig. 2 shows the threshold with a small arrow, so you can see at a glance how much of the input signal is above the threshold.

ADDITIONAL FEATURES

You'll find the above functions on many compressors. The following features tend to be somewhat less common, but you'll still find them on plenty of products.
Sidechain jacks are available on many hardware compressors, and some virtual compressors include this feature as well (sidechaining became formalized in the VST 3 specification, but it was possible to do in prior VST versions). A sidechain option lets you insert filters in the compressor's feedback loop to restrict compression to a specific frequency range. For example, if you insert a highpass filter, only high frequencies trigger compression—perfect for "de-essing" vocals.

The hard knee/soft knee option controls how rapidly the compression kicks in. With a soft knee response, when the input exceeds the threshold, the compression ratio is less at first, then increases up to the specified ratio as the input increases. With a hard knee curve, as soon as the input signal crosses the threshold, it's subject to the full amount of compression. Sometimes this is a variable control from hard to soft, and sometimes it's a toggle choice between the two. Bottom line: use hard knee when you want to clamp levels down tight, and soft when you want a gentler, less audible compression effect.

The link switch in stereo compressors switches the mode of operation from dual mono to stereo. Linking the two channels together allows changes in one channel to affect the other channel, which is necessary to preserve the stereo image.

Lookahead. A compressor cannot, by definition, react instantly to a signal because it has to measure the signal before it can decide how much to reduce the gain. As a result, the lookahead feature delays the audio path somewhat so the compressor can "look ahead" and see what kind of signal it will be processing, and therefore, react in time when the actual signal hits.

Response or Envelope. The compressor can react to a signal based on its peak or average level, but its compression curve can follow different characteristics as well—a standard linear response, or one that more closely resembles the response of vintage, opto-isolator-based compressors.

COMPRESSOR TYPES: THUMBNAIL DESCRIPTIONS

Compressors are available in hardware (usually a rack mount design or for guitarists, a "stomp box") and as software plug-ins for existing digital audio-based programs. Following is a description of various compressor types.

"Old faithful." Whether rack-mount or software-based, typical features include two channels with gain reduction amount meters that show how much your signal is being compressed, and most of the controls mentioned above (Fig. 4).

Fig. 4: Native Instruments' Vintage Compressor bundle includes three different compressors modeled after vintage units.

Multiband compressors. These divide the audio spectrum into multiple bands, with each one compressed individually (Fig. 5). This allows for a less "effected" sound (for example, low frequencies don't end up compressing high frequencies), and some models let you compress only the frequency ranges that need to be compressed.

Fig. 5: Universal Audio's Precision Multiband is a multiband compressor, expander, and gate.

Vintage and specialty compressors. Some swear that only the compressor in an SSL console will do the job. Others find the ultimate squeeze to be a big bucks tube compressor. And some guitarists can't live without their vintage Dan Armstrong Orange Squeezer, considered by many to be the finest guitar sustainer ever made. Fact is, all compressors have a distinctive sound, and what might work for one sound source might not work for another.
If you don't have that cool, tube-based compressor from the '50s of which engineers are enamored, don't lose too much sleep over it: Many software plug-ins emulate vintage gear with an astonishing degree of accuracy (Fig. 6).

Fig. 6: Cakewalk's PC2A, a compressor/limiter for Sonar's ProChannel module, emulates vintage compression characteristics.

Whatever kind of audio work you do, there's a compressor somewhere in your future. Just don't overcompress—in fact, avoid using compression as a "fix" for bad mic technique or dead strings on a guitar. I wouldn't go as far as those who diss all kinds of compression, but it is an effect that needs to be used subtly to do its best.
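For readers who like seeing the threshold/ratio/attack/release math in one place, here's a minimal Python sketch of the behavior described above: a hard-knee static gain curve with one-pole attack/release smoothing and makeup gain. It's a teaching toy under stated assumptions, not a model of any commercial compressor.

```python
# Minimal hard-knee compressor sketch: above the threshold, output rises
# only 1 dB for every "ratio" dB of input; gain changes are smoothed with
# separate attack/release time constants, then makeup gain is applied.
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0,
             attack_s=0.005, release_s=0.100, makeup_db=6.0):
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)       # desired gain reduction
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_rel = np.exp(-1.0 / (release_s * fs))
    smoothed = np.empty_like(gain_db)
    g = 0.0
    for n, target in enumerate(gain_db):
        a = a_att if target < g else a_rel      # falling gain = attack phase
        g = a * g + (1.0 - a) * target
        smoothed[n] = g
    return x * 10.0 ** ((smoothed + makeup_db) / 20.0)

# Sanity check of the article's arithmetic: at 4:1 above a -20 dB threshold,
# a 0 dBFS peak (20 dB over) lands 15 dB lower, i.e., 5 dB over threshold.
fs = 44100
t = np.arange(fs) / fs
burst = np.sin(2 * np.pi * 220 * t) * (t < 0.5)  # loud half, then silence
y = compress(burst, fs)
```

Note how the gain computer matches the article's Fig. 3 arithmetic: an input 8 dB over the threshold at 4:1 comes out only 2 dB over.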
21. USB memory sticks give huge performance gains with Ableton Live

By Craig Anderton

Many musicians use Ableton Live with a laptop for live performance, but this involves a compromise. Laptops often have a single, fairly slow (5400 RPM) disk drive, and a limited amount of RAM compared to desktop computers. Live gives you the choice of storing clips to RAM or hard disk, but you have to choose carefully. If you assign too many clips to disk, then eventually the disk will not be able to stream all of these clips successfully, and there will be audio gaps and dropouts. But if you assign too many clips to RAM, then there won’t be enough memory left for your operating system and startup programs.

Fortunately, there’s a very simple solution that solves all these problems: Store your Ableton projects on USB 2.0 RAM sticks. That way, you can assign all the clips to stream from the solid-state RAM “disk,” so Ableton thinks they’re disk clips. But, they have all the advantages of being stored in RAM—there are no problems with seek times or a hard disk’s mechanical limitations. Best of all, the clips place no demands on your laptop’s hard drive or RAM, leaving them free for other uses.

Here’s how to convert your project to one that works with USB RAM sticks.

1. Plug your USB 2.0 RAM stick into your computer’s USB port.
2. Call up the Live project you want to save on your RAM stick.
3. If the project hasn’t been saved before, select "Save" or "Save As" and name the project to create a project folder.
4. Go File > Collect All and Save (Fig. 1), then click on "OK" when asked if you are sure.

Fig. 1: The "Collect All and Save" option lets you make sure that everything used in the project, including samples from external media, is saved with the project.

5. When you’re asked to specify which samples to copy into the project, select "Yes" for all options, and then click OK (Fig. 2). Note that if you’re using many instruments with multisamples, this can require a lot of memory! But if you’re mostly using audio loops, most projects will fit comfortably into a 1GB stick.

Fig. 2: This is where you specify what you want to save as part of the project.

6. Copy the project folder containing the collected files to your USB RAM stick.
7. From the folder on the USB RAM stick, open up the main .ALS Live project file.
8. Select all audio clips by drawing a rectangle around them, typing Ctrl-A, or Ctrl-clicking (Windows) on the clips to select them.
9. Select Live’s Clip View, and under Samples, uncheck "RAM" (Fig. 3). This converts all the audio clips to “disk” clips that “stream” from your USB stick.

Fig. 3: All clips have been selected. Under "Samples," click on RAM until it's disabled (i.e., the block is gray).

Now when you play your Live project, all your clips will play out of the USB stick’s RAM, and your laptop’s hard disk and RAM can take a nice vacation. This technique really works—try it!
22. You’ve Recorded the Vocal—But Don’t Touch that Mixer Fader Quite Yet

By Craig Anderton

As far as I’m concerned, the vocal is the most important part of a song: It’s the conversation that forms a bond between performer and listener, the teller of the song’s story, and the focus to which other instruments give support. And that’s why you must handle vocals with kid gloves. Too much pitch correction removes the humanity from a vocal, and getting overly aggressive with composite recording (the art of piecing together a cohesive part from multiple takes) can destroy the continuity that tells a good story. Even too much reverb or EQ can mean more than bad sonic decisions, as these can affect the vocal’s emotional dynamics. But you also want to apply enough processing to make sure you have the finest, cleanest vocal foundation possible—without degrading what makes a vocal really work. And that’s why we’re here.

THE GROUND RULES

Vocals are inherently noisy: You have mic preamps, low-level signals, and significant amounts of amplification. Furthermore, you want the vocalist to feel comfortable, and that too can lead to problems. For example, I prefer not to sing into a mic on a stand unless I’m playing guitar at the same time; I want to hold the mic, which opens up the potential for mic handling noise. Pop filters are also an issue, as some engineers don’t like to use them but they may be necessary to cut out low-frequency plosives. In general, I think you’re better off placing fewer restrictions on the vocalist and having to fix things in the mix rather than having the vocalist think too hard about, say, mic handling. A great vocal performance with a small pop or tick trumps a boring, but perfect, vocal. Okay, now let’s prep that vocal for the mix.

REMOVE HISS

The first thing I do with a vocal is turn it into one long track that lasts from the start of the song to the end, then export it to disk for bringing into a digital audio editing program. Despite the sophistication of host software, with a few exceptions (Adobe Audition and Samplitude come to mind), we’re not quite at the point where the average multitrack host can replace a dedicated digital audio editor. Once the track is in the editor, the first stop is generally noise reduction. Sound Forge, Adobe Audition, and WaveLab have excellent built-in noise reduction algorithms, but you can also use stand-alone programs like iZotope’s outstanding RX 2. The general procedure is to capture a “noiseprint” of the noise, then the noise reduction algorithm subtracts that from the signal. This requires finding a portion of the vocal that consists only of hiss, saving that as a reference sample, then instructing the program to subtract anything with the sample’s characteristics from the vocal (Fig. 1).

Fig. 1: A good noise reduction algorithm will not only reduce mic preamp hiss, but can help create a more “transparent” overall sound. This shot from iZotope RX (the precursor to RX 2) shows the waveform in the background that's about to be de-noised, and in the front window, a graph that shows the noise profile, input, and output.

There are two cautions, though. First, make sure you sample the hiss only. You’ll need only a hundred milliseconds or so. Second, don’t apply too much noise reduction; 6-10dB should be enough, especially for reasons that will become obvious in the next section. Otherwise, you may remove parts of the vocal itself, or add artifacts, both of which contribute to artificiality.
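The noiseprint-and-subtract process can be sketched in a few lines of Python (numpy/scipy assumed; the file name and the assumption that the first half-second is hiss-only are hypothetical). Commercial tools like RX are far more refined, but the skeleton is the same: estimate the noise spectrum, subtract it frame by frame, and keep a spectral floor so you don't over-process.

```python
# Bare-bones spectral subtraction: capture a "noiseprint" from a hiss-only
# region, subtract it from every STFT frame, resynthesize. Toy skeleton of
# what dedicated de-noisers do far more gracefully. Assumes a mono file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

fs, x = wavfile.read("vocal_take.wav")         # hypothetical mono vocal
x = x.astype(np.float64)

f, t, Z = stft(x, fs, nperseg=2048)
mag, phase = np.abs(Z), np.angle(Z)

noise_frames = t < 0.5                         # assume first 0.5 s is hiss only
noiseprint = mag[:, noise_frames].mean(axis=1, keepdims=True)

# Subtract gently (roughly in the 6-10 dB spirit of the article), with a
# floor so the vocal itself and low-level detail aren't carved away.
reduction = 2.0
floor = 0.1 * mag
clean = np.maximum(mag - reduction * noiseprint, floor)

_, y = istft(clean * np.exp(1j * phase), fs, nperseg=2048)
wavfile.write("vocal_denoised.wav", fs, y.astype(np.float32))
```

Pushing `reduction` much higher is exactly the "too much noise reduction" trap the article warns about: the artifacts quickly become worse than the hiss.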
Removing the hiss makes for a much more open vocal sound that also prevents “clouding” the other instruments.

DELETE SILENCES

Now that we’ve reduced the overall hiss level, it’s time to delete all the silent sections (which are seldom truly silent) between vocal passages. If we do this the voice will mask hiss when it’s present, and when there’s no voice, there will be no hiss at all. Some programs offer an option to essentially gate the vocal, and use that as a basis to remove sections below a particular level. While this semi-automated process saves time, sometimes it’s better (albeit more tedious) to remove the space between words manually. This involves defining the region you want to remove; from there, different programs handle creating silence differently. Some will have a “silence” command that reduces the level of the selected region to zero. Others will require you to alter level, like reducing the volume by “-Infinity” (Fig. 2).

Fig. 2: Cutting out all sound between vocal passages will help clean up the vocal track. Note that with Sound Forge, an optional automatic crossfade can help reduce any abrupt transition between the processed and unprocessed sections.

Furthermore, the program may introduce a crossfade between the processed and unprocessed section, thus creating a less abrupt transition; if it doesn’t, you’ll probably need to add a fade-in from the silent section to the next section, and a fade-out when going from the vocal into a silent section.

REDUCE BREATHS AND ARTIFACTS

I feel that breath inhales are a natural part of the vocal process, and it’s a mistake to get rid of these entirely. For example, an obvious inhale cues the listener that the subsequent vocal section is going to “take some work.” That said, though, applying any compression later on will bring up the levels of any vocal artifacts, possibly to the point of being objectionable. I use one of two processes to reduce the level of artifacts. The first option is to simply define the region with the artifact, and reduce the gain by 3-6dB (Fig. 3). This will be enough to retain the essential character of an artifact, but make it less obvious compared to the vocal.

Fig. 3: The highlighted section is an inhale, which is about to be reduced by about 7dB.

The second option is to again define the region, but this time, apply a fade-in (Fig. 4). This may also provide the benefit of fading up from silence if silence precedes the artifact.

Fig. 4: Imposing a fade-in over an artifact is another way to control a sound without killing it entirely.

Speaking of fade-ins, they're also useful for reducing the severity of "p-pops" (Fig. 5). This is something that can be fixed within your DAW as well as in a digital audio editing program.

Fig. 5: Splitting a clip just before a p-pop, then fading in, can minimize the p-pop. The length of the fade can even control how much of the "p" sound you want to let through.

Mouth noises can be problematic, as these are sometimes short, “clicky” transients. In this case, sometimes you can just cut the transient and paste some of the adjoining signal on top of it (choose an option that mixes the signal with the area you removed; overwriting might produce a discontinuity at the start or end of the pasted region).

PHRASE-BY-PHRASE NORMALIZATION

A lot of people rely on compression to even out a vocal’s peaks. That certainly has its place, but there’s something else you can try first: Phrase-by-phrase normalization. Unless you have the mic technique of a K. D.
Lang, the odds are excellent that some phrases will be softer than others—not intentionally due to natural dynamics, but as a result of poor mic technique, running out of breath, etc. If you apply compression, the lower-level passages might not be affected very much, whereas the high-level ones will sound “squashed.” It’s better to edit the vocal to a consistent level first, before applying any compression, as this will retain more overall dynamics. If you need to add an element of expressiveness later on that wasn’t in the original vocal (e.g., the song gets softer in a particular place, so you need to make the vocal softer), you can do this with judicious use of automation.

Unpopular opinion alert: Whenever I mention this technique, self-appointed “audio professionals” complain in forums that I don’t know what I’m talking about, because no real engineer ever uses normalization. However, no law says you have to normalize to zero—you can normalize to any level. For example, if a vocal is too soft but part of that is due to natural dynamics, you can normalize to, say, -6dB or so in comparison to the rest of the vocal’s peaks. (On the other hand, with narration, I often do normalize everything to as consistent a level as possible, as most dynamics with narration occur within phrases.)

Referring to Fig. 6, the upper waveform is the unprocessed vocal; the lower waveform shows the results of phrase-by-phrase normalization. Note how the level is far more consistent in the lower waveform.

Fig. 6: In the lower waveform, the sections in lighter blue have been normalized. Note that these sections have a higher peak level than the equivalent sections in the upper waveform.

However, be very careful to normalize entire phrases. You don’t want to get so involved in this process that you start normalizing, say, individual words. Within any given phrase there will be a certain internal dynamics, and you definitely want to retain this.

ARE WE PREPPED YET?

DSP is a beautiful thing: Now our vocal is cleaner, of a more consistent level, and has any annoying artifacts tamed — all without reducing any natural qualities the vocal may have. At this point, you can start doing more elaborate processes like pitch correction (but please, apply it sparingly and rarely!), EQ, dynamics control, and reverb. But as you add these, you’ll be doing so on a firmer foundation.
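For the curious, here's what phrase-by-phrase normalization looks like as a rough Python sketch (numpy/scipy assumed). The gate threshold, gap-bridging time, target level, and file name are all hypothetical example choices; real work is done by ear, phrase by phrase, as described above.

```python
# Phrase-by-phrase normalization sketch: find regions above a crude silence
# threshold, treat each contiguous region as a "phrase," and scale the whole
# phrase (never individual words) to a common target peak.
import numpy as np
from scipy.io import wavfile
from scipy.ndimage import binary_dilation

fs, x = wavfile.read("lead_vocal.wav")         # hypothetical mono vocal
x = x.astype(np.float64)

target_peak = 10 ** (-6.0 / 20.0)              # normalize phrases to -6 dBFS
gate = np.abs(x) > 0.02                        # crude "voice present" test
# Bridge short gaps so a phrase isn't split at every tiny pause
gate = binary_dilation(gate, structure=np.ones(int(0.25 * fs), dtype=bool))

# Split the file at every gate on/off transition; each "on" region = phrase
bounds = np.r_[0, np.flatnonzero(np.diff(gate.astype(int))) + 1, len(x)]
y = x.copy()
for a, b in zip(bounds[:-1], bounds[1:]):
    if gate[a]:                                # scale the entire phrase
        peak = np.max(np.abs(y[a:b]))
        if peak > 0:
            y[a:b] *= target_peak / peak

wavfile.write("lead_vocal_leveled.wav", fs, y.astype(np.float32))
```

Note that the loop scales each phrase as a unit, which preserves the internal dynamics within the phrase; that's exactly the "normalize entire phrases, never individual words" rule from the article.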
23. Can't get your bass to fit right in the mix? Then follow these tips

By Craig Anderton

If there’s one instrument that messes with people’s minds while mixing, it’s bass. Often the sound is either too tubby, too thin, interferes too much with other instruments, or isn’t prominent enough . . . yet getting a bass to sit right in a mix is essential. So, here are ten tips on how to make your bass “play nice with others” during the mixing process.

1 CHECK YOUR ACOUSTICS

Small project studio rooms reveal their biggest weaknesses below a couple hundred Hz, because the length of the bass waves can be longer than your room dimensions—which leads to bass cancellations and additions that don’t tell the truth about the bass sound. Your first acoustic fix should be putting bass traps in the corners, but the better you can treat your room, the closer your speakers will be to telling the truth. If acoustic treatment isn’t possible, then do a reality check with quality headphones.

2 MUCH OF THE SOUND IS IN THE FINGERS

Granted, by the time you start mixing, it’s too late to fix the part—so as you record, listen to the part with mixing in mind. As just one example, fretted notes can give a tighter, more defined sound than open strings (which are often favored for live playing because they give a big bottom—but can overwhelm a recording). Also, the more a player can damp unused strings to keep them from vibrating, the “tighter” the part.

3 COMPRESSION IS YOUR FRIEND

Normally you don’t want to compress the daylights out of everything, but bass is an exception, particularly if you’re miking it. Mics, speakers, and rooms tend to have really uneven responses in the bass range—and all those anomalies add up. Universal Audio’s LA-2A emulation is just one of many compressors that can help smooth out response issues in a bass setup. Compression can help even out the response, giving a smoother, rounder sound. Also, try using parallel compression—i.e., duplicate the bass track, but compress only one of the tracks. Squash one track with the compressor, then add in the dry signal for dynamics. Some compressors include a dry/wet control to make it easy to adjust a blend of dry and compressed sounds.

4 THE RIGHT EQ IS CRUCIAL

Accenting the pick/pluck sound can make the bass seem louder. Try boosting a bit around 1kHz, then work upward to about 2kHz to find the “magic” boost frequency for your particular bass and bassist. Also consider trimming the low end on either the kick or the bass, depending on which one you want to emphasize, so that they don’t fight. Finally, many mixes have a lot of lower midrange buildup around 200-400Hz because so many instruments have energy in that part of the spectrum. It’s usually safe to cut bass a bit in that range to leave space for the other instruments, thus providing a less muddy overall sound; sometimes cutting just below 1kHz, like around 750-900Hz, can also give more definition.

5 TUNING IS KEY

If the bass foundation is out of tune, the beat frequencies when the harmonics combine with other instruments are like audio kryptonite, weakening the entire mix. Beats within the bass itself are even worse. Tune, baby, tune! This can’t be emphasized enough. If you get to mixdown and find the bass has notes that are out of tune, cheat: Many pitch correction tools intended for vocals will work with single-note bass lines.
6 PUT HIGHPASS FILTERS ON OTHER INSTRUMENTS

To make for a tighter, more defined low end overall, clean up subsonics and low frequencies on instruments that don’t really have any significant low end (e.g., guitars, drums other than kick, etc.). The QuadCurve EQ in Cakewalk Sonar’s ProChannel has a 48dB/octave highpass filter that’s useful for cleaning up low frequencies in non-bass tracks. A low cut filter, as used for mics, is a good place to start. By carving out more room on the low end, there will be more space for the bass to fit comfortably in the mix. The steeper the slope, the better (there’s a code sketch of this kind of steep filter after this article).

7 TWEAK THE BASS IN CONTEXT

Because bass is such an important element of a song, what sounds right when soloed may not mesh properly with the other tracks. Work on bass and drums as a pair—that’s why they’re called the “rhythm section”—so that you figure out the right relationship between kick and bass. But also have the other instruments up at some point to make sure the bass supports the mix as a whole.

8 BEWARE OF PHASE ISSUES

It’s common to take a direct out along with a miked or amp out, then run them to separate tracks. Be careful, though: The signal going to the mic will hit later than the direct out, because the sound has to travel through the air to get to the mic. If you use two bass tracks, bring up one track, monitor in mono (not stereo), then bring up the other track. If the volume dips, or the sound gets thinner, you have a phase issue. If you’re recording into a DAW, simply slide the later track so it lines up with the earlier track. The timing difference will only be a few milliseconds (i.e., one millisecond for every foot of distance from the speaker), so you’ll probably need to zoom way in to align the tracks properly.

9 RESPECT VINYL’S SPECIAL REQUIREMENTS

Vinyl represents a tiny amount of market share, but it’s growing and you never know when something you mix will be released on vinyl. So, if your project has even a slight chance of ending up on vinyl, pan bass to the precise center. Bass is one frequency range where there should be no stereo imaging.

10 DON’T FORGET ABOUT BASS AMP SIMS

You’ll find some excellent bass amp sims in Native Instruments’ Guitar Rig, Waves GTR, Line 6 POD Farm, and Peavey’s ReValver, as well as the dedicated Ampeg SVX plug-in (from the AmpliTube family) offered by IK Multimedia. IK Multimedia’s Ampeg SVX gives solid bass sounds in stand-alone mode, but when used as a plug-in, can also “re-amp” signals recorded direct. This shows the Cabinet page, where you set up your “virtual mic.” These open up the option of recording direct, but then “re-amping” during the mix to get more of a live sound. You’ll also have more control compared to using a “real” bass amp. Even if you don’t want to use a bass sim as your primary bass sound, don’t overlook the many ways they can enhance a physical bass sound.
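As promised under tip 6, here's that steep highpass as a minimal Python sketch (numpy/scipy assumed). An 8th-order Butterworth rolls off at 48 dB/octave, matching the slope mentioned above; the 80 Hz corner and file name are example choices, not recommendations for every track.

```python
# Tip 6 in code: a 48 dB/octave highpass to clean subsonics and low-end
# clutter off non-bass tracks. Butterworth order 8 = 8 x 6 dB/oct slope.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs, gtr = wavfile.read("rhythm_guitar.wav")    # hypothetical non-bass track
gtr = gtr.astype(np.float64)

sos = butter(8, 80.0, "highpass", fs=fs, output="sos")  # 48 dB/oct @ 80 Hz
wavfile.write("rhythm_guitar_hpf.wav", fs,
              sosfilt(sos, gtr, axis=0).astype(np.float32))
```

Apply this kind of filter to guitars, keys, and non-kick drums, and the bass suddenly has the bottom octave largely to itself.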
24. How low can you go? Use an octave divider, and find out!

By Craig Anderton

Octave dividers aren’t just for guitar players: They also rock for bass, whether you’re getting mega-low sounds from the lower strings, or playing high up on the neck for very cool 8-string bass effects. It’s easy to do octave division with amp sims and DAWs, but there are some definite tricks involved.

SPLIT YOUR SIGNAL

As when adding many other types of effects on bass, it’s best to create a second track in parallel with the main bass sound, and dedicate the second track to the octave divider. This lets you mix in the precise amount of octave sound, but more importantly, you may need to condition the bass signal to optimize it for octave division.

CHOOSE YOUR DIVIDER

Most amp sims include octave dividers. I’ve successfully used octave dividers on bass with IK Multimedia AmpliTube, Waves GTR Solo (Fig. 1) and GTR3, Native Instruments Guitar Rig (Fig. 2), and Peavey ReValver Mk III.

Fig. 1: Waves’ GTR|Solo is set up for octave division on bass. The Pitcher module provides the octave division; its Mix control is set to divided sound only.

Fig. 2: Guitar Rig’s Pro-Filter module is an excellent EQ for conditioning a bass signal before it hits the Oktaver; this screen shot is from Guitar Rig 3.

There’s not a lot of difference among these effects for this particular task; they all do the job. You can also use other available modules to condition the bass signal.

PRE-OCTAVE PROCESSING

Two main problems can interfere with proper triggering: An inconsistent input signal level, and triggering on a harmonic rather than the fundamental (which causes an “octave-hopping” effect, where the signal jumps back and forth between the fundamental and octave). A compressor can solve the consistency problem. Set it for a moderate amount of compression (e.g., 4:1 ratio, with a fairly high threshold). Make sure the compressed sound doesn’t have a “pop” at the beginning, and the sustain is smooth. Then if needed, patch in an EQ to take off some of the highs—the object is to emphasize the fundamental. This may require compromise; too much filtering will reduce the level from the higher strings to where they might not be able to trigger the octave divider (as well as change the tone), whereas not filtering enough may cause octave-hopping on the lower strings. What works best for me is cutting highs and boosting the low bass a bit. If the EQ curve isn’t sharp enough, you may get better results by patching two EQs in series. I’ve also found that with Guitar Rig, using the Pro Filter module with mode set to LPF (lowpass) and slope to 100% (four-pole) provides outstanding conditioning, especially when preceded by the Tube Compressor.

THE FINAL TOUCH

Playing technique also matters. Popping and snapping might confuse the octave divider, as can the transients that occur from playing with a pick. Playing with your fingers or thumb gives the best results, but don’t be afraid to experiment; for example, if you do “snap” the string, the sound might mask the divided sound anyway, so it won’t matter. Also, remember that octave dividers are monophonic, so make sure only one string vibrates at a time. Once you have your signal chain tweaked, adjust the parallel, octave-divided signal for the right balance with the main bass signal. You’ll probably find yourself playing an octave higher than normal, because the octave divider will supply the low fundamental.
But octave division is also a great way to make those low strings create seismic lows that throb in a way you can’t get with anything else.
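To see why harmonics cause octave-hopping, it helps to look at how the classic analog octave divider works: a flip-flop toggles once per input cycle, producing a signal at half the frequency. Below is a toy numpy sketch of that idea (my illustration, not the actual algorithm inside any of the plug-ins mentioned above). A strong harmonic adds extra zero crossings, which makes the flip-flop toggle early; that is the octave-hopping you hear, and it’s exactly what the lowpass conditioning is there to prevent.

```python
import numpy as np

def octave_down(x):
    """Toy flip-flop octave divider: toggle polarity on each positive-going
    zero crossing, which halves the fundamental. Illustration only; real
    plug-ins are far more sophisticated."""
    y = np.empty_like(x)
    state = 1.0
    prev = x[0]
    for i, sample in enumerate(x):
        if prev <= 0.0 < sample:    # positive-going zero crossing
            state = -state          # one toggle per cycle = half frequency
        y[i] = state * abs(sample)  # rectified input, re-signed by the flip-flop
        prev = sample
    return y

# A clean 110 Hz sine divides to a solid 55 Hz; mix in a loud second
# harmonic and the extra zero crossings make the divider "hop" octaves.
sr = 44100
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 110 * t)
sub = octave_down(clean)
```

The cleaner the fundamental at the divider’s input, the steadier the toggling, which is why the compressor-plus-lowpass conditioning chain matters so much.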
25. Fix vocal pitch without nasty correction artifacts

by Craig Anderton

The critics are right: pitch correction can suck all the life out of vocals. I proved this to myself accidentally when working on some background vocals. I wanted them to have an angelic, “perfect” quality; as the voices were already very close to proper pitch anyway, I thought just a tiny bit of manual pitch correction would give the desired effect. (Well, that and a little reverb.) I was totally wrong, because the pitch correction took away what made the vocals interesting. It was an epic fail as a sonic experiment, but a valuable lesson, because it caused me to start analyzing vocals to see what makes them interesting, and what pitch correction takes away.

And that’s when I found out that the critics are also totally wrong, because pitch correction—if applied selectively—can enhance vocals tremendously, without anyone ever suspecting the sound had been corrected. There’s no robotic quality, it doesn’t steal the vocalist’s soul, and pitch correction can sometimes even add the kind of imperfections that make a vocal sound more “alive.”

This article uses Cakewalk Sonar’s V-Vocal as a representative example of pitch correction software, but other programs like Melodyne (Fig. 1), Waves Tune (Fig. 2), Nectar (Fig. 3), and of course the grand-daddy of them all, Antares Auto-Tune (Fig. 4), all work fairly similarly. They need to analyze the vocal file, after which they indicate the pitches of the notes. These can all be quantized to a particular scale with “looser” or “tighter” correction, and often you can correct timing and formant as well as pitch. But more importantly, with most pitch correction software you can turn off automatic quantizing to a particular scale, and correct pitch with a scalpel instead of a machete. That’s the technique we’re going to describe here.

Fig. 1: Celemony Melodyne

Fig. 2: Waves Tune LT

Fig. 3: iZotope Nectar’s pitch correction module

Fig. 4: Antares Auto-Tune EVO

BEWARE SIGNAL PROCESSING!

Pitch correction works best on vocals that are “raw,” without any processing; effects like modulation, delay, or reverb can make pitch correction at best glitchy and at worst, impossible. Even EQ, if it emphasizes the high frequencies, can create unpitched sibilants that confuse pitch correction algorithms. The only processing you should use on vocals prior to employing pitch correction is de-essing, as that can actually improve the ability of pitch correction to do its work. If your pitch correction processor inserts as a plug-in (e.g., iZotope’s Nectar), then make sure it’s before any other processors in the signal chain.

WHAT TO AVOID

The key to proper pitch correction use is knowing what to avoid, and the prime directive is: don’t ever use any of the automatic correction options—unless you specifically want that hard-correction, hip-hop vocal effect (in V-Vocal, these are the controls grouped under the “Pitch Correction” or “Formant Control” boxes). Do only manual correction, and then, only if something actually sounds wrong. Avoid any “labor-saving” devices; don’t use options that add LFO vibrato. In V-Vocal, I always use the pencil tool to change or add vibrato. Manual correction takes more effort to get the right sound (and you’ll become best friends with your program’s Undo button), but the human voice simply does not work the way pitch correction software works when it’s on auto-pilot.
By making all your changes manually, you can ensure that pitch correction works with the vocal instead of against it.

DO NO HARM

One of my synth programming “tricks” on choir and voice patches is to add short, subtle upward or downward pitch shifts at the beginning of phrases. Singers rarely go from no sound to perfectly-pitched sound, and the shifts add a major degree of realism to patches. Sometimes I’ll even put the pitch envelope attack time or envelope amount on a controller so I can play these changes in real time. (A sketch of this onset-spike idea appears at the end of this section.) Pitch correction has a natural tendency to remove or reduce these spikes, which is partially responsible for pitch-corrected vocals sounding “not right.” So it’s crucial not to correct anything that doesn’t need correcting.

Consider the “spikey” screen shot (Fig. 5), bearing in mind that the orange line shows the original pitch, and the yellow line shows how the pitch was corrected.

Fig. 5: The pitch spikes at the beginning of the notes add character, as do the slight pitch differences compared to the “correct” pitch.

Each note attack goes sharp very briefly before settling down to pitch, and “correcting” these removed any urgency the vocal had. Also, all notes except the last one were supposed to be the same pitch. However, the first note being slightly flat, the next one on pitch (it had originally been slightly sharp), and the next one slightly sharp added a degree of tension as the pitch increased. This is a subtle difference, but you definitely notice a loss if the differences are “flattened” to the same pitch. In the last section the pitch center was a little flat; raising it up to pitch let the string of notes resolve to something approximating the correct pitch, but note that all the pitch variations were left in and only the pitch center was changed.

The final note’s an interesting case: It was supposed to be a full tone above the other notes, but the orange line shows it just barely reached pitch. Raising the entire note, and letting the peak hit slightly sharp, gave the correct sense of pitch, while the slight “overshoot” added just the right amount of tension.

VIBRATO

Another problem is where the vibrato “runs away” from the pitch, and the variations become excessive. Fig. 6 shows a perfect example of this, where the final held note was at the end of a long phrase, and I was starting to run out of breath. Referring to the orange line, I came in sharp, settled into a moderate but uneven vibrato, but then the vibrato got out of control at the end.

Fig. 6: Re-drawing vibrato retains the voice’s human qualities, but compensates for problems.

Bearing in mind the comments on pitch spikes, note that I attenuated the initial spike a bit but did not flatten it to pitch. Next came re-drawing the vibrato curve for more consistency. It’s important to follow the excursions of the original vibrato for the most natural sound. For example, if the original vibrato went up too high in pitch, then the redrawn version should track it, and also go up in pitch—just not as much. As soon as you go in the opposite direction, the correction has to work harder, and the sound becomes unnatural. This emphasizes the need to use pitch correction to repair, not replace, troublesome sections.

Also note that at the end, the original pitch went way flat as I ran out of breath. In the corrected version, the vibrato goes subtly sharp as the note sustains—this adds energy as you build to the next phrase. Again, you don’t hear it as “sharp,” but you sense the psycho-acoustic effect.
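Here’s a minimal numpy sketch of that synth-programming trick: a brief pitch overshoot that decays into the target note. The overshoot amount and decay time are illustrative guesses, not values from the article.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second of samples

# Target pitch plus a brief overshoot that settles over ~60 ms,
# mimicking a singer scooping slightly sharp into a note
# (0.5 semitones and 60 ms are assumptions for the demo).
f0 = 220.0
overshoot = 0.5                          # semitones sharp at the onset
decay = 0.06                             # seconds for the spike to settle
semitones = overshoot * np.exp(-t / decay)
freq = f0 * 2.0 ** (semitones / 12.0)

phase = 2 * np.pi * np.cumsum(freq) / sr  # integrate frequency -> phase
note = 0.3 * np.sin(phase)                # the rendered test tone
```

Render the note with and without the `semitones` term, and the version with the spike should sound noticeably more vocal.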
MAJOR FIXES

Sometimes a vocal can be perfect except for one or two notes that are really off, and you’re loath to punch. V-Vocal can do drastic fixes, but you’ll need to “humanize” them for best results. In the before-and-after screen shot (Fig. 7), the pitch dropped like a rock at the end of the first note, then overshot the pitch for the second note, and finally the vibrato fell flat (literally). The yellow line in the top image shows what typical hard pitch correction would do—flatten out both notes to pitch. On playback, this indeed exhibited the “robot” vibe, although at least the pitches were now correct.

Fig. 7: The top image shows a hard-corrected vocal, while the lower image shows it after being “humanized.”

The lower image shows how manual re-drawing made it impossible to tell the notes had been pitch-corrected. First, never have a 90-degree pitch transition; voices just don’t do that. Rounding off transitions prevents the “warbling” hard-correction sound (see the smoothing sketch at the end of this article). Also note that again, the pitch was re-drawn to track the original pitch changes, but less drastically. Be aware that often, the “wrong” singing is instinctively right for the song, and restoring some of the “wrongness” will enhance the song’s overall vibe.

Shifting pitch will also change the formant, with greater shifts leading to greater formant changes; even small changes may sound wrong with respect to timbre. Like many pitch correction programs, V-Vocal also lets you edit the formant (i.e., the voice’s characteristic timbre). When you click on V-Vocal’s F button, you can adjust formant as easily as pitch (Fig. 8).

Fig. 8: The formant frequency has been raised somewhat to compensate for the downward timbre shift caused by fixing the pitch.

In the screen shot with formant editing, the upper image shows that the vibrato was not only excessive, but its pitch center was higher than the pitch-corrected version. The lower pitch didn’t exactly give a “Darth Vader” timbre, but it didn’t sound right in comparison to the rest of the vocal. The lower image shows how the formant frequency was raised slightly. This offset the lower formant caused by pitch correction, and the vocal’s timbre ended up being consistent with the rest of the part.

A REAL-WORLD EXAMPLE

To hear these kinds of pitch correction techniques—or more accurately, to hear a song using pitch correction techniques where you can’t hear that there’s pitch correction—check out the following music video. This is a cover version of forumite Mark Longworth’s “Black Market Daydreams” (a/k/a MarkydeSad, and before that, Saul T. Nads), and there’s quite a bit of touch-up on my vocals. But listen, and I think you’ll agree that pitch correction doesn’t have to sound like pitch correction.
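As a rough illustration of the “no 90-degree transitions” rule, here’s a small numpy sketch that glides a stepped correction curve instead of letting it jump. The helper name and the 80 ms glide time are my assumptions; in practice you’d redraw the curve by hand in the editor, as described above.

```python
import numpy as np

def round_transition(pitch_semitones, points_per_sec=100, glide_ms=80):
    """Smooth a stepped pitch-correction curve (in semitones, sampled at
    points_per_sec) so note changes glide instead of jumping.
    Hypothetical helper; the glide time is a guess."""
    n = max(3, int(points_per_sec * glide_ms / 1000))
    kernel = np.hanning(n)       # bell-shaped smoothing window
    kernel /= kernel.sum()       # normalize so pitch levels are preserved
    return np.convolve(pitch_semitones, kernel, mode="same")

# A curve that sits on one pitch, then jumps a whole step (2 semitones) up:
stepped = np.concatenate([np.zeros(100), np.full(100, 2.0)])
glided = round_transition(stepped)  # the right-angle jump becomes an S-curve
```

The point isn’t this particular filter; it’s that any correction curve a voice could plausibly sing has rounded corners, and anything with right angles will warble.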