Everything posted by Anderton

1. Keep your tracking session on an even keel with these tips for smoother sessions

By Craig Anderton

As the all-important first step of the recording process, laying down tracks is crucial. No matter how well you can mix and master, you're hosed if the tracks aren't good. But tracking is an elusive art. Some feel it's pretty much a variation on performing; others step-enter tracks via MIDI, one note at a time. Yet regardless of how you approach tracking, you want to create a recording environment where inspiration can flourish — troubleshooting your setup in the middle of the creative process can crush your muse. There are valid psychological reasons why this is so, based on the way our brains process information; suffice it to say you don't want to mix creative flights of fancy with down-to-earth analytical thinking. So, let's investigate a bunch of tips on how to track as efficiently — and creatively — as possible.

1 HAVE EVERYTHING READY TO GO

I'm a fanatic about miking acoustic instruments: I need one person to adjust the mics and another to play the instrument while I listen in the control room. But I also want all this setup to be done before the session begins, so the artist can be as fresh as possible. True, sometimes it's necessary to make some compensations due to differences in "touch," but those compensations don't take very long.

2 CREATE A SCREEN LAYOUT THAT'S OPTIMIZED FOR TRACKING

Most sequencers let you save specific "views" or window sets (Fig. 1). For example, you certainly don't need to do waveform editing when you're tracking (and if you do, we need to talk!).

Fig. 1: Logic was one of the first DAWs to really exploit screen presets.

As you'll likely not be sitting right next to your computer as you play an instrument, go for large fonts, big readouts, and wide instead of narrow channel strips—anything that makes the recording and track assignment process more obvious.

3 ZERO THE CONSOLE

If you're using a hardware mixer, center all the EQ controls, turn all the sends to zero, make sure anything that can be bypassed is in bypass mode, and so on. Many mixer modules have some kind of reset option; take advantage of it. You want to make sure that any changes you make start from a consistent point, as well as ensure that there aren't any spurious noise contributions (like from an open mic preamp).

4 LEARN SOFTWARE SHORTCUTS

Any time you can hit a keyboard key instead of moving a mouse, you save time and effort, and stay in the right-brain (creative) frame of mind. For example, if you don't use the top octave of an 88-note keyboard much, your software might allow you to assign these keys to the record buttons on the first 12 channels of your tracking setup—or at the very least, use the top few notes for transport control.

5 CONTROLLERS CAN BE A BEAUTIFUL THING

Once upon a time in a galaxy far, far away, DigiTech made a guitar processor called the GNX4. One of its features was "hands-free recording" when used with Cakewalk hosts like Sonar, where you could initiate playback, record, arm tracks, create new tracks, and perform other operations simply by pushing footswitches. While intended for guitar players, I found it very helpful for general recording applications and never abandoned my quest for footswitches.

Fig. 2: The three jacks toward the right are for two footswitches and an expression pedal. The footswitches default to transport functions, but can be reassigned.
If you have a MIDI keyboard, chances are you can use a sustain pedal to do something useful, like initiate recording (there's a small sketch of one way to do this on the computer side after this article). The Mackie Control Universal Pro (Fig. 2) has two footswitch jacks, which default to start/stop and record, and you can take this to the max with X-Tempo Designs' wireless POK footswitch bank.

6 KNOW WHEN TO TAKE A BREAK

If someone cutting a track starts running into a wall, it's seldom worth continuing. It's better to take a break and let the player (that means you, too!) come back refreshed and with a slightly different perspective.

7 TAKE ADVANTAGE OF LOOP RECORDING

Loop recording, also called composite recording (Fig. 3), can help put together the perfect performance. For more information on loop recording, check out this article.

Fig. 3: Sonar X3's "speed comping" merges loop recording with keyboard navigation.

But loop recording is something best done at one time. If you record a bunch of takes, edit the best parts together, then try to add more parts, the newer takes seldom match up well with the older ones. If you need to add more parts, consider starting over, or make sure you record enough takes in the first place.

8 DON'T EDIT WHILE YOU TRACK

Because you read all the way to the end, your reward is the most important tip here. With loop recording, it might be tempting to edit the parts together right after recording them. But don't — that can really disrupt the session's flow if more tracking is on the agenda. As long as you know you have enough good takes to put together a part, move on. The same applies to any editing. Even with MIDI, I'll usually leave a track "as is" and use real-time MIDI plug-ins (which don't alter the file) to do any quantization if a part has some rough spots. Tracking is tracking; editing is editing. Do just enough editing (if needed) so that other players have something decent to follow, and worry about doing any polishing later.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
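As mentioned under tip 5, a sustain pedal can do more than sustain. Here is a minimal, hypothetical sketch of the idea in Python using the mido and pyautogui libraries: it listens for the pedal's MIDI message and sends a keystroke to the computer. The port name and the assumption that "R" starts recording in your DAW are placeholders you would adapt to your own setup; this is an illustration, not a supported feature of any particular DAW.

```python
# Minimal sketch: turn a MIDI sustain pedal (CC 64) into a "start recording" keystroke.
# Assumptions: your DAW toggles recording with the "R" key (adjust to taste), and
# mido (with the python-rtmidi backend) plus pyautogui are installed.
import mido
import pyautogui

PORT_NAME = "My MIDI Keyboard"   # placeholder; pick a name from mido.get_input_names()

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        # A sustain pedal sends Control Change 64; value >= 64 means "pedal down"
        if msg.type == "control_change" and msg.control == 64 and msg.value >= 64:
            pyautogui.press("r")  # send the DAW's (assumed) record shortcut
```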
2. Sometimes the "right" way to do things is nowhere near as much fun as the "wrong" way

By Craig Anderton

Whether giving seminars or receiving emails, I'm constantly asked about the "right" way to record, as if there were some committee on standards and practices dedicated to the recording industry ("for acoustic guitar, you must use a small-diaphragm condenser mic, or your guitar will melt"). Although I certainly don't want to demean the art of doing things right, some of the greatest moments in recording history have come about because of ignorance, unbridled curiosity, luck, trying to impress the opposite sex, or just plain making a mistake that became a happy accident.

When Led Zeppelin decided to buck the trend at the time of close-miking drums, the result was the Olympian drum sound in "When the Levee Breaks." Prince decided that sometimes a bass simply wasn't necessary in a rock tune, and the success of "When Doves Cry" proved he was right. Distortion used to be considered "wrong," but try imagining rock guitar without it.

A lot of today's gear locks out the chance to make mistakes. Feedback can't go above 99, while "normalized" patching reduces the odds of getting out of control. And virtual plug-ins typically lack access points, like insert and loop jacks, that provide a "back door" for creative weirdness. But let's not let that stop us—it's time to reclaim some of our heritage as sonic explorers, and screw up some of the recording process. Here are a few suggestions to get you started.

UNINTENDED FUNCTIONS

One of my favorite applications is using a vocoder "wrong." Sure, we're supposed to feed an instrument into the synthesis input, and a mic into the analysis input. But using drums, percussion, or even program material for analysis can "chop" the instrument signal in rhythmically interesting ways.

Got a synth, virtual or real, with an external input (Fig. 1)? Turn the filter up so that it self-oscillates (if it lets you), and mix the external signal in with it.

Fig. 1: Arturia's miniV has an external input. Insert it into a track as an effect, and you can process a signal with the synth's various modules.

The sound will be dirty, rude, and somewhat like FM meets ring modulation. To take this further, set up the VCA so you can do gated/stuttering techniques by pressing a keyboard key to turn it on and off.

And we all know headphones are for outputting sound, right? Well, DJs know you can hook them up in reverse, like a mic. Sure, the sound is kinda bassy because the diaphragm is designed to push air, not react to tiny vibrational changes. But no problem! Kick the living daylights out of the preamp gain, add a ton o' distortion, and you'll generate enough harmonics to add plenty of high frequencies.

I was reluctant to include the following tip, as it relies on the ancient Lexicon Pantheon reverb (a DirectX-format plug-in included in Sonar, the Lexicon Omega, and other products back in the day). I really tried to find a more contemporary reverb that can do the same thing, but I couldn't. However, it does give a fine example of unintended functionality: having a reverb provide some really cool resonator effects. If you have a Pantheon, try these settings (Fig. 2):

Reverb type: custom
Pre-delay, Room Size, RT60, Damping: minimum settings
Mix: 100% (wet only)
Level: as desired
Density Regen: +90%
Density Delay: between 0 and 20 ms
Echo Level (Left and Right): off
Spread, Diffusion: 0
Bass Boost: 1.0X

Fig. 2: The plug-in says it's a reverb, but here Pantheon is set up as a resonator.
Vary the Regen and Delay controls, but feel free to experiment with the others. You can even put two Pantheons in series for highly resonant, totally spooky sounds.

PARAMETER PUSHING

The outer edges of parameter values are meant for exploration. For example, digital audio pitch transposition can provide all kinds of interesting effects. Tune a low tom down to turn it into a thuddy kick drum, or transpose slap bass up two octaves to transform it into a funky clav.

Or consider the "acidization" process in Acid and Sonar. Normally, you set slice points at every significant transient. But if you set slice points at 32nd or 64th notes, and transpose pitch up an octave or two, you'll hear an entirely different type of sound. I also like to use Propellerheads' ReCycle as a "tremolo of the gods" (Fig. 3).

Fig. 3: ReCycle can do more than simply convert WAV or AIFF files into stretchable audio—it can also create novel tremolo effects.

Load in a sustained sound like a guitar power chord, set slice points and decay time to chop it into a cool rhythm, then send it back to the project from which it came.

GUITAR WEIRDNESS

For a different type of distortion, plug your guitar directly into your mixer (no preamp or DI box), crank the mic pre, then use EQ to cut the highs and boost the mids to taste. Is this the best distortion sound in the world? No. Will it sound different enough to grab someone's attention? Yes.

When you play compressed or highly distorted guitar through an amp (or even studio monitors, if you like to live dangerously), press the headstock up against the speaker cabinet and you'll get feedback if the levels are high enough. Now work that whammy bar...

Miking guitar amps is also a fertile field for weirdness. Try a "mechanical bandpass filter" with small amps—set up the mic next to the speaker, then surround both with a cardboard box. One of the weirdest guitar sounds I ever found came from re-amping the guitar through a small amp pointed at a hard wall, setting up two mics between the amp and the wall, then letting them swing back and forth between the amp and the wall. It created a weird stereo phasey effect that sounded marvelous (or at least strange) on headphones.

DISTORT-O-DRUM

Distortion on drums is one of those weird techniques that can actually sound not weird. You can put a lot of distortion on a kick and not have it sound "wrong"—it just gains massive amounts of punch and presence. One of my favorite techniques is copying a drum track, putting the copy in parallel with the original drum track, then running the copy through a guitar amp plug-in set for a boxy-sounding cabinet. It gives the feeling of being in a really funky room.

Replacing drum sounds can also yield audio dividends. My musical compatriot Dr. Walker, a true connoisseur of radical production techniques, once replaced the high-hat in his drum machine with sampled vinyl noise. That was a high-hat with character, to say the least.

If you want a sampled drum sound to have an attack that cuts through a track like a machete, load the sample into a digital audio editor that has a pencil tool. Then, within the first 2 or 3 ms of the signal, add a spike (shown in red in the diagram for clarity; see Fig. 4).

Fig. 4: Messing up a drum sample's initial attack adds a whole new kind of flavor.

When you play back the sound, the attack will now be larger than life, loaded with harmonics, and ready to jump out of the speaker. However, it all happens so fast you don't really perceive it as distortion. (You can even add more spikes if you dare.)
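If you'd rather experiment with this "pencil tool" trick offline before committing to it in an editor, here's a minimal sketch of the same idea in Python with the numpy and soundfile libraries. The file names and spike values are placeholder assumptions; the point is simply to drop a very short spike into the first few milliseconds of the sample.

```python
# Minimal sketch of the "spike in the attack" trick, assuming a drum sample on disk.
# Requires numpy and soundfile (pip install numpy soundfile). File names are placeholders.
import numpy as np
import soundfile as sf

audio, sr = sf.read("kick.wav")          # drum sample, float data in -1..1
spike_at = int(0.002 * sr)               # place the spike ~2 ms into the sample
spike_len = max(1, int(0.0005 * sr))     # keep it very short (~0.5 ms)

audio[spike_at:spike_at + spike_len] = 0.95   # "draw" a brief near-full-scale spike

sf.write("kick_spiked.wav", audio, sr)
```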
Another drum trick that produces a ton of harmonics at the attack is to normalize a drum sample, then increase the gain by a few dB — just enough to clip the first few milliseconds of the signal (there's a short sketch of this idea after this article). Again, the drum sound will slam out of the speakers.

FUN WITH FEEDBACK

A small hardware mixer is a valuable tool in the quest for feedback-based craziness. Referring to Fig. 5, if you have a hardware graphic equalizer, patch it after the mixer output, split the EQ's output so that one split returns back into a mixer input, monitor the EQ's other split from the output, and feed in a signal (or not — you can get this to self-oscillate).

Fig. 5: Here's a generalized setup for adding feedback to a main effect. The additional effect in the feedback loop isn't essential, but changing the feedback loop signal can create more radical results.

With the EQ's sliders at 0, set the mixer to just below unity. As you increase the sliders, you'll start creating tones. This requires some fairly precise fader motion, so turn down your monitors if the distortion runs away—or add a limiter to clamp the output.

If you have a hardware pitch shifter, feed some of the output back to the input (again, the mixer will come in handy) through a delay line at close to unity gain. Each echo will shift further downward or upward, depending on your pitch transposer's setting. With some sounds, this can produce beautiful, almost bell-tree-like effects.

Feedback can also add unusual effects with reverb, as the resonant peaks tend to shift. At some settings, the reverb crosses over into a sort of tonality. You may need to tweak controls in real time and ride everything very carefully, but experiment. Hey, that's the whole message of this article anyway!

PREFAB NASTINESS?

Lately there's been a trend to "formalize" weird sounds, like bit reducers, vinyl emulators, and magnetic tape modelers. While these are well-intentioned attempts to screw things up, there's a big difference between a plug-in that reduces your audio to 8 bits and playing back a sample on a Mirage sampler, which is also 8 bits. The Mirage added all kinds of other oddities — noises, aliasing, artifacts — that the plug-in can't match. Playing a tune through a filter, or broadcasting it to a transistor radio placed in front of a mic (try it sometime!), produces very different results.

Bottom line: Try to go to the source for weirdness, or create your own. Once weirdness is turned into a plug-in with 24/96 resolution, I'm not sure it's really weirdness anymore.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
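As a companion to the pencil-tool spike above, here's a minimal sketch of the normalize-then-clip-the-attack trick referenced in this article, again in Python with numpy and soundfile. The gain amount and the length of the "first few milliseconds" window are assumptions to tweak by ear, and the file names are placeholders.

```python
# Minimal sketch: normalize a drum sample, then push just its first few
# milliseconds into clipping. Requires numpy and soundfile; file names are placeholders.
import numpy as np
import soundfile as sf

audio, sr = sf.read("snare.wav")
audio = audio / np.max(np.abs(audio))      # normalize to full scale

attack = int(0.005 * sr)                   # "first few milliseconds" (here ~5 ms)
boost_db = 4.0                             # a few dB of extra gain on the attack only
audio[:attack] = np.clip(audio[:attack] * 10 ** (boost_db / 20), -1.0, 1.0)

sf.write("snare_clipped_attack.wav", audio, sr)
```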
3. As we close out MIDI's 30th anniversary, it's instructive to reflect on why it has endured and remains relevant

By Craig Anderton

The MIDI specification first saw the light of day at the 1981 AES convention, when Dave Smith of Sequential Circuits presented a paper on the "Universal Synthesizer Interface." It was co-developed with other companies (an effort driven principally by Roland's Ikutaro Kakehashi, a true visionary of this industry), and made its prime-time debut at the 1983 Los Angeles NAMM show, where a Sequential Circuits Prophet-600 talked to a Roland keyboard over a small, 5-pin cable. I saw Dave Smith walking around the show and asked him about it. "It worked!" he said, clearly elated—but I think I detected some surprise in there as well.

Sequential Circuits' Prophet-600 talking to a Roland keyboard at the 1983 NAMM show (photo courtesy the MIDI Manufacturers Association and used with permission)

"It" was the Musical Instrument Digital Interface, known as MIDI. Back in those days, polyphonic synthesizers cost thousands of dollars (and "polyphonic" meant 8 voices, if you were lucky and, of course, wealthy). The hot computer was a Commodore 64, with a whopping 64 kilobytes of memory—unheard of in a consumer machine (although a few years before, an upstart recording engineer named Roger Nichols was stuffing 1MB memory boards into a CompuPro S-100 computer to sample drum sounds). The cute little Macintosh hadn't made its debut, and as impossible as it may seem today, the PC was a second-class citizen, licking its wounds after the disastrous introduction of IBM's PCjr.

Tom Oberheim had introduced his brilliant System, which allowed a drum machine, sequencer, and synthesizer to talk together over a fast parallel bus. Tom feared that MIDI would be too slow. And I remember talking about MIDI at a Chinese restaurant with Dave Rossum of E-mu Systems, who said, "Why not just use Ethernet? It's fast, it exists, and it's only about $10 to implement." But Dave Smith had something else in mind: an interface so simple, inexpensive, and foolproof to implement that no manufacturer could refuse. Its virtues would be low cost, adequate performance, and ubiquity in not just the pro market, but the consumer one as well.

Bingo. But it didn't look like success was assured at the time; MIDI was derided by many pros who felt it was too slow, too limited, and just a passing fancy. Thirty years later, though, MIDI has gone far beyond what anyone had envisioned, particularly with respect to the studio. No one foresaw MIDI being part of just about every computer (e.g., the General MIDI instrument sets). This trend actually originated on the Atari ST—the first computer with built-in MIDI ports as a standard item (see "Background: When Amy Met MIDI" toward the end of this article).

EVOLUTION OF A SPEC

Oddly, the MIDI spec officially remains at version 1.0, despite significant enhancements over the years: the Standard MIDI File format, MIDI Show Control (which runs the lights and other effects at Broadway shows like Miss Saigon and Tommy), MIDI Time Code to allow MIDI data to be time-stamped with SMPTE timing information, MIDI Machine Control for integration with studio gear, microtonal tuning standards, and a lot more. And the activity continues, as issues arise such as how best to transfer MIDI over USB, with smart phones, and over wireless.
The guardian of the spec, the MIDI Manufacturers Association (MMA), has stayed a steady course over the past several decades, holding together a coalition of mostly competing manufacturers with a degree of success that most organizations would find impossible to pull off. The early days of MIDI were a miracle: in an industry where trade secrets are jealously guarded, manufacturers who were intense rivals came together because they realized that if MIDI was successful, it would drive the industry to greater success. And they were right. The MMA has also helped educate users about MIDI, through books and online materials such as "An Introduction to MIDI."

I had an assignment at the time from a computer magazine to write a story about MIDI. After turning it in, I received a call from the editor. He said the article was okay, but it seemed awfully partial to MIDI, and was unfair because it didn't give equal time to competing protocols. I tried to explain that there were no competing protocols; even companies that had other systems, like Oberheim and Roland, dropped them in favor of MIDI. The poor editor had a really hard time wrapping his head around the concept of an entire industry willingly adopting a single specification. "But surely there must be alternatives." All I could do was keep replying, "No, MIDI is it." Even when we got off the phone, I'm convinced he was sure I was holding back information on MIDI's competition.

MIDI HERE, MIDI THERE, MIDI EVERYWHERE

Now MIDI is everywhere. It's on the least expensive home keyboards and the most sophisticated studio gear. It's a part of signal processors, guitars, keyboards, lighting rigs, smoke machines, audio interfaces…you name it. It has gone way beyond its original idea of allowing a separation of controller and sound generator, so people didn't have to buy a keyboard every time they wanted a different sound.

SO WHERE'S IT GOING?

"Always in motion, the future…" Well, Yoda does have a point. But the key point about MIDI is that it's a hardware/software protocol, not just one or the other. Already, the two occasionally take separate vacations. The MIDI data in your DAW that drives a soft synth doesn't go through opto-isolators or cables, but flies around inside your computer.

One reason why MIDI has lasted so long is that it's a language that expresses musical parameters, and these haven't changed much in several centuries. Notes are still notes, tempo is still tempo, and music continues to have dynamics. Songs start and end, and instruments use vibrato. As long as music is made the way it's being made, the MIDI "language" will remain relevant, regardless of the "container" used to carry that data. However, MIDI is not resting on its laurels, and neither is the MMA—you can find out what they're working on for the future here.

Happy birthday, MIDI. You have served us well, and we all wish you many happy returns. For a wealth of information about MIDI, check out The MIDI Association web site.

Background: When Amy Met MIDI

After MIDI took off, many people credited Atari with amazing foresight for making MIDI ports standard on their ST series of computers. But the inclusion of MIDI was actually a matter of practicality. Commodore was riding high with the C-64, in large part because of the SID (Sound Interface Device) custom IC, a very advanced audio chip for its time.
(Incidentally, Bob Yannes, one of Ensoniq's founders and also the driving force behind the Mirage sampler, played the dominant role in SID's development.) Atari knew that if it wanted to encroach on Commodore's turf, it needed something better than SID. Atari designed an extremely ambitious sound chip, code-named Amy, that was supposed to be a "Commodore killer." But Amy was a temperamental girl, and Atari was never able to get good enough yields to manufacture the chips economically.

An engineer suggested putting a MIDI port on the machine so it could drive an external sound generator; then they wouldn't have to worry about an onboard sound chip. Although this solved the immediate Amy problem, it also turned out to be a fortuitous decision: Atari dominated the European music-making market for years, and a significant chunk of the US market as well. To this day, a hardy band of musicians still use their aging ST and TT series Atari computers because of the exceptionally tight MIDI timing – a result of integrating MIDI into the core of the operating system.

Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
4. It's a whole new world for DJs - and there's a whole new world of DJing options

by Craig Anderton

If the term "DJ" makes you think "someone playing Barbra Streisand songs at my cousin's wedding," then you might also think gas is $1.20 a gallon, and wonder how Ronald Reagan will turn out as president. DJing has changed radically over the past two decades, fueled by accelerating worldwide popularity, technological advances, and splits into different styles.

While some musicians dismiss DJs because they "just play other peoples' music, not make their own," DJing demands a serious skill set that's more like that of a conductor or arranger. Sets are long, and there are no breaks—you not only have to pace the music perfectly to match the audience's mood, but create seamless transitions between cuts that are probably not at the same tempo or key. On top of that, DJs require an encyclopedic knowledge of the music they play so they can always choose the right music at the right time, and build the dynamics of the music into an ongoing series of peaks and valleys—with each peak taking the audience higher than the previous one.

What's more, the bar is always being raised. DJs are no longer expected just to play music, but to use tempo-synched effects, and sometimes even trade off with other DJs on the same stage or integrate live musicians—or play instrumental parts themselves on top of what they're spinning. Quite a few DJs have gotten into not just using other tracks, but creating their own with sophisticated software DAWs.

Let's take a look at some of the variant strains of DJing. These apply both to mobile DJs, the closest to the popular (mis)conception of the DJ, as they typically bring their own sound systems and music and play events ranging from art openings to weddings; and to club DJs, who are attractions at dance clubs and renowned for sophisticated DJing techniques (like effects and scratching).

VINYL AND TURNTABLES

This is where it all started, and where DJs have to beat-match by listening carefully to one turntable while the other is spinning, line up the music, then release the second turntable at the precise moment to sync up properly with the current turntable, and crossfade between the two. Vinyl is where scratching originated, by moving the record back and forth under the needle. Vinyl is still popular among traditionalists, but there are many more alternatives now.

The Stanton STR8-150 is a high-torque turntable with a "skip-proof" straight tone arm, key correction, reverse, up to 50% pitch adjustment, and S/PDIF digital outputs.

DJING WITH CDS

As CDs replaced vinyl, DJs started looking for DJing solutions involving CDs. Through digital technology, it became possible to DJ with CDs, as well as use vinyl-record-like controllers to simulate the vinyl DJ experience (scratching and beat-matching) with CDs. Originally frowned on by traditional DJs, CD-based DJs developed their own skill set and figured out how to create an end result with equal validity to vinyl.

THE OTHER MP3 REVOLUTION

As MP3s replaced CDs, DJs again followed suit. But this time, the full power of the computer started being brought into play. Many MP3-based DJing packages now combine hardware controllers with computer programs that not only play back music, but include effects and let you see visual representations of waveforms to facilitate beat-matching. What's more, effects often sync to tempo and map to controls, so the DJ can add these effects in creative ways that become part of the performance.
Native Instruments' Traktor Kontrol is designed specifically as a match for their Traktor DJing software.

MP3-based DJing also meant that DJs were freed forever from carrying around records or CDs, as they could store gigabytes of music on the same laptop running the DJ program itself.

ABLETON LIVE: THE DAW FOR DJS

This article isn't really about mentioning products, but in this case, there's no other option: Live occupies a unique position as a program that straddles the line between DAW and DJ tool. It's hard to generalize about how people use Live, because different DJs have very different approaches. Some bring in complete songs and use Live's "warping" capabilities to beat-match, then crossfade between them on the fly while bringing in other music; others construct entire compositions out of loops, which they trigger, solo, mute, and arrange in real time.

Live's "Session View" is the main aspect of the program used to create DJ sets out of loops and other digital audio files.

Although a runaway favorite of DJs, Live isn't the only program used by DJs—Propellerhead Reason, Sony Acid, and Apple Logic are three other mainstream programs that are sometimes pressed into service as DJ tools.

NONE OF THE ABOVE: OTHER DJ TOOLS

A variety of musical instruments are also used for DJing. Although the best-known are probably Akai's MPC-series beatboxes, people use everything from sampling keyboards to M-Audio's Venom synth in multi-timbral mode to do, if not traditional DJing, beats-oriented music that is closer to DJing than anything else.

Akai's MPC5000 is a recent entry in the MPC series, invented by Roger Linn, which popularized the trend of DJs using "beatbox"-type instruments.

I've even used M-Audio's Venom synthesizer to do a DJ-type set by calling up Multis and soloing/muting/mixing drum, bass, and arpeggiator patterns, and playing lead lines on top of all that. Here's a video whose soundtrack illustrates this application.

If you haven't done any DJing, it's fun—and if you haven't heard good DJ sets, internet radio is a great place to find them being played out of Berlin, Paris, Holland, Bangkok, and other musical hotbeds. But be forewarned: You may find a brand new musical addiction.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
5. There's much more to mixing than just levels

By Craig Anderton

When mixing, the usual way to make an instrument stand out is to raise its level. But there are other ways to make an instrument leap out at you, or settle demurely into the background, that don't involve level in the usual sense. These options give you additional control over a mix that can be very helpful.

CHANGING START TIMES CHANGES PERCEIVED LOUDNESS

The ear is most interested in the first few hundred milliseconds of a sound, then moves on to the next sound. This may have roots that go way back into our history, when it was important to know whether a new sound was leaves rustling in the wind – or a sabre-tooth tiger about to pounce. What happens during those first few hundred milliseconds greatly affects the perception of how "loud" that signal is, as well as its relationship to other sounds happening at the same time. Given two sounds that play at almost the same time, the one that started first will appear to be more prominent.

For example, suppose you have kick drum and bass playing together. If you want the bass to be a little more prominent than the kick drum, move it ahead of the kick. To push the bass behind the kick, move it late compared to the kick. The way to move sounds depends on your recording medium. With MIDI sequencers, a track shift function will do the job. With hard disk recorders, you can simply grab a part on-screen and shift it, or use a "nudge" function (if available). Even a few milliseconds of shift can make a big difference.

CREATIVE USE OF DISTORTION

If you want to bring just a couple of instruments out from a mix, patch an exciter or "tube distortion" device set for very little distortion (depending on whether you're looking for a cleaner or grittier sound, respectively) into an aux bus during mixdown. Now you can turn up the aux send for individual channels to make them jump out from a mix to a greater or lesser degree.

TUBES AS PROCESSORS

Many members of the "anti-digital" club talk about how tube circuitry creates a mellower, warmer sound compared to solid-state devices. Whether you agree or not, one thing is clear: the sound is at the very least different. Fortunately, you can use this to your advantage if you have a digital recorder. As just one example of how to change the mix with tubes, try recording background vocals through a tube preamp, and the lead vocal through a solid-state preamp (or vice versa). Assuming quality circuitry, the "tubed" vocals will likely sound a little more "in the background" than the solid-state ones. Percussion seems to work well through tubes too, especially when you want the sound to feel less prominent compared to trap drums.

PITCH CHANGES IN SYNTH ENVELOPES

This involves doing a little programming at your synth, but the effect can be worth it. As one example, take a choir patch that has two layered chorus sounds (the dual layering is essential). If you want this sound to draw more attention to itself, use a pitch envelope to add a slight downward pitch bend to concert pitch on one layer, and a slight upward pitch bend to concert pitch on the other layer. The pitch difference doesn't have to be very much to create a more animated sound. Now remove the pitch change, and notice how the choir sits further back in the track. Click here for an audio example that plays a short choir part first without the pitch bend, then adds pitch bend.
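To get a feel for just how small these pitch offsets can be, here's a quick back-of-the-envelope sketch in Python (my own illustration, not from the article). A detune in cents maps to a frequency ratio of 2^(cents/1200); the article's trick applies the offset only during the attack via a pitch envelope, but the arithmetic for how far the pitch moves is the same. The ±5 cent figure below is just an example value.

```python
# Quick arithmetic: how much does a few cents of detune move a pitch, and how fast
# do two slightly detuned layers "beat" against each other? Values are illustrative.
def detune(freq_hz: float, cents: float) -> float:
    """Return the frequency shifted by the given number of cents."""
    return freq_hz * 2 ** (cents / 1200)

base = 440.0                                     # A4, as an example pitch
up, down = detune(base, +5), detune(base, -5)    # two layers, +/- 5 cents

print(f"up: {up:.2f} Hz, down: {down:.2f} Hz")   # ~441.27 Hz and ~438.73 Hz
print(f"beat rate: {up - down:.2f} Hz")          # ~2.5 Hz of slow movement
```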
MINI FADE-INS

With a hard disk recorder, you can do little fade-ins to make an attack less prominent, thus putting a sound more in the background. However, if you start a fade right at the beginning of a sound, you'll lose the attack altogether. Instead, extend the start of the fade to before the sound begins (Fig. 1).

Fig. 1: Starting a fade before a sound begins softens the attack without eliminating it.

After applying the fade-in operation, the audio doesn't come up from zero, and the attack will be reduced rather than removed.

VOCAL PANNING

One common technique used to strengthen voices is doubling, where a singer sings a part and then tries to duplicate it as closely as possible. The slight timing variations add a fuller effect than doubling the sound electronically. However, panning or centering these two tracks makes a big difference during mixing. When centered, the vocal lays back more in the track, and can tend to sound less full. When panned out to left and right (this needn't be an extreme amount), the sound seems bigger and more prominent. Some of this is also due to the fact that when panned together, one voice might cover up the other a bit. This doesn't happen as much when the parts are panned apart.

CHORUSING AS KRYPTONITE

If you want to weaken a signal, a chorus/flanger can help a lot if it has the option to throw the delayed signal out of phase with the dry signal. Set the chorus/flanger for a short delay (under 10 ms or so), no modulation depth, and use an out-of-phase output mix (e.g., the output control that blends straight and delayed sounds says -50 instead of +50, or there's an option to invert the signal – see Fig. 2).

Fig. 2: A chorus/flanger, when adjusted properly, can "weaken" a sound by applying comb filtering.

Alter the mix by starting with the straight sound, then slowly adding in the delayed sound. As the delayed sound's level approaches the straight sound's level, a comb-filtering effect comes into play that essentially knocks a bunch of holes in the signal's frequency spectrum (there's a short sketch of what this does after this article). If you're trying to make a piano or guitar take up less space in a track, this technique works well.

MIXING VIA EQ

EQ is a very underutilized resource for mixing. Turning the treble down instead of the volume can bring a track more into the background without having it get "smaller," just less "present." A lot of engineers go for really bright sounds for instruments like acoustic guitars, then turn down the volume when the vocals come in (or some other solo happens). Try turning the brightness down a tad instead. And of course, being able to automate EQ changes makes the process go a lot more smoothly.

Overall, when it comes to mixing you have a lot of options other than just changing levels – and implementing changes in this way can make a big difference to the "character" of a mix. Have fun adding some of the above tips to your repertoire.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
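As promised under "Chorusing as Kryptonite," here's a minimal numpy sketch (an illustration I've added, not a description of any particular plug-in) of why the out-of-phase trick weakens a sound: mixing a signal with an equal-level, inverted, delayed copy of itself produces a comb filter whose notches fall at roughly multiples of 1/delay.

```python
# Minimal sketch of the "chorus as kryptonite" comb filter: dry signal minus a
# delayed copy of itself. With a ~5 ms delay, notches land near 0 Hz, 200 Hz, 400 Hz...
import numpy as np

sr = 44100
delay_samples = int(0.005 * sr)        # ~5 ms delay, within the article's "under 10 ms"

def comb(x: np.ndarray) -> np.ndarray:
    """Mix the dry signal with an equal-level, phase-inverted, delayed copy."""
    delayed = np.concatenate([np.zeros(delay_samples), x[:-delay_samples]])
    return x - delayed                  # "-50 instead of +50": the delayed copy is inverted

# Sanity check with two sine waves: one near a notch, one between notches.
t = np.arange(sr) / sr
notched = comb(np.sin(2 * np.pi * 200 * t))    # ~200 Hz sits on a notch -> nearly gone
boosted = comb(np.sin(2 * np.pi * 100 * t))    # ~100 Hz sits between notches -> louder
print(np.max(np.abs(notched[delay_samples:])), np.max(np.abs(boosted[delay_samples:])))
```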
6. It's not the same as a double-neck, but it does let you do some of the same tricks

by Craig Anderton

I'll admit it: I've always lusted after a double-neck 6-string/12-string guitar. I love the big, rich, "chorused" sound of a 12-string, but I also like to bend notes and hit those six-string power chords. However, I don't like the weight or the cost of a double-neck, and there's a certain inconvenience—there are more strings to change, and let's not even talk about carrying a suitable case around.

So my workaround is to "undouble" the top two strings, turning the 12-string into a 10-string. Remove the E string closest to the B strings, and the B string closest to the G strings. This allows bending notes on the top two strings, but you'll still have a plenty rich sound when hitting chords. Besides, it's easy enough to add a chorus pedal afterwards and get additional richness on the strings—producing the same kind of effect on the top two strings that you get from doubling them.

Sure, it's not a real double-neck—but it gets you much of the way there, and best of all, wearing it for a couple of hours during a performance won't turn you into the hunchback of Notre Dame over time.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
7. Reason's Combinator is a great way to create a "building block" that consists of multiple modules and controls

By Craig Anderton

Reason's Combinator device (Combi for short), introduced in Reason 3, provides a way to build workstation-style combi programs with splits, velocity-switched layers, integral processing, and more—then save the combination for later recall. However, note that Combis aren't limited to creating keyboard instruments (one Combi factory patch combines Reason's "MClass" mastering processors into a mastering suite). Basically, anything you create in Reason can be "combinated."

Furthermore, four knobs and four buttons on the Combi front panel are assignable to multiple parameters. For example, if you have a stack with five synthesizers, one of the knobs could be a "master filter cutoff" control for all the synths. The knobs and buttons can be recorded as automation in the sequencer, or tied to an external controller.

CREATING A COMBINATOR PATCH

Let's look at a real-world application that uses Reason's Vocoder 512. A vocoder has two inputs: Modulator and Carrier. Both go through filter banks; the modulator filters generate control signals that control the amplitude of the equivalent filter bands that process the carrier. Thus, the modulator impresses its frequency spectrum onto the carrier, and the more filters (bands) in the filter banks, the greater the resolution (there's a short sketch of this principle in code after this article). Typically, vocoders have a mic plugged into the modulator, so speaking into it impresses speech-like characteristics onto the carrier, and thus creates "talking instrument" sounds. However, no law says you have to use a mic, and my fave vocoder setup uses a big, sustained synth sound as the carrier, and a drum machine (rather than voice) as the modulator.

The Combi is ideal for creating this setup. Rather than include the synth within the Combi, we'll design the "DrumCoder" Combi as a signal processor that accepts any Reason sound generator. The Combi includes a Vocoder, ReDrum drum machine, and Spider Audio Merger (Fig. 1). Remember to load the ReDrum with a drum kit, and create some patterns for modulating the vocoder. To hear only the patterns, set the Vocoder Dry/Wet control to dry.

Fig. 1: "DrumCoder" Combi patching. ReDrum has a stereo out but the vocoder's input is mono, so a Spider merger combines the drum outs. The Combi out goes to the hardware interface, while the input is available for plugging in a sound source.

Let's program the Combi knobs. Open the Combinator's programmer section, then click on the Vocoder label in the Combi programmer. Using Rotary 1's drop-down menu, assign it to Vocoder Decay. Assign Rotary 2 to Vocoder Shift, and Rotary 3 to HF Emphasis. Rotary 4 works well for Wet/Dry, but if you want to use it to select ReDrum patterns instead, click on ReDrum in the programmer and assign Rotary 4 to Pattern Select. I've programmed the buttons to mute particular ReDrum drums.

Now let's create a big synth stack Combi (Fig. 2) to provide a signal to the DrumCoder. Layer two SubTractors, then add a third transposed down an octave. Assign the Combi knobs to control the synth parameters of your choice; Amp Env Decay for all three is useful.

Fig. 2: Two SubTractors each feed a CF-101 Chorus. The "Bass" SubTractor feeds a UN-16 Unison. All three effect outs feed a 6:2 line mixer, which patches to the "Big SubTractor" Combi out.

TESTING, TESTING

Patch the Big SubTractor Combi out to the DrumCoder Combi in, and the DrumCoder Combi out to the appropriate audio interface output.
Start the sequencer to get ReDrum going, then play your keyboard (which should be feeding MIDI data to the Big SubTractor Combi). You'll hear the keyboard modulated by the drum beat – cool! Now diddle with some of the DrumCoder Combi front panel controls, and you'll find out why Combis rule.

RESOURCES

These files are useful for checking out the Combinator examples described in this article. DrumCoder.mp3 is an audio example of drumcoding. BigSubTractor.cmb and DrumCoder.cmb are Combis for Reason, as described in the article. DrumCoder.rns is a Reason song file that contains both Combis and sends the output to Reason's mixer output. If you don't have a keyboard handy, you can audition this patch by going to the sequencer and unmuting the Big SubTractor track, which plays a single note into the Big SubTractor instrument.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
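For readers who want to see the vocoder principle referenced above spelled out, here's a minimal channel-vocoder sketch in Python with numpy and scipy. This is my own illustration of the general idea, not how Reason's vocoder is implemented; the band count, crossover points, and envelope-follower cutoff are all arbitrary assumptions.

```python
# Minimal channel vocoder sketch: the modulator's per-band envelopes control the level
# of the carrier's matching bands. Band count and crossover points are arbitrary.
import numpy as np
from scipy import signal

def band_edges(n_bands, lo=100.0, hi=8000.0):
    """Logarithmically spaced band edges between lo and hi (Hz)."""
    return np.geomspace(lo, hi, n_bands + 1)

def envelope(x, sr, cutoff=50.0):
    """Crude envelope follower: rectify, then low-pass at ~50 Hz."""
    sos = signal.butter(2, cutoff, btype="lowpass", fs=sr, output="sos")
    return signal.sosfilt(sos, np.abs(x))

def vocode(modulator, carrier, sr, n_bands=16):
    n = min(len(modulator), len(carrier))
    modulator, carrier = modulator[:n], carrier[:n]
    out = np.zeros(n)
    edges = band_edges(n_bands)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = signal.butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        mod_band = signal.sosfilt(sos, modulator)   # analysis band
        car_band = signal.sosfilt(sos, carrier)     # synthesis band
        out += car_band * envelope(mod_band, sr)    # impress the modulator's spectrum
    return out / np.max(np.abs(out))                # normalize the result

# Usage idea (placeholder arrays): vocode(drum_loop, big_synth_pad, sr) gives a
# "DrumCoder"-style effect, with the drums chopping the sustained synth rhythmically.
```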
8. An analog tool from yesteryear transitions to digital—and learns a few new tricks in the process

By Craig Anderton

Step sequencing has aged gracefully. Once a mainstay of analog synths, step sequencing has stepped into a virtual phone booth, donned its Super Sequencer duds, and is now equally at home in the most cutting-edge dance music. In a way, it's like a little sequencer that runs inside of a bigger host sequencer, or within a musical instrument. But just because it's little doesn't mean it isn't powerful, and several DAWs include built-in step sequencers.

Early analog step sequencers were synth modules with 8 or 16 steps, driven by a low-frequency clock. Each step produced a control voltage and trigger, and could therefore trigger a note just as if you'd played a key on a keyboard. The clock determined the rate at which each successive step occurred. As a result, you could set up a short melodic sequence, or feed the control voltage to a different parameter, such as filter cutoff.

Step sequencing in a more sophisticated form was the basis of drum machines and boxes like the Roland TB-303 BassLine, and is also built into today's virtual instruments, such as Cakewalk's Rapture, and even as a module in processors like Native Instruments' Guitar Rig (Fig. 1).

Fig. 1: Guitar Rig's 16-step "Analog Sequencer" module is controlling the Pro Filter's cutoff frequency.

Reason's patch-cord-oriented paradigm makes it easy to visualize what's happening with a typical step sequencer (Fig. 2).

Fig. 2: This screen shot, cut and pasted for clarity, shows Reason's step sequencer graphic interface, as well as how it's "patched" into the SubTractor synthesizer. The upper Matrix view (second "rack" up from the bottom) shows the page generating a stepped control voltage that's quantized to a standard musical scale, as well as a gate signal; these create notes in the SubTractor and trigger its envelopes, as shown by the patch connections on the rear. The lower Matrix view is generating a control voltage curve from the Curve page, and sending this to the SubTractor synth filter. The short, red vertical strips on the bottom of either Matrix front panel view indicate where triggers occur.

THIS YEAR'S MODEL

Analog step sequencers typically had little more than a control for the control voltage level, and maybe a pushbutton to advance through the steps manually. Modern step sequencers add a lot of other capabilities, such as . . .

Pattern storage. Once you tweaked an analog step sequencer, there was nothing you could do to save its settings other than write them down. Today's sequencers usually do better. For example, the Matrix module in Reason stores four banks of 8 patterns, which can be programmed into the sequencer to play back as desired.

Controller sequencing. Step sequencers aren't just for notes anymore, and it's usually possible to generate sequences of controllers along with notes (Fig. 3).

Fig. 3: A row in Sonar's Step Sequencer triggers notes, but you can expand the row to show other controller options. This example shows velocity editing.

Variable number of steps. Freed from the restrictions of hardware, software step sequencers can provide any number of steps, although you'll seldom find more than 128—if you need more, use the host's sequencing capabilities.

Step resolution. Typically, with a 16-step sequencer, each step is a 16th note. Variable step resolution allows each step to represent a different value, like a quarter note, eighth note, 32nd note, etc.

Step quantization.
With analog sequencers, it seemed almost impossible to "dial in" particular pitches; and when you did, they'd eventually drift off pitch anyway. With today's digital versions, you can quantize the steps to particular pitches, making it easy to create melodic lines. The step sequencers in Rapture even allow for MIDI note entry, so you can play your line and the steps will conform to what you entered.

Smoothing. This "rounds off" the sharp edges of the step sequence, producing a more rounded control characteristic.

WHAT ARE THEY GOOD FOR?

Although step sequencers are traditionally used to sequence melody lines, they have many other uses.

Complex LFO. Why settle for the usual triangle/sawtooth/random LFO waveforms? Control a parameter with a step sequencer instead, and you can create pretty whacked waveforms by drawing them in the step sequencer. Apply smoothing, and the resulting waveform will sound more continuous rather than stepped.

Create rhythmic patterns with filters. Feeding the filter cutoff parameter with a step sequencer can provide serious motion to the filter sound. This is the heart of Roger Linn's AdrenaLinn processor, which imparts rhythmic effects to whatever you send into the input. If the step level is all the way down, the cutoff is all the way down and no sound comes out. Higher-level steps kick the filter open more, thus letting the sound "pulse" through.

Polyrhythms. Assuming your step sequencer has a variable number of steps, you can create some great polyrhythmic effects. For example, consider setting up a 4-step sequence (1 measure of 4/4) in one step sequencer, and a 7-step sequence (1 measure of 7/4) in a second step sequencer, each driving different parameters (e.g., filter sweeps in opposite channels, or two different oscillator pitches). They play against each other, but "meet up" every seven measures (28 beats); the short sketch after this article shows the arithmetic.

Double-time and half-time sequences. By changing step resolution in the middle of a sequence, such as switching from 8th notes to 16th notes or vice versa, it's possible to change the sequence to double-time or half-time respectively.

Complex panning. Imagine a step sequencer generating a percussive sequence by triggering a sound with a very quick decay. Now imagine a step sequencer altering the pan position for each hit – this can add an incredible amount of animation to a percussion mix.

Live performance options. The original step sequencers were "set-and-forget" type devices. But nowadays, playing with a step sequencer in real time can turn it into a bona fide instrument (ask the TB-303 virtuosos). Change pitch, alter rhythms, edit triggers . . . the results can be not only hypnotic, but inspiring.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
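As referenced under "Polyrhythms," here's a tiny Python sketch of the arithmetic (my own illustration): two step sequences of different lengths, advanced by the same clock, only realign after the least common multiple of their lengths, which for 4 and 7 steps is 28 steps, or seven measures of 4/4.

```python
# Two step sequences of different lengths, stepping on the same clock: they only
# line up on step 0 of both every lcm(4, 7) = 28 steps. Requires Python 3.9+ for math.lcm.
from math import lcm

seq_a_len, seq_b_len = 4, 7
realign = lcm(seq_a_len, seq_b_len)
print(f"Patterns realign every {realign} steps")

for step in range(realign + 1):
    a, b = step % seq_a_len, step % seq_b_len
    marker = "  <-- both back on step 0" if (a, b) == (0, 0) else ""
    print(f"clock {step:2d}: seq A step {a}, seq B step {b}{marker}")
```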
9. When you're about to lay down a vocal, one of these tips just might help save a take

By Craig Anderton

These 16 tips can be helpful while recording, but many are also suitable for live performance...check 'em out.

TO POP FILTER OR NOT TO POP FILTER?

Some engineers feel pop filters detract from a vocal, but pops detract from a vocal even more. If the singer doesn't need a pop filter, fine. Otherwise, use one (Fig. 1).

Fig. 1: Don't automatically assume you need a pop filter, but have one ready in case you do.

NATURAL DYNAMICS PROCESSING

The most natural dynamics control is great mic technique—moving closer for more intimate sections, and further away when singing more forcefully. This can go a long way toward reducing the need for drastic electronic compression.

COMPRESSOR GAIN REDUCTION

When compressing vocals, pay close attention to the compressor's gain reduction meter, as this shows the amount by which the input signal level is being reduced. For a natural sound, you generally don't want more than 6dB of reduction (Fig. 2), although of course, sometimes you want a more "squashed" effect.

Fig. 2: The less gain reduction, as illustrated here with Cakewalk's PC2A Leveler, the less obvious the compression effect.

To lower the amount of gain reduction, either raise the threshold parameter or reduce the compression ratio. (The short sketch after this article shows the arithmetic behind threshold and ratio settings.)

NATURAL COMPRESSION EFFECTS

Lower compression ratios (1.2:1 to 3:1) give a more natural sound than higher ones.

USE COMPRESSION TO TAME PEAKS WHILE RETAINING DYNAMICS

To clamp down on peaks while leaving the rest of the vocal dynamics intact, choose a high ratio (10:1 or greater) and a relatively high threshold (around –1 to –6dB; see Fig. 3).

Fig. 3: A high compression ratio, coupled with a high threshold, provides an action that's more like limiting than compression. This example shows Native Instruments' VC160.

To compress a wider range of the vocal, use a lower ratio (e.g., 1.5:1 or 2:1) and a lower threshold, like –15dB.

COMPRESSOR ATTACK AND DECAY TIMES

An attack time of 0 clamps peaks instantly, producing the most drastic compression action; use this if it's crucial that the signal not hit 0dB, yet you want high average levels. But consider using an attack time of 5 - 20ms to let through some peaks. The decay (release) setting is not as critical as attack; 100 - 250ms works well. Note: Some compressors can automatically adjust attack and decay times according to the signal passing through the system. This often gives the optimum effect, so try it first.

SOFT KNEE OR HARD KNEE?

A compressor's knee parameter, if present, controls how rapidly the compression kicks in. With soft knee, when the input exceeds the threshold, the compression ratio is less at first, then increases up to the specified ratio as the input increases. With hard knee, once the input signal crosses the threshold, it's subject to the full amount of compression. Use hard knee when controlling peaks is a priority, and soft knee for a less colored sound.

TOO MUCH OF A GOOD THING

Compression has other uses, like giving a vocal a more intimate feel by bringing up lower-level sounds. However, be careful not to use too much compression, as excessive squeezing of dynamics can also squeeze the life out of the vocals.

NOISE GATING VOCALS

Because mics are sensitive and preamps are high-gain devices, there may be hiss or other noises when the singer isn't singing. A noise gate can help tame this, but if the action is too abrupt the voice will sound unnatural.
Use a fast attack and moderate decay (around 200ms). Also, instead of having the audio totally off when the gate is closed, try attenuating the gain by around 10dB or so instead. This will still cut most of the noise, but may sound more natural.

SHIFT PITCHES FOR RICHER VOCALS

One technique for creating thicker vocals is to double the vocal line by singing along with the original take, then mixing the doubled take anywhere from 0 to –12dB behind the original. However, it isn't always possible to cut a doubled line—like when you're mixing and the vocalist isn't around. One workaround is to copy the original vocal, then apply a pitch shift plug-in (try a shift setting of –15 to –30 cents, with processed sound only—see Fig. 4).

Fig. 4: Studio One Pro's Inspector allows for easy "de-tuning."

Mix the doubled track so it doesn't compete with, but instead complements, the lead vocal.

FIXING A DOUBLED VOCAL

Sometimes an occasional doubled word or phrase won't gel properly with the original take. Rather than punching in a new section, copy the same section from the original (non-doubled) vocal. Paste it into the doubled track about 20 - 30ms late compared to the original. As long as the segment is short, it will sound fine (longer segments may sound echoed; this can work, but destroys the sense of two individual parts being sung).

REVERB AND VOCALS

Low reverb diffusion settings work well with vocals, as the sparser number of reflections prevents the voice from being overwhelmed by a "lush" reverb sound. 50 - 100ms of pre-delay also works well with voice, as the first part of the vocal can punch through without reverb.

INCREASING INTELLIGIBILITY

A slight upper midrange EQ boost (around 3 - 4kHz) adds intelligibility and "snap" (Fig. 5).

Fig. 5: Sonar's ProChannel EQ set for a slight upper midrange boost (circled in yellow). Note the extreme low-frequency rolloff (circled in red) to get rid of sounds below the range of the vocal, like handling noise.

Be very sparing; the ear is highly sensitive in this frequency range. Sometimes a slight treble boost, using shelving EQ, will give equal or better results.

NUKE THE LOWS

A really steep low-frequency rolloff (Fig. 5) that starts below the vocal range can help reduce hum, handling noise, pops, plosives, and other sounds you usually don't want as part of the vocal.

"MOTION" FILTERING

For more "animation" than a static EQ boost, copy the vocal track and run it through an envelope follower plug-in (processed sound only, bandpass mode, little resonance). Sweep this over 2.5 to 4kHz, and adjust the envelope to follow the voice. Mix the envelope-followed signal way behind the main vocal track; the shifting EQ frequency highlights the upper midrange in a dynamic, changing way. Note: If the effect is obvious, it's mixed in too high.

RE-CUT, DON'T EDIT

Remember, the title was "16 Quick Vocal Fixes." Many times, having a singer punch in over a problematic part will solve the issue a whole lot faster than spending time trying to edit it using a DAW's editing tools.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
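As referenced under "Compressor Gain Reduction," here's a short Python sketch (my own illustration) of the basic hard-knee compressor arithmetic: above the threshold, each extra dB of input produces only 1/ratio dB of output, and gain reduction is the difference. The settings plugged in below are the ones quoted in the article.

```python
# Hard-knee compressor arithmetic (static transfer curve only; no attack/release modeling).
def compress_db(input_db: float, threshold_db: float, ratio: float) -> float:
    """Return the output level in dB for a given input level."""
    if input_db <= threshold_db:
        return input_db                               # below threshold: untouched
    return threshold_db + (input_db - threshold_db) / ratio

# "Tame peaks" settings from the article: high ratio, high threshold.
peak_in = -1.0
out = compress_db(peak_in, threshold_db=-6.0, ratio=10.0)
print(f"in {peak_in} dB -> out {out:.1f} dB, gain reduction {peak_in - out:.1f} dB")

# Gentler settings: 2:1 ratio, -15 dB threshold.
out = compress_db(-5.0, threshold_db=-15.0, ratio=2.0)
print(f"in -5.0 dB -> out {out:.1f} dB, gain reduction {-5.0 - out:.1f} dB")
```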
  10. Here are some secrets behind getting those wide, spacious, pro-sounding mixes that translate well over any system By Craig Anderton We know them when we hear them: wide, spacious mixes that sound larger than life and higher than fi. A great mix translates well over different systems, and lets you hear each instrument clearly and distinctly. Yet judging by a lot of project studio demos that pass across my desk, achieving the perfect mix is not easy…in fact, it's very hard. So, here are some tips on how to get that wide open sound whenever you mix. The Gear: Keep It Clean Eliminate as many active stages as possible between source and recorder. Many times, devices set to "bypass" may not be adding any effect but are still in the signal path, which can add some slight degradation. How many times do line level signals go through preamps due to lazy engineering? If possible, send sounds directly into the recorder—bypass the mixer altogether. For mic signals, use an ultra-high quality outboard preamp and patch that directly into the recorder rather than use a mixer with its onboard preamps. Although you may not hear much of a difference when monitoring a single instrument if you go directly into the recorder, with multiple tracks the cumulative effect of stripping the signal path to its essentials can make a significant difference in the sound's clarity. But what if you're after a funky, dirty sound? Just remember that if you record with the highest possible fidelity, you can always mess with the signal later on during mixdown. The Arrangement Before you even think about turning any knobs, scrutinize the arrangement. Solo project arrangements are particularly prone to "clutter" because as you lay down the early tracks, there's a tendency to overplay to fill up all that empty space. As the arrangement progresses, there's not a lot of space left for overdubs. Here are some suggestions when tracking: Once the arrangement is fleshed out, go back and recut tracks that you cut earlier on. Try to play these tracks as sparsely as possible to leave room for the overdubs you've added. Like many others, I write in the studio, and often the song will have a slightly tentative feel because it wasn't totally solid prior to recording it. Recutting a few judicious tracks always seems to both simplify and improve the music. Try building a song around the vocalist or other lead instrument instead of completing the rhythm section and then laying down the vocals. I often find it better to record simple "placemarkers" for the drums, bass, and rhythm guitar (or piano, or whatever), then immediately get to work cutting the best possible vocal. When you re-record the rhythm section for real, you'll be a lot more sensitive to the vocal nuances. As Sun Ra once said, "Space is the place." The less music you play, the more weight each note has, and the more spaciousness this creates in the overall sound. Proofing the Tracks Before mixing, listen to each track in isolation and check for switch clicks, glitches, pops, and the like, then kill them. These low-level glitches may not seem that important, but multiply them by a couple dozen tracks, and they can definitely muddy things up. If you don't want to get too heavily into editing, you can do simple fixes by punching in and out over the part to be erased. 
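If you'd rather have the computer make a first pass at finding those clicks and pops, here's a rough Python/SciPy sketch that flags unusually large sample-to-sample jumps. The file name is a placeholder and the threshold is a crude heuristic you'd tune by ear and eye; it's a starting point for proofing, not a substitute for listening.

```python
import numpy as np
from scipy.io import wavfile

# Quick "proofing" pass: flag suspiciously large sample-to-sample jumps,
# which often correspond to clicks, pops, or bad edits.
rate, audio = wavfile.read("vocal_take_03.wav")   # placeholder file name
if audio.ndim > 1:
    audio = audio.mean(axis=1)                    # fold stereo to mono
audio = audio / np.max(np.abs(audio))             # normalize to +/-1

jumps = np.abs(np.diff(audio))
threshold = 10 * np.median(jumps) + 0.1           # crude heuristic; adjust to taste
suspects = np.where(jumps > threshold)[0]

for idx in suspects[:20]:                         # report the first 20 hits
    print(f"possible click at {idx / rate:.3f} sec")
```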
DAWs may or may not have sophisticated enough editing options to solve particular problems; for example, they'll probably let you cut and paste, but if something like sophisticated noise reduction is not available in a plug-in, this may require opening the track in a digital audio editing program, applying the appropriate processing, then bringing the track back into the DAW. Also note that some recording programs can "link" to a particular digital audio editor. In this case, all you may need to do is, for example, double-click on a track, and you're ready to edit.

Equalization

The audio spectrum has only so much space, and you need to make sure that each sound occupies its own turf without fighting with other parts. This is one of the jobs of EQ. For example, if a rhythm instrument interferes with a lead instrument, reduce the rhythm instrument's response in the part of the spectrum that overlaps the lead.

One common mistake I hear with recordings done by singer/songwriters is that they (naturally) feature themselves in the mix, and worry about "details" like the drums later. However, as drums cover so much of the audio spectrum (from the low-frequency thud of the kick to the high-frequency sheen of the cymbals), and because drums tend to be so upfront in today's mixes, it's usually best to mix the drums first, then find "holes" in which you can place the other instruments. For example, if the kick drum is very prominent, it may not leave enough room for the bass. So, boost the bass at around 800 to 1,000 Hz to bring up some of the pick noise and brightness. This is mostly out of the range of the kick drum, so the two won't interfere as much.

Try to think of the song as a spectrum, and decide where you want the various parts to sit, and their prominence (see Fig. 1). I often use a spectrum analyzer when mixing, not because your ears don't work well enough for the task, but because it provides invaluable ear training and shows exactly which instruments take up which parts of the audio spectrum. This can often alert you to a buildup of excessive level in a particular region.

Fig. 1: Different instruments sit in different portions of the spectrum (of course, this depends on lots of factors, and this illustration is only a rough approximation). Use EQ to distribute the energy from various instruments so that they use the full spectrum rather than bunch up in one specific range.

If you really need a sound to "break through" a mix, try a little bit of boost in the 1 to 3 kHz region. Just don't do this with all the instruments; the idea is to use boosts and cuts to differentiate one instrument from another. To place a sound further back in the mix, sometimes switching in a high-cut filter will do the job by "dulling" the sound somewhat—you may not even need to switch in the main EQ. Also, using a low-cut (highpass) filter on instruments that veer toward the bass range, like guitar and piano, can help trim their low end to open up more space for the all-important bass and kick drum.

Compression

When looking for the biggest mix, compression can actually make things sound "smaller" (but louder) by squeezing the dynamic range. If you're going to use compression, try applying compression on a per-channel basis rather than on the entire mix. Compression is a whole other subject (check out the article Compressors Demystified), but suffice it to say that many people have a tendency to compress until they can "hear the effect."
You want to avoid this; use the minimum amount of compression needed to tame unruly dynamic range. If you do end up compressing the stereo two-track, here's a tip to avoid getting an overly squeezed sound: Mix in some of the straight, non-compressed signal. This helps restore a bit of the dynamics, yet you still have the thick, compressed sound taking up most of the available dynamic range.

Mastering

Mastering is the Supreme Court of audio—if you can't get a ruling in your favor there, you have nowhere else to go. A pro mastering engineer can often turn muddy, tubby-sounding recordings into something much clearer and more defined. Just don't expect miracles, because no one can squeeze blood from a stone. But a good mastering job might be just the thing to take your mix to the next level, or at least turn a marginal mix into a solid one.

The main point of this article is that there is no button you can click on that says "press here for wide open mixes." A good mix is the cumulative result of taking lots of little steps, such as the ones detailed above, until they add up to something that really works. Paying attention to detail does indeed help.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  11. Don't Miss Out on the Next Big Thing in Guitar Distortion

By Craig Anderton

If you're a guitarist and you're not into multiband distortion...well, you should be. Just as multiband compression delivers a smoother, more transparent form of dynamics control, multiband distortion delivers a "dirty" sound like no other. Not only does it give a smoother effect with guitar, it's a useful tool for drums, bass, and believe it or not, program material – some people (you know who you are!) have even used it with mastering to add a distinctive, unique "edge."

As far as I know, the first example of multiband distortion was a do-it-yourself project, the Quadrafuzz, that I wrote up in the mid-'80s for Guitar Player magazine. It remains available from PAiA Electronics (www.paia.com), and is described in the book "Do It Yourself Projects for Guitarists" (BackBeat Books, ISBN #0-87930-359-X). I came up with the idea because I had heard hex fuzz effects with MIDI guitar, where each string was distorted individually, and liked the sound. But it was almost too clean; on the other hand, I wasn't a fan of all the intermodulation problems with conventional distortion. Multiband distortion was the answer. However, we've come a long way since the mid-'80s, and now there are a number of ways to achieve this effect with software.

HOW IT WORKS

Like multiband compression, the first step is to split the incoming signal into multiple frequency bands (typically three or four). These usually have variable crossover points, so each band can cover a variable frequency range. This is particularly important with drums, as it's common to have the low band zero in on the kick and distort it a bit, while leaving higher frequencies (cymbals etc.) untouched. Then, each band is distorted individually (incidentally, this is where major differences show up among units). Finally, each band will usually have a volume control so you can adjust the relative levels among bands. For example, it's common to pull back on the highs a bit to avoid "screech," or boost the upper midrange so the guitar "speaks" a little better. With guitar, you can hit a power chord and the low strings will have minimal intermodulation with the high strings, or bend a chord's higher strings without causing beating with the lower ones.

SOFTWARE PLUG-INS

The first multiband distortion plug-in was a virtual version of the Quadrafuzz, coded as a VST/DX plug-in by Spectral Design for Steinberg. Although I was highly skeptical that software could truly emulate the sound of the hardware design, fortunately a guitarist was on the design team, and he nailed the sound. The Quadrafuzz was included with Cubase SX, and is currently available from Steinberg as a "legacy" plug-in. But they took it further than the hardware version, offering variable frequency bands (the hardware version is "tuned" specifically for guitar), as well as five different distortion curves for each band, from heavy clipping to a sort of "soft knee" distortion. As a result, it's far more versatile than the original version.

A free plug-in, mda's Bandisto, is basic but a fine way to get started. It offers three bands, with two variable crossover points, and distortion as well as level controls for each of the three bands. There are two distortion modes, unipolar (a harsh sound) and bipolar, which clips both sides of the waveform and gives a smoother overall effect. While the least sophisticated of these plug-ins, you can't beat the price. Bandisto is as good a way as any to get familiar with multiband distortion.
Ohm Force's Predatohm provides up to four bands, each of which includes four controls to change the distortion's tonality as well as the channel's overall tone and character. Unique to Predatohm is a feedback option that can add an extremely aggressive edge (it's all over my "Turbulent Filth Monsters" sample CD of hardcore drum loops), as well as a master tone section. Wild, wacky, and wonderful, this plug-in has some serious attitude. Under its spell, even nylon-string guitars can become hardcore dirt machines.

iZotope's Trash uses multiband distortion as just one element of a comprehensive plug-in that also incorporates pre- and post-distortion filtering, amp cabinet modeling, multi-band compression, and delay. The number of bands is variable from one to four, but each band can have any one of 47 different algorithms. Also, there are two distortion stages, so you can emulate (for example) a fuzzbox going into an overdriven amp (however, the bands are identical for each of the two stages). The pre- and post-distortion filter options are particularly useful for shaping the distortion's tonal quality. This doesn't just make trashy sounds, it revels in them. Sophisticated trash may be an oxymoron, but in this case, it's appropriate due to the complement of highly capable modules.

ROLLING YOUR OWN

You're not constrained to dedicated plug-ins. For example, Native Instruments' Guitar Rig has enough options to let you create your own multiband distortion. A Crossover module allows splitting a signal into two bands; placing a Split module before two Crossover modules gives the required four bands. Of course, you can go nuts with more splits and create more bands. You can then apply a variety of amp and/or distortion modules to each frequency split.

Yet another option is to copy a track in your DAW as many times as you want bands of distortion. For each track, insert the filter and distortion plug-ins of your choice. One advantage of this approach is that each band can have its own aux send controls, as well as panning. Spreading the various bands from left to right (or all around you, for surround fans!) adds yet another level of satisfying mayhem. (There's a short code sketch of this split/distort/mix flow after this article.)

In terms of filtering, the simplest way to split a signal into multiple bands is to use a multiband compressor, but set to no compression and with individual bands soloed (most multiband compressors will let you solo or bypass individual bands). For example, with three tracks, you could have a high, middle, and low band, each feeding its own distortion plug-in.

Here a guitar track has been "cloned" three times in Cakewalk Sonar, with each instance feeding a multiband crossover followed by an amp sim plug-in (Native Instruments' Guitar Rig). The multiband compressors have been edited to act as crossovers, thus feeding different frequency ranges to the amp sims.

AND BEST OF ALL...

Thanks to today's fast computers, sound cards, and drivers, you can play guitar through plug-ins in near-real time, so you can tweak away while playing crunchy power chords that rattle the walls. Happy distorting!

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
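To make the do-it-yourself signal flow concrete, here's a hedged Python/SciPy sketch of the basic recipe described above: split the signal into bands, distort each band separately, then set per-band levels and sum. The filters are plain Butterworth sections rather than the matched crossovers a real multiband processor would use, and the drive and level values are arbitrary starting points.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, fs, crossovers=(200, 800, 3200)):
    """Split a signal into len(crossovers)+1 bands with 4th-order Butterworth filters."""
    edges = [0, *crossovers, fs / 2]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0:
            sos = butter(4, hi, btype="lowpass", fs=fs, output="sos")
        elif hi >= fs / 2:
            sos = butter(4, lo, btype="highpass", fs=fs, output="sos")
        else:
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfilt(sos, x))
    return bands

def multiband_distortion(x, fs, drives=(2.0, 4.0, 6.0, 3.0), levels=(1.0, 1.0, 0.8, 0.5)):
    """Distort each band separately (soft clipping), then mix with per-band levels."""
    out = np.zeros_like(x)
    for band, drive, level in zip(split_bands(x, fs), drives, levels):
        out += level * np.tanh(drive * band)   # tanh = gentle soft clipping per band
    return out / max(np.max(np.abs(out)), 1e-9)
```

Pulling back the top band's level and pushing the upper-midrange drive mirrors the "less screech, more speak" advice in the article; because each band clips on its own, low strings and high strings intermodulate far less than they would through a single clipper.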
  13. Before you play with your new gear, make sure you keep a record of its vital stats

by Craig Anderton

When you buy a piece of gear, of course the first thing you want to do is have fun with it! But think about the future: At some point, it's going to need repairs, or you might want to sell it, or it might (and I sure hope it doesn't) get stolen. As a result, it's a good idea to plan ahead and do the following.

1. Buy some kind of storage system for saving all the various things that come packed with the gear. This includes rack ears you might use someday if you rack mount it, the owner's manual or a CD-ROM containing any documentation, any supplementary "read me" pieces of paper, that audio or MIDI adapter you don't think you'll use but you'll need someday, and the like. For storage, I use stackable sets of plastic drawers you can buy inexpensively just about anywhere; for gear that comes only with paper and no bulky accessories, I have files in a filing cabinet packed with manuals and such. A more modern solution for downloadable files is to have a "manual bookshelf" on your iPad.

2. Register your purchase. Sometimes it's a hassle to do this, but it's important to establish a record for warranty work. For software, it can mean the difference between paying for an upgrade and getting one for free, because a new version came out within a short period of time after you purchased the program. I always check the "Keep me notified of updates" box if available; sure, you'll get some commercial offers and such, but you'll also be among the first to find out that an update is available.

3. Record any serial numbers, authorization codes, etc. Also record your user name and password for the company's web site, as with software, that's often what you need to access downloads and upgrades. Also record when and where you purchased the gear, and how much you paid. I keep all this information on my computer, and copy it to a USB stick periodically as backup.

4. For software, retain all firmware and software updates. If you ever have to re-install a program, it may not be possible to upgrade from, say, Version 1 to Version 3—you may need to go through Version 2 first. I keep all upgrades on a data drive in my computer, and back them up to an external hard drive.

With all this info at your fingertips, if you ever go to sell the gear, you'll be very glad you had these records. What's more, if any problems crop up with your gear, you'll be well-prepared to deal with them.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. Exploring the Art of Filthy Signal Mutation by Craig Anderton I like music with a distinctly electronic edge, but also want a human "feel." Trying to resolve these seemingly contradictory ideals has led to some fun experimentation, but one of the more recent "happy accidents" was finding out what happens when you apply heavy signal processing to multitracked drums played by a human drummer. I ended up with a sound that slid into electronic tracks as easily as a debit card slides into an ATM machine, yet with a totally human feel. This came about because Discrete Drums, who make rock-oriented sample libraries of multitracked drums (tracks are kick, snare, stereo toms, stereo room mic tracks, and stereo room ambience), received requests for a more extreme library for hip-hop/dance music. I had already started using their CDs for this purpose, and when I played some examples of loops I had done, they asked whether I'd like to do a remixed sample CD with stereo loops. Thus, the "Turbulent Filth Monsters" project was born, which eventually became a sample library (originally distributed by M-Audio, and now by Sonoma Wire Works). Although I used the Discrete Drums sample library CDs and computer-based plug-ins, the following techniques also apply to hardware processors used in conjunction with drum machines that have individual outs, or multitracked drums recorded on a multitrack recorder (or sample CD tracks bounced over to a multitrack). Try some of these techniques, and you'll create drum sounds that are as unique as a fingerprint - even if they came from a sample CD. EFFECTS AUTOMATION AND REAL TIME CONTROL Editing parameters in real time lets you "play" an effect along with the beat. This is a good thing. However, it's unlikely that you'll be able to vary several parameters at once while mixing the track down to a loop, so you'll want to record these changes as automation. Hardware signal processors can often accept MIDI controllers for automation. If so, you can sync a sequencer up to whatever is playing the tracks. Then, deploy a MIDI control surface (like the Mackie Control, Novation Nocturn, etc.) to record control data into the sequencer. Once in the sequencer, edit the controller data if needed. If the processor cannot accept control signals, then you'll need to make these changes in real time. If you can do this as you mix, fine. Otherwise, bounce the processed signal to another track so it contains the changes you want. Software plug-ins for DAWs are a whole other matter, as there are several possible automation scenarios: Use a MIDI control surface to alter parameters, while recording the data to a MIDI track (hopefully this will drive the effect on playback) Twiddle the plug-in's virtual knobs in real time, and record those changes within the host program Use non-real time automation envelopes Record data that takes the form of envelopes, which you can then edit Use no automation at all. In this case, you can send the output through a mixer and bounce it to another track while varying the parameter. This can require a little after-the-fact trimming to compensate for latency (i.e., delay caused by going through the mixer then returning back into the computer) issues. For example, with VST Automation (Fig. 1), a plug-in will have Read and Write Automation buttons. Fig. 1: Click on the Write Automation button with a VST plug-in, and when you play or record, tweaking controls will write automation into your project. 
If you click on the Write Automation button, any changes you make to automatable parameters will be written into your project. This happens regardless of whether the DAW is in record or playback mode.

PARALLEL EFFECTS

In many cases, you want any effects to be in parallel with the main drum sound. For example, if you put ring modulation or wah-wah on a kick drum, you'll lose the essential "thud" that fills out the bottom. With a hard disk recorder, parallel effects are easy to do: Copy the track and add the effects to the copy (Fig. 2).

Fig. 2: Ring Thing, a free download from DLM, is processing a copy of the drum track. The processed track is mixed in with the original drum track at a lower level.

With a hardware mixer, it's also not hard to do parallel processing because you can split the channel to be processed into two mixer inputs, and insert the effect into one of the input channel strips.

THESE ARE A FEW OF MY FAVORITE FX

Okay, we're set up for real time control and are playing back some drum tracks. Here are some of my favorite nasty drum processors.

Ring Modulator. A ring modulator has two inputs, for a carrier and modulator. The output provides the sum and difference of the two signals while suppressing the originals. For example, if you feed in a 400 Hz carrier and 1 kHz modulator, the output will consist of a 600 Hz and 1.4 kHz tone mixed together. (A tiny numerical demo of this appears at the end of this article.) Most plug-in ring modulators dedicate the carrier input to an oscillator that's part of the plug-in, with the track providing the modulator input. A hardware ring modulator - if you can find one - may include a built-in carrier waveform, or have two "open" inputs where you can plug in anything you want. The ring modulator produces a "clangorous," metallic, inharmonic sound (sounds good already, eh?). I like to use it mostly as a parallel effect on toms and kick; a snare signal, or room sounds, are complex enough that adding further complexity usually doesn't help. Having a steady carrier tone can get pretty annoying (although it has its uses for electro-type music), so I like to vary the frequency in real time. Envelope followers and LFOs - particularly tempo-synched LFOs - are good choices, although you can always tweak the frequency manually. With higher frequencies, the sound becomes kind of toy-like; lower frequencies can give more power if you zero in on the right frequency range.

Envelope-Controlled Filter. This is another favorite for individual drum sounds. Again, you'll probably want to run this in parallel unless you seek a thinner sound. High resonance settings make the sound more "dinky," whereas low resonance can give more "thud" and depth. For hardware, you'll likely need a stomp box, where envelope-controlled filters are plentiful (the Boss stomp boxes remain a favorite, although if you can find an old Mutron III or Funk Machine, those work too). For plug-ins, many guitar amp sims have something suitable (e.g., the Wah Wah module in Waves GTR Solo; see Fig. 3).

Fig. 3: This preset for Waves GTR Solo adds funkified wah effects to drum tracks. The Delay adds synched echoes, the Amp module adds some grit, and the Compressor at the output keeps levels under control.

I also like using the wah effect in IK Multimedia's AmpliTube 2 guitar amp plug-in, which is also great for...

Distortion. Adding a little bit of grit to a kick drum can make it punch through a track, but I've also added heavy distortion to the room mic sound while keeping the rest of the drums clean.
This "muddies up" the sound in an extremely rude way, yet the clean sounds running in parallel keep it from becoming a hopeless mess. Distortion doesn't do much for snares, which are already pretty dirty anyway. But it can increase the snare's apparent decay by bringing up the low-level decay at the end. Guitar amp distortion seems particularly useful because of the reduced high end, which keeps the sound from getting too "buzzy," and low end rolloff, which avoids muddiness. Guitar amp plug-ins really shine here as well; I particularly like iZotope's Trash (Fig. 4), as it's a multiband (up to four bands) distortion unit. Fig. 4: In this preset, iZotope's Trash is set up to deliver three bands of distortion. This means you can go heavy on, say, lower midrange distortion, while sprinkling only a tiny bit of dirt on the high end. It's also good for mixed loops because multiband operation prevents excessive intermodulation distortion. Feedback. And you thought this technique was just for guitarists...actually, there are a couple ways to make drums feed back. For hardware, one technique is to send an aux bus out to a graphic equalizer, then bring the graphic EQ back into the channel, and turn up the channel's aux send so some signal goes back into the EQ. Playing with individual sliders can cause feedback in the selected frequency range, but this requires a really light touch - it's easy to get speaker-busting runaway feedback. Adding a limiter in series with the EQ is a good idea. My favorite feedback technique uses the Ohm Force Predatohm plug-in, which was already shown in Fig. 1. This is a multiband distortion/compression plug-in with feedback frequency and amount controls. But the killer feature is that all parameters are automatable. You can tweak the amount control rhythmically to give a taste of feedback before it retreats. Similarly, you can alter the frequency with amount set fairly high. As the frequency sweeps through a range where there's lots of audio energy, feedback will kick in - but as it sweeps past this point, the feedback disappears. LET'S NOT FORGET THE TRULY WEIRD A vocoder (Fig. 5) is a great processor for drums, as there are several possible ways to use it. Fig. 5: The Vocoder in Ableton Live. In this example, drums are modulating a guitar's power chord. You have several choices of carriers for the vocoder (circled in green), including internal noise, the modulator (so the modulator signal feeds both the modulator and carrier ins), or pitch tracking, where the carrier is a monophonic oscillator that tracks the modulator signal's pitch. One is to use the room ambience as the carrier, and a submix of the kick, snare, and toms as the modulator. As the drums hit, they bring in sections of the ambience, which if you've been paying attention so far, is probably being run through some weird effect of its own. Another trick I did was bring in an ambience track from a different drum part and modulate that instead. You can also use the drums to "drumcode" something like a bunch of sawtooth waves, a guitar power chord, whatever. These sounds then lose their identities and become an extension of the drums. Both hardware and software vocoders are fairly common. 
Generally, the most whacked-out processors come in plug-in form, such as the GRM Tools series, the entire Ohm Force line (their Hematohm frequency shifter is awesome with drums), Waves' tasty modulation effects like the Enigma and MondoMod, PSP's Vintage Warmer (a superb general-purpose distortion device), and too many others to mention here - go online, and download some demos. Also, let's not forget some of those old friends that can learn new tricks, like flanger, chorus, pitch shifters, and delay - extreme amounts of modulation or swept delays can go beyond their stereotyped functions. Emagic's Logic is also rich in plug-ins, many of which can be subverted into creating filthy effects. The possibilities they open up are so mind-boggling I get tingly all over just thinking about it.

SO WHAT'S THE PAYOFF?

Drum loops played by a superb human drummer, with all those wonderful little timing nuances that are the reason drum machines have not taken over the world, will give your tracks a "feel" that you just can't get with drum machines. But if you add on really creative processing, the sounds will be so electronified that they'll fit in perfectly with more radical elements: synths, highly processed vocals, and technoid guitar effects. So, get creative - you'll have a good time doing it, and your recordings won't sound like a million others. What good are all these great new toys if you don't exploit them?

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
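To hear the sum-and-difference behavior described earlier for ring modulation (the 400 Hz carrier and 1 kHz modulator example), here's a tiny Python/NumPy sketch. Pure sine waves stand in for the carrier oscillator and the drum track so the math is easy to see in the spectrum.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                       # one second of audio

carrier = np.sin(2 * np.pi * 400 * t)        # 400 Hz carrier
modulator = np.sin(2 * np.pi * 1000 * t)     # 1 kHz modulator (stand-in for the drum track)

ring = carrier * modulator                   # ring modulation is just multiplication

# The product of two sines contains only the sum and difference frequencies:
# sin(a)*sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b)  ->  600 Hz and 1.4 kHz here
spectrum = np.abs(np.fft.rfft(ring))
freqs = np.fft.rfftfreq(len(ring), 1 / fs)
print(freqs[spectrum > spectrum.max() * 0.5])   # ~[600. 1400.]
```

With a real drum track as the modulator, every partial in the drum gets the same sum-and-difference treatment against the carrier, which is exactly where the clangorous, inharmonic character comes from.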
  14. Optimize Your Reverberant Space for the Best Possible Sound By Craig Anderton There's nothing like the sound of real reverb, such as what you hear in a cathedral or symphonic hall. That's because reverb is made up of a virtually infinite number of waves bouncing around within a space, with ever-changing decay times and frequency responses. For a digital reverb to synthesize this level of complexity is a daunting task, but the quality and realism of digital reverb continues to improve. Today's reverbs come in two flavors: convolution and synthetic (also called algorithmic). A convolution reverb is sort of like the reverb equivalent of a sampling keyboard, as it's based on capturing a sonic "fingerprint" of a space (called an impulse), and applying that fingerprint to a sound. Convolution reverbs are excellent at re-creating the sound of a specific acoustical space. Synthetic reverbs model a space via reverberation algorithms. These algorithms basically set up "what if" situations: what would a reverb tail sound like if it was in a certain type of room of a certain size, with a certain percentage of reflective surfaces, and so on. You can change the reverb sound merely by plugging in some different numbers—for example, by deciding the room is 50 feet square instead of 200 feet square. Even though digital synthetic reverbs don't sound exactly like an acoustic space, they do offer some powerful advantages. First, an acoustic space has one "preset"; a digital reverb offers several. Second, digital reverb is highly customizable. Not only can you use this ability to create a more realistic ambience, you can create some unrealistic—but provocative—ambiences as well. However, the only way to unlock the true power of digital reverb is to understand how its parameters affect the sound. Sure, you can just call up a preset and hope for the best. But if you want world-class reverb, you need to tweak it for the best possible match to the source material. By the way, although we'll concentrate on the parameters found in synthetic reverbs, many convolution reverbs have similar parameters. REVERB PARAMETERS The reverb effect has two main elements: The early reflections (also called initial reflections) consist of the first group of echoes that occur when sound waves hit walls, ceilings, etc. (The time before these sound waves actually hit anything is called pre-delay.) These reflections tend to be more defined and sound more like "echo" than "reverb." The decay, which is the sound created by these waves as they continue to bounce around a space. This "wash" of sound is what most people associate with reverb. Following are the types of parameters you'll find on higher-end reverbs. Lower-cost models will likely have a subset of these. Room size. This affects whether the paths the waves take while bouncing around in the virtual room are long or short. If the reverb sound has flutter (a periodic warbling effect that sounds very unrealistic), vary this parameter in conjunction with decay time (described next) for a smoother sound. Decay time. This determines how long it takes for the reflections to run out of energy. Remember that long reverb times may sound impressive on instruments when soloed, but rarely work in an ensemble context (unless the arrangement is very sparse). Decay time and room size tend to have certain "magic" settings that work well together. Preset reverbs lock in these settings so you can't make a mistake. 
For example, it can sound "wrong" to have a large room size and short decay time, or vice-versa. Having said that, though, sometimes those "wrong" settings can produce some cool effects, particularly with synthetic music where the goal isn't necessarily to create the most realistic sound. Damping. If sounds bounce around in a hall with hard surfaces, the reverb's decay tails will be bright and more defined. With softer surfaces (e.g., wood instead of concrete, or a hall packed with people), the reverb tails will lose high frequencies as they bounce around, producing a warmer sound with less "edge." A processor has a tougher time making accurate calculations for high frequency sounds, so if your reverb produces an artificial-sounding high end, just concede that fact and introduce some damping to create a warmer sound. High and low frequency attenuation. These parameters restrict the frequencies going into the reverb. If your reverb sounds metallic, try reducing the highs starting at 4—8kHz. Remember, many of the great-sounding plate reverbs didn't have much response over 5kHz, so don't fret too much about a reverb that can't do great high frequency sizzle. Having too many lows going through the reverb can produce a muddy, indistinct sound that takes focus away from the kick and bass. Try attenuating from 100—200Hz on down for a tighter low end. Early reflections diffusion (sometimes just called diffusion). This is one of the most critical reverb controls for creating an effect that properly matches the source material. Increasing diffusion pushes the early reflections closer together, which thickens the sound. Reducing diffusion produces a sound that tends more toward individual echoes. For percussive instruments, you generally want lots of diffusion to avoid the "marbles bouncing on a steel plate" effect caused by too many discrete echoes. However, for vocals and other sustained sounds, reduced diffusion can give a beautiful reverberant effect that doesn't overpower the source. With too much diffusion, the voice may lose clarity. Note that there may be a second diffusion control for the reverb decay. With less versatile reverbs, both diffusion parameters may be combined into a single control. Early reflections pre-delay. It takes a few milliseconds before sounds hit the room surfaces and start to produce reflections. This parameter, usually variable from 0 to 100ms or so, simulates this effect. Increase the parameter's duration to give the feeling of a bigger space; for example, if you've dialed in a large room size, you'll probably want to employ a reasonable amount of pre-delay. Reverb density. Lower densities give more space between the reverb's first reflections and subsequent reflections. Higher densities place these closer together. Generally, as with diffusion, I prefer higher densities on percussive content, and lower densities for vocals and sustained sounds. Early reflections level. This sets the early reflections level compared to the overall reverb decay. The object here is to balance them so that the early reflections are neither obvious, discrete echoes, nor masked by the decay. Lowering the early reflections level also places the listener further back in the room, and more toward the middle. High frequency decay and low frequency decay. Some reverbs have separate decay times for high and low frequencies. These frequencies may be fixed, or there may be an additional crossover parameter that sets the dividing line between the lows and highs. 
These controls have a huge effect on the overall reverb character. Increasing the low frequency decay creates a bigger, more "massive" sound. Increasing high frequency decay gives a more "ethereal" type of effect. An extended high frequency decay, which is generally not found in nature, can sound great on vocals as it adds more reverb to sibilants and fricatives, while minimizing reverb on plosives and lower vocal ranges. This avoids a "muddy" reverberant effect, and doesn't compete with the vocals. ONE REVERB OR MANY? I tend not to use a lot of reverb, and when I do, it's to simulate an acoustic space. Although some producers like putting different reverbs on different tracks, I prefer to insert reverb in an aux bus, and use different send amounts to place the sound source in the reverberant space (more send places the sound further back; less send places it more up front). For this type of "program material" application, I'll use fairly high diffusion coupled with a decent amount of high frequency damping. The only exceptions to this are when I want an "effect" on drums, like gated reverb, or need a separate reverb for the voice. Voices often benefit from a bright, plate-like effect with less diffusion and damping. In general I'll send some vocal into the room reverb and some into the "plate," then balance the two so that the vocal reverb blends well with the room sound. REALITY CHECK The most difficult task for a digital reverb is to create realistic first reflections. If you have a nearby space with hard surfaces like a tile bathroom, basement with hard concrete surfaces, or even just a room with a tiled floor, place a speaker in the room and feed it with an aux bus output. Then add a mic in the space to pick up the reflections. Blend in the real first reflections with the decay from a digital reverb, and the result often sounds a lot more like a real reverb chamber. DOUBLE YOUR (REVERB) PLEASURE I've yet to find a way to make a bad reverb plug-in sound good, but you can make a good reverb plug-in sound even better: "Double up" two instances of reverb (each on their own aux bus), set the parameters slightly differently to create a more "surrounding" stereo image instead of a point source, then pan one reverb somewhat more to the left and the other more to the right. You can even do this with two different reverbs. The difference may be subtle, but it can definitely improve the sound. Curious what this sounds like? Click here to download the sound of one reverb, and click here to download the sound of two reverbs combined together. The difference is very subtle (it's best to listen with headphones), but as with most tweaks involving audio, these differences add up over the course of many tracks in a multitracked production. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
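If you want to hear how decay time, pre-delay, and damping interact without reaching for a plug-in, here's a toy Python/SciPy sketch that builds a synthetic impulse response from decaying noise and convolves it with a dry track. It's deliberately crude (a single fixed low-pass stands in for real frequency-dependent damping) and is only meant to make the parameters concrete, not to replace an algorithmic or convolution reverb.

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def synthetic_ir(fs, decay_s=1.8, predelay_ms=40.0, damping_hz=5000.0):
    """Toy reverb impulse response: pre-delay, then exponentially decaying noise,
    gently low-passed to mimic high-frequency damping."""
    n = int(decay_s * fs)
    tail = np.random.randn(n) * np.exp(-np.arange(n) / (fs * decay_s / 6.9))  # ~-60 dB at decay_s
    sos = butter(1, damping_hz, btype="lowpass", fs=fs, output="sos")
    tail = sosfilt(sos, tail)
    predelay = np.zeros(int(fs * predelay_ms / 1000))
    return np.concatenate([predelay, tail])

def add_reverb(dry, fs, wet_level=0.25, **ir_kwargs):
    """Convolve the dry signal with the toy IR and blend wet behind dry."""
    ir = synthetic_ir(fs, **ir_kwargs)
    wet = fftconvolve(dry, ir)[:len(dry)]
    wet /= max(np.max(np.abs(wet)), 1e-9)
    return dry + wet_level * wet
```

Lengthening decay_s, stretching predelay_ms, or lowering damping_hz and listening to the result maps directly onto the decay time, pre-delay, and damping parameters discussed above.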
  15. It's Like Viagra for Live Performance by Craig Anderton Jennifer Hudson did it while singing the national anthem at the Super Bowl. Kiss does it. Even classical musicians playing at the President's inaugural do it. Sometimes it seems everyone uses backing tracks to augment their live sound. So why not you? Yes, it's sorta cheating. But somewhere between something innocuous like playing to a drum machine, and lip-synching to a pre-recorded vocal rather than singing yourself, there's a "sweet spot" where you can enhance what is essentially a live performance. A trio might sequence bass lines, for example, or a drummer might add pre-recorded ethnic percussion. However, you want something bullet-proof, easy to change on the fly if the audience's mood changes, and simple. I SYNC, THEREFORE I AM If a drummer's playing acoustic drums and a sequencer's doing bass parts, the drummer will have to follow the sequencer. But what happens if there's no bass to follow at the beginning of a song, or it drops out? The solution is in-ear monitors (besides, monitor wedges are so 20th century!). Assuming whatever's playing the backing part(s) has more than one output available, one channel can be an accented metronome that feeds only the in-ear monitors, while the other channel contains the backing track. If there are only two outputs the backing track will have to be mono, but that doesn't matter too much for live performance. BACKING TRACK OPTIONS The simplest backup is something that plays in the background (e.g., drum machine, pre-recorded backing track on CD, iPod, MP3 player, etc.), and you play to it. RAM-based MP3 players are super-reliable. They don't care about vibration, don't need maintenance, and have no start-up time. However, you can get CD players with enough anti-skip memory to handle tough club environments (just don't forget to clean your CD player's lens if you play smoky clubs). Another advantage of a simple stereo playback device is potential redundancy: Bringing another CD/MP3 player for backup is cheap and easy to swap out. The biggest drawback is musical rigidity. Want to take another eight bars in the solo? Forget it. A few drum machines give you some latitude (even the venerable Alesis SR-16 can switch between patterns and extend them), but with most players, what you put in is what you get out. To change song orders, just use track forward/backward to find the desired track. But the backup track player will always have to start off the song, or you'll need to hit Play at just the right time to bring it in. But these days, it's also possible to use machines designed specifically to play backing tracks - like the Boss JS-10 eBand (Fig. 1). This can play back WAV or MP3 files from an SD card (32GB will give you around 50 hours of playing time - perfect for Grateful Dead tribute bands). You can also create song files specific to the JS-10. THE LAPTOP FACTOR As many of the parts you'll use for backing tracks probably started in a computer sequencer, it makes sense to use it for your backing tracks. This is also the most flexible option; for example, if you sequence your backing track using Ableton Live (or most other hosts), you can change loop points on-the-fly and have a section repeat if you want to extend a solo (Fig. 2). Cool. It's also easy to mute or solo tracks for additional changes. Fig. 2: Move Live's loop locators (the looped portion is shown in red for clarity) on the fly to repeat a portion of music. As to reliability, though, computers can be scary. 
Few laptops are built to rock and roll specs, although there are exceptions. Connectors are flimsy, too; at least build a breakout box with connectors that patch into your computer, then plug the cables that go to the outside world into the breakout box. Secure your laptop (and the breakout box) to your work surface. Tape down any cables so no one can snag them. On the plus side, the onboard battery will carry you through if the power is iffy, or if someone trips over the AC cord while passing out drunk. Not, of course, that something like that could ever happen at a live performance...

THE iPAD OPTION

For less rigorous needs, an iPad will take care of you. In fact, the SyncInside app ($8.99 from the App Store; see Fig. 3) lets you hook up a USB interface using the camera connector kit, and can output stereo tracks as well as a click through headphones (assuming your interface is up to the task).

Fig. 3: The SyncInside iPad app was designed specifically for playing backing tracks in live performance situations.

OneTrack is another iOS app for playing backing tracks, but it works with iPhone and iPod touch as well as an iPad. iOS solutions can also be convenient because nothing's better for live performance than redundancy. If you have an iPhone and an iPad, then an app like OneTrack can live in both places - if one device dies, you're still good to go.

THE SEQUENCER SOLUTION

A reliable and very flexible solution is the built-in sequencer in keyboard workstations (e.g., Roland Fantom, Yamaha Motif, Korg Kronos, etc.). If you're already playing keyboard, hitting a Play button is no big deal. You may also be able to break a song into smaller sequences, creating a "playlist" you can trigger on the fly to adapt to changes in the audience's mood; and with a multitrack sequence, you have the flexibility to mute and mix the various tracks if you want to get fancy (Fig. 4). What's more, as most workstation keyboards have separate outs, sending out a separate click to headphones will probably be pretty simple.

Fig. 4: Yamaha's workstations have sophisticated sequencing options, as evidenced in this screen from the Motif XS.

Another option is arranger keyboards. Casio's WK-6500 isn't an arranger keyboard in the strictest sense, as it's also a pretty complete synthesizer workstation (Fig. 5).

Fig. 5: If you're looking for a keyboard-based backing track solution, arranger keyboards, and keyboards with auto-accompaniment like the Casio WK-6500, will often give you what you want.

However, it does include auto-accompaniment features and drum patterns with fills, endings, and so on. And with a 76-key keyboard, you can enhance your backing tracks with real playing. How's that for a concept? (The price is right, too - typically under $300.)

THE IMPORTANCE OF AN EXIT STRATEGY

With live backing tracks, always have an exit strategy. I once had a live act based around some, uh, unreliable gear, so I patched an MP3 player with several funny pieces of audio recorded on it into my mixer. (One piece was a "language lesson," set to music, that involved a word we can't mention here; another had a segment from the "How to Speak Hip" comedy album.) If something needed reloading, rebooting, or troubleshooting, I'd hit Play on the player. Believe me, anything beats dead air!

Craig Anderton is Editor Emeritus of Harmony Central.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
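As a rough illustration of the routing described above (click on one channel for the in-ears, backing track on the other for the house), here's a Python/SciPy sketch that renders an accented metronome and writes a two-channel file. The file names, tempo, and length are placeholders; in practice you'd render the click from your DAW so it follows the song's exact tempo map.

```python
import numpy as np
from scipy.io import wavfile

def click_track(fs, bpm, bars, beats_per_bar=4, accent_hz=1500, beat_hz=1000):
    """Accented metronome: a higher-pitched blip on beat 1 of each bar."""
    spb = int(fs * 60 / bpm)                          # samples per beat
    click = np.zeros(spb * beats_per_bar * bars)
    blip_len = int(0.02 * fs)
    t = np.arange(blip_len) / fs
    for beat in range(beats_per_bar * bars):
        hz = accent_hz if beat % beats_per_bar == 0 else beat_hz
        click[beat * spb : beat * spb + blip_len] = np.sin(2 * np.pi * hz * t) * np.hanning(blip_len)
    return click

# "backing_track.wav" is a placeholder; any WAV at the song's tempo works
fs, backing = wavfile.read("backing_track.wav")
backing = backing.astype(np.float32)
if backing.ndim > 1:
    backing = backing.mean(axis=1)                    # fold to mono for the house feed
backing /= np.max(np.abs(backing))

click = click_track(fs, bpm=120, bars=64)
n = min(len(backing), len(click))

# Channel 1: backing track for the front of house; channel 2: click for the in-ear feed
wavfile.write("backing_plus_click.wav", fs,
              np.column_stack([backing[:n], click[:n]]).astype(np.float32))
```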
  16. Can't get your bass to fit right in the mix? Then follow these tips

By Craig Anderton

If there's one instrument that messes with people's minds while mixing, it's bass. Often the sound is either too tubby, too thin, interferes too much with other instruments, or isn't prominent enough . . . yet getting a bass to sit right in a mix is essential. So, here are ten tips on how to make your bass "play nice with others" during the mixing process.

1 CHECK YOUR ACOUSTICS

Small project studio rooms reveal their biggest weaknesses below a couple hundred Hz, because the length of the bass waves can be longer than your room dimensions—which leads to bass cancellations and additions that don't tell the truth about the bass sound. Your first acoustic fix should be putting bass traps in the corners, but the better you can treat your room, the closer your speakers will be to telling the truth. If acoustic treatment isn't possible, then do a reality check with quality headphones.

2 MUCH OF THE SOUND IS IN THE FINGERS

Granted, by the time you start mixing, it's too late to fix the part—so as you record, listen to the part with mixing in mind. As just one example, fretted notes can give a tighter, more defined sound than open strings (which are often favored for live playing because they give a big bottom—but can overwhelm a recording). Also, the more a player can damp unused strings to keep them from vibrating, the "tighter" the part.

3 COMPRESSION IS YOUR FRIEND

Normally you don't want to compress the daylights out of everything, but bass is an exception, particularly if you're miking it. Mics, speakers, and rooms tend to have really uneven responses in the bass range—and all those anomalies add up.

Universal Audio's LA-2A emulation is just one of many compressors that can help smooth out response issues in a bass setup.

Compression can help even out the response, giving a smoother, rounder sound. Also, try using parallel compression—i.e., duplicate the bass track, but compress only one of the tracks. Squash one track with the compressor, then add in the dry signal for dynamics. Some compressors include a dry/wet control to make it easy to adjust a blend of dry and compressed sounds.

4 THE RIGHT EQ IS CRUCIAL

Accenting the pick/pluck sound can make the bass seem louder. Try boosting a bit around 1kHz, then work upward to about 2kHz to find the "magic" boost frequency for your particular bass and bassist. Also consider trimming the low end on either the kick or the bass, depending on which one you want to emphasize, so that they don't fight. Finally, many mixes have a lot of lower midrange buildup around 200-400Hz because so many instruments have energy in that part of the spectrum. It's usually safe to cut bass a bit in that range to leave space for the other instruments, thus providing a less muddy overall sound; sometimes cutting just below 1kHz, like around 750-900Hz, can also give more definition.

5 TUNING IS KEY

If the bass foundation is out of tune, the beat frequencies when the harmonics combine with other instruments are like audio kryptonite, weakening the entire mix. Beats within the bass itself are even worse. Tune, baby, tune! This can't be emphasized enough. If you get to mixdown and find the bass has notes that are out of tune, cheat: Many pitch correction tools intended for vocals will work with single-note bass lines.
6 PUT HIGHPASS FILTERS ON OTHER INSTRUMENTS

To make for a tighter, more defined low end overall, clean up subsonics and low frequencies on instruments that don't really have any significant low end (e.g., guitars, drums other than kick, etc.).

The QuadCurve EQ in Cakewalk Sonar's ProChannel has a 48dB/octave highpass filter that's useful for cleaning up low frequencies in non-bass tracks.

A low cut filter, as used for mics, is a good place to start. By carving out more room on the low end, there will be more space for the bass to fit comfortably in the mix. The steeper the slope, the better.

7 TWEAK THE BASS IN CONTEXT

Because bass is such an important element of a song, what sounds right when soloed may not mesh properly with the other tracks. Work on bass and drums as a pair—that's why they're called the "rhythm section"—so that you figure out the right relationship between kick and bass. But also have the other instruments up at some point to make sure the bass supports the mix as a whole.

8 BEWARE OF PHASE ISSUES

It's common to take a direct out along with a miked or amp out, then run them to separate tracks. Be careful, though: The signal going to the mic will arrive later than the direct out, because the sound has to travel through the air to get to the mic. If you use two bass tracks, bring up one track, monitor in mono (not stereo), then bring up the other track. If the volume dips, or the sound gets thinner, you have a phase issue. If you're recording into a DAW, simply slide the later track so it lines up with the earlier track. The timing difference will only be a few milliseconds (i.e., one millisecond for every foot of distance from the speaker), so you'll probably need to zoom way in in order to align the tracks properly. (A small alignment sketch appears after this article.)

9 RESPECT VINYL'S SPECIAL REQUIREMENTS

Vinyl represents a tiny amount of market share, but it's growing, and you never know when something you mix will be released on vinyl. So, if your project has even a slight chance of ending up on vinyl, pan bass to the precise center. Bass is one frequency range where there should be no stereo imaging.

10 DON'T FORGET ABOUT BASS AMP SIMS

You'll find some excellent bass amp sims in Native Instruments' Guitar Rig, Waves GTR, Line 6 POD Farm, and Peavey's ReValver, as well as the dedicated Ampeg SVX plug-in (from the AmpliTube family) offered by IK Multimedia.

IK Multimedia's Ampeg SVX gives solid bass sounds in stand-alone mode, but when used as a plug-in, can also "re-amp" signals recorded direct. This shows the Cabinet page, where you set up your "virtual mic."

These open up the option of recording direct, but then "re-amping" during the mix to get more of a live sound. You'll also have more control compared to using a "real" bass amp. Even if you don't want to use a bass sim as your primary bass sound, don't overlook the many ways they can enhance a physical bass sound.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered hundreds of tracks), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
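As a footnote to the phase tip above (tip 8), here's a hedged Python/NumPy sketch that estimates how late the miked bass track arrives relative to the DI by brute-force cross-correlation, assuming both tracks are already loaded as equal-length float arrays at the same sample rate. Most DAWs let you do this by eye or with a built-in alignment tool; this just shows the underlying idea, including the rough one-millisecond-per-foot rule of thumb.

```python
import numpy as np

def align_mic_to_di(mic, di, fs, max_offset_ms=20.0):
    """Find the lag (mic assumed later than DI) that best matches the two
    tracks, then advance the mic track so they line up."""
    max_lag = int(fs * max_offset_ms / 1000)
    best_lag, best_score = 0, -np.inf
    for lag in range(max_lag + 1):
        score = np.dot(di[:len(di) - lag], mic[lag:len(di)])   # simple correlation at this lag
        if score > best_score:
            best_lag, best_score = lag, score
    aligned = np.concatenate([mic[best_lag:], np.zeros(best_lag)])
    delay_ms = 1000 * best_lag / fs
    print(f"Mic arrives ~{delay_ms:.2f} ms late (roughly {delay_ms:.1f} ft from the speaker)")
    return aligned
```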
  17. It's Not Just about Notes, but about Emotion by Craig Anderton Vocals are the emotional focus of most popular music, yet many self-produced songs don't pay enough attention to the voice's crucial importance. Part of this is due to the difficulty in being objective enough to produce your own vocals; luckily, I've been fortunate to work with some great producers over the years, and have picked up some points to remember when producing myself. So, let's look at a way to step back and put more EDR (Emotional Dynamic Range) into your vocals. WHAT IS EDR? Dynamics isn't just about level variations, but also emotional variations. No matter how well you know the words to a song, begin by printing out or writing a copy of the lyrics. This will become a road map that guides your delivery through the piece. Reviewing a song and showing where to add emphasis can help guide a vocal performance. Grab two different colored pens, and analyze the lyrics. Underline words or phrases that should be emphasized in one color (e.g., blue), and words that are crucial to the point of the song in the other color (e.g., red). For example, here are notes on the second verse for a song I recorded a couple years ago. In the first line, "hot" is an attention-getting word and rhymes with "got," so it receives emphasis. As the song concerns a relationship that revs up because of dancing and music, "music" is crucial to the point of the song and gets added emphasis. In line 2, "feel" and "heat" get emphasis, especially because "heat" refers back to "hot," and is foreshadowing to "Miami" in the fourth line. Line 3 doesn't get a huge emphasis, as it provides the "breather" before hitting the payoff line, which includes the title of the song ("The Miami Beat"). "Dancing" has major emphasis, "Miami beat" gets less because it re-appears several times in the tune . . . no point in wearing out its welcome. By going through a song line by line, you'll have a better idea of where/how to make the song tell a story, create a flow from beginning to end, and emphasize the most important elements. Also, going over the lyrics with a fine-tooth comb is good quality control to make sure every word counts. TYPES OF EMPHASIS Emphasis is not just about singing louder. Other ways to emphasize a word or phrase are: Bend pitch. Words with bent pitch will stand out compared to notes sung "straight." For example, in line 4 above, "dancing" slides around the pitch to add more emphasis. Clipped vs. sustained. Following a clipped series of notes with sustained sounds tends to raise the emotional level. Think of Sam and Dave's song "Soul Man": The verses are pretty clipped, but when they go into "I'm a soul man," they really draw out "soul man." The contrast with the more percussive singing in the verses is dramatic. Throat vs. lungs. Pushing air from the throat sounds very different compared to drawing air from the lungs. The breathier throat sound is good for setting up a fuller, louder, lung-driven sound. Abba's "Dancing Queen" highlights some of these techniques: the section of the song starting with "Friday night and the lights are low" is breathier and more clipped (although the ends of lines tend to be more sustained). As the song moves toward the "Dancing Queen" and "You can dance" climax, the notes are more sustained and less breathy. Timbre changes. Changing your voice's timbre draws attention to it (David Bowie uses this technique a lot). 
Doubling a vocal line can make a voice seem stronger, but I suggest placing the doubled vocal back in the mix compared to the main vocal—enough to support, not compete. Vibrato. Vibrato is often overused to add emphasis. You don't need to add much; think of Miles Davis, who almost never used vibrato, electing instead to use well-placed pitch-bending. (Okay, so he wasn't a singer...but he used his trumpet in a very vocal manner.) Generally, vibrato "fades out" just before the note ends, like pulling back the mod wheel on a synthesizer. This adds a sense of closure that completes a phrase. "Better" is not always better. Paradoxically, really good vocalists can find it difficult to hit a wide emotional dynamic range because they have the chops to sing at full steam all the time. This is particularly true with singers who come from a stage background, where they're used to singing for the back row. Lesser vocalists often make up for a lack of technical skill by craftier performances, and fully exploiting the tools they have. If you have a great voice, fine—but don't end up like the guitarist who can play a zillion notes a second, but ultimately has nothing to say. Pull back and let your performance "breathe." As vocals are the primary human-to-human connection in a great deal of popular music, reflect on every word, because every word is important. If some words simply don't work, it's better to rewrite the song than rely on vocal technique or artifice to carry you through. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  18. A Cable Is Not Just a Piece of Wire . . . By Craig Anderton If a guitar player hears something that an engineer says is impossible, lay your bets on the guitarist. For example, some guitarists can hear differences between different cords. Although some would ridicule that idea—wire is wire, right?—different cords can affect your sound, and in some cases, the difference can be drastic. What's more, there's a solid, repeatable, technically valid reason why this is so. However, cords that sound very different with one amp may sound identical with a different amp, or when using different pickups. No wonder guitarists verge on the superstitious about using a particular pickup, cord, and amp. But you needn't be subjected to this kind of uncertainty if you learn why these differences occur, and how to compensate for them. THE CORDAL TRINITY Even before your axe hits its first effect or amp input, much of its sound is already locked in due to three factors: Pickup output impedance (we assume you're using standard pickups, not active types) Cable capacitance Amplifier input impedance We'll start with cable capacitance, as that's a fairly easy concept to understand. In fact, cable capacitance is really nothing more than a second tone control applied across your pickup. A standard tone control places a capacitor from your "hot" signal line to ground. A capacitor is a frequency-sensitive component that passes high frequencies more readily than low frequencies. Placing the capacitor across the signal line shunts high frequencies to ground, which reduces the treble. However the capacitor blocks lower frequencies , so they are not shunted to ground and instead shuffle along to the output. (For the technically-minded, a capacitor consists of two conductors separated by an insulator—a definition which just happens to describe shielded cable as well.) Any cable exhibits some capacitance—not nearly as much as a tone control, but enough to be significant in some situations. However, whether this has a major effect or not depends on the two other factors (guitar output impedance and amp input impedance) mentioned earlier. AMP INPUT IMPEDANCE When sending a signal to an amplifier, some of the signal gets lost along the way—sort of like having a leak in a pipe that's transferring water from one place to another. Whether this leak is a pinhole or gaping chasm depends on the amp's input impedance. With stock guitar pickups, lower input impedances load down the guitar and produce a "duller" sound (interestingly, tubes have an inherently high input impedance, which might account for one aspect of the tube's enduring popularity with guitarists). Impedance affects not only level, but the tone control action as well. The capacitor itself is only one piece of the tone control puzzle, because it's influenced by the amp's input impedance. The higher the impedance, the greater the effect of the tone control. This is why a tone control can seem very effective with some amps and not with others. Although a high amp input impedance keeps the level up and provides smooth tone control action (the downside is that high impedances are more susceptible to picking up noise, RF, and other types of interference), it also accentuates the effects of cable capacitance. A cable that robs highs when used with a high input impedance amp can have no audible effect with a low input impedance amp. THE FINAL PIECE OF THE PUZZLE Our final interactive component of this whole mess is the guitar's output impedance. 
This impedance is equivalent to sticking a resistor in series with the guitar that lowers volume somewhat. Almost all stock pickups have a relatively high output impedance, while active pickups have a low output impedance. As with amp input impedance, this interacts with your cable to alter the sound. Any cable capacitance will be accentuated if the guitar has a high output impedance, and will have less effect if the output impedance is low.

There's one other consideration: the guitar output impedance and amp input impedance interact. Generally, you want a very high amplifier input impedance if you're using stock pickups, as this minimizes loss (in particular, high frequency loss). However, active pickups with low output impedances are relatively immune to an amp's input impedance.

THE BOTTOM LINE

So what does all this mean? Here are a few guidelines.

Low guitar output impedance + low amp input impedance. Cable capacitance won't make much difference, and the capacitor used with a standard tone control may not appear to have much of an effect. Increasing the tone control's capacitor value will give a more pronounced high frequency cut. (Note: if you replace stock pickups with active pickups, keep this in mind if the tone control doesn't seem as effective as it had been.) Bottom line: you can use just about any cord, and it won't make much difference.

Low guitar output impedance + high amp input impedance. With the guitar's volume control up full, the guitar output connects directly to the amp input, so the same basic comments as above (low guitar output Z with low amp input Z) apply. However, turning down the volume control isolates the guitar output from the amp input. At this point, cable capacitance has more of an effect, especially if the volume control is a high-resistance type (greater than 250k).

High guitar output impedance + low amp input impedance. Just say no. This maims your guitar's level and high frequency response, and is not recommended.

High guitar output impedance + high amp input impedance. This is the classic '50s/'60s setup: a passive guitar feeding a tube amp. In this case, cable capacitance can have a major effect. In particular, coil cords have a lot more capacitance than standard cords, and can make a huge sonic difference. However, the amp provides minimum loading on the guitar, which, with a quality cord, helps preserve high end "sheen" and overall level.

Taking all the above into account, if you want a more consistent guitar setup that sounds pretty much the same regardless of what cable you use (and is also relatively immune to amplifier loading), consider replacing your stock pickups with active types. Alternatively, you can add an impedance converter ("buffer board") right after the guitar output (or for that matter, any effect such as a compressor, distortion box, etc. that has a high input impedance and low output impedance). This will isolate your guitar from any negative effects of high-capacitance cables or low impedance amp inputs.

If you're committed to using a stock guitar and high impedance amp, there are still a few things you can do to preserve your sound: Keep the guitar cord as short as possible. The longer the cable, the greater the accumulated cable capacitance. Cable specs will include a figure for capacitance (usually specified in "picofarads per foot"). If you make your own cables, choose cable with the lowest pF per foot, consistent with cable strength.
(Paradoxically, strong, macho cables often have more capacitance, whereas lightweight cables have less.) Avoid coil cords, and keep your volume control as high up as possible. Don't believe the hype about "audiophile cords." They may make a difference; they may not. If you don't hear any difference with your setup, then save your money and go with something less expensive.

Before closing, I should mention that this article does simplify matters somewhat because there's also the issue of reactance, and that too interacts with the guitar cable capacitance. However, I feel that the issues covered here are primarily what influence the sound, so let's leave how reactance factors into this for a later day.

Remember, if your axe doesn't sound quite right, don't immediately reach for the amp: There's a lot going on even before your signal hits the amp's input jack. And if a guitarist swears that one cord sounds different from another, that could very well be the case—however, now you know why that is, and what to do about it.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
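To put rough numbers on the "high guitar output impedance + high amp input impedance" case, you can treat the pickup's source impedance driving the total cable capacitance as a simple one-pole lowpass filter. The sketch below is only a ballpark estimate (it ignores the pickup's inductance and resonance, as noted above), and the 40k source impedance, 30pF-per-foot cable, and 20-foot length are assumed values for illustration.

```python
import math

# Assumed values -- check your own pickup and cable specs.
source_impedance_ohms = 40_000        # passive pickup with the volume control up full
cable_capacitance_pf_per_ft = 30.0
cable_length_ft = 20.0

c_total_farads = cable_capacitance_pf_per_ft * cable_length_ft * 1e-12
f_cutoff_hz = 1.0 / (2 * math.pi * source_impedance_ohms * c_total_farads)
print(f"-3dB point: about {f_cutoff_hz / 1000:.1f} kHz")
# About 6.6 kHz with these numbers -- an audible treble rolloff. Halving the cable
# length doubles the cutoff; a low-impedance (active) pickup pushes it far higher.
```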
  19. Vocoders Used to Be Expensive and Super-Complex - But No More by Craig Anderton Heard any robot voices lately? Of course you have, because vocoded vocals are all over the place, from commercials to dance tracks. Vocoders have been on hits before, like Styx’s “Mr. Roboto” and Lipps Inc.’s “Funky Town,” but today they’re just as likely to be woven in the fabric of a song (Daft Punk, Air) as being applied as a novelty effect. So, let's take a look at vocoder basics, and how to make them work for you. VOCODER BASICS Vocoders are best known for giving robot voice sounds, but they have plenty of other uses. A vocoder, whether hardware or virtual, has two inputs: instrument (the “carrier” input), and mic (the “modulation” input). As you talk into the mic, the vocoder analyzes the frequency bands where there’s energy, and opens up corresponding filters that process the carrier input. This impresses your speech characteristics onto the carrier’s signal. Clockwise from top: Reason BV512, Waves Morphoder, Ableton Live Vocoder, Apple Logic Evoc 20 Some programs, including Cubase, Logic, Sonar, Reason, and Ableton Live bundle in vocoders. However, until recently, the ability to sidechain a second input to provide the modulator (or carrier) was difficult to implement. Two common workarounds are to include a sound generator within the plug-in and use the input for the mic, which is the approach taken by Waves’ Morphoder; or, insert the plug-in in an existing audio track, and use what’s on the track as the carrier. VOCODER APPLICATIONS Talking instruments. To create convincing “talking instrument” effects, use a carrier signal rich in harmonics, with a complex, sustained waveform. Remember, even though a vocoder is loaded with filters, if nothing’s happening in the range of a given filter, then that filter will not affect the sound. Vocoding an instrument such as flute gives very poor results; a guitar will produce acceptable vocoding, but a distorted guitar or big string pad will work best. Synthesizers generate complex sounds that are excellent candidates for vocoding. Choir effects. To obtain a convincing choir effect, call up a voice-like program (e.g, pulse waveform with some low pass filtering and moderate resonance, or sampled choirs) with a polyphonic keyboard, and use this for the carrier. Saying “la-la,” “ooooh,” “ahhh,” and similar sounds into the mic input, while playing fairly complex chords on the synthesizer, imparts these vocal characteristics to the keyboard sound. Adding a chorus unit to the overall output can give an even stronger choir effect. Backup vocals. Having more than one singer in a song adds variety, but if you don’t have another singer at a session to create “call-and-response” type harmonies, a vocoder might be able to do the job. Use a similar setup to the one described above for choir effects, but instead of playing chords and saying “ooohs” and “ahhhhs” to create choirs, play simpler melody or harmony lines and speak the words for the back-up vocal. Singing the words (instead of speaking them) and mixing in some of the original mic sound creates a richer effect. Cross-synthesis. No law says you have to use voice with vocoder. For a really cool effect, use a sustained sound like a pad for the carrier, and drums for the modulator. The drums will impart a rhythmic, pulsing effect to the pad. Crowd sounds. Create the sound of a chanting crowd (think political rally) by using white noise as the carrier. This multiplies your voice into what sounds like dozens of voices. 
This technique also works for making nasty horror movie sounds, because the voice adds an organic quality, while the white noise contributes an otherworldly, ghostly component.

Don’t forget to tweak. Some vocoders let you change the number of filters (bands) used for analysis; more filters (e.g., 16 and above) give higher intelligibility, whereas fewer filters create a more “impressionistic” sound. Also, many speech components that contribute to intelligibility are in the upper midrange and higher frequencies, yet few instruments have significant amounts of energy in these parts of the frequency spectrum. Some vocoders include a provision to inject white noise (a primary component of unpitched speech sounds) into the instrument signal to allow “S” and similar sounds to appear at the output. Different vocoders handle this situation in different ways.

The days when vocoders were noisy, complicated, expensive, and difficult-to-adjust hardware boxes are over. If you haven't experimented with a software vocoder lately, you just might be in for a very pleasant surprise.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
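If you want to see the analyze-and-filter process from the vocoder basics section in concrete form, here is a bare-bones channel vocoder sketch in Python. It is a teaching illustration only, not the algorithm of any plug-in mentioned above; the band count, filter order, and 50Hz envelope smoothing are arbitrary starting points.

```python
import numpy as np
from scipy.signal import butter, lfilter

def band_envelope(x, sr, cutoff_hz=50.0):
    # Rectify the band signal, then lowpass it to get a slowly varying control envelope.
    b, a = butter(2, cutoff_hz, btype="low", fs=sr)
    return lfilter(b, a, np.abs(x))

def vocode(modulator, carrier, sr, bands=16, fmin=80.0, fmax=8000.0):
    # Impress the modulator's (voice's) spectral envelope onto the carrier (pad, guitar, etc.).
    n = min(len(modulator), len(carrier))
    modulator, carrier = modulator[:n], carrier[:n]
    edges = np.geomspace(fmin, fmax, bands + 1)        # log-spaced band edges
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="bandpass", fs=sr)
        mod_band = lfilter(b, a, modulator)
        car_band = lfilter(b, a, carrier)
        out += car_band * band_envelope(mod_band, sr)  # each carrier band follows the voice's energy
    return out / (np.max(np.abs(out)) + 1e-12)         # normalize to avoid clipping
```

As the talking-instruments tip suggests, a harmonically rich carrier such as a distorted guitar or string pad gives this kind of structure far more to work with than a flute-like tone would.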
  20. Yes, you really can use multiple audio interfaces simultaneously with a single computer by Craig Anderton

You have a go-to interface that’s great, but then one day you run out of mic inputs. Too bad your computer can’t address more than one interface at a time . . . Or can it? Actually, both Macintosh and Windows computers can let you use more than one interface at a time, if you know the rules.

For Windows, although with rare exceptions you can’t aggregate ASIO devices, you can aggregate interfaces that work with WDM/KS, WASAPI, or WaveRT drivers. Just select one of these drivers in your host software, and all the I/O will appear as available inputs and outputs in your application (Fig. 1).

Fig. 1: Sonar X1 is set to WDM/KS, so all the I/O from a Roland Octa-Capture and DigiTech’s iPB-10 effects processor become available.

With the Mac, you can aggregate Core Audio interfaces. Open Audio MIDI Setup (located in Applications/Utilities), and choose Show Audio Window. Click the little + sign in the lower left corner; an Aggregate Device box appears. Double-click it to change its name ("Apollo+MBobMini" in Fig. 2). You'll see a list of available I/O. Check the interfaces you want to aggregate, then check "Resample" for the secondary interface or interfaces (Fig. 2); this tells the computer to treat your primary, or unchecked, interface as the clock source. Now all input and output options will be available in your host program.

Fig. 2: Universal Audio's Apollo is being supplemented by an Avid Mbox Mini.

If you encounter any problems, just go to the Audio MIDI Setup program’s Help, and search on Aggregation. Choose Combining Audio Devices, and follow the directions.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  21. Sometimes Little Improvements Add Up To Big Improvements By Craig Anderton

The whole is equal to the sum of its parts…as anyone who ever used analog tape will attest. Who can forget that feeling of hearing yet another contribution to the noise floor whenever you brought up a fader, as one more track of tape hiss worked its way to the output?

With digital recording, tape hiss isn’t an issue any more. But our standards are now more stringent, too. We expect 24-bit resolution, and noise floors that hit theoretical minimums. As a result, every little extra dB of noise, distortion, or coloration adds up, especially if you’re into using lots of tracks. A cheapo mic pre’s hiss might not make a big difference if it’s used only to capture a track of the lead singer in the punk band Snot Puppies of Doom, but if you’re using it to record twelve tracks of acoustic instruments, you will hear a difference.

I’ve often stated that all that matters in music is the emotional impact, but still, it’s even better when that emotional impact is married with pristine sound quality. So, let’s get out the "audio magnifying glass" (even though they don’t work for mixing, headphones are great when you need to really pay attention to details on a track), and clean up our tracks … one dB at a time.

PREVENTING THE NOISE PROBLEM

Even in today’s digital world, there’s hiss from converters, guitar amps, preamps, direct boxes, instrument outputs, and more. The individual contribution in one track may not be much, but when low level signals aren’t masked by noise, you’ll hear a much more "open" sound and improved soundstage. (And if you don’t think extremely low levels of noise make that much of a difference, consider dithering—it’s very low level, but has a significant effect on our perception of sound.)

The first way to reduce noise is prevention. Maybe it’s worth spending the bucks on a better mic pre if it’s going to shave a few dB off your noise figure. And what about your direct box? If it’s active, it might be time for an upgrade there as well. If it’s not active but transformer-based instead, then that’s an issue in itself as the transformer may pick up hum (first line of defense: re-orient it). Here are some additional tips:

Gain-staging (the process of setting levels as a signal travels from one stage to the next, so that one stage neither overloads the next nor feeds it too little signal) is vital to minimizing noise, as you want to send the maximum level short of distortion to the next stage. But be careful. Personally, I’d rather lose a few dB of noise figure than experience distortion caused by an unintentional overload.

Crackles can be even more problematic than hiss. Use contact cleaner on your patch cord plugs, jack contacts, and controls. Tiny crackles can be masked during the recording process by everything else that’s making noises, but may show up under scrutiny during playback. In a worst-case situation, the surfaces of dissimilar metals may have actually started to crystallize. Not only can that generate noise, but these crystals are all potential miniature crystal radios, which can turn RFI into audio that gets pumped into the connection. Not good.

Make sure any unnecessary mixer channels are muted when you record. Every unmuted channel is another potential source of noise.

Unless you have a high-end sound card like the Lynx line, avoid sending any analog signals into your computer. Use digital I/O and a separate, remote converter.
Although most people use LCD monitors these days, if there's a CRT on while you’re recording, don’t forget that it’s pumping out a high frequency signal (around 15kHz). This can get into your mics. Turn it off while recording.

When recording electric guitar, pickups are prone to picking up hum and other interference. Try various guitar positions until you find the one that generates the minimum amount of noise. If you have a Line 6 Variax, consider yourself fortunate—it uses piezo pickups, so it won’t pick up hum.

No matter how hard you try, though, some noise is going to make it into your recorded tracks. That’s when it’s time to bring out the heavy artillery: noise removal, noise gating, and noise reduction.

DEALING WITH NOISE AFTER THE FACT

With a typical hard disk-based DAW, you have three main ways to get rid of constant noise (hiss and some types of hum): noise gating, noise removal, and noise reduction.

Noise gating is the crudest method of removing noise. As a refresher, a noise gate has a particular threshold level. Signals above this level pass through unimpeded to the gate output. Signals below this threshold (e.g., hiss, low level hum, etc.) cause the gate to switch off, so it doesn’t pass any audio and mutes the output. Early noise gates were subject to a variety of problems, like "chattering" (i.e., as a signal decayed, its output level would criss-cross over the threshold, thus switching the gate on and off rapidly). Newer gates (Fig. 1) have controls that can specify attack time so that the gate ramps up instead of slamming on, decay time controls so the gate shuts off more smoothly, and a "look-ahead" function so you can set a bit of attack time yet not cut off initial transients.

Fig. 1: The Gate section of Cubase’s VST Dynamics module (the compressor is toward the right) includes all traditional functions, but also offers gating based on frequency so that only particular frequencies open the gate. This makes it useful as a special effect as well as for reducing noise. In this case, the kick is being isolated and gated.

Noise gates are effective with very low level signals and tracks with defined "blocks" of sound with noise in between, but the noise remains when signal is present—it’s just masked. (For more about noise gates, check out the article "Noise Gates Don't Have to Be Boring.")

Manual noise removal is essentially a by-hand version of noise gating (Fig. 2). It’s a far more tedious process, but can lead to better results with "problem" material.

Fig. 2: The upper vocal track (shown in Cakewalk Sonar) has had the noise between phrases removed manually, with fades added; the lower track hasn't been processed yet.

With noise removal, you cut out the quiet spaces between the audio you want to keep, adding fades as desired to fade in or out of the silence, thus making any transitions less noticeable. However, doing this for all the tracks in a tune can be pretty time-consuming; in most cases, noise gating will do an equally satisfactory job.

Noise reduction subtracts the noise from a track, rather than simply masking it. Because noise reduction is a complex process, you’ll usually need to use a stand-alone application like Adobe Audition (Fig. 3), Steinberg WaveLab, Sony Sound Forge, or iZotope RX2, or a dedicated plug-in such as those from Waves.

Fig. 3: Sound Forge's Noise Reduction tools have been around for years, but remain both effective and easy to use.
With stand-alone programs, you’ll likely have to export the track in your DAW as a separate audio file, process it in the noise reduction program, then import it back into your project. Also, you'll generally need a sample of the noise you’re trying to remove (called a "noise print," in the same sense as a fingerprint). It need only be a few hundred milliseconds, but should consist solely of the signal you’re trying to remove, and nothing else. Once you have this sample, the program can mathematically subtract it from the waveform, thus leaving a de-noised waveform.

However, some noise reduction algorithms don’t need a noise print; instead, they use filtering to remove high frequencies when only hiss is present. This is related to how a noise gate works, except that it’s a more evolved way to remove noise as (hopefully) only the frequencies containing noise are affected.

"Surgical" removal makes it possible to remove specific artifacts, like a finger squeak on a guitar string, or a cough in the middle of a live performance. The main way to do this is with a spectral view that shows not only amplitude and time, but also frequency. This makes it easy to pick out something like a squeak or cough from the music, then remove it (Fig. 4).

Fig. 4: Adobe Audition's spectral view and "Spot Healing Brush Tool" make it easy to remove extraneous sounds. Here, a cough has been isolated and selected for removal. Audition does elaborate background copying and crossfading to "fill in" the space caused by the removal.

While this all sounds good in theory—and 90% of the time, it’s good in practice too—there are a few cautions.

Noise reduction works best on signals that don’t have a lot of noise. Trying to take out large chunks of noise will inevitably remove some of the audio you want to keep.

Use the minimum amount of noise reduction needed to achieve the desired result. 6 to 10dB is usually pretty safe. Larger values may work, but this may also add some artifacts to the audio. Let your ears be the judge; like distortion, I find audible artifacts more objectionable than a little bit of noise.

You can sometimes save presets of particular noise prints, for example, of a preamp you always use. This lets you apply noise reduction to signals even if you can’t find a section with noise only.

In some cases you may obtain better results by running the noise reduction twice with light noise removal rather than once with more extensive removal.

So is all this effort worth it? I think you’ll be pretty surprised when you hear what happens to a mix when the noise contributed by each track is gone. Granted, it’s not the biggest difference in the world, and we’re talking about something that happens at a very low level. But minimizing even low-level noise can lead to a major improvement in the final sound … like removing the dust from a fine piece of art.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
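To make the "noise print" idea concrete, here is a stripped-down spectral-subtraction sketch: measure the average spectrum of a noise-only region, then subtract a scaled version of it from the whole track's magnitude spectrum. It illustrates the general principle rather than the algorithm of any product named above; the frame size, 10dB reduction amount, and five-percent spectral floor are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def denoise(audio, noise_print, sr, reduction_db=10.0, nperseg=2048):
    # Average magnitude spectrum of the noise-only sample (the "noise print").
    _, _, noise_spec = stft(noise_print, fs=sr, nperseg=nperseg)
    noise_mag = np.mean(np.abs(noise_spec), axis=1, keepdims=True)

    _, _, spec = stft(audio, fs=sr, nperseg=nperseg)
    mag, phase = np.abs(spec), np.angle(spec)

    amount = 1.0 - 10 ** (-reduction_db / 20.0)   # 10dB of reduction removes ~68% of the noise level
    floor = 0.05 * mag                            # keep a little of the original to limit artifacts
    cleaned_mag = np.maximum(mag - amount * noise_mag, floor)

    _, cleaned = istft(cleaned_mag * np.exp(1j * phase), fs=sr, nperseg=nperseg)
    return cleaned[:len(audio)]
```

As with the commercial tools, modest settings (and, if necessary, two gentle passes) leave fewer artifacts than one aggressive pass.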
  22. When it comes to recording, let’s get physical By Craig Anderton Until digital recording appeared, every function in analog gear had an associated control: Whether you were tweaking levels, changing the amount of EQ gain, or switching a channel to a particular bus, a physical device controlled that function. Digital technology changed that, because functions were no longer tied to physical circuits, but virtualized as a string of numbers. This gave several advantages: Controls are more expensive than numbers, so virtualizing multiple parameters and controlling them with fewer controls lowered costs. Virtualization also saved space, because mixers no longer had to have one control per function; they could use a small collection of channel strips—say, eight—that could bank-switch to control eight channels at a time. But you don’t get something for nothing, and virtualization broke the physical connection between gear and the person operating the gear. While people debate the importance of that physical connection, to me there’s no question that having a direct, physical link between a sound you’re trying to create and the method of creating that sound is vital—for several reasons. THE ZEN OF CONTROLLERS If you’re a guitar player, here’s a test: Quick—play an A#7 chord. Okay, now list the notes that make up the chord, lowest pitch to highest. Chances are you grabbed the A#7 instantly, because your fingers—your “muscle memory”—knew exactly where to go. But you probably had to think, even if only for a second, to name all the notes making up the chord. Muscle memory is like the DMA (Direct Memory Access) process in computers, where an operation can pull data directly from memory without having to go through the CPU. This saves time, and lets the CPU concentrate on other tasks where it truly is needed. So it is with controllers: When you learn one well enough so that your fingers know where to go and you don’t have to parse a screen, look for a particular control, click it with your mouse, then adjust it, the recording process become faster and more efficient. IMPROVING DAW WORKFLOW Would you rather hit a physical button labeled “Record” when it was time to record, or move your mouse around onscreen until you find the transport button and click on it? Yeah, I thought so. The mouse/keyboard combination was never designed for recording music, but for data entry. For starters, the keyboard is switches-only—no faders. The role of changing a value over a range falls to the mouse, but a mouse can do only one thing at a time—and when recording, you often want to do something like fade one instrument down while you fade up another. Sure, there are workarounds: You can group channels and offset them, or set up one channel to increase while the other decreases, and bind them to a single mouse motion. But who wants to do that kind of housekeeping when you’re trying to be creative? Wouldn’t you rather just have a bunch of faders in front of you, and control the parameters directly? Another important consideration is that your ears do not exist in a vacuum; people refer to how we hear as the “ear/brain combination,” and with good reason. Your brain needs to process whatever enters your ears, so the simple act of critical listening requires concentration. Do you really want to squander your brain’s resources trying to figure out workarounds to tasks that would be easy to do if you only had physical control? No, you don’t. But . . . 
PROBLEM 1: JUST BECAUSE SOMETHING HAS KNOBS DOESN’T GUARANTEE BETTER WORKFLOW Some controllers try to squeeze too much functionality into too few controls, and you might actually be better off assigning lots of functions to keyboard shortcuts, learning those shortcuts, then using a mouse to change values. I once used a controller for editing synth parameters (the controller was not intended specifically for synths, which was part of the problem), and it was a nightmare: I’d have to remember that, say, pulse width resided somewhere on page 6, then remember which knob (which of course didn’t have a label) controlled that parameter. It was easier just to grab a parameter with a mouse, and tweak. On the other hand, a system like Native Instruments’ Kore is designed specifically for controlling plug-ins, and arranges parameters in a logical fashion. As a result, it’s always easy to find the most important parameters, like level or filter cutoff. PROBLEM 2: IT GETS WORSE BEFORE IT GETS BETTER So do you just get a controller, plug it in, and attain instant software/hardware nirvana? No. You have to learn hardware controllers, or you’ll get few benefits. If you haven’t been using a controller, you’ve probably developed certain physical moves that work for you. Once you start using a controller, those all go out the window, and you have to start from scratch. If you’re used to, say, hitting a spacebar to begin playback, it takes some mental acclimation to switch over to a dedicated transport control button. Which begs the question: So why use the transport control, anyway? Well, odds are the transport controls will have not just play but stop, record, rewind, etc. Once you become familiar with the layout, you’ll be able to bounce around from one transport function to another far more easily than you would with a QWERTY keyboard set up with keyboard shortcuts. Think of a hardware controller as a musical instrument. Like an instrument, you need to build up some “muscle memory” before you can use it efficiently. I believe that the best way to learn a controller is to go “cold turkey”: Forget you have a mouse and QWERTY keyboard, and use the controller as often as possible. Over time, using it will become second nature, and you’ll wonder how you got along without it. But realistically, that process could take days or even months; think of spending this time as an investment that will pay off later. DIFFERENT CONTROLLER TYPES There are not just many different controllers, but different controller product “families.” The following will help you sort out the options, and choose a controller that will aid your workflow rather than hinder it. Custom controllers. These are designed to fit specific programs or software like a glove; examples include Ableton's Push controller, Roland’s V-Studio series (including the 700, 100, and 20 controllers), Steinberg’s Cubase-friendly series of CMC controllers, and the like. The text labels are usually program-specific, the knobs and switches have (hopefully) been laid out ergonomically, and the integration between hardware and software is as tight as Tower of Power’s rhythm section. If a control surface was made for a certain piece of software, it’s likely that will be the optimum hardware/software combination. 
Ableton's Push controller is an ideal match for Live 9.

Softube's Console 1 is a different type of animal—it pairs software that emulates an analog channel strip and inserts in a DAW with a hardware controller that provides a traditional, analog-style one-function-per-control paradigm. The control surface itself provides visual feedback, but if you want more detail, you can also see the parameters on-screen.

Softube's Console 1

General-purpose DAW controllers. While designed to be as general-purpose as possible, these usually include templates for specific programs. They typically include hardware functions that are assumed to be “givens,” like tape transport-style navigation controls, channel level faders, channel pan pots, solo and mute, etc. A controller with tons of knobs/switches and good templates can give very fluid operation. Good examples of this are the Mackie Control Universal Pro (which has become a standard—many programs are designed to work with a Mackie Control and many hardware controllers can emulate the way a Mackie Control works), Avid Euphonix Artist series controllers (shown in the opening of this article), and Behringer BCF2000.

Mackie Control Universal Pro

There are also “single fader” hardware controllers (e.g., PreSonus FaderPort and Frontier Design Group AlphaTrack) which, while compact and inexpensive, take care of many of the most important control functions you’ll use.

Digital mixers. For recording, a digital mixer can make a great hands-on controller if both it and your audio interface have a multi-channel digital audio port (e.g., ADAT optical “light pipe”). You route signals out digitally from the DAW, into the mixer, then back into two DAW tracks for recording the stereo mix. Rather than using the digital mixer to control functions within the program, it actually replaces some of those functions (particularly panning, fader-riding, EQ, and channel dynamics). As a bonus, some digital mixers include a layer that converts the faders into MIDI controllers suitable for controlling virtual synths, effects boxes, etc.

Synthesizers/master keyboards. Many keyboards, like the Yamaha Motif series and Korg Kronos, as well as master controllers from M-Audio, Novation, CME, and others build in control surface support. But even those without explicit control functions can sometimes serve as useful controllers, thanks to the wheels, data slider(s), footswitch, sustain switch, note number, and so on. As some sequencers allow controlling functions via MIDI notes, the keyboard can provide those while the knobs control parameters such as level, EQ, etc.

Arturia's KeyLab 49 is part of a family of three keyboard controllers that also serve as control surfaces.

Really inexpensive controllers. Korg's nanoKONTROL2 is a lot of controller for the money; it's basic, with volume, pan, mute, solo, and transport controls, but it's also Mackie-compatible. But if you're on an even tighter budget, remember that old drum machine sitting in the corner that hasn’t been used in the last decade? Dust it off, find out what MIDI notes the pads generate, and use those notes to control transport functions—maybe even arm record, or mute particular track(s). A drum machine can make a compact little remote if, for example, you like recording guitar far away from the computer monitor.

The “recession special” controller. Most programs offer a way to customize QWERTY keyboard commands, and some can even create macros.
While these options aren’t as elegant as using dedicated hardware controllers, tying common functions to key commands can save time and improve work flow. Overall, the hardware controllers designed for specific software programs will almost certainly be your best bet, followed by those with templates for your favorite software. But there are exceptions: While Yamaha’s Motif XS and XF series keyboards can’t compete with something like a Mackie Control, they serve as fine custom controllers for Cubase AI—which might be ideal if Cubase is your fave DAW. Now, let’s look at some specific issues involving control surfaces. MIDI CONTROL BASICS Most hardware control surfaces use MIDI as their control protocol. Controlling DAWs, soft synths, processors, etc. is very similar to the process of using automation in sequencing programs: In the studio, physical control motions are recorded as MIDI-based automation data, which upon playback, control mixer parameters, soft synths, and signal processors. If you’re not familiar with continuous controller messages, they’re part of the MIDI spec and alter parameters that respond to continuous control (level, panning, EQ frequency, filter cutoff, etc.). Switch controller messages have two states, and cover functions like mute on/off. There are 128 numbered controllers per MIDI channel. Some are recommended for specific functions (e.g., controller #7 affects master volume), while others are general-purpose controllers. Controller data is quantized into 128 steps, which gives reasonably refined control for most parameters. But for something like a highly resonant filter, you might hear a distinct change as a parameter changes from one value to another. Some devices interpolate values for a smoother response. MAPPING CONTROLS TO PARAMETERS With MIDI control, the process of assigning hardware controllers to software parameters is called mapping. There are four common methods: Novation's low-cost Nocturn controller features their Automap protocol, which identifies plug-in parameters, then maps them automatically. In this screen shot, the controls are being mapped to Solid State Logic's Drumstrip processor for drums. “Transparent” mapping. This happens with controllers dedicated to specific programs or protocols: They’re already set up and ready to go, so you don’t have to do any mapping yourself. Templates. This is the next easiest option. The software being controlled will have default controller settings (e.g., controller 7 affects volume, 10 controls panning, 72 edits filter cutoff, etc.), and loading a template into the hardware controller maps the controls to particular parameters. MIDI learn. This is almost as easy, but requires some setup effort. At the software, you select a parameter and enable “MIDI learn” (typically by clicking on a knob or switch—ctrl-click on the Mac, right-click with Windows). Twiddle the knob you want to have control the parameter; the software recognizes what’s sent and maps it. Fixed assignments. In this case, either the controller generates a fixed set of controllers, and you need to edit the target program to accept this particular set of controllers; or, the target software will have specific assignments it wants to see, and you need to program your controller to send these controllers. THE “STAIR-STEPPING” ISSUE Rotating a “virtual front panel” knob in a soft synth may have higher resolution than controlling it externally via MIDI, which is limited to 128 steps of resolution. 
In practical terms, this means a filter sweep that sounds totally smooth when done within the instrument may sound “stair-stepped” when controlled with an external hardware controller. While there’s no universal workaround, some synthesizers have a “slew” or “lag” control that rounds off the square edges caused by transitioning from one level to another.

RECONCILING PHYSICAL AND VIRTUAL CONTROLS

Controllers with motorized faders offer the advantage of having the physical control always track what the corresponding virtual control is doing. But with any controller that doesn’t use motorized faders, one of the big issues is punching in when a track already contains control data. If the physical position of the knob matches the value of the existing data, no problem: Punch in, grab the knob, and go. But what happens if the parameter is set to its minimum value, and the knob controlling it is full up? There are several ways to handle this.

Instant jump. Turn the knob, and the parameter jumps immediately to the knob’s value. This can be disconcerting if there’s a sudden and unintended change—particularly live, where you don’t have a chance to re-do the take!

Match-then-change. Nothing happens when you change the physical knob until its value matches the existing parameter value. Once they match, the hardware control takes over. For example, suppose a parameter is at half its maximum value, but the knob controlling the parameter is set to minimum. As you turn up the knob, nothing happens until the knob matches the parameter value. Then as you continue to move the knob, the parameter value follows along. This provides a smooth transition, but there may be a lag between the time you start to change the knob and when it matches the parameter value.

Add/subtract. This technique requires continuous knobs (i.e., data encoder knobs that have no beginning or end, but rotate continuously). When you call up a preset, regardless of the knob position, turning it clockwise adds to the preset value, while turning it counter-clockwise subtracts from the value.

Motorized faders. This requires bi-directional communication between the control surface and software, as the faders move in response to existing automation values—so there’s always a correspondence between physical control settings and parameter values. This is the ideal: Just grab the fader and punch. The transition will be both smooth and instantaneous.

Parameter nulling. This is becoming less common as motorized faders become more economical. With nulling, there are indicators (typically LEDs) that show whether a controller’s value is above or below the existing value. Once the indicators show that the value matches (e.g., both LEDs light at the same time), punching in will give a smooth transition.

IS THERE A CONTROLLER IN YOUR FUTURE?

Many musicians have been raised with computers, and are perfectly comfortable using a mouse for mixing. However, it’s often the case that once you sit such a person down in front of a controller and they learn how to actually use it, they can’t go back to the mouse. In some ways, we’re talking about the same kind of difference as there is between a serial and parallel interface: The mouse can only control one parameter at a time, whereas a control surface lets you move groups of controls, essentially turning your mix from a data-entry task into a performance. And I can certainly tell you which one I prefer!

Craig Anderton is Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
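Two of the behaviors described above, the 128-step quantization of MIDI continuous controllers and the "match-then-change" takeover scheme, are simple enough to sketch in a few lines. The following is a hypothetical illustration rather than any DAW's or controller's actual implementation; the class name, lag amount, and match tolerance are all assumptions.

```python
class SoftTakeoverParam:
    """Map a 7-bit MIDI CC (0-127) to a 0.0-1.0 parameter with takeover and smoothing."""

    def __init__(self, value=0.0, lag=0.2):
        self.value = value      # current parameter value (0.0-1.0)
        self.target = value
        self.captured = False   # has the physical knob "caught up" to the stored value?
        self.lag = lag          # 0 = track instantly; closer to 1 = slower glide

    def on_cc(self, cc_value):
        incoming = cc_value / 127.0
        if not self.captured:
            # Match-then-change: ignore the knob until it passes through the stored value.
            if abs(incoming - self.value) <= 1.0 / 127.0:
                self.captured = True
            else:
                return
        self.target = incoming

    def tick(self):
        # Call once per control period; the one-pole lag hides 128-step "stair-stepping."
        self.value += (self.target - self.value) * (1.0 - self.lag)
        return self.value
```

Removing the "captured" check gives the "instant jump" behavior instead, with the drawbacks noted above; a motorized fader makes the whole question moot.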
  23. Improve your mixes by avoiding these seven mixing taboos By Craig Anderton

If you listen to a lot of mixes coming out of home and project studios, after a while you notice a definite dividing line between the people who know what they’re doing, and the people who commit one or more of the Seven Deadly Sins of Mixing. You don’t want to be a mixing sinner, do you? Of course not! So, check out these tips.

1. The Disorienting Room Space

This comes from using too many reverbs: A silky plate on the voice, a big room on the snare, shorter delays on guitar . . . concert hall, or concert hell? Even if the listener can’t identify the problem, they’ll know that something doesn’t sound quite right because we’ve all logged a lifetime of hearing sounds in acoustical spaces, so we inherently know what sounds “right.”

Solution: Choose one main reverb that defines the characteristics of your imaginary “room.” Insert this in an aux bus. If you do use additional reverb on, say, voice, use this second reverb as a channel insert effect but don’t rely on it for all your vocal reverb; make up the difference by sending the vocal to the reverb aux bus to add in a bit of the common room reverb. The end result will sound much more realistic.

2. Failure to Mute

All those little pops, snorks, hisses, and hums can interfere with a mix’s transparency. Even a few glitches here and there add up when multiplied over several tracks.

Solution: Automate mutes for when vocalists aren’t singing, during the spaces between lead guitar solos, and the like. Automating mutes independently of fader-style level automation lets you use each for what it does best. Your DAW may even have some kind of DSP option that, like a noise gate, strips away all signals below a certain level and deletes these regions from your track (Fig. 1).

Fig. 1: Sonar’s “Remove Silence” DSP has been applied to the vocal track along the bottom of the window.

3. "Pre-Mastering" a Mix

You want your mix to “pop” a little more, so you throw a limiter into your stereo bus, along with some EQ, a high-frequency exciter, a stereo widener, and maybe even more . . . thus guaranteeing your mastering engineer can’t do the best possible job with a fantastic set of mastering processors (Fig. 2).

Fig. 2: I was given this file to master, but what could possibly be done with a file that had already been compressed into oblivion?

Solution: Unless you really know what you’re doing, resist the temptation to “master” your mix before it goes to the mastering engineer. If you want to listen with processors inserted to get an idea of what the mix will sound like when compressed, go ahead—but hit the bypass switch before you mix down to stereo (or surround, if that’s your thing).

4. Not Giving the Lead Instrument Enough Attention

This tends to be more of a problem with those who mix their own music, because they fall in love with their parts and want them all to be heard. But the listener is going to focus on the lead part, and pay attention to the rest of the tracks mostly in the context of supporting the lead.

Solution: Take a cue from your listeners: keep the lead part clearly out front, and balance the supporting tracks so they frame it rather than compete with it.

5. Too Much Mud

A lot of instruments have energy in the lower midrange, which tends to build up during mixdown. As a result, the lows and highs seem less prominent, and the mix sounds muddy.

Solution: Try a gentle, relatively low-bandwidth cut of a dB or two around 300-500Hz on those instruments that contribute the most lower midrange energy (Fig. 3).
Or, try the famous “smile” curve that accentuates lows and highs, which by definition causes the midrange to be less prominent. Fig. 3: Reducing some lower midrange energy in one or more tracks (in this case, using SSL’s X-EQ equalizer) can help toward creating a less muddy, more defined low end. 6. Dynamics Control Issues We’ve already mentioned why you don’t want to compress the entire mix, but pay attention to how individual tracks are compressed as well. Generally, a miked bass amp track needs a lot of compression to make up for variations in amp/cabinet frequency response; compression smoothes out those anomalies. You also want vocals to stand out in the mix and sound intimate, so they’re good candidates for compression as well. Solution: Be careful not to apply too much compression, but too little compression can be a problem, too. Try increasing the compression (i.e., lower threshold and/or higher ratio) until you can “hear” the effect, then back off until you don’t hear the compression any more. The optimum position is often within these two extremes: Enough to make a difference, but not enough to be heard as an “effect.” 7. Mixing in an Acoustically Untreated Room If you’re not getting an accurate read on your sound, then you can’t mix it properly. And it won’t sound right on other systems, either. Solution: Even a little treatment, like bass traps, “clouds” that sit above the mix position, and placing near-field speakers properly so you’re hearing primarily their direct sound rather than any reflected sound can help. Also consider using really good headphones as a reality check. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
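If you want to experiment with the "mud cut" from sin #5 outside of a plug-in, the standard peaking-EQ biquad from the widely used Audio EQ Cookbook is easy to compute. The 400Hz center, -2dB gain, and Q of 1.0 below are simply the kind of gentle starting point described above, to be adjusted by ear; this is a sketch, not a substitute for a good mixing EQ.

```python
import math

def peaking_eq_coeffs(sample_rate, f0_hz=400.0, gain_db=-2.0, q=1.0):
    # Audio EQ Cookbook peaking filter; returns (b, a) for a standard biquad.
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]   # normalize so a[0] == 1

b, a = peaking_eq_coeffs(44100)
# Feed b and a to any biquad or lfilter-style routine to apply the gentle lower-mid dip.
```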
  24. Compressors are Essential Recording Tools - Here's How They Work By Craig Anderton Compressors are some of the most used, and most misunderstood, signal processors. While people use compression in an attempt to make a recording "punchier," it often ends up dulling the sound instead because the controls aren't set optimally. Besides, compression was supposed to become an antique when the digital age, with its wide dynamic range, appeared. Yet the compressor is more popular than ever, with more variations on the basic concept than ever before. Let's look at what's available, pros and cons of the different types, and applications. THE BIG SQUEEZE Compression was originally invented to shoehorn the dynamics of live music (which can exceed 100 dB) into the restricted dynamic range of radio and TV broadcasts (around 40-50 dB), vinyl (50-60 dB), and analog tape (40dB to 105 dB, depending on type, speed, and type of noise reduction used). As shown in Fig. 1, this process lowers signal peaks while leaving lower levels unchanged, then boosts the overall level to bring the signal peaks back up to maximum. (Bringing up the level also brings up any noise as well, but you can't have everything.) Fig. 1: The first, black section shows the original audio. The middle, green section shows the same audio after compression; the third, blue section shows the same audio after compression and turning up the output control. Note how softer parts ot the first section have much higher levels in the third section, yet the peak values are the same. Even though digital media have a decent dynamic range, people are accustomed to compressed sound. Compression has been standard practice to help soft signals overcome the ambient noise in typical listening environments; furthermore, analog tape has an inherent, natural compression that engineers have used (consciously or not) for well over half a century. There are other reasons for compression. With digital encoding, higher levels have less distortion than lower levels—the opposite of analog technology. So, when recording into digital systems (tape or hard disk), compression can shift most of the signal to a higher overall average level to maximize resolution. Compression can create greater apparent loudness (commercials on TV sound so much louder than the programs because of compression). Furthermore, given a choice between two roughly equivalent signal sources, people will often prefer the louder one. And of course, compression can smooth out a sound—from increasing piano sustain to compensating for a singer's poor mic technique. COMPRESSOR BASICS Compression is often misapplied because of the way we hear. Our ear/brain combination can differentiate among very fine pitch changes, but not amplitude. So, there is a tendency to overcompress until you can "hear the effect," giving an unnatural sound. Until you've trained your ears to recognize subtle amounts of compression, keep an eye on the compressor's gain reduction meter, which shows how much the signal is being compressed. You may be surprised to find that even with 6dB of compression, you don't hear much apparent difference—but bypass the sucker, and you'll hear a change. Compressors, whether software- or hardware-based, have these general controls (Fig. 2): Fig. 2: The compressor bundled with Ableton Live has a comprehensive set of controls. Threshold sets the level at which compression begins. Above this level, the output increases at a lesser rate than the corresponding input change. 
As a result, with lower thresholds, more of the signal gets compressed.

Ratio defines how much the output signal changes for a given input signal change. For example, with 2:1 compression, a 2dB increase at the input yields a 1dB increase at the output. With 4:1 compression, a 16dB increase at the input gives a 4dB increase at the output. With "infinite" compression, the output remains constant no matter how much you pump up the input. Bottom line: Higher ratios increase the effect of the compression. Fig. 3 shows how input, output, ratio, and threshold relate.

Fig. 3: The threshold is set at -8dB. If the input increases by 8dB (e.g., from -8 to 0), the output only increases by 2dB (from -8 to -6). This indicates a compression ratio of 4:1.

Attack determines how long it takes for the compression to take effect once the compressor senses an input level change. Longer attack times let through more of a signal's natural dynamics, but those initial peaks are not being compressed. In the days of analog recording, the tape would absorb any overload caused by sudden transients. With digital technology, those transients clip as soon as they exceed 0dBFS. Some compressors include a "saturation" option that mimics the way tape works, while others "soft-clip" the signal to avoid overloading subsequent stages. Yet another option is to include a limiter section in the compressor, so that any transients are "clamped" to, say, 0dB.

Decay (also called Release) sets the time required for the compressor to give up its grip on the signal once the input passes below the threshold. Short decay settings are great for special effects, like those psychedelic '60s drum sounds where hitting the cymbal would create a giant sucking sound on the whole kit. Longer settings work well with program material, as the level changes are more gradual and produce a less noticeable effect.

Note that many compressors have an "automatic" option for the Attack and/or Decay parameters. This analyzes the signal at any given moment and optimizes attack and decay on-the-fly. It's not only helpful for those who haven't quite mastered how to set the Attack and Decay parameters, but often speeds up the adjustment process for veteran compressor users.

Output control. As we're squashing peaks, we're actually reducing the overall peak level. This opens up some headroom, so increasing the output level compensates for any volume drop. The usual way to adjust the output control is to turn this control up until the compressed signal's peak levels match the bypassed signal's peak levels. Some compressors include an "auto-gain" or "auto makeup" feature that increases the output gain automatically.

Metering. Compressors often have an input meter, an output meter for matching levels between the input and output, and most importantly, a gain reduction meter. (In Fig. 2, the orange bar to the left of the output meter is showing the amount of gain reduction.) If the meter indicates a lot of gain reduction, you're probably adding too much compression. The input meter in Fig. 2 shows the threshold with a small arrow, so you can see at a glance how much of the input signal is above the threshold.

ADDITIONAL FEATURES

You'll find the above functions on many compressors. The following features tend to be somewhat less common, but you'll still find them on plenty of products.
ADDITIONAL FEATURES

You'll find the above functions on many compressors. The following features tend to be somewhat less common, but you'll still find them on plenty of products.

Sidechain jacks are available on many hardware compressors, and some virtual compressors include this feature as well (sidechaining became formalized in the VST 3 specification, but it was possible in prior VST versions). A sidechain option lets you insert filters in the compressor's detection path to restrict compression to a specific frequency range. For example, if you insert a high-pass filter, the compressor responds only to high frequencies—perfect for "de-essing" vocals.

The hard knee/soft knee option controls how rapidly the compression kicks in. With a soft knee response, when the input exceeds the threshold, the compression ratio is less at first, then increases up to the specified ratio as the input increases further. With a hard knee curve, as soon as the input signal crosses the threshold, it's subject to the full amount of compression. Sometimes this is a variable control from hard to soft, and sometimes it's a toggle choice between the two. Bottom line: use hard knee when you want to clamp levels down tight, and soft knee when you want a gentler, less audible compression effect.

The link switch in stereo compressors switches the mode of operation from dual mono to stereo. Linking the two channels allows level changes in one channel to affect the other, which is necessary to preserve the stereo image.

Lookahead. A compressor cannot, by definition, react instantly to a signal, because it has to measure the signal before it can decide how much to reduce the gain. To compensate, the lookahead feature delays the audio path slightly so the compressor can "look ahead" at the signal it will be processing, and therefore react in time when the actual signal hits.

Response or Envelope. The compressor can react to a signal based on its peak or average level, but its compression curve can follow different characteristics as well—a standard linear response, or one that more closely resembles the response of vintage, opto-isolator-based compressors.

COMPRESSOR TYPES: THUMBNAIL DESCRIPTIONS

Compressors are available in hardware (usually a rack-mount design or, for guitarists, a "stomp box") and as software plug-ins for digital audio programs. Following is a description of various compressor types.

"Old faithful." Whether rack-mount or software-based, typical features include two channels with gain reduction meters that show how much your signal is being compressed, and most of the controls mentioned above (Fig. 4).

Fig. 4: Native Instruments' Vintage Compressor bundle includes three different compressors modeled after vintage units.

Multiband compressors. These divide the audio spectrum into multiple bands, with each one compressed individually (Fig. 5 and the sketch below). This allows for a less "effected" sound (for example, low frequencies don't end up compressing high frequencies), and some models let you compress only the frequency ranges that need to be compressed.

Fig. 5: Universal Audio's Precision Multiband is a multiband compressor, expander, and gate.
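To make the multiband idea concrete, here is a rough sketch in Python (assuming NumPy and SciPy, and reusing a compress() function like the one sketched earlier; the names, crossover frequencies, and settings are made-up examples). It splits the signal into three bands, compresses each band with its own settings, and sums the results. Real multiband processors use carefully matched crossover filters so the bands recombine flat; this only illustrates the concept.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_bands(signal, sr, low_xover=200.0, high_xover=2000.0):
    """Split a signal into low/mid/high bands with Butterworth crossover filters."""
    sos_lp = butter(4, low_xover, btype="lowpass", fs=sr, output="sos")
    sos_bp = butter(4, [low_xover, high_xover], btype="bandpass", fs=sr, output="sos")
    sos_hp = butter(4, high_xover, btype="highpass", fs=sr, output="sos")
    low = sosfiltfilt(sos_lp, signal)
    mid = sosfiltfilt(sos_bp, signal)
    high = sosfiltfilt(sos_hp, signal)
    return low, mid, high

def multiband_compress(signal, sr):
    """Compress each band with its own settings, then sum the bands back together."""
    low, mid, high = split_bands(signal, sr)
    # compress() is the gain-computer sketch shown earlier in this article
    low = compress(low, sr, threshold_db=-24.0, ratio=4.0)    # tame bass peaks
    mid = compress(mid, sr, threshold_db=-18.0, ratio=2.0)    # gentle on the mids
    high = compress(high, sr, threshold_db=-20.0, ratio=3.0)  # control harshness
    return low + mid + high
```

The payoff is what the description above promises: a big kick drum hit pulls down only the low band, instead of dragging the cymbals and vocals in the upper bands along with it.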
Vintage and specialty compressors. Some swear that only the compressor in an SSL console will do the job. Others find the ultimate squeeze to be a big-bucks tube compressor. And some guitarists can't live without their vintage Dan Armstrong Orange Squeezer, considered by many to be the finest guitar sustainer ever made. Fact is, all compressors have a distinctive sound, and what might work for one sound source might not work for another.

If you don't have that cool, tube-based compressor from the '50s of which engineers are enamored, don't lose too much sleep over it: many software plug-ins emulate vintage gear with an astonishing degree of accuracy (Fig. 6).

Fig. 6: Cakewalk's PC2A, a compressor/limiter for Sonar's ProChannel module, emulates vintage compression characteristics.

Whatever kind of audio work you do, there's a compressor somewhere in your future. Just don't overcompress—in fact, avoid using compression as a "fix" for bad mic technique or dead strings on a guitar. I wouldn't go as far as those who diss all kinds of compression, but it's an effect that needs to be used subtly to do its best.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  25. Who's stealing your headroom? It may be the archenemy of good audio - DC Offset

By Craig Anderton

It was a dark and stormy night. I was rudely awakened at 3 AM by the ringing of a phone, pounding my brain like a jackhammer that spent way too much time chowing down at Starbucks. The voice on the other end was Pinky the engineer, and he sounded as panicked as a banana slug in a salt mine.

"Anderton, some headroom's missing. Vanished. I can't master one track as hot as the others on the Kiss of Death CD. Checked out the usual suspects, but they're all clean. You gotta help."

Like an escort service at a Las Vegas trade show, my brain went into overdrive. Pinky knew his stuff...how to gain-stage, when not to compress, how to master. If headroom was stolen right out from under his nose, it had to be someone stealthy. Someone you didn't notice unless you had your waveform Y-axis magnification up. Someone like...DC Offset.

Okay, so despite my best efforts to add a little interest, DC offset isn't a particularly sexy topic. But it can be the culprit behind problems such as lowered headroom, mastering oddities, pops and clicks, effects that don't process properly, and other gremlins.

DC OFFSET IN THE ANALOG ERA

We'll jump into the DC offset story during the '70s, when op amps became popular. These analog integrated circuits pack a tremendous amount of gain into a small, inexpensive package with (typically) two inputs and one output. Theoretically, in its quiescent state (no input signal), the inputs and output are at exactly 0.00000 volts. But due to imperfections within the op amp itself, sometimes there can be several millivolts of DC present at one of the inputs. Normally this wouldn't matter, but if the op amp is providing a gain of 1000 (60dB), a typical 5mV input offset gets amplified up to 5000mV (5 volts). If the offset appeared at the inverting (out of phase) input, the output would have a DC offset of -5.0 volts. A 5mV offset at the non-inverting input would cause a +5.0 volt DC offset.

There are two main reasons why this is a problem.

Reduced dynamic range and headroom. An op amp's power supply is bipolar (i.e., there are positive and negative supply voltages with respect to ground). Suppose the op amp's maximum undistorted voltage swing is ±15V. If the output is already sitting at, say, +5V, the maximum voltage swing is now +10/-20V. However, as most audio signals are symmetrical around ground and you don't want either side to clip, the maximum usable swing is really down to ±10V—a 33% loss of available headroom.

Problems with DC-coupled circuits. In a DC-coupled circuit (sometimes preferred by audiophiles due to superior low frequency response), any DC gets passed along to the next stage. Suppose the op amp mentioned earlier with a +5V output offset now feeds a DC-coupled circuit with a gain of 5. That +5V offset becomes a +25V offset—definitely not acceptable!

ANALOG SOLUTIONS

With capacitor-coupled analog circuits, any DC offset supposedly won't pass from one stage to the next, because the capacitor that couples the two stages together passes AC but not DC. Still, any DC offset limits dynamic range in the stage in which it occurs. (And if the coupling capacitor is leaky or otherwise defective, some DC may make it through anyway.)

There are traditionally two ways to deal with op amp offsets: use premium op amps that have been laser-trimmed for minimum offset, or include a trimpot that injects a voltage equal and opposite to the inherent input offset.
In other words, with no signal present, you measure the op amp's output voltage while adjusting the trimpot until the voltage is exactly zero. Some op amps even provide dedicated pins for offset control so you don't have to hook directly into one of the inputs. (Note: as trimpot settings can drift over time, if you have analog gear with op amps, it's sometimes worth having a tech check for offsets and re-adjust the trimpot if needed.)

DIGITAL DC OFFSET

In digital-land, there are two main ways DC offset can get into a signal: recording an analog signal that has a DC offset into a DC-coupled system, or (more commonly) inaccuracies in the A/D converter or conversion subsystem that produce a slight output offset voltage. As with analog circuits, a processor that provides lots of gain (like a distortion plug-in) can turn a small amount of offset into something major. In either case, the offset appears as a signal baseline that doesn't line up with the true 0-volt baseline (Fig. 1).

Fig. 1: With these two drum hits, the first one has a significant amount of DC offset. The second has been corrected to remove the DC offset, and as more headroom is available, it can now be normalized for more level if desired.

Digital technology has also brought about a new type of offset issue that's technically more of a subsonic problem than "genuine" DC offset, but it causes some of the same negative effects. As one example, I once transposed a sliding oscillator tone so far down that it added what looked like a slowly varying DC offset to the signal, which drastically limited the headroom (Fig. 2).

Fig. 2: The top signal is the original normalized version, while the lower one has been processed by a steep low-cut filter at 20Hz, then re-normalized. Note how the level of the lower waveform is much "hotter."

In addition to reduced headroom, there are two other major problems associated with DC offset in digitally-based systems. First, when transitioning between two pieces of digital audio, one with an offset and one without (or with a different amount of offset), there will be a pop or click at the transition point. Second, effects or processes requiring a signal that's symmetrical about ground won't work as effectively. For example, a distortion plug-in that clips positive and negative peaks will clip them unevenly if there's a DC offset. More seriously, a noise gate or "strip silence" function will need a higher (or lower) threshold than normal, because the threshold has to clear not just the noise, but the noise plus the offset.

DIGITAL SOLUTIONS

There are three main ways to solve DC offset problems with software-based digital audio programs.

First, most pro-level digital audio editing software includes a DC offset correction function, generally found under a "processing" menu along with functions like change gain, reverse, flip phase, and so on. This function analyzes the signal, then adds or subtracts the required amount of correction to make sure that 0 really is 0. Many sequencing programs also include DC offset correction as part of a set of editing options (Fig. 3).

Fig. 3: Like many programs, Sonar's audio processing includes the option to remove DC offset from audio clips.

Second, apply a steep high-pass filter that cuts off everything below 20Hz or so. (Even with a comparatively gentle 12dB/octave filter, a signal at 0.5Hz will still be down by more than 60dB.) In practice, it's not a bad idea to nuke the subsonic part of the spectrum anyway, as some processing can interact with a signal to produce modulation in the below-20Hz zone.
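Here's a minimal sketch of those first two approaches in Python (assuming NumPy and SciPy; the function names are made up for the example): subtracting the measured offset, which is conceptually what a typical "remove DC offset" command does, and applying a high-pass filter below 20Hz.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def remove_dc_offset(signal):
    """Estimate the DC offset as the signal's mean and subtract it."""
    return signal - np.mean(signal)

def subsonic_filter(signal, sr, cutoff_hz=20.0, order=4):
    """High-pass filter that removes DC and subsonic content below roughly 20Hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

# Example: a sine wave riding on a 0.1 DC offset eats into the available headroom
sr = 44100
t = np.arange(sr) / sr
offset_signal = 0.8 * np.sin(2 * np.pi * 440 * t) + 0.1
print(np.max(np.abs(offset_signal)))                    # about 0.9: the offset steals headroom
print(np.max(np.abs(remove_dc_offset(offset_signal))))  # about 0.8: headroom restored
```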
Your speakers can't reproduce subsonic signals anyway; they just eat up headroom and bandwidth, so nuke 'em.

Finally, select a region of roughly 2-10 milliseconds at the beginning and end of the file or segment with the offset, and apply a fade-in and fade-out, respectively. This creates an envelope that starts and ends at 0. It won't get rid of the DC offset component within the file (so you still have the restricted headroom problem), but at least you won't hear a pop at transitions.

CASE CLOSED

Granted, DC offset usually isn't a killer problem, like a hard disk crash. In fact, usually there isn't enough of it to worry about. But every now and then, DC offset will rear its ugly head in a way that you do notice. And now, you know what to do about it.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.