Everything posted by Anderton

  1. Sometimes the "right" way to do things is nowhere near as much fun as the "wrong" way
By Craig Anderton

Whether giving seminars or receiving emails, I'm constantly asked about the "right" way to record, as if there were some committee on standards and practices dedicated to the recording industry ("for acoustic guitar, you must use a small-diaphragm condenser mic, or your guitar will melt"). Although I certainly don't want to demean the art of doing things right, some of the greatest moments in recording history have come about because of ignorance, unbridled curiosity, luck, trying to impress the opposite sex, or just plain making a mistake that became a happy accident.

When Led Zeppelin decided to buck the trend at that time of close-miking drums, the result was the Olympian drum sound in "When the Levee Breaks." Prince decided that sometimes a bass simply wasn't necessary in a rock tune, and the success of "When Doves Cry" proved he was right. Distortion used to be considered "wrong," but try imagining rock guitar without it.

A lot of today's gear locks out the chance to make mistakes. Feedback can't go above 99, while "normalized" patching reduces the odds of getting out of control. And virtual plug-ins typically lack access points, like insert and loop jacks, that provide a "back door" for creative weirdness. But let's not let that stop us—it's time to reclaim some of our heritage as sonic explorers, and screw up some of the recording process. Here are a few suggestions to get you started.

UNINTENDED FUNCTIONS

One of my favorite applications is using a vocoder "wrong." Sure, we're supposed to feed an instrument into the synthesis input, and a mic into the analysis input. But using drums, percussion, or even program material for analysis can "chop" the instrument signal in rhythmically interesting ways.

Got a synth, virtual or real, with an external input (Fig. 1)? Turn the filter up so that it self-oscillates (if it lets you), and mix the external signal in with it.

Fig. 1: Arturia's miniV has an external input. Insert it into a track as an effect, and you can process a signal with the synth's various modules.

The sound will be dirty, rude, and somewhat like FM meets ring modulation. To take this further, set up the VCA so you can do gated/stuttering techniques by pressing a keyboard key to turn it on and off.

And we all know headphones are for outputting sound, right? Well, DJs know you can hook them up in reverse, like a mic. Sure, the sound is kinda bassy because the diaphragm is designed to push air, not react to tiny vibrational changes. But no problem! Kick the living daylights out of the preamp gain, add a ton o' distortion, and you'll generate enough harmonics to add plenty of high frequencies.

I was reluctant to include the following tip, as it relies on the ancient Lexicon Pantheon reverb (a DirectX format plug-in included in Sonar, Lexicon Omega, and other products back in the day). I really tried to find a more contemporary reverb that can do the same thing, but I couldn't. However, this does give a fine example of unintended functionality: having a reverb provide some really cool resonator effects. If you have a Pantheon, try these settings (Fig. 2):

Reverb type: custom
Pre-delay, Room Size, RT60, Damping: minimum settings
Mix: 100% (wet only)
Level: as desired
Density Regen: +90%
Density Delay: between 0 and 20ms
Echo Level (Left and Right): off
Spread, Diffusion: 0
Bass boost: 1.0X

Fig. 2: The plug-in says it's a reverb, but here Pantheon is set up as a resonator.
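With the reverb tails zeroed out, the Density Delay and Regen controls behave much like a feedback comb filter, which is why the patch resonates. For the curious, here's a minimal sketch of that kind of resonator in Python with NumPy; the delay and regen names mirror the Pantheon controls, but this is an illustration of the general principle, not the plug-in's actual algorithm:

```python
import numpy as np

def comb_resonator(x, sr, delay_ms=10.0, regen=0.9):
    """Feedback comb filter: y[n] = x[n] + regen * y[n - D].
    Short delays plus high regen create pitched resonances at
    multiples of 1000/delay_ms Hz (~100 Hz for a 10 ms delay)."""
    d = max(1, int(sr * delay_ms / 1000.0))  # delay in samples
    y = np.zeros(len(x))
    for n in range(len(x)):
        fb = y[n - d] if n >= d else 0.0
        y[n] = x[n] + regen * fb
    return y

# Try it on one second of noise at 44.1 kHz
sr = 44100
noise = np.random.uniform(-0.1, 0.1, sr)
out = comb_resonator(noise, sr, delay_ms=10.0, regen=0.9)
```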
Vary the Regen and Delay controls, but feel free to experiment with the others. You can even put two Pantheons in series set for highly resonant, totally spooky sounds.

PARAMETER PUSHING

The outer edges of parameter values are meant for exploration. For example, digital audio pitch transposition can provide all kinds of interesting effects. Tune a low tom down to turn it into a thuddy kick drum, or transpose slap bass up two octaves to transform it into a funky clav.

Or consider the "acidization" process in Acid and Sonar. Normally, you set slice points at every significant transient. But if you set slice points at 32nd or 64th notes, and transpose pitch up an octave or two, you'll hear an entirely different type of sound. I also like to use Propellerhead's ReCycle as a "tremolo of the gods" (Fig. 3).

Fig. 3: ReCycle can do more than simply convert WAV or AIFF files into stretchable audio—it can also create novel tremolo effects.

Load in a sustained sound like a guitar power chord, set slice points and decay time to chop it into a cool rhythm, then send it back to the project from which it came.

GUITAR WEIRDNESS

For a different type of distortion, plug your guitar directly into your mixer (no preamp or DI box), crank the mic pre, then use EQ to cut the highs and boost the mids to taste. Is this the best distortion sound in the world? No. Will it sound different enough to grab someone's attention? Yes.

When you play compressed or highly distorted guitar through an amp (or even studio monitors, if you like to live dangerously), press the headstock up against the speaker cabinet and you'll get feedback if the levels are high enough. Now work that whammy bar...

Miking guitar amps is also a fertile field for weirdness. Try a "mechanical bandpass filter" with small amps—set up the mic next to the speaker, then surround both with a cardboard box. One of the weirdest guitar sounds I ever found came from re-amping the guitar through a small amp pointed at a hard wall: I set up two mics between the amp and the wall, then let them swing back and forth between the two. It created a weird stereo phasey effect that sounded marvelous (or at least strange) on headphones.

DISTORT-O-DRUM

Distortion on drums is one of those weird techniques that can actually sound not weird. You can put a lot of distortion on a kick and not have it sound "wrong"—it just gains massive amounts of punch and presence. One of my favorite techniques is copying a drum track, putting the copy in parallel with the original drum track, then running the copy through a guitar amp plug-in set for a boxy-sounding cabinet. It gives the feeling of being in a really funky room.

Replacing drum sounds can also yield audio dividends. My musical compatriot Dr. Walker, a true connoisseur of radical production techniques, once replaced the high-hat in his drum machine with sampled vinyl noise. That was a high-hat with character, to say the least.

If you want a sampled drum sound to have an attack that cuts through a track like a machete, load the sample into a digital audio editor that has a pencil tool. Then, within the first 2 or 3ms of the signal, add a spike (shown in red in the diagram for clarity; see Fig. 4).

Fig. 4: Messing up a drum sample's initial attack adds a whole new kind of flavor.

When you play back the sound, the attack will now be larger than life, loaded with harmonics, and ready to jump out of the speaker. However, it all happens so fast you don't really perceive it as distortion. (You can even add more spikes if you dare.)
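If your editor lacks a pencil tool, you can do the same mischief offline. Here's a minimal sketch of the spike trick in Python with NumPy; the file names and spike level are arbitrary assumptions, and soundfile is just one convenient way to read and write WAV files:

```python
import numpy as np
import soundfile as sf  # pip install soundfile

# Load a mono drum sample (hypothetical file name)
data, sr = sf.read("kick.wav")
if data.ndim > 1:
    data = data[:, 0]  # keep it mono for simplicity

# Place a one-sample spike ~2 ms into the attack, like the
# pencil-tool edit in Fig. 4. Matching the polarity of the local
# waveform keeps the spike from partially cancelling.
pos = int(0.002 * sr)
spike_level = 0.9  # near full scale; adjust to taste
data[pos] = spike_level * np.sign(data[pos] if data[pos] != 0 else 1.0)

sf.write("kick_spiked.wav", data, sr)
```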
Another drum trick that produces a ton of harmonics at the attack is to normalize a drum sample, then increase gain by a few dB — just enough to clip the first few milliseconds of the signal. Again, the drum sound will slam out of the speakers.

FUN WITH FEEDBACK

A small hardware mixer is a valuable tool in the quest for feedback-based craziness. Referring to Fig. 5, if you have a hardware graphic equalizer, patch it after the mixer output, split the EQ's output so that one split returns back into the mixer input, monitor the EQ's other split from the output, and feed in a signal (or not — you can get this to self-oscillate).

Fig. 5: Here's a generalized setup for adding feedback to a main effect. The additional effect in the feedback loop isn't essential, but changing the feedback loop signal can create more radical results.

With the EQ's sliders at 0, set the mixer gain to just below unity. As you increase the sliders, you'll start creating tones. This requires some fairly precise fader motion, so turn down your monitors if the distortion runs away—or add a limiter to clamp the output.

If you have a hardware pitch shifter, feed some of the output back to the input (again, the mixer will come in handy) through a delay line at close to unity gain. Each echo will shift further downward or upward, depending on your pitch transposer's setting. With some sounds, this can produce beautiful, almost bell tree-like effects.

Feedback can also add unusual effects with reverb, as the resonant peaks tend to shift. At some settings, the reverb crosses over into a sort of tonality. You may need to tweak controls in real time and ride everything very carefully, but experiment. Hey, that's the whole message of this article anyway!

PREFAB NASTINESS?

Lately there's been a trend to "formalize" weird sounds, like bit reducers, vinyl emulators, and magnetic tape modelers. While these are well-intentioned attempts to screw things up, there's a big difference between a plug-in that reduces your audio to 8 bits, and playing back a sample on a Mirage sampler, which is also 8 bits. The Mirage added all kinds of other oddities — noises, aliasing, artifacts — that the plug-in can't match. Playing a tune through a filter is one thing; broadcasting it to a transistor radio in front of a mic (try it sometime!) produces a very different result.

Bottom line: Try to go to the source for weirdness, or create your own. Once weirdness is turned into a plug-in with 24/96 resolution, I'm not sure it's really weirdness anymore.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. Get more instrument sounds out of a single software synthesizer
By Craig Anderton

If you think "multiple outs" means getting a double play in baseball, you're in the wrong place! When applied to virtual instruments, multiple outs let you separate different sounds within a multitimbral instrument, then send them to different outputs.

Multitimbral literally means "different timbres." A multitimbral instrument can produce multiple sounds, with each sound typically responding to incoming data on a different MIDI channel. For example, you might set up a piano sound to respond to data on MIDI channel 1, bass on MIDI channel 2, lead synthesizers on MIDI channel 3, etc.

One of the first uses for multiple outs was drum machines. Sure, they had a stereo out for an overall mix—but generally, you didn't want the same processing on, say, kick and tambourine. If a drum machine had four outs, you'd often assign kick to one, snare to another, and the rest of the kit to a stereo mix.

As hardware synths became multitimbral, multiple outs became more important because different instruments often demand different processing. Deep reverb on strings? Sure. On bass? Probably not. So, you'd assign different instruments to different outs, which went into different mixer channels. Each sound could have its own EQ, dynamics processing, special effects, etc. Clearly, this beat the "one size fits all" approach.

THE VIRTUAL LIFE

In the virtual world, here's how multiple outs work. When inserting an instrument with multiple outs into a host like Sonar, you can create a separate audio track for each out (Fig. 1). Other DAWs work similarly.

Fig. 1: When inserting a soft synth into Sonar, you can choose a main output (shown here as First Synth Stereo Output) or all available synth audio outputs, in stereo or in mono (as shown here).

The individual audio tracks work like any audio track—you can insert effects, mix, pan, automate, etc. You can often have multiple outputs for instruments that are ReWired into the host—but note there might be a lot of outputs, especially if you choose mono outputs. For example, ReWire in Propellerhead's Reason, and your DAW will sprout another 126 mono tracks. As a result, many programs give you the option to either create a track for every available output (which may be stereo or mono), or simply a single track—usually a stereo mix—for the main output.

Whether you should choose mono or stereo outputs depends on the instrument. XLN's Addictive Drums (Fig. 2) puts some drums on separate channels of a stereo pair—for example, the kick is in a stereo track's left channel and the snare is in the track's right channel. If you insert stereo tracks for the outputs, you'll be stuck with that particular placement. If you insert mono tracks, then the kick and snare will be on their own mono tracks, and you can pan them anywhere you want in the overall stereo mix.

Fig. 2: All of Addictive Drums' outputs are shown in Sonar's track view. Note that the drums are brought out to individual mono tracks, and that stereo tracks (like Overhead and Room mics) are also separated into left and right channels.

I WANT MORE...MORE...

Multitimbral synths or samplers can draw a significant amount of CPU power if you're taking full advantage of the multitimbrality. To solve this problem, you can "freeze" tracks for instruments; for example, you could freeze instrument #1 before you start working with instrument #2.
A DAW's freeze function essentially converts the virtual instrument track into a more CPU-friendly hard disk audio track, then "disconnects" the instrument from the CPU. As long as you retain the MIDI track that drives the instrument, if you decide the frozen track needs further editing, you can always "thaw" it, edit the MIDI track, then re-freeze. If you actually remove the instrument instead of freezing it, remember to note which patch you used and any other instrument-specific settings so you can reproduce the original sound if needed.

HANDLING EFFECTS

One big advantage of multiple outs—being able to add effects within the host—has been mitigated somewhat by instruments that include built-in effects. However, these are often optimized to keep CPU power consumption under control. For effects that want lots of CPU power and are intended to process several instrument sounds (like a good hall reverb), disable the instrument's effects, and insert the reverb into a host aux bus. Then, use each track's send control to apply some signal to the aux bus. In this case, the reverb is set for processed (wet) sound only, then brought back into the mix. Combining this processed reverb sound with the dry instrument sound will provide the desired amount of ambience.

MORE OPTIONS

Separate outs also have other creative uses:

• For simulated stereo imaging, load the same instrument into two instances. Restrict one instrument to a lower keyboard range (e.g., below middle C) and pan its output toward the left, then restrict the other to the upper keyboard range and pan its output more to the right. Lower notes will come out the left, and higher notes from the right. Setting two instruments to respond to the same MIDI channel can provide layering effects.

• Load the same instrument into two channels, but don't restrict the key ranges; instead, tune one a few cents sharp, and the other a few cents flat. This can produce a huge stereo image, but check mono playback—you may need to edit the detuning to avoid "beating" or signal cancellation (see the arithmetic below).
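How much beating you get is easy to estimate: a detune of c cents multiplies frequency by 2^(c/1200), and the beat rate is the difference between the two detuned frequencies. A quick sketch in Python (the ±5-cent figure is just an example value, not a recommendation):

```python
def detuned_pair(freq_hz, cents):
    """Return the frequencies of two voices detuned +/- cents
    around freq_hz, plus their beat rate in Hz."""
    sharp = freq_hz * 2 ** (cents / 1200)
    flat = freq_hz * 2 ** (-cents / 1200)
    return sharp, flat, sharp - flat

# Example: A440 layered +/- 5 cents
sharp, flat, beat = detuned_pair(440.0, 5)
print(f"{sharp:.2f} Hz vs {flat:.2f} Hz -> beats at {beat:.2f} Hz")
# ~441.27 Hz vs ~438.73 Hz: roughly 2.5 beats per second,
# which is why heavy detuning can sound wobbly in mono
```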
  3. Take a stand—well actually, make a stand
by Craig Anderton

So there you are, sitting at your computer table with your DAW, playing a guitar part. Then it's time to edit the part, so you put the guitar . . . hmmm, where do you put the guitar? You can keep it strapped on, but then it gets in the way of doing your editing. You can have a guitar stand next to you if you had enough foresight to set one up, or get up and place the guitar in its case—although you'll probably end up needing to play again soon, in which case you'll need to take the guitar out of the case again. Or, you can hand the problem over to Planet Waves' Guitar Dock.

Guitar Dock has a screw-on clamp that can accommodate flat surfaces with an edge (like a table) from about 1/4" to 1.5" thick, or similar surfaces (workbench, chair, etc.). Note that you can't see the lower clamping surface in the picture, but it's there. The clamping surfaces have a rubberized texture to avoid scratching the surface to which you're clamping; you then fold out a cradle that holds your guitar's neck. You can even fold the cradle up perpendicular to the clamp so you can clamp to a vertical edge, but in that case the back of the neck will hit the plastic thumb screw that tightens the clamp rather than the rubberized block in the middle of the cradle that's more neck-friendly.

Don't necessarily expect to replace a standard guitar stand. For starters, there's nothing to secure the bottom of the guitar, only the one point along the neck. I'm not sure I'd trust guitars with wildly unconventional lower shapes (like a Gibson Firebird, Ibanez XF350, or Schecter XGR Avenger), but standard guitars shouldn't experience any problem. Most importantly, if you're into leaning a guitar neck against a table, chair, amp, or whatever, the Guitar Dock provides a far more stable option that will make it much harder to knock the guitar off its holder. It's never fun to watch a guitar crash to the floor . . .

Guitar Dock is compact and light enough to fit into most guitar cases, so you can always have an emergency stand with you. I find it particularly helpful in the recording scenario mentioned at the beginning, but I've also found it handy for clamping on to a keyboard stand so the guitar and keyboard are in the same place. The $42.99 MSRP may seem steep, but you can buy it direct from the Planet Waves store for $27.94. If that keeps your headstock from snapping off due to a floor crash, that's pretty cheap insurance. For more information, visit the Guitar Dock landing page at www.planetwaves.com
  4. As we close out MIDI's 30th anniversary, it's instructive to reflect on why it has endured and remains relevant
By Craig Anderton

The MIDI specification first saw the light of day at the 1981 AES, when Dave Smith of Sequential Circuits presented a paper on the "Universal Synthesizer Interface." It was co-developed with other companies (an effort driven principally by Roland's Ikutaro Kakehashi, a true visionary of this industry), and made its prime time debut at the 1983 Los Angeles NAMM show, where a Sequential Circuits Prophet-600 talked to a Roland keyboard over a small, 5-pin cable. I saw Dave Smith walking around the show and asked him about it. "It worked!" he said, clearly elated—but I think I detected some surprise in there as well.

Sequential Circuits' Prophet-600 talking to a Roland keyboard at the 1983 NAMM show (photo courtesy the MIDI Manufacturers Association and used with permission)

"It" was the Musical Instrument Digital Interface, known as MIDI. Back in those days, polyphonic synthesizers cost thousands of dollars (and "polyphonic" meant 8 voices, if you were lucky and of course, wealthy). The hot computer was a Commodore 64, with a whopping 64 kilobytes of memory—unheard of in a consumer machine (although a few years before, an upstart recording engineer named Roger Nichols was stuffing 1MB memory boards into a CompuPro S-100 computer to sample drum sounds). The cute little Macintosh hadn't made its debut, and as impossible as it may seem today, the PC was a second-class citizen, licking its wounds after the disastrous introduction of IBM's PCjr.

Tom Oberheim had introduced his brilliant System, which allowed a drum machine, sequencer, and synthesizer to talk together over a fast parallel bus. Tom feared that MIDI would be too slow. And I remember talking about MIDI at a Chinese restaurant with Dave Rossum of E-mu Systems, who said "Why not just use Ethernet? It's fast, it exists, and it's only about $10 to implement." But Dave Smith had something else in mind: an interface so simple, inexpensive, and foolproof to implement that no manufacturer could refuse. Its virtues would be low cost, adequate performance, and ubiquity in not just the pro market, but the consumer one as well. Bingo.

But it didn't look like success was assured at the time; MIDI was derided by many pros who felt it was too slow, too limited, and just a passing fancy. 30 years later, though, MIDI has gone far beyond what anyone had envisioned, particularly with respect to the studio. No one foresaw MIDI being part of just about every computer (e.g., the General MIDI instrument sets). This trend actually originated on the Atari ST—the first computer with built-in MIDI ports as a standard item (see "Background: When Amy Met MIDI" toward the end of this article).

EVOLUTION OF A SPEC

Oddly, the MIDI spec officially remains at version 1.0, despite significant enhancements over the years: the Standard MIDI File format, MIDI Show Control (which runs the lights and other effects at Broadway shows like Miss Saigon and Tommy), MIDI Time Code to allow MIDI data to be time-stamped with SMPTE timing information, MIDI Machine Control for integration with studio gear, microtonal tuning standards, and a lot more. And the activity continues, as issues arise such as how best to transfer MIDI over USB, with smart phones, and over wireless.
The guardian of the spec, the MIDI Manufacturers Association (MMA), has stayed a steady course over the past several decades, holding together a coalition of mostly competing manufacturers with a degree of success that most organizations would find impossible to pull off. The early days of MIDI were a miracle: in an industry where trade secrets are jealously guarded, manufacturers who were intense rivals came together because they realized that if MIDI was successful, it would drive the industry to greater success. And they were right. The MMA has also helped educate users about MIDI, through books and online materials such as "An Introduction to MIDI."

I had an assignment at the time from a computer magazine to write a story about MIDI. After turning it in, I received a call from the editor. He said the article was okay, but it seemed awfully partial to MIDI, and was unfair because it didn't give equal time to competing protocols. I tried to explain that there were no competing protocols; even companies that had other systems, like Oberheim and Roland, dropped them in favor of MIDI. The poor editor had a really hard time wrapping his head around the concept of an entire industry willingly adopting a single specification. "But surely there must be alternatives." All I could do was keep replying, "No, MIDI is it." Even when we got off the phone, I'm convinced he was sure I was holding back information on MIDI's competition.

MIDI HERE, MIDI THERE, MIDI EVERYWHERE

Now MIDI is everywhere. It's on the least expensive home keyboards, and the most sophisticated studio gear. It's a part of signal processors, guitars, keyboards, lighting rigs, smoke machines, audio interfaces…you name it. It has gone way beyond its original idea of allowing a separation of controller and sound generator, so people didn't have to buy a keyboard every time they wanted a different sound.

SO WHERE'S IT GOING?

"Always in motion, the future…" Well, Yoda does have a point. But the key point about MIDI is that it's a hardware/software protocol, not just one or the other. Already, the two occasionally take separate vacations. The MIDI data in your DAW that drives a soft synth doesn't go through opto-isolators or cables, but flies around inside your computer.

One reason why MIDI has lasted so long is that it's a language that expresses musical parameters, and these haven't changed much in several centuries. Notes are still notes, tempo is still tempo, and music continues to have dynamics. Songs start and end, and instruments use vibrato. As long as music is made the way it's being made, the MIDI "language" will remain relevant, regardless of the "container" used to carry that data. However, MIDI is not resting on its laurels, and neither is the MMA—you can find out what they're working on for the future here.

Happy birthday, MIDI. You have served us well, and we all wish you many happy returns. For a wealth of information about MIDI, check out The MIDI Association web site.

Background: When Amy Met MIDI

After MIDI took off, many people credited Atari with amazing foresight for making MIDI ports standard on their ST series of computers. But the inclusion of MIDI was actually a matter of practicality. Commodore was riding high with the C-64, in large part because of the SID (Sound Interface Device) custom IC, a very advanced audio chip for its time.
(Incidentally, Bob Yannes, one of Ensoniq's founders and also the driving force behind the Mirage sampler, played the dominant role in SID's development.)

Atari knew that if it wanted to encroach on Commodore's turf, it needed something better than SID. The company designed an extremely ambitious sound chip, code-named Amy, that was supposed to be a "Commodore killer." But Amy was a temperamental girl, and Atari was never able to get good enough yields to manufacture the chips economically. An engineer suggested putting a MIDI port on the machine, so it could drive an external sound generator; then they wouldn't have to worry about an onboard sound chip. Although this solved the immediate Amy problem, it also turned out to be a fortuitous decision: Atari dominated the European music-making market for years, and a significant chunk of the US market as well. To this day, a hardy band of musicians still use their aging ST and TT series Atari computers because of the exceptionally tight MIDI timing – a result of integrating MIDI into the core of the operating system.
  5. How to Separate Sound from Performance
by Craig Anderton

It used to be that recording a guitar part set its sound in stone. Sure, you could add EQ, reverb, or other processors while mixing, but they provided variations on a theme, not an entirely different theme. But now, if you wish you'd recorded through a Marshall stack instead of a Vox AC30—no problem.

There are two main options for changing your sound after the fact: traditional re-amping, and virtual re-amping with software plug-ins. Although re-amping has been around for a while, the quest for increased sound quality has spawned new re-amping solutions. Furthermore, re-amping isn't just for guitar any more. Playing back drums, vocals, synthesizers, and other instruments through guitar amps yields entirely new tonalities.

But before proceeding, I'd like to thank Peter Janis of Radial Engineering (who make boxes for re-amping, among other things) for his research on the history of re-amping, and for contributing several useful tips. Ready to re-amp? Let's rock.

HARDWARE RE-AMPING

"Classic" re-amping was done originally with mixers, recorders, and amps, and applied mostly to guitars. This remains a common technique, and even virtual re-amping may incorporate a bit of hardware-based re-amping. The basic idea is to record the dry guitar sound while monitoring through an amp so the guitarist can get the right "feel" (and if feedback is a component of the sound, suitable sustain characteristics). Typically, you record the amp as well because it might end up being the sound you want. If not, the next re-amping step is to send the dry guitar track out from the recorder and into an amp, set the amp sound as desired, then record the "re-amped" sound.

As the recorder's signal will likely be line level, applying it to a standard guitar amp will really overload the sucker and create some major distortion. If that's not what you want, you'll need to pad down the signal feeding the amp to something approximating standard guitar levels. Also note that re-amping makes sense for any instrument (especially synthesizer) that's recorded direct. Running the track through an amp, and miking the amp and/or room ambience, can impart a new sense of "space."

RE-AMPING IN THE REAL WORLD

Producer/engineer Dave Bottrill (who uses a Radial JD7 for re-amping; see Fig. 1) says that "It's now my standard practice to record a DI along with the rest of the cabinets or combos I record. Invariably one of the songs I am working on for a record needs some kind of re-amping, and on the Godsmack CD, it proved invaluable when we discovered some faults with the signal path when we recorded some of the guitars. There were analog distortions along the line, and we just took the DI and sent it back through the same path (luckily we hadn't torn down the setup) and were able to recreate the sound exactly without the line crackle.

Fig. 1: Radial Engineering's JD7 does re-amping, but can also distribute the signal to multiple amps or effects systems.

"Creatively, a re-amping box allows me to send all kinds of signals through my stomp box collection with the correct impedance. For example, I love the sound of drums through my old Electro-Harmonix Micro Synth pedal."

THE LOADING ISSUE

To capture a characteristic guitar sound, you need to record the same thing you would hear if the guitar connected directly to an amp.
Although many people like the "high-fidelity" sound of a guitar feeding an ultra-high impedance input, others prefer the slight dulling that occurs with a low-impedance load (e.g., around 5-100 kohms) as found with some effects boxes, older solid-state amps, etc. This is especially useful when the guitar precedes distortion, as distorting high frequencies can give a grating, brittle effect that resembles Sponge Bob on helium. There are several ways to load down your guitar:

• Find a box that loads down your guitar by the desired amount, then split the guitar to both the box and the mixer or audio interface's "guitar" input.

• If your recorder, mixer, or sound card has a guitar input, try using one of the regular line level inputs instead.

• Use a box with variable input impedance (e.g., the "drag control" on Radial products).

• Create a special patch cord with the desired amount of loading by soldering a resistor between the hot and ground of either one of the plugs. A typical value would be 10 kohms.

• If you're going through host software with plug-ins, insert an EQ and roll off the desired amount of highs before feeding whatever produces distortion (e.g., an outboard amp that feeds back into the host, or an amp simulator plug-in). However, this doesn't sound quite as authentic as actually loading down the pickup, which creates more complex tonal changes.

Note that you need to add this load while recording, as it's the interaction between the pickup's inductance and load that produces the desired effect. Once the dry track is recorded, the pickup is out of the picture.

But just because we have a signal doesn't mean we can go home and collect our royalties, because this signal now goes through a signal path that may include pedals and other devices. As guitarists are very sensitive to the tone of their rigs, even the slightest variation from what's expected may be a problem. For example, the transformers in some direct boxes or preamps color the sound slightly, so the guitarist might want to send the signal through the transformer, even though transformer isolation is usually not necessary with a signal coming from a recorder.

VIRTUAL RE-AMPING

Plug-ins and low-latency audio interfaces have opened up "virtual re-amping" options. Guitar-oriented plug-ins include IK Multimedia AmpliTube, Native Instruments Guitar Rig, Line 6 POD Farm, Scuffham Amps, Waves G|T|R, iZotope Trash, Peavey ReValver, Overloud TH2, McDSP Chrome Tone, and others.

The concept is similar to hardware-based re-amping: record the direct signal to a track, and monitor through an amp. The key to "virtual re-amping" is that the host records a straight (dry) guitar signal to the track. So, any processing that occurs depends entirely on the plug-in(s) you've selected; you can process the guitar as desired while mixing, including changing "virtual amps." When mixing, you can use different plug-ins for different amp sounds, and/or do traditional hardware re-amping by sending the recorded track through an output, then into a mic'ed hardware amp.

Using plug-ins has limitations. If feedback is part of your sound, there's no easy way to create a feedback loop with a direct-recorded track. This is one reason for monitoring through a real amp, as any effect the amp has on your strings will be recorded in the direct track. Still, this isn't as interactive as feeding back with the amp that creates your final sound.
And plug-ins themselves have limitations; although digital technology does a remarkable job of modeling amp sounds, picky purists may pout that some subtleties don't translate well. Furthermore, monitoring through a host program demands low-latency drivers (e.g., Steinberg ASIO, Apple Core Audio, or Microsoft's low-latency drivers like WDM/KS). Otherwise, you'll hear a delay as you play. Although there will always be some delay due to the A/D and D/A conversion process, with modern systems total latency can often be under 10ms. For some perspective, 3 ms of latency is about the same delay that would occur if you moved your head 1 meter (3 feet) further from a speaker—not really enough to affect the "feel" of your playing.

If latency is an issue, there are other ways to monitor, like ASIO Direct Monitoring. Input signal monitoring (often called zero-latency monitoring) is essentially instantaneous; the signal appearing at the audio interface input is simply directed to the audio interface out, without passing through any plug-ins. With this method you can also feed the output to a guitar amp for monitoring, while recording the straight signal on tape.

In any event, regardless of whether you use hardware re-amping, virtual re-amping, or a combination, the fact that the process lets you go back and change a track's fundamental sound without having to re-record it is significant. If you haven't tried re-amping yet, give it a shot—it will add a useful new tool to your bag of tricks.

Background: A History of Re-Amping
by Peter Janis, Radial Engineering

As with so many aspects of audio, it's hard to pin down exactly when a technique was first used, and that goes for re-amping. While Reamp made the first commercial box designed expressly for this purpose, engineers had already been creating re-amping setups for years. Recording historian Doug Mitchell, Associate Professor at Middle Tennessee State University, comments that "The process of 're-amping' has actually been utilized since the early days of recording in a variety of methods. However, the actual process may not have been referred to as re-amping until perhaps the late '60s or '70s. From the early possibilities of recording sound, various composers and experimenters utilized what might be termed 're-amping' to take advantage of the recording process and to expand upon its possibilities.

The first commercially available box for re-amping has been tweaked and revised over the years.

"In 1913 Italian Futurist Luigi Russolo proposed something he termed the 'Art of Noises.' Recordings of any sound (anything was legitimate) were made on Berliner discs and played back via 'noise machines' in live scenarios and re-recorded on 'master' disc cutters. This concept was furthered by Pierre Schaeffer and his 'Musique Concrète' electronic music concept in the '30s and '40s. Schaeffer would utilize sounds such as trains in highly manipulated processes to compose new music ideas. These processes often involved the replaying and acoustic re-recording of material in a manipulated fashion. Other experimenters in this area included Karlheinz Stockhausen and Edgard Varèse.

"With the possibilities presented by magnetic recording, the process of what might be termed re-amping was utilized in other 'pop' music areas. Perhaps the first person to take advantage of this was Les Paul. His recordings with Mary Ford often utilized multiple harmonies all performed by Mary. Initially these harmonies were performed via the re-amping process.
Later, Les convinced Ampex to make the first 8-track recorder so that he might utilize track comping to perform a similar function. Les is also credited with the utilization of the re-amping process for the creation of reverberant soundfields, by placing a loudspeaker at one end of a long tunnel area under his home and a microphone at the other end. Reverberation time could be altered with the placement of the microphone with respect to the loudspeaker playing back previously recorded material.

"Wall of sound pioneer Phil Spector is perhaps the most widely credited for the use of the re-amping process, and because of his association with the Beatles, is potentially regarded today as the developer of the process. However, Phil was actually refining a process and exploring its possibility for use in rock music.

"'Re-amping' is often used in film sound design as well. In order for sounds recorded in a post-production environment to match the scene, it is common to re-record them utilizing a re-amping procedure. In film sound this process is also termed 'worldizing.'"

Bob Ohlsson of Motown fame, who has worked with Stevie Wonder, Marvin Gaye, the Kinks, and many others, had another perspective: "I began doing it in 1968, shortly after we got 16-track machines, because for the first time we could separately record direct guitars, clavinets, and electric pianos. I had never heard of it being done and am pretty sure I was the first to try it at Motown, but I can't imagine lots of others weren't doing the same thing. It seemed like a very obvious thing to do in a world where electric instruments were taken direct primarily to cut down on bleed rather than for tonal quality."

And the late Roger Nichols was another early adopter. "I started using the process in 1972, when I built the re-amper we used on the first Steely Dan album, and almost every one after that. We used it to play direct guitar tracks back through an amp. We were going through a lot of amps; the speakers would get tired or the tubes would melt or something during a night of guitar overdubs.

"We would go through one amp to make sure we got the sound we wanted, and then when the right guitar and settings were locked in, we recorded the direct signal and let the amp rest. After the part was completed, we ran the signal back through the guitar amp and it only had to last long enough to print the results to tape. I still have the box around here somewhere." (According to Jonathan Little of Little Labs, Jeff Harris at the Arizona Conservatory has one of Roger's early boxes.)

In 1980 Jensen Transformers introduced the JT-DBE transformer, and in the application note, a paragraph discusses using this transformer to convert low impedance balanced lines to guitar levels. In the 1980s Whirlwind also produced a device that could accommodate low-to-high conversion using a transformer.

The Multi Z PIP is only one of several products from Little Labs that does re-amping.

In 1994, Reamp commercialized the process by producing a box that incorporated a transformer and a volume control. This was a follow-up to the original box that Reamp founder John Cuniberti used on sessions with Steve Vai, and allowed the user to adjust the volume at the amplifier instead of at the mix position.
In 1996, the first generation Radial JDI was introduced, and was designed with re-amping, among other applications, in mind; the Radial JD7 Injector, released in 2001, offered a balanced output and input to allow re-amping and subsequent re-distribution of signals to multiple amplifiers. Furthermore, the IBP from Little Labs, while intended mainly to provide phase compensation for signals, also provides re-amping functions, as does their Multi Z PIP.

Acknowledgment: Thanks to Frank Wells at Pro Sound News and Mitch Gallagher for helping us track down these folks. — Peter Janis

Virtual Re-Amping in Sonar X3

Here's the procedure for re-amping in Sonar X3; note that the procedure is similar for other DAWs.

Fig. 5: Enabling input echo means that the input signal will be processed by whatever plug-ins are inserted in the track.

1. Check that there's no feedback loop from the host output back to the input. To be safe, turn down your monitor speakers.
2. Enable the driver for the input you're going to use for plugging in your guitar (under Edit > Preferences > Audio > Devices).
3. Create an audio track, then from the I/O Input field, select the appropriate hardware input that you enabled in the previous step.
4. Turn on the Input Echo function (in the Track view, click on the button to the right of the Record button). It will glow blue; see Fig. 5.
5. Enable the track's Record button. You should hear your input source.
6. Insert the plug-in(s) of your choice into the FX field. Your input source will play through the plug-in . . . start recording!
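One caveat with input echo: you're monitoring through the computer, so buffer size matters. The arithmetic behind the latency figures quoted earlier is simple enough to sketch (the 128-sample buffer and 44.1 kHz rate are just example values):

```python
def monitoring_latency_ms(buffer_samples, sample_rate_hz, stages=2):
    """Rough round-trip latency estimate: the buffer is filled on the
    way in and again on the way out (stages=2), ignoring converter
    and driver overhead, so real-world figures run a bit higher."""
    return 1000.0 * buffer_samples * stages / sample_rate_hz

# Example: 128-sample buffer at 44.1 kHz
print(f"{monitoring_latency_ms(128, 44100):.1f} ms")  # ~5.8 ms

# The article's rule of thumb: ~3 ms of delay is about 1 meter of
# air, since sound travels ~343 m/s and 0.003 * 343 is roughly 1 m.
```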
  6. Take a look at what you've been hearing
By Craig Anderton

A spectrum analyzer is a tool that can help analyze a track or mix, and reveal frequency or dynamics anomalies. It does this by dividing the audible frequency spectrum into hundreds or even thousands of bands (also called "windows") using a process called the Fast Fourier Transform (FFT), then displaying the level of each band in a graph or 3D display. This feedback is invaluable in training you to correlate what you hear with your ears to hard data about frequency response and amplitude. Most digital audio editing programs (Fig. 1), and even some multitrack hosts (Fig. 2), now include software spectrum analysis tools.

Fig. 1: When you call up the parametric EQ in iZotope's Ozone 5 mastering plug-in, you'll see a superimposed real-time spectrum and the response curve you've created (the red line).

Fig. 2: Sonar X3's ProChannel EQ has a "fly-out" for a more detailed view, which also includes a background spectrum analyzer so you can see the results of changes made with the EQ.

However, the object is not to aim for a flat response; generally, the highs trail off gently, while what happens in the bass depends on the genre of music. (Interestingly, the Har-Bal mastering EQ program includes representative reference curves for different types of music.) For example, you'll see more bass on a dance mix with a prominent kick drum. A very uneven average bass response may indicate acoustics-related problems — either from room resonances when miking acoustic sources, or from mixing if you're using EQ to compensate for room anomalies of which you're not aware.

Spectrum analyzers are also invaluable for analyzing the spectral response of well-mixed, well-mastered recordings. Compare their curves to yours and see where the differences lie. Differences are not necessarily "bad"; it depends on the music and style. But if, for example, your mixes sound muddy and other CDs don't, investigate what's happening in the bass and lower midrange.

CUSTOMIZING SPECTRUM ANALYSIS RESPONSE

Some digital audio editors let you customize the way a spectrum analyzer displays data, as well as alter its analysis process. Here are some common options (Fig. 3); a short code sketch at the end of this article shows how the first three fit together.

Fig. 3: Sony Sound Forge allows for multiple customization options for its spectrum analyzer display.

FFT size determines the number of samples used for each analysis pass, and therefore the number of bands. Higher numbers give better frequency resolution, but require more time to compute the display. When you're looking for frequency anomalies, use a high value, like 16K or 32K. This catches very narrow peaks that you might not see with smaller FFT sizes.

FFT overlap sets the amount by which the analysis bands overlap. Higher values (50% and above) provide a more accurate analysis, but increase display computation time.

Smoothing window determines the analysis algorithm. Different algorithms trade off sharpness of peaks and leakage between neighboring bands (i.e., data in one band influences the ones next to it). A Triangular smoothing window is a compromise between peak sharpness and leakage. Rectangular provides accurate drawing of peaks but high leakage, and Blackman-Harris has little leakage, but the peaks look more rounded.

3D vs. 2D shows the information in different ways. 2D shows amplitude vs. frequency, while 3D displays a series of "slices" within the selected region to relate time to frequency and amplitude.

Range, reference, etc. are parameters that let you adjust the scale, zoom in on specific areas of the graph, change the 0dB reference point, etc.
Linear vs. log response is best set to Log for audio work, as the curve more closely approximates how your hearing responds.

Different programs do spectrum analysis differently. Some take (or even save) "snapshots," some take an average reading over time, and some show what's happening in real time. A few programs let you compare the input and output spectrum in relation to a signal processing function. Regardless of a spectrum analyzer's particulars, the bottom line is that they all present useful information about your mix. With practice, someday you'll probably be able to say "This mix needs a slight boost at 12kHz, a major cut around 350Hz, and a minor notch at 50Hz." Until then, you can use spectrum analysis to learn more about your mixes.
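As promised above, here's a minimal sketch of how FFT size, overlap, and the smoothing window interact, in Python with NumPy and SciPy (the parameter values are arbitrary examples, not recommendations):

```python
import numpy as np
from scipy.signal import get_window

def magnitude_spectrum_db(x, sr, fft_size=16384, overlap=0.5,
                          window="blackmanharris"):
    """Average overlapping windowed FFT frames into one magnitude
    spectrum in dB, the way an analyzer's 'average' mode works.
    Returns (frequencies in Hz, levels in dB)."""
    win = get_window(window, fft_size)
    hop = int(fft_size * (1.0 - overlap))  # 50% overlap -> half-frame hop
    frames = []
    for start in range(0, len(x) - fft_size + 1, hop):
        frame = x[start:start + fft_size] * win
        frames.append(np.abs(np.fft.rfft(frame)))
    mag = np.mean(frames, axis=0) / (fft_size / 2)  # rough normalization
    freqs = np.fft.rfftfreq(fft_size, d=1.0 / sr)
    return freqs, 20 * np.log10(mag + 1e-12)        # avoid log(0)

# Example: analyze 2 seconds of a 1 kHz tone at 44.1 kHz
sr = 44100
t = np.arange(2 * sr) / sr
freqs, db = magnitude_spectrum_db(np.sin(2 * np.pi * 1000 * t), sr)
print(freqs[np.argmax(db)])  # peak lands at ~1000 Hz
```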
  7. It's a whole new world for DJs - and there's a whole new world of DJing options
by Craig Anderton

If the term "DJ" makes you think "someone playing Barbra Streisand songs at my cousin's wedding," then you might think gas is $1.20 a gallon, and wonder how Ronald Reagan will turn out as president. DJing has changed radically over the past two decades, fueled by accelerating world-wide popularity, technological advances, and splits into different styles.

While some musicians dismiss DJs because they "just play other peoples' music, not make their own," DJing demands a serious skill set that's more like a conductor or arranger. Sets are long, and there are no breaks—you not only have to pace the music perfectly to match the audience's mood, but create seamless transitions between cuts that are probably not at the same tempo or key. On top of that, DJs require an encyclopedic knowledge of the music they play so they can always choose the right music at the right time, and build the dynamics of the music into an ongoing series of peaks and valleys—with each peak taking the audience higher than the previous one.

What's more, the bar is always being raised. DJs are no longer expected just to play music, but to use tempo-synched effects, and sometimes even trade off with other DJs on the same stage or integrate live musicians—or play instrumental parts themselves on top of what they're spinning. Quite a few DJs have gotten into not just using other tracks, but creating their own with sophisticated software DAWs.

Let's take a look at some of the variant strains of DJing. These apply to both mobile DJs, the closest to the popular (mis)conception of the DJ, as they typically bring their own sound systems and music, and play events ranging from art openings to weddings; and club DJs, who are attractions at dance clubs and renowned for sophisticated DJing techniques (like effects and scratching).

VINYL AND TURNTABLES

This is where it all started, where DJs have to beat-match by listening carefully to one turntable while the other is spinning, line up the music, then release the second turntable at the precise moment to sync up properly with the current turntable, and crossfade between the two. Vinyl is where scratching originated, by moving the record back and forth under the needle. Vinyl is still popular among traditionalists, but there are many more alternatives now.

The Stanton STR8-150 is a high-torque turntable with a "skip-proof" straight tone arm, key correction, reverse, up to 50% pitch adjustment, and S/PDIF digital outputs.

DJING WITH CDS

As CDs replaced vinyl, DJs started looking for DJing solutions involving CDs. Through digital technology, it became possible to DJ with CDs, as well as use vinyl record-like controllers to simulate the vinyl DJ experience (scratching and beat-matching) with CDs. Originally frowned on by traditional DJs, CD-based DJs developed their own skill set and figured out how to create an end result with equal validity to vinyl.

THE OTHER MP3 REVOLUTION

As MP3s replaced CDs, DJs again followed suit. But this time, the full power of the computer started being brought into play. Many MP3-based DJing packages now combine hardware controllers with computer programs that not only play back music, but include effects and allow seeing visual representations of waveforms to facilitate beat-matching. What's more, effects often sync to tempo and map to controls, so the DJ can add these effects in creative ways that become part of the performance.
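Both of those tricks (beat-matching with a pitch fader, and tempo-synced effects) come down to the same simple arithmetic. A quick sketch in Python, where the BPM values are arbitrary examples:

```python
def pitch_fader_percent(source_bpm, target_bpm):
    """Pitch adjustment (in %) needed to beat-match one record
    to another, as on a turntable's pitch fader."""
    return (target_bpm / source_bpm - 1.0) * 100.0

def synced_delay_ms(bpm, note_fraction=0.25):
    """Delay time for a tempo-synced effect; note_fraction=0.25
    means a quarter note (60,000 ms per minute / BPM)."""
    return (60000.0 / bpm) * (note_fraction / 0.25)

print(f"{pitch_fader_percent(126, 128):+.2f}%")  # ~+1.59%
print(f"{synced_delay_ms(128):.1f} ms")          # ~468.8 ms quarter note
```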
Native Instruments' Traktor Kontrol is designed specifically as a match for their Traktor DJing software.

MP3-based DJing also meant that DJs were freed forever from carrying around records or CDs, as they could store gigabytes of music on the same laptop running the DJ program itself.

ABLETON LIVE: THE DAW FOR DJS

This article isn't really about mentioning products, but in this case, there's no other option: Live occupies a unique position as a program that straddles the line between DAW and DJ tool. It's hard to generalize about how people use Live, because different DJs have very different approaches. Some bring in complete songs and use Live's "warping" capabilities to beat-match, then crossfade between them on the fly while bringing in other music; others construct entire compositions out of loops, which they trigger, solo, mute, and arrange in real time.

Live's "Session View" is the main aspect of the program used to create DJ sets out of loops and other digital audio files.

Although a runaway favorite of DJs, Live isn't the only program used by DJs—Propellerhead Reason, Sony Acid, and Apple Logic are three other mainstream programs that are sometimes pressed into service as DJ tools.

NONE OF THE ABOVE: OTHER DJ TOOLS

A variety of musical instruments are also used for DJing. Although the best-known are probably Akai's MPC-series beatboxes, people use everything from sampling keyboards to Avid's Venom synth in multitimbral mode to do, if not traditional DJing, beats-oriented music that is closer to DJing than anything else.

Akai's MPC5000 is a recent entry in the MPC series, invented by Roger Linn, which popularized the trend of DJs using "beatbox"-type instruments.

I've even used M-Audio's Venom synthesizer to do a DJ-type set by calling up Multis and soloing/muting/mixing drum, bass, and arpeggiator patterns, and playing lead lines on top of all that. Here's a video whose soundtrack illustrates this application.

If you haven't done any DJing, it's fun—and if you haven't heard good DJ sets, internet radio is a great place to find them being played out of Berlin, Paris, Holland, Bangkok, and other musical hotbeds. But be forewarned: You may find a brand new musical addiction.
  8. Unfortunately, Sometimes It's Not the Computer that's Stupid...It's Me
by Craig Anderton

Usually, I like to write about some clever trick. But this time, I want to explore stupidity, particularly because stupidity breeds stupidity: Sometimes a stupid mistake will make you commit an even stupider mistake, like re-installing your computer's operating system when it isn't really necessary. Here's a chance to learn from my stupid mistakes, and those of others.

GETTING LOOPED ISN'T JUST ABOUT DRINKING

I was trying out a new interface. I booted up my host program, loaded a sequence, hit play, and...got blasted across the room by a loud buzz. Latency problem? Eventually I realized that the loop left and right locators were on top of each other, and loop was enabled. I separated the loop locators, and all was well.

DON'T TRY, TRY AGAIN

One definition of insanity is doing the same thing over and over but expecting different results. If something goes wrong, and you try to fix it but it still doesn't work, don't try the same fix again. One friend tried to install a program and it wouldn't install. So he tried installing it again. And again, with the same results. Then he tried uninstalling other programs in case there was a conflict. Finally, he called me in desperation.

Fig. 1: Installing a program? Uncheck any boxes that enable real-time protection—but only after you've downloaded the program you want to install from the internet.

I asked if he'd forgotten to disable his anti-virus software during installation (Fig. 1). There was a very long pause at the other end of the phone.

ASK THE INTERNET

I can't tell you how many problems I've resolved by searching on "[product name] 'known issues' problem [problem symptoms]." Well actually, I can: countless. It never hurts to ask the internet if there's a solution to your problem before graduating to more drastic measures.

SO THAT'S WHY THEY'RE CALLED "READ ME" FILES

Here's an oldie-but-goodie, but the lesson remains valid today: I wanted to try the ReValver SE guitar amp with SONAR 5. So I put a signal through it, and got the sound of a blue jay stuck in a blender, processed by ring modulation. I emailed Cakewalk support, and it turned out that at that time, ReValver wasn't compatible with SONAR's 64-bit double-precision audio engine—as stated in the Read Me file. But the stupid mistake wasn't not reading the Read Me file; I had read it when I first installed the program. The stupid mistake was not checking the Read Me file again before contacting tech support and wasting their time.

KILLING A COMPUTER WITH KINDNESS

I thought I'd do my daughter a big favor, and upgrade her computer with a shiny new graphics card. I turned the power back on, and booted to a...black screen. Then I remembered I hadn't disabled the onboard graphics. Once that was disabled, it worked great. The same thing often holds true for Windows sound cards: Disable any onboard sound to avoid conflicts.

UPGRADES GONE WILD

Congratulations! You upgraded your cheapo office supply store computer with two new hard drives, a hot graphics card, DVD-ROM writer, lots more RAM, DSP board, and a couple of new interface cards. And your computer's great—until after about 12 minutes, when it starts acting flaky and does spontaneous reboots. Hmm...did you upgrade the power supply too, so there's enough juice to feed all this stuff? Ooops.

THE RIGHT WAY TO DO WINDOWS DRIVER UPDATES

I've done driver updates wrong so many times, let's just proceed directly to how to do them right.
A driver is a software routine that provides a bridge for data between a piece of hardware, like an audio interface or graphics card, and your computer's innards. Drivers are updated often, both to improve performance and to eliminate conflicts that either didn't exist or weren't noticed when a product was first introduced. It's important to keep on top of driver updates, and not just for your audio interface. For example, if you use a PCI sound card and PCI graphics card, there could be conflicts between the two that a driver upgrade will resolve. But there are two important cautions. First, follow any updating instructions to the letter. For example, with USB devices, sometimes the device needs to be connected before you install the driver, and sometimes it needs to be connected only at a specific point during the installation process. Failing to observe the proper order could cause dire consequences, like the meltdown of society as we know it. Fig. 2: Know how to roll back a driver just in case a new driver introduces problems. Second, sometimes a new driver will solve old problems, but introduce new ones. Make sure you know how to roll back a driver (e.g., in Windows, using the Driver Rollback feature in Device Manager; see Fig. 2) before you install a new one, and never install multiple new drivers at once—try one, then test, then the next, then test, etc. NATURE ABHORS A VACUUM, AND SO DOES A COMPUTER Help keep a computer clean by not letting dust get into it—throw a plastic cover over it when not in use. If it does get dirty inside, use compressed air to blow out the dirt. Do not open up the case and use a vacuum cleaner; vacuums are designed for Big Things like rooms, not delicate Little Things like computers. SO THAT'S WHY THEY'RE CALLED BETA DRIVERS I have a friend—let's call him, oh, Craig—who, because of his gig, needs to stay on top of the latest upgrades. He's smart enough to know that not updating drivers can lead to problems, but not smart enough to know that updating with beta drivers is not a good idea—as he found out when he tried beta drivers for his graphics card, and every time he moved a window in his sequencer, there was a symphony of little clicks. Now I—I mean, Craig—knows better. He also knows how to roll back drivers to previous versions. IT PAYS TO LISTEN When your computer is trying to tell you something by making a strange noise, don't be stupid and pretend it will go away. Is it quieter than normal? Maybe a fan died, so your CPU might be next. Is there a grinding noise? The bearings on a hard drive might be going—back up immediately. Your ears can be valuable early warning indicators if you pay attention. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  9. Music Software? It's Not Just for Normal People Any More by Craig Anderton Yes, you can use a digital audio editor for digital audio editing. But why not use it to create a beatbox? Or emulate variable tape speed controls? Or to find more tax deductions? Okay, so it doesn't really do the bit about taxes. But today's software often does a lot more than advertised. Want proof? Keep reading. 1. VARIABLE SPEED ANALOG TAPE IN A DIGITAL WORLD Fig. 1: Sony Sound Forge is one of many programs that allow pitch shift processing. Back in the days when rock was young, payola was rampant, and anything a groupie gave you could be cured by a shot, a common hitmaking trick was to speed up the tape by a percent or two (or in the case of Gary Glitter's "Rock and Roll Part 2," considerably more). It tightened the timing, brightened up the timbre, and made the vocalist sound a bit more youthful. Today's host programs don't have a variable speed control any more, but two-track editors like Audition, Sound Forge, and Wavelab do - it's just disguised. Go right past the elaborate time-stretch/pitch-shift algorithms, and dial in "bend pitch" (Fig. 1). This provides the effect we want: raising pitch without preserving duration. Bump it up a percent or two, and recreate that fabulous hit sound of yesteryear; a 2 percent speed-up raises pitch by about a third of a semitone (12 x log2(1.02) is roughly 0.34). Or if you want to slow things down, you can do that too. 2. SUPER BANDPASS RESPONSE Fig. 2: Create a bandpass response with skirts that just don't stop. Remember those telephone-type effects you'd hear on vocals? Today's parametrics are a thing of beauty, but there's a problem with the bandpass response: It rises out of a flat response, so the bass and treble are still there. What do you do if you want a bandpass response with a rolloff that just doesn't know when to stop? You can throw in some high-pass and low-pass filtering to trim the highs and lows - but there's an easier way. Adjust the EQ on the track you want to "bandpassify" for an approximation of the desired effect. Now clone that track. Make sure EQ is disabled for both tracks, then throw one of the tracks out of phase. Adjust the levels so that the tracks cancel. Now enable the bandpass filter on one of the tracks (Fig. 2). You'll hear the highs and lows magically melt away, leaving only the bandpass peak; what remains is the difference between the EQed and dry tracks, and that difference is just the boost itself. Vary the filter frequency, then do a little tweaking, and you can also do some really convincing wah-wah pedal effects. 3. THE REX FILE DATA SCRAMBLE Fig. 3: Propellerhead's Reason is the grand-daddy of REX file coolness. REX files slice up a digital audio waveform into little pieces, then play these slices back sequentially. Why? So you can stretch tempo: Slow down the tempo and the slices play further apart, speed up the tempo and they play closer together. What triggers these slices is a companion MIDI file. But hey, it's just MIDI data, so we can move pieces around, copy data and stack it, apply randomization algorithms, or whatever we feel like doing (Fig. 3). If nothing else, this is one way to take the boredom element out of using loops: Each iteration can sound slightly different. It takes a little work to associate which MIDI note triggers which slice, but once you have that figured out, you're good to go. Click here for an audio example of the REX scramble technique. 4. THE LOOP FACTORY We all know how cool acidized files are. Well, most of us do. Some still struggle with programs that aren't really that adept at handling acidized loops.
Sure, they'll load okay - but you can't edit them, which is often crucial, because a lot of commercially available loop CDs are pretty sloppy about acidizing loops (the end result: try to stretch them, and they sound horrible). So if you want to create loop files that work at different tempos or keys, you're hosed. Or are you? Not if you have Sony Acid Pro. Load the loop into an Acid project (I find it most convenient to load it into a single track), then if needed, use Acid's toolset to edit the loop points for the best stretching characteristics. Next, copy the loop multiple times on the same track. Insert a tempo change for the desired tempo before each loop, and/or a key change if you want to change keys. Then go Edit > Export Loops (Fig. 4). This saves each loop into the folder of your choice as a WAV file at the desired tempo and key (and the loops are acidized, too). Now you can import these into your acidizationally-challenged host, and rock on. Fig. 4: Use Sony Acid Pro to stretch loops to different tempos, then save them at those tempos for use in programs that don't recognize acidized files. 5. MAKE YOUR OWN BEATBOX WITH A WAVEFORM EDITOR Fig. 5: You can use a digital audio editor to synthesize drum sounds. The raw materials for those old analog beatboxes were damped sine waves and noise, with transient envelopes. As it so happens, some digital audio waveform editors (e.g., Wavelab, Audition, and Sound Forge) can synthesize those exact types of sounds. Of these, Wavelab works really well. Go Tools > Audio Signal Generator and you'll find waveforms galore, as well as the means to shape frequency, level, and vibrato (Fig. 5). 6. CUSTOMIZE YOUR REASON COMBINATORS Fig. 6: Customize your Combinators for coolness. You can create your own skins for Reason's Combinator device. Why bother, you might ask? Because Combinators are really cool, so much so that I use a lot of them...and when you're scrolling around the rack, having distinctive skins makes it easy to spot the one you want. When you load a skin, all that remains of the Combinator are the knobs, buttons, wheels, and whatever names you gave them. The rest is up to you (Fig. 6). All you need to do is right-click (people of the Macintosh tribe should Ctrl-click) on the Combinator, choose Select Backdrop, navigate to a suitable 758 x 134-pixel JPEG or Photoshop graphic, and load it. Done! 7. TURN RECYCLE INTO THE PERCUSSIVATOR Fig. 7: ReCycle is intended to make loops, but it can also do some pretty bizarre signal processing. I sure like pulsing, rhythmic effects. Give me a vocoder and a drum machine for a modulator, and I'm a happy guy. But sometimes you have sounds that refuse to be rhythmic, like a power chord or a held organ note. Yes, you can process it through gating or vocoding to impart synchro-sonic, rhythmic characteristics, but with ReCycle, you can build rhythmic characteristics into the sample itself. Just load the sample into ReCycle, and place slices that create a rhythm. For example, you could place a slice every eighth note for a constant eighth-note rhythm - but we can get more creative than that, like adding a flurry of 16th-note divisions at the end of a power chord, or syncopations (Fig. 7). Set the attack and decay parameters (decay would typically be a few hundred milliseconds) to give the desired amount of percussification, then save it as a REX file if your host supports REX files.
Or, set the tempo to that of your host's project, and export it as a WAV or AIFF file you can import directly into the host (remember to first go Process > Export as One Sample, or you'll save each slice individually). Click here to hear a percussified power chord that's gotten some rhythm, courtesy of ReCycle. 8. MASTER YOUR TUNES IN REASON Fig. 8: Use Reason's MClass mastering effects with standard audio files. Of course you like those MClass mastering effects introduced in Reason 3.0. Now if only Reason had an external input so you could process your files through the effects... It doesn't, but here's the next best thing. Treat the tune as a single sample, and load it into the NN-XT sampler (note that the NN-XT doesn't stream from disk, so any sample has to fit into the available RAM). Now create a one-note sequence (draw the note at C3) that triggers the sample for as long as the tune lasts, and feed the NN-XT output into the MClass processors (Fig. 8). Once you have the sound exactly as desired, render to disk using the File > Export Song As Audio File command. 9. CREATE NASTY VINYL SCRATCHES AND NOISE WITH A DIGITAL AUDIO EDITOR Fig. 9: You don't always need a plug-in to create vinyl-type effects. You gotta love some of those hip-hop drum samples that were taken from funky old vinyl. But what if you have a pristine sample and want to mess it up? There are plug-ins that do vinyl effects, with one of the best ones (because it's free!) being iZotope's Vinyl. But you can create the precise type of noise and scratchiness you want with Wavelab or other digital audio editors. Basically, use the program's signal generating options (see tip #5 above) to throw in some noise, a low-level 60 Hz sine wave if you want some hum, some heavily low-pass filtered noise for rumble, and for the crowning touch, draw in some scratches with the pencil tool. Drawing a scratch is easy: Just create a spike where you want a scratch (Fig. 9). And for a really authentic sound, have the scratch repeat about every 1.8 seconds if your "virtual record" is spinning at 33-1/3 RPM; that's one revolution (60 seconds divided by 33.33), which is how often the stylus passes a scratch that cuts across the grooves. (For a scripted take on this whole recipe, see the sidebar at the end of this article.) Click here to listen to an example of vinyl-type noise generated with this technique. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
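SIDEBAR: ROLLING YOUR OWN VINYL NOISE If you'd rather script tip #9's noise bed than draw it, the recipe translates directly into a few lines of code. Here's a minimal sketch in Python with NumPy and SciPy; the levels, the 10-second length, and the file name are arbitrary starting points rather than anything magic, so tweak to taste.

import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

sr = 44100                                    # sample rate
dur = 10.0                                    # length of the noise bed in seconds
t = np.arange(int(sr * dur)) / sr

surface = 0.01 * np.random.randn(t.size)      # low-level hiss for surface noise
hum = 0.005 * np.sin(2 * np.pi * 60 * t)      # 60 Hz hum
b, a = butter(2, 40 / (sr / 2), btype="lowpass")
rumble = 0.05 * lfilter(b, a, np.random.randn(t.size))   # heavily low-passed noise

bed = surface + hum + rumble
rev = 60.0 / (100.0 / 3.0)                    # one revolution at 33-1/3 RPM = 1.8 seconds
for k in np.arange(0.5, dur, rev):            # a scratch "click" once per revolution
    bed[int(k * sr)] += 0.6                   # single-sample spike = pencil-tool scratch

wavfile.write("vinyl_bed.wav", sr, (bed * 32767).clip(-32768, 32767).astype(np.int16))

Import the resulting WAV into your editor, mix it under your pristine sample, and dirty to taste.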
  10. Here's the Article that Tells you What Not to Do By Craig Anderton Whoa! That was some bad note. So naturally, you re-record the part. But are you paying attention to the other mistakes—the ones that involve the recording process itself? The following mistakes can tear your tone in two, so here's a word to the wise: Avoid them. 1. Mixing direct and miked signals without compensating for delay Sound travels at about one foot per millisecond, while electrons move a whole lot faster. So the miked signal arrives at your mixer at the speed of sound, while the direct signal arrives at close to the speed of light. If the mic is one foot away from your speaker, zoom in on the tracks and shift the miked signal about a millisecond earlier in time until the two line up (Fig. 1). You'll hear a much fuller, punchier tone. This is particularly important with bass. (For the arithmetic in code form, see the sidebar at the end of this article.) Fig. 1: The green track is the dry signal, and the blue, the miked signal. The upper view shows their original time relationship. The lower view shows the same tracks after being time-aligned. 2. Forgetting to check for mono compatibility You love your cherished, vintage AxeBlaster Flanger with its s u p e r w i d e stereo spread. The only problem is the way it gets that stereo spread: by flipping the phase 180° on one of the output channels. This may sound great live, but when the signal gets re-combined in mono, portions of it (maybe even all of it) will disappear. Ouch. This can also happen with stereo mics on a single sound source, so always check what a track sounds like in mono before you sign off. 3. Stringing along with dead strings Yes, change your strings before that important recording session, and no, adding compression to increase sustain is not a suitable substitute. With new strings, your axe will sound brighter, notes will sustain longer, and tuning will be more consistent. Don't just boil your strings – go ahead and splurge, spend the bucks, and re-string. 4. Using "automatic double tracking" instead of playing the part twice It's that popular preset in your multieffects: Automatic Double Tracking, where the processor copies your signal, delays it a bit, detunes the copy to "humanize" it, then recombines it with the straight signal. Although ADT is a valid effect in its own right if you want a more focused version of chorusing, nothing substitutes for doubling a part by actually playing it twice. Furthermore, when you record each part on a different channel, you can spread the stereo image – one track panned more to the right, the other more to the left – for a bigger, more enveloping sound. 5. Falling into a "mic rut" You found a condenser mic that sounds great on acoustic guitars, and have a favorite dynamic mic for amps. And you've used them forever. But maybe you need to experiment. For example, one of the things that surprised me was just how great a Royer ribbon mic can sound on a guitar amp. And I once got an ultra-fat sound on an acoustic with a dynamic mic. Why be normal? Just don't do anything dumb, like placing a super-sensitive condenser in front of an amp blasting at the levels of a Saturn V booster rocket. 6. Not orienting an electric guitar for minimum noise "Pickups" are appropriately named, because they pick up a lot more than strings – like buzzes, electrical hash, dimmer noise, and the like. The good news is that the pickup is directional, and changing the guitar's position can make it less prone to picking up garbage. Don't use your ears; look at the meters, because the levels will be really low.
If the noise is hitting at -45 dB, it may not be that obvious, but it will be if you start adding effects like compression. Try changing the guitar's position, and you may be able to get that noise down to -55 or even -60 dB. 7. Turning up your amp too high We all know that you need to turn an amp up a certain amount to get a good "tone." But don't go too far past that point. Why? Aside from the possibility of overloading your mic, objects in the room will have more of a tendency to rattle, and poor room acoustics may be overemphasized. 8. Forgetting to bring a spare set of tubes Tubes fail, tubes go soft, and they sometimes do so at inopportune moments . . . 'nuff said. And remember, if one tube of a matched set fails, you need to replace them both. It's a good idea not to trust the tubes you buy, but to try them out immediately in your amp to make sure they actually work. Once you're satisfied they're okay, pull them out and save them for when they're needed. 9. Not paying attention to tuning This doesn't just mean tuning up before the session; we all know that's a good idea. But have you adjusted bridge intonation lately? Just changing strings can be enough to throw the intonation out of whack. You may not notice that there's any problem until you start recording, and everyone's listening to your guitar under the audio equivalent of a microscope. In my experience, few things can destroy a session faster than having to adjust intonation on a guitar with dead strings (see mistake #3), because it will be next to impossible to get it in tune. Tempers will fray, and harsh words may be exchanged. It's better to take 30 seconds to check tuning before recording a part than to have to re-record the part because the tuning was off. 10. Using a stompbox with an AC adapter. Or for that matter, with batteries If you record with a stompbox that can use batteries or AC, try both and see which sounds better. With some old stompboxes, the AC adapter might add some noise or buzz that batteries will eliminate. Conversely, if the batteries aren't super-fresh, the lower voltage may degrade tone. Moral of the story: When you show up at the session, bring both the AC adapter and a fresh set of batteries. Also, note that rechargeable batteries sometimes peak out at a slightly lower voltage than alkaline types. Normally this shouldn't make any significant difference, but if you use rechargeables (which is indeed a good idea), make sure that the sound is equivalent to what you get with standard alkaline batteries. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
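SIDEBAR: THE MIC-DELAY MATH For mistake #1, the arithmetic is simple enough to script. Here's a minimal sketch in Python, assuming sound travels at roughly 1,125 feet per second at room temperature (which is where the "one foot per millisecond" rule of thumb comes from) and a 44.1kHz project; mic_delay_samples is just an illustrative name, not a function from any DAW's API.

SPEED_OF_SOUND_FT_PER_S = 1125.0   # about 343 meters per second at room temperature

def mic_delay_samples(distance_ft, sample_rate=44100):
    """Return (delay in ms, samples to slide the miked track earlier)."""
    delay_s = distance_ft / SPEED_OF_SOUND_FT_PER_S
    return delay_s * 1000.0, round(delay_s * sample_rate)

ms, samples = mic_delay_samples(1.0)
print(f"1 ft: {ms:.2f} ms, or about {samples} samples at 44.1 kHz")   # ~0.89 ms, ~39 samples

One foot works out to about 0.9 ms, or roughly 39 samples; close enough that nudging by eye while zoomed in, as described above, gets you to the same place.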
  11. There’s much more to mixing than just levels By Craig Anderton When mixing, the usual way to make an instrument stand out is to raise its level. But there are other ways to make an instrument leap out at you, or settle demurely into the background, that don’t involve level in the usual sense. These options give you additional control over a mix that can be very helpful. CHANGING START TIMES CHANGES PERCEIVED LOUDNESS The ear is most interested in the first few hundred milliseconds of a sound, then moves on to the next sound. This may have roots that go way back into our history, when it was important to know if a new sound was leaves rustling in the wind – or a sabre-tooth tiger about to pounce. What happens during those first few hundred milliseconds greatly affects the perception of how “loud” that signal is, as well as the relationship to other sounds happening at the same time. Given two sounds that play at almost the same time, the one that started first will appear to be more prominent. For example, suppose you have kick drum and bass playing together. If you want the bass to be a little more prominent than the kick drum, shift it slightly ahead of the kick. To push the bass behind the kick, shift it slightly later than the kick. The way to move sounds depends on your recording medium. With MIDI sequencers, a track shift function will do the job. With hard disk recorders, you can simply grab a part on-screen and shift it, or use a “nudge” function (if available). Even a few milliseconds of shift can make a big difference. CREATIVE USE OF DISTORTION If you want to bring just a couple of instruments out from a mix, patch an exciter or “tube distortion” device into an aux bus during mixdown, set for very little distortion (an exciter if you’re looking for a cleaner sound, tube distortion for a grittier one). Now you can turn up the aux send for individual channels to make them jump out from a mix to a greater or lesser degree. TUBES AS PROCESSORS Many members of the “anti-digital” club talk about how tube circuitry creates a mellower, warmer sound compared to solid-state devices. Whether you agree or not, one thing is clear: the sound is at the very least different. Fortunately, you can use this to your advantage if you have a digital recorder. As just one example of how to change the mix with tubes, try recording background vocals through a tube preamp, and the lead vocal through a solid-state preamp (or vice-versa). Assuming quality circuitry, the “tubed” vocals will likely sound a little more “in the background” than the solid-state ones. Percussion seems to work well through tubes too, especially when you want the sound to feel less prominent compared to trap drums. PITCH CHANGES IN SYNTH ENVELOPES This involves doing a little programming at your synth, but the effect can be worth it. As one example, take a choir patch that has two layered chorus sounds (the dual layering is essential). If you want this sound to draw more attention to itself, use a pitch envelope so that one layer starts slightly sharp and bends down to concert pitch, while the other layer starts slightly flat and bends up to concert pitch. The pitch difference doesn’t have to be very much to create a more animated sound. Now remove the pitch change, and notice how the choir sits further back in the track. Click here for an audio example that plays a short choir part first without the pitch bend, then adds pitch bend.
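If you're wondering why even a tiny pitch difference animates the sound: while the two layers are apart in pitch, they beat at their difference frequency. Here's a quick sketch of the arithmetic in Python; the 440 Hz reference and the plus/minus 5 cent offsets are just example figures.

def detune_hz(freq, cents):
    """Shift a frequency by a number of cents (hundredths of a semitone)."""
    return freq * 2 ** (cents / 1200.0)

ref = 440.0
up, down = detune_hz(ref, 5), detune_hz(ref, -5)
print(f"+5 cents: {up:.2f} Hz, -5 cents: {down:.2f} Hz")
print(f"beat rate while detuned: {up - down:.2f} Hz")   # about 2.5 beats per second

A couple of beats per second of gentle movement is exactly the animation described above; once the envelopes settle at concert pitch, the beating stops and the sound sits still again.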
MINI FADE-INS With a hard disk recorder, you can do little fade-ins to make an attack less prominent, thus putting a sound more in the background. However, if you do a fade starting from the beginning of a sound, you’ll lose the attack altogether. Instead, extend the start of the fade to before the sound begins (Fig. 1). Fig. 1: Starting a fade before a sound begins softens the attack without eliminating it. After applying the fade-in, the sound enters with the fade already partway up, so the attack is softened rather than lost. VOCAL PANNING One common technique used to strengthen voices is doubling, where a singer sings a part then tries to duplicate it as closely as possible. The slight timing variations add a fuller effect than doubling the sound electronically. However, panning or centering these two tracks makes a big difference during mixing. When centered, the vocal sits back more in the track, and can tend to sound less full. When panned out to left and right (this needn’t be an extreme amount), the sound seems bigger and more prominent. Some of this is also due to the fact that when panned together, one voice might cover up the other a bit. This doesn’t happen as much when they’re panned apart. CHORUSING AS KRYPTONITE If you want to weaken a signal, a chorus/flanger can help a lot if it has the option to throw the delayed signal out of phase with the dry signal. Set the chorus/flanger for a short delay (under 10 ms or so), no modulation depth, and use an out-of-phase output mix (e.g., the output control that blends straight and delayed sounds says -50 instead of +50, or there's an option to invert the signal – see Fig. 2). Fig. 2: A chorus/flanger, when adjusted properly, can "weaken" a sound by applying comb filtering. Alter the mix by starting with the straight sound, then slowly adding in the delayed sound. As the delayed sound’s level approaches the straight sound’s level, a comb-filtering effect comes into play that essentially knocks a bunch of holes in the signal’s frequency spectrum. (For the numbers behind those holes, see the sidebar at the end of this article.) If you’re trying to make a piano or guitar take up less space in a track, this technique works well. MIXING VIA EQ EQ is a very underutilized resource for mixing. Turning the treble down instead of the volume can bring a track more into the background without having it get “smaller,” just less “present.” A lot of engineers go for really bright sounds for instruments like acoustic guitars, then turn down the volume when the vocals come in (or some other solo happens). Try turning the brightness down a tad instead. And of course, being able to automate EQ changes makes the process a lot easier. Overall, when it comes to mixing you have a lot of options other than just changing levels – and implementing changes in this way can make a big difference to the “character” of a mix. Have fun adding some of the above tips to your repertoire. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
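SIDEBAR: THE COMB FILTER, BY THE NUMBERS Blending a signal with an inverted copy of itself delayed by d seconds gives y(t) = x(t) - x(t - d), and that difference cancels completely at 0 Hz and at every multiple of 1/d. Here's a minimal sketch in Python with NumPy, using a 5 ms delay purely as an example:

import numpy as np

delay_ms = 5.0                                  # "under 10 ms or so," per the article
d = delay_ms / 1000.0
freqs = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
gains = 2 * np.abs(np.sin(np.pi * freqs * d))   # |H(f)| for y(t) = x(t) - x(t - d)
for f_hz, g in zip(freqs, gains):
    print(f"{f_hz:5.0f} Hz  gain {g:.2f}")      # nulls land at 200 and 400 Hz

With a 5 ms delay the notches repeat every 200 Hz, right through the meat of a piano or guitar's spectrum, which is why the instrument seems to shrink; the notches are deepest when the dry and delayed levels match.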
  12. Save time and effort in the studio by customizing your DAW software to fit the way you work By Craig Anderton You wouldn't build your studio each day starting with an empty room, and you shouldn't have to start your virtual studio from scratch every day either. Two highly important, time-saving features that many musicians overlook are the ability to create template projects and the ability to save particular sets of window layouts. Templates let you open your programs to a familiar, productive environment that gives your studio time a kick start, while window layouts optimize your working environment for specific tasks. We'll give examples of how to create and open template files with several popular programs; chances are any other program you use will follow the same basic ideas. Then we'll touch on the power of window layouts. Templates Defined Most programs call up a default file when you open them. This is one example of a template. Programs may also allow you to create your own template files and set one as a default, or present you with a list of possible templates when you create a new project. Templates often have a specific file format or distinctive name so the program can recognize and load them by default. But even if a program doesn't have a specific template feature, you can still create templates: Set up a project exactly the way you want, and before recording any data, Save As… under the desired template name. In the future, open this template project, but before making any changes to the file (like recording), immediately save it under a different name to preserve the original. Then create your masterpiece. Note that what your program saves in a template varies. For example, it may include any data you've put in a project (e.g., a metronomic drum track), or may exclude data and retain only setup info. Some parameters, such as sync options, may not be saved. Consult your program's manual or online help for details. Even if there is a particular template file format, remember that template files usually live on disk "outside" the host like any other file, so back them up as you would any other file. Then if a file becomes corrupted, or you need to reinstall the program, or some other misfortune befalls you, you'll still have access to your templates. Templates - The Dark Side The one caution about using templates is getting stuck in a rut. If you always start projects with the same number of tracks, same virtual synth setup, same processor settings for vocals, and so on, this may influence your music to go in a particular, stereotypical direction. There are two ways to avoid this: Use a very minimalist template. That way you won't have to do tasks like create bunches of tracks just to get going, but you will need to decide which signal processors and instruments to add. Create a template that has everything — virtual instruments, processors, maybe even drum scratch tracks — so you can choose from a huge number of options. You can then remove anything you don't need as the song progresses, which will also lighten your processor's load. Some Practical Examples All of the following assume you've set up a project exactly as desired for a template, and want to save it for future use. Propellerhead Reason Save the file anywhere you want, but an ideal place is the Template Songs folder located within the main Reason folder. This lets it show up under File > New From Template, alongside the templates that come with Reason.
(It's not always obvious where the Template Songs folder appears in Windows, but you can find its location if you choose File > New from Template > Show Template Folder.) You can also specify this template as a default song that opens whenever you create a new song. Choose Edit > Preferences, and under Default Song, click Template. Then, navigate to the desired file in the Template field. In this example, Empty + FX.rnsdemo has been chosen as the default song. Note that your template file can contain synth patches, REX files loaded into Dr. Rex, and so on, although of course this data must be stored somewhere Reason can access it (e.g., not on a removable drive). Cakewalk Sonar The Edit > Preferences > Folder Locations tab shows the default path where Sonar saves Template files. When you want to create a Template file, select “Save As…,” specify “Template” under “Save As Type” (this adds a .CWT suffix, Sonar's dedicated template file format), and save it to the Template folder specified in the path. When you select “Create a New Project” in Sonar's Quick Start window, you'll see a list of all available templates. Magix Samplitude/Sequoia To save a Virtual Project template, go File > Save Project As Template, and save it in the Templates folder located within the main program folder. Samplitude's Templates folder already comes with several templates, but you can add your own as well. When you go File > New Virtual Project, a dialog box called “Setup for New Virtual Project” appears; select the desired template from the drop-down “Project Template” menu toward the top of the window. MOTU Digital Performer Go File > Save As Template. A window comes up that lets you name the file, as well as specify whether it will be the default when you open the program. When you choose File > New, a side menu offers the default new file, the templates that come with the program, or any templates you've created. Steinberg Cubase Go File > Save as Template. The template is stored in a Project Templates folder, located in Cubase's main program folder. You'll see the list of available templates when you go File > New Project. Ableton Live Save any file as a template Live Set by going Options > Preferences, then clicking on the “File/Folder” tab. Under “Save Current Set as Default,” click on “Save.” The file will be saved under the file name Template.als in Live's Preferences folder (with Windows, typically located in the AppData folder), and will be called up whenever you create a new Live Set. Go Options > Preferences to save a Live Set as a template. Window Layouts Almost all programs make it easy to create an arrangement of windows, then save that as a layout (a/k/a screen set, window set, etc.). This is particularly helpful with single-monitor setups, where it's impossible to put all the windows you want on screen at one time, thereby requiring some degree of “window-flipping.” But remember that the purpose of creating layouts is to save time, so strike a balance between creating so many that you spend time scrolling through lists to find what you want, and not creating enough to cover your needs. It's also important to be able to call these up with function keys or simple keystrokes; ideally, hitting a single key on your QWERTY keyboard should call up a layout. The three most important DAW layouts for my working style (and probably yours too!)
are: tracking; editing and overdubbing; and mixing. Given the different nature of different programs, it's impossible to come up with a one-size-fits-all approach. For example, Sonar's screensets feature allows saving multiple sets, but if you switch to a new layout, the arrangement as it existed just prior to switching is retained in the screenset you're leaving. So if you return to that screenset, the layout will be as you left it. If you don't like this protocol, you can also lock layouts so they don't change no matter what you do to them. Then there are programs like Cubase, which lets you save templates of mixer and transport configurations . . . options vary from program to program, and we can't get into all of them here. But by now you should be getting the idea of how templates and layouts can save you time and effort, and hopefully, you're now inspired to streamline your workflow a bit more. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. Before you set up your studio monitors, look over this checklist of important basics by Craig Anderton If your studio monitors aren’t set up properly, you won’t be able to really hear what you’re doing. Placement matters, and the most popular speakers for home studios are nearfield monitors. Let’s look at some setup tips. How to Set Up Studio Monitors Placement: Set your monitors up so they’re at ear level, at two corners of an equilateral triangle with your head at the remaining corner, and about 3 feet (one meter) from each ear. Sound Reflections: Avoid placing the monitors where their signals can reflect off surfaces before they hit your ears. For example, place the monitors to the side of an audio mixer, not behind it, so the signals don’t bounce off the mixer’s surface on the way to you. Also, be careful not to place the monitors too close to a wall, and definitely avoid placing them in corners, as that can cause bass buildup. Monitor Volume: Monitoring at soft levels doesn’t just save your ears; it sends less energy out into the room, which means fewer reflections off the walls. But monitor at a consistent level, as the ear responds to frequencies differently at different levels. Then, before signing off on a mix, check it out at both low and high levels to make sure the mix works in either context. Decoupling: If your monitors are sitting on a table or speaker stand, vibrations can be transmitted from the speaker through whatever it’s sitting on. Place a layer of neoprene or a similar material (a thick mouse pad works) underneath the speaker to provide some acoustic decoupling; dedicated decoupling pads and stands are an even more effective solution. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  14. It's not the same as a double-neck, but it does let you do some of the same tricks by Craig Anderton I’ll admit it: I’ve always lusted after a double-neck 6 string/12-string guitar. I love the big, rich, “chorused” sound of a 12-string, but I also like to bend notes and hit those six-string power chords. However, I don’t like the weight or the cost of a double-neck, and there’s a certain inconvenience—there are more strings to change, and let’s not even talk about carrying a suitable case around. So my workaround is to “undouble” the top two strings, turning the 12-string into a 10-string. Remove the E string closest to the B strings, and the B string closest to the G strings. This allows bending notes on the top two strings, but you’ll still have a plenty rich sound when hitting chords. Besides, it’s easy enough to add a chorus pedal afterwards, and get additional richness on strings—producing the same kind of effect on the top two strings that you get from doubling them. Sure, it’s not a real double-neck—but it gets you much of the way there, and best of all, wearing it for a couple hours during a performance won’t turn you into the hunchback of Notre Dame over time. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  15. Yes, the virtual and real worlds can live in harmony By Craig Anderton That vintage hardware reverb that sounds so good…the tube compressor that cost a week’s salary, but was worth every penny…are they all eBay candidates now that you’re working in the DAW’s digital domain? No way! All you need is an audio interface with multiple ins and outs that can connect to your gear (digital and/or analog, depending on what you want to hook up), and a little know-how about tweaking tracks. All other routing can be done within the DAW. We’ll use Cubase SX 3.1 as an example because it has a special feature designed to feed external hardware, but the same principles apply to other DAWs. In short, you assign one of your DAW’s output buses to a hardware out on your interface, patch that to your effect in, then patch the effect out to an interface hardware input. You’ll also need to make sure any drivers feeding the external ins and outs are enabled; you might want to dedicate certain I/O ports to external effects for consistency’s sake. For example, I use Creamware’s SCOPE system, so I’ve patched channel 8 (stereo) of its ASIO output drivers to the interface’s stereo analog out. This provides an analog signal to the external effect. Similarly, the card’s stereo analog in feeds channel 8 (also stereo) of the ASIO input drivers, and this is where the effect output returns. When you create a bus to feed the effect, name it something like “ToExtFX” to avoid confusion. Different programs have different ways of assigning buses to outputs. In Cubase SX, you’d use the Input and Output tabs of the VST Connections window. Starting with Cubase SX 3, though, this window added a new tab for External FX. It works very much like adding standard Input and Output buses, but includes additional fields so you can add a delay if needed to compensate for delays through the external device, change the send and return gain levels, and include a “friendly name” for the device you’re feeding and its associated bus. In Cubase's VST Connections window, under the External FX tab, the ToExtFX bus is assigned to output 8 (left and right channels) of the Creamware SCOPE out. Within SCOPE, this is assigned to the analog stereo hardware output. To send a track’s signal to the external effect, turn up its send control to the appropriate bus. The meters on your external hardware should show that it’s receiving signal. Now turn your attention to creating an effects return within your DAW. In versions of Cubase prior to SX 3, you do not want to use its dedicated FX tracks, because they are designed for use with plug-ins. Instead, create a standard audio track, then set the track’s Input field to the input that’s receiving signal from the external effect. This shows the channels involved in feeding external effects when using versions of Cubase prior to SX 3. The left-most channel is a drum track, which is about to be trashed by a Line 6 PODxt. Note that its effects send is enabled and set to pre-fader. The Master Out channel is simply the entire mix of the song. The ToExtFX channel is a bus which is assigned to the hardware out that feeds the POD. Finally, the FromExtFX channel receives the POD’s output via a hardware interface input. With Cubase SX 3, the external device is treated as a plug-in. Thus you use a dedicated FX track to accept the external device’s return, select the external device as a plug-in in the desired audio channel (or group channel), and you’re done. WHAT ABOUT LATENCY?
You will likely need to compensate for delays due to the signal going out the audio interface, through an effect, then back into the system. Here are your options: If the effect can blend dry and processed sound, set it for the desired blend. Then turn down the volume of the original track feeding the effect, set its Send to pre-fader, and listen only to the effects return. To compensate for latency compared to the non-processed tracks, slide the original track earlier in time (to the left) by whatever amount compensates for the delay. If the effect provides processed sound only, “clone” the track to be processed, feed the clone to the external effects bus using a pre-fader send, and turn down the clone’s main volume so it doesn’t contribute anything to the mix. The original (non-cloned) tracks provide the dry sound; bring up the effects return level for the desired amount of effect, then slide the cloned tracks earlier in time to compensate for latency. Another option is to record the audio produced by the effect into the assigned track, and slide that track earlier while mixing to compensate for any delay. You may need to do this anyway if you want to use lots of external effects, but don’t have enough I/O to handle them all in real time: Insert one effect at a time, record the results, then move on to the next effect. In Cubase SX 3, as mentioned previously, there’s a parameter that provides delay compensation. This simplifies the process considerably. (For help turning a measured delay into a number of samples, see the sidebar at the end of this article.) And now you know how to return your rackmount gear to being a productive member of DAW society. I’m sure it’ll be much happier. THE FUTURE Steinberg and Yamaha have collaborated to create the Studio Connections initiative, an open standard that builds on SX 3’s ability to accommodate external effects, and results in a tighter level of integration (e.g., a graphic interface that resembles working with a software plug-in). This will greatly simplify the above process for compatible gear that includes a suitable software graphic interface (basically, a MIDI plug-in based on an enhanced version of the OPT MIDI plug-in standard). The standard even has a semi-automated way to compensate for any delays created by going out to external gear, then back in again. As of this writing the standard is still in its infancy, but even if the rest of the industry doesn't adopt it, it will probably be incorporated in future Steinberg software. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
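SIDEBAR: COUNTING THE SAMPLES If your DAW makes you enter compensation manually, it helps to think in samples rather than milliseconds. Here's a minimal sketch in Python with NumPy, assuming a 44.1kHz project; both function names are illustrative rather than from any DAW's API, and the measurement idea is simply to record a test click through the external chain and see how late it comes back.

import numpy as np

def latency_samples(round_trip_ms, sample_rate=44100):
    """Convert a measured round-trip delay to samples to slide the track earlier."""
    return round(round_trip_ms / 1000.0 * sample_rate)

def measure_offset(dry, recorded):
    """Estimate the round trip in samples by cross-correlating a recorded test click."""
    corr = np.correlate(recorded, dry, mode="full")
    return int(np.argmax(corr)) - (len(dry) - 1)

# Example: interface out -> hardware reverb -> interface in, measured at 6.5 ms:
print(latency_samples(6.5))   # 287 samples at 44.1 kHz

Slide the processed track earlier by that many samples, and the dry and wet tracks line back up.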
  16. Reason’s Combinator is a great way to create a “building block” that consists of multiple modules and controls By Craig Anderton Reason’s Combinator device (Combi for short), introduced in Reason 3, provides a way to build workstation-style combi programs with splits, velocity-switched layers, integral processing, and more—then save the combination for later recall. However, note that Combis aren’t limited to creating keyboard instruments (one Combi factory patch combines Reason’s “MClass” mastering processors into a mastering suite). Basically, anything you create in Reason can be “combinated.” Furthermore, four knobs and four buttons on the Combi front panel are assignable to multiple parameters. For example, if you have a stack with five synthesizers, one of the knobs could be a “master filter cutoff” control for all the synths. The knobs and buttons can be recorded as automation in the sequencer, or tied to an external controller. CREATING A COMBINATOR PATCH Let’s look at a real-world application that uses Reason’s Vocoder 512. A vocoder has two inputs: Modulator and Carrier. Both go through filter banks; the modulator filters generate control signals that control the amplitude of the equivalent filter bands that process the carrier. Thus, the modulator impresses its frequency spectrum onto the carrier. The more filters (bands) in the filter banks, the greater the resolution. (For a code-level sketch of this principle, see the sidebar at the end of this article.) Typically, vocoders have a mic plugged into the modulator, so speaking into it impresses speech-like characteristics onto the carrier, and thus creates “talking instrument” sounds. However, no law says you have to use a mic, and my fave vocoder setup uses a big, sustained synth sound as the carrier, and a drum machine (rather than voice) as the modulator. The Combi is ideal for creating this setup. Rather than include the synth within the Combi, we’ll design the “DrumCoder Combi” as a signal processor that accepts any Reason sound generator. The Combi includes a Vocoder, ReDrum drum machine, and Spider Audio Merger (Fig. 1). Remember to load the ReDrum with a drum kit, and create some Patterns for modulating the vocoder. To hear only the patterns, set the Vocoder Dry/Wet control to dry. Fig. 1: “DrumCoder” Combi patching. ReDrum has a stereo out but the vocoder’s input is mono, so a Spider merger combines the drum outs. The Combi out goes to the hardware interface, while the input is available for plugging in a sound source. Let’s program the Combi knobs. Open the Combinator’s programmer section, then click on the Vocoder label in the Combi Programmer. Using Rotary 1’s drop-down menu, assign it to Vocoder Decay. Assign Rotary 2 to Vocoder Shift, and Rotary 3 to HF Emphasis. Rotary 4 works well for Wet/Dry, but if you want to use it to select ReDrum patterns instead, click on ReDrum in the programmer and assign Rotary 4 to Pattern Select. I’ve programmed the buttons to mute particular ReDrum drums. Now let’s create a big synth stack Combi (Fig. 2) to provide a signal to the DrumCoder. Layer two SubTractors, then add a third transposed down an octave. Assign the Combi knobs to control the synth parameters of your choice; Amp Env Decay for all three is useful. Fig. 2: Two SubTractors each feed a CF-101 Chorus. The “Bass” SubTractor feeds a UN-16 Unison. All three effect outs feed a 6:2 line mixer, which patches to the “Big SubTractor” Combi out. TESTING, TESTING Patch the Big SubTractor Combi out to the Vocoder Combi in, and the Vocoder Combi out to the appropriate audio interface output.
Start the sequencer to get ReDrum going, then play your keyboard (which should be feeding MIDI data to the Big SubTractor Combi). You’ll hear the keyboard modulated by the drum beat – cool! Now diddle with some of the Vocoder Combi front panel controls, and you’ll find out why Combis rule. RESOURCES These files are useful for checking out the Combinator examples described in this article. DrumCoder.mp3 is an audio example of drumcoding. BigSubTractor.cmb and DrumCoder.cmb are Combis for Reason, as described in the article. DrumCoder.rns is a Reason song file that contains both Combis and sends the output to Reason’s mixed output. If you don’t have a keyboard handy, you can audition this patch by going to the sequencer and unmuting the Big SubTractor track, which plays a single note into the Big SubTractor instrument. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
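SIDEBAR: THE VOCODER PRINCIPLE IN CODE The DrumCoder patch is all Reason GUI work, but the algorithm behind any channel vocoder is simple enough to sketch. Here's a minimal version in Python with NumPy and SciPy; the 16 log-spaced bands, the 50 Hz envelope smoothing, and the function names are arbitrary illustrative choices, and this makes no attempt to mimic the Vocoder 512's actual filter design. It assumes the modulator and carrier are mono float arrays of equal length at the same sample rate.

import numpy as np
from scipy.signal import butter, sosfilt

def band_sos(lo, hi, sr):
    """Fourth-order bandpass between lo and hi (in Hz)."""
    return butter(4, [lo / (sr / 2), hi / (sr / 2)], btype="bandpass", output="sos")

def envelope(x, sr, cutoff=50.0):
    """Rectify, then smooth, to track a band's level over time."""
    sos = butter(2, cutoff / (sr / 2), btype="lowpass", output="sos")
    return np.maximum(sosfilt(sos, np.abs(x)), 0.0)

def vocode(modulator, carrier, sr, bands=16, f_lo=80.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, bands + 1)        # log-spaced band edges
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = band_sos(lo, hi, sr)
        env = envelope(sosfilt(sos, modulator), sr)    # modulator band level...
        out += sosfilt(sos, carrier) * env             # ...controls the carrier band
    return out / (np.max(np.abs(out)) + 1e-9)          # normalize to avoid clipping

Feed it a drum loop as the modulator and a sustained synth pad as the carrier, and you'll hear a rough, lo-fi cousin of what the Combi does in real time.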
  17. You Have Enough Stress in Your Life - Keep It Out of the Studio! By Craig Anderton Recording music is supposed to be fun, not stressful - and any way you can simplify your studio setup (and recording procedures) will help you have a more enjoyable, and more efficient, studio experience. So, here are some tips on how to save both time and stress in the studio. USE AN AUDIO INTERFACE WITH MULTIPLE INPUTS "I'm a solo performer, so I only need a couple inputs." Right? Wrong! You probably have both a condenser and a dynamic mic, as well as some instruments, like guitar, bass, and/or keyboard (and don't forget most hardware keyboards have multiple outputs). All these outputs want inputs, and you don't want to re-patch; it's great to have everything ready to go, so all you need to do is record-enable a track to make music. An audio interface with lots of inputs neatly solves those patching problems. The only complication this adds is that in most cases, you'll want to keep everything muted, and unmute whatever you want to record at any given moment. Fig. 1: Yamaha's n8 FireWire mixer has an analog-style interface, but can transfer signals to and from a computer via FireWire. A Yamaha-specific feature is that it has very tight integration with Cubase. This scenario is also a justification for the new generation of FireWire-compatible mixers, because they can handle routing as well as mixing. A FireWire mixer has the look and feel of an analog mixer (Fig. 1), but thanks to a FireWire port for hooking into your computer, it combines mixing functions with an audio interface. As a result, the mixer provides the usual routing and mixing functions, but the outs and buses can appear as inputs inside your host. Furthermore, the mixer can be used in a traditional mixer context, like taking it out for a live gig. USE YOUR DAW'S BUNDLED PLUG-INS AS MUCH AS POSSIBLE Most host sequencers now bundle a decent assortment of plug-ins, including instruments and processors. Using these instead of relying on third-party plug-ins has several advantages: No incompatibility issues - if it comes with the host, it will work with the host. Instrument upgrades usually happen in tandem with host upgrades, so one upgrade takes care of multiple programs. This simplifies file exchanges with others who use the same host, because you know your collaborator will have the same plug-ins. Should you re-visit a file in a few years, odds are the instruments will open properly if you're using the same host. Granted, bundled instruments won't necessarily do everything. But keep your collection of instruments manageable: A few "specialty" instruments, and maybe a good workstation or sampler (Propellerhead Software's Reason is a fine choice for all of the above, as it has several great instruments and can be ReWired into just about anything). Avoid the temptation to download a zillion shareware plug-ins "just because you can": It's more to learn, more to maintain, and more that can go wrong. MANAGE SOFTWARE UPGRADES Schedule upgrades (e.g., once every month or so): check for updates for your plug-ins, host, operating system, graphics card, etc. Windows users should set a System Restore point before upgrading anything, and Mac users can use Time Machine; all users should test their setup after each upgrade. You'll often find this to be more efficient than upgrading using a more scattered approach. Just remember - it's often not worth being an "early adopter."
Check forums and manufacturer web sites for any potential pitfalls before you upgrade. SIMPLER BACKUPS Just get a big external hard drive (or a SATA/USB/FireWire drive enclosure and put a hard drive in it), and copy your data drive over to the backup drive while you enjoy a movie. Not only do you avoid the stress of wondering if something's backed up, but (more importantly!) you avoid the huge amount of stress that happens when your data drive fails. LEARN TO CUT YOUR LOSSES Sometimes a performance or a song just isn't happening. You try some EQ, some effects, some mix changes, maybe an overdub or two...nope. Well, you've written music before, and you'll write music again. If something isn't flowing right, don't complicate your life: Cut your losses and move on. ANYTHING THAT SAVES TIME = GOOD Wasting time gets in the way of inspiration, and reduces what you can do during a given session. Here are some of my favorite time-savers. If applicable, increase your computer's RAM. The more RAM you have, the less often your computer will have to go through the bottleneck of accessing its hard drive. If you have a system with 512MB of RAM, doubling that to 1GB will make you feel like you got a new computer. Ditto going from 1GB to 2GB. Use multiple monitors. Moving and re-sizing windows is a major waste of time, and having two monitors will make your life easier. For best results, use a graphics card designed to drive two monitors instead of using two graphics cards, each intended for one monitor. It's worth the investment. Print out a list of keyboard equivalents. Refer to it often; after a few weeks, you'll have the list memorized - and keyboard equivalents save time. Use a scroll wheel mouse. For many functions, a scroll wheel can beat clicking and dragging. Fig. 2: Windows users, just say no: You don't need those fancy graphics options. Strip down your system. Mac fans, forget that "genie-sucking-the-window-into-the-dock" thing. Windows 7 users: go to Control Panel > System > Advanced System Settings > Advanced tab > Performance Settings, then choose "Adjust for Best Performance" (Fig. 2). Remove anything that runs automatically (checking the web for updates, browsers that launch automatically on startup, wireless cards if you're not using a wireless connection, etc.) unless it's absolutely essential; with operating systems, less is more. Place an alias (shortcut) on your desktop for everything you use consistently. And add a shortcut for your current project. There are other ways to make life easier: Templates for projects and tracks so you don't have to start from scratch each time, changing strings the night before you record instead of just before the session, and of course...disconnecting the phone just before you start recording! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  18. This simple technique gives open-back cabinets a closed-back sound by Craig Anderton Closed-back cabinets give a very different sound compared to open-back types: There tends to be more low end, and a “tighter” response. Both have their uses—particularly in the studio, as you can have two tonal options—but if you have only an open-back combo amp, here’s a technique that’s all about getting a closed-back sound out of the same amp. No woodworking required! The trick is to place your open-back cabinet so the back of the cabinet lies flat on a rug, flush against the floor. Not only does the floor block the cabinet back, but the rug helps absorb some of the sound as well. To mic the amp, you’ll need a boom to point the mic down at the speaker. Ideally, the input jack will be on the front, and the power cable connection on the back will be recessed (as is the case with the Peavey Windsor amp shown in the picture). This lets the AC line cord feed out the side; if this raises the back up too much off the ground, thus defeating the closed-back effect, a right-angle female AC connector may do the job. Otherwise, you can always cut a small slot in the side of the cabinet as a cable feed-through. However, there are some important cautions. With many amp designs, ventilation happens through the cabinet back, so putting the cab on its back could block the airflow; this can be particularly problematic with tubes. In this case, you’ll need to monitor temperatures carefully, and record for as short a period of time as possible. Whenever you take a break, move the cab back to its usual position to let it vent for a while—note that even if the amp appears to be performing properly, heat buildup can reduce component life. Sure, this is one of those stupidly simple ideas, but it works as long as you’re aware of any possible heat build-up—try it when you want to get a different sound out of your open-back cabinet. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  19. An analog tool from yesteryear transitions to digital—and learns a few new tricks in the process By Craig Anderton Step sequencing has aged gracefully. Once a mainstay of analog synths, step sequencing has stepped into a virtual phone booth, donned its Super Sequencer duds, and is now equally at home in the most cutting-edge dance music. In a way, it's like a little sequencer that runs inside of a bigger host sequencer, or within a musical instrument. But just because it's little doesn't mean it isn't powerful, and several DAWs include built-in step sequencers. Early analog step sequencers were synth modules with 8 or 16 steps, driven by a low-frequency clock. Each step produced a control voltage and trigger, and could therefore play a note just as if you'd pressed a key on a keyboard. The clock determined the rate at which each successive step occurred. As a result, you could set up a short melodic sequence, or feed the control voltage to a different parameter, such as filter cutoff. Step sequencing in a more sophisticated form was the basis of drum machines and boxes like the Roland TB-303 BassLine, and is also built into today's virtual instruments, such as Cakewalk's Rapture, and even as a module in processors like Native Instruments' Guitar Rig (Fig. 1). Fig. 1: Guitar Rig's 16-step "Analog Sequencer" module is controlling the Pro Filter's cutoff frequency. Reason's patch-cord oriented paradigm makes it easy to visualize what's happening with a typical step sequencer (Fig. 2). Fig. 2: This screen shot, cut and pasted for clarity, shows Reason's step sequencer graphic interface, as well as how it's "patched" into the SubTractor synthesizer. The upper Matrix view (second "rack" up from the bottom) shows the page generating a stepped control voltage that's quantized to a standard musical scale as well as a gate signal; these create notes in the SubTractor and trigger their envelopes, as shown by the patch connections on the rear. The lower Matrix view is generating a control voltage curve from the Curve page, and sending this to the SubTractor synth filter. The short, red vertical strips on the bottom of either Matrix front panel view indicate where triggers occur. THIS YEAR'S MODEL Analog step sequencers typically had little more than a control for the control voltage level, and maybe a pushbutton to advance through the steps manually. Modern step sequencers add a lot of other capabilities, such as . . . Pattern storage. Once you tweaked an analog step sequencer, there was nothing you could do to save its settings other than write them down. Today's sequencers usually do better. For example, the Matrix module in Reason stores four banks of 8 patterns, which can be programmed into the sequencer to play back as desired. Controller sequencing. Step sequencers aren't just for notes anymore, and it's usually possible to generate sequences of controllers along with notes (Fig. 3). Fig. 3: A row in Sonar's Step Sequencer triggers notes, but you can expand the row to show other controller options. This example shows velocity editing. Variable number of steps. Freed from the restrictions of hardware, software step sequencers can provide any number of steps, although you'll seldom find more than 128—if you need more, use the host's sequencing capabilities. Step resolution. Typically, with a 16-step sequencer, each step is a 16th note. Variable step resolution allows each step to represent a different value, like a quarter note, eighth note, 32nd note, etc. Step quantization. 
With analog sequencers, it seemed almost impossible to "dial in" particular pitches; and when you did, they'd eventually drift off pitch anyway. With today's digital versions, you can quantize the steps to particular pitches, making it easy to create melodic lines. The step sequencers in Rapture even allow for MIDI note entry, so you can play your line and the steps will conform to what you entered. Smoothing. This "rounds off" the sharp edges of the step sequence, producing a smoother control characteristic. WHAT ARE THEY GOOD FOR? Although step sequencers are traditionally used to sequence melody lines, they have many other uses. Complex LFO. Why settle for the usual triangle/sawtooth/random LFO waveforms? Control a parameter with a step sequencer instead, and you can create pretty whacked waveforms by drawing them in the step sequencer. Apply smoothing, and the resulting waveform will sound more continuous rather than stepped. Create rhythmic patterns with filters. Feeding the filter cutoff parameter with a step sequencer can provide serious motion to the filter sound. This is the heart of Roger Linn's AdrenaLinn processor, which imparts rhythmic effects to whatever you send into the input. If the step level is all the way down, the cutoff is all the way down and no sound comes out. Higher-level steps kick the filter open more, thus letting the sound "pulse" through. Polyrhythms. Assuming your step sequencer has a variable number of steps, you can create some great polyrhythmic effects. For example, consider setting up a 4-step sequence (1 measure of 4/4) in one step sequencer, and a 7-step sequence (1 measure of 7/4) in a second step sequencer, each driving different parameters (e.g., filter sweeps in opposite channels, or two different oscillator pitches). They play against each other, but "meet up" every seven measures (28 beats); there's a code sketch of this after the article. Double-time and half-time sequences. By changing step resolution in the middle of a sequence, such as switching from 8th notes to 16th notes or vice-versa, it's possible to change the sequence to double-time or half-time, respectively. Complex panning. Imagine a step sequencer generating a percussive sequence by triggering a sound with a very quick decay. Now imagine a step sequencer altering the pan position for each hit – this can add an incredible amount of animation to a percussion mix. Live performance options. The original step sequencers were "set-and-forget" type devices. But nowadays, playing with a step sequencer in real time can turn it into a bona fide instrument (ask the TB-303 virtuosos). Change pitch, alter rhythms, edit triggers . . . the results can be not only hypnotic, but inspiring. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
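To make the quantization and polyrhythm ideas above concrete, here is a minimal step-sequencer sketch in Python. Everything in it (the class, the scale, the patterns) is invented for illustration; it shows the concept, not any DAW's actual implementation.

from math import lcm  # Python 3.9+

C_MINOR = [0, 2, 3, 5, 7, 8, 10]  # scale degrees, in semitones

def quantize_to_scale(semitones, scale=C_MINOR):
    # Snap a raw semitone offset to the nearest scale degree
    octave, within = divmod(semitones, 12)
    nearest = min(scale, key=lambda d: abs(d - within))
    return octave * 12 + nearest

class StepSequencer:
    def __init__(self, steps, gates, root=60):
        self.steps = steps    # raw semitone offsets, one per step
        self.gates = gates    # True = trigger a note on this step
        self.root = root      # MIDI root note

    def note_at(self, tick):
        # Advance one step per clock pulse, wrapping around (8, 16, or any length)
        i = tick % len(self.steps)
        if not self.gates[i]:
            return None       # gate off: a rest
        return self.root + quantize_to_scale(self.steps[i])

seq = StepSequencer(steps=[0, 3, 7, 10, 12, 10, 7, 3],
                    gates=[True, True, False, True] * 2)
print([seq.note_at(t) for t in range(8)])

# Two sequencers of different lengths realign after the least common
# multiple of their step counts -- 4 against 7 meets up every 28 steps:
print(lcm(4, 7))

Run it and the first print gives the quantized MIDI notes (with rests) for one pass through the pattern, and the second gives the 28-step realignment point from the polyrhythm example.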
  20. When you're about to lay down a vocal, one of these tips just might help save a take By Craig Anderton These 16 tips can be helpful while recording, but many are also suitable for live performance...check 'em out. TO POP FILTER OR NOT TO POP FILTER? Some engineers feel pop filters detract from a vocal, but pops detract from a vocal even more. If the singer doesn't need a pop filter, fine. Otherwise, use one (Fig. 1). Fig. 1: Don't automatically assume you need a pop filter, but have one ready in case you do. NATURAL DYNAMICS PROCESSING The most natural dynamics control is great mic technique—moving closer for more intimate sections, and further away when singing more forcefully. This can go a long way toward reducing the need for drastic electronic compression. COMPRESSOR GAIN REDUCTION When compressing vocals, pay close attention to the compressor's gain reduction meter, as this shows the amount by which the input signal level is being reduced. For a natural sound, you generally don't want more than 6dB of reduction (Fig. 2), although of course, sometimes you want a more "squashed" effect. Fig. 2: The less gain reduction, as illustrated here with Cakewalk's PC2A Leveler, the less obvious the compression effect. To lower the amount of gain reduction, either raise the threshold parameter, or reduce the compression ratio. NATURAL COMPRESSION EFFECTS Lower compression ratios (1.2:1 to 3:1) give a more natural sound than higher ones. USE COMPRESSION TO TAME PEAKS WHILE RETAINING DYNAMICS To clamp down on peaks while leaving the rest of the vocal dynamics intact, choose a high ratio (10:1 or greater) and a relatively high threshold (around –1 to –6dB; see Fig. 3). Fig. 3: A high compression ratio, coupled with a high threshold, provides an action that's more like limiting than compression. This example shows Native Instruments' VC160. To compress a wider range of the vocal, use a lower ratio (e.g., 1.5 or 2:1) and a lower threshold, like –15dB. COMPRESSOR ATTACK AND DECAY TIMES An attack time of 0 clamps peaks instantly, producing the most drastic compression action; use this if it's crucial that the signal not hit 0dB, yet you want high average levels. But consider using an attack time of 5 - 20ms to let through some peaks. The decay (release) setting is not as critical as attack; 100 - 250ms works well. Note: Some compressors can automatically adjust attack and decay times according to the signal passing through the system. This often gives the optimum effect, so try it first. SOFT KNEE OR HARD KNEE? A compressor's knee parameter, if present, controls how rapidly the compression kicks in. With soft knee, when the input exceeds the threshold, the compression ratio is less at first, then increases up to the specified ratio as the input increases. With hard knee, once the input signal crosses the threshold, it's subject to the full amount of compression. Use hard knee when controlling peaks is a priority, and soft knee for a less colored sound. (The way threshold, ratio, and knee interact is sketched in code at the end of this article.) TOO MUCH OF A GOOD THING Compression has other uses, like giving a vocal a more intimate feel by bringing up lower level sounds. However, be careful not to use too much compression, as excessive squeezing of dynamics can also squeeze the life out of the vocals. NOISE GATING VOCALS Because mics are sensitive and preamps are high-gain devices, there may be hiss or other noises when the singer isn't singing. A noise gate can help tame this, but if the action is too abrupt the voice will sound unnatural. 
Use a fast attack and moderate decay (around 200ms). Also, instead of having the audio totally off when the gate is closed, try attenuating the gain by around 10dB instead. This will still cut most of the noise, but may sound more natural. SHIFT PITCHES FOR RICHER VOCALS One technique for creating thicker vocals is to double the vocal line by singing along with the original take, then mixing the doubled take at anywhere from 0 to –12dB behind the original. However, it isn't always possible to cut a doubled line—like when you're mixing, and the vocalist isn't around. One workaround is to copy the original vocal, then apply a pitch shift plug-in (try a shift setting of –15 to –30 cents, with processed sound only—see Fig. 4). Fig. 4: Studio One Pro's Inspector allows for easy "de-tuning." Mix the doubled track so it doesn't compete with, but instead complements, the lead vocal. FIXING A DOUBLED VOCAL Sometimes an occasional doubled word or phrase won't gel properly with the original take. Rather than punch a section, copy the same section from the original (non-doubled) vocal. Paste it into the doubled track about 20 - 30ms late compared to the original. As long as the segment is short, it will sound fine (longer segments may sound echoed; this can work, but destroys the sense of two individual parts being played). REVERB AND VOCALS Low reverb diffusion settings work well with vocals, as the sparser number of reflections prevents the voice from being overwhelmed by a "lush" reverb sound. 50 - 100ms pre-delay works well with voice, as the first part of the vocal can punch through without reverb. INCREASING INTELLIGIBILITY A slight upper midrange EQ boost (around 3 - 4kHz) adds intelligibility and "snap" (Fig. 5). Fig. 5: Sonar's ProChannel EQ set for a slight upper midrange boost (circled in yellow). Note the extreme low frequency rolloff (circled in red) to get rid of sounds below the range of the vocal, like handling noise. Be very sparing; the ear is highly sensitive in this frequency range. Sometimes a slight treble boost, using shelving EQ, will give equal or better results. NUKE THE LOWS A really steep, low-frequency rolloff (Fig. 5) that starts below the vocal range can help reduce hum, handling noise, pops, plosives, and other sounds you usually don't want as part of the vocal. "MOTION" FILTERING For more "animation" than a static EQ boost, copy the vocal track and run it through an envelope follower plug-in (processed sound only, bandpass mode, little resonance). Sweep this over 2.5 to 4kHz; adjust the envelope to follow the voice. Mix the envelope-followed signal way behind the main vocal track; the shifting EQ frequency highlights the upper midrange in a dynamic, changing way. Note: If the effect is obvious, it's mixed in too high. RE-CUT, DON'T EDIT Remember, the title was "16 Quick Vocal Fixes." Many times, having a singer punch a problematic part will solve the issue a whole lot faster than spending time trying to edit it using a DAW's editing tools. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
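A quick code footnote to the compression tips above: here is the textbook static gain computer in Python, showing how threshold, ratio, and knee interact. It's a generic sketch of the standard dB-domain formula, not code from any compressor mentioned in this article.

def gain_reduction_db(level_db, threshold_db=-6.0, ratio=10.0, knee_db=0.0):
    # Return the gain change in dB (negative = attenuation) for one input level
    over = level_db - threshold_db
    if knee_db > 0 and abs(over) <= knee_db / 2:
        # Soft knee: the effective ratio ramps up gradually across the knee width
        return -((1 - 1 / ratio) * (over + knee_db / 2) ** 2) / (2 * knee_db)
    if over <= 0:
        return 0.0                      # below threshold: no compression
    return -over * (1 - 1 / ratio)      # hard knee: full ratio immediately

# Defaults mirror the "tame the peaks" recipe: high ratio, high threshold
for level in (-20, -6, -3, 0):
    print(f"{level:4} dB in -> {gain_reduction_db(level):+.1f} dB gain change")
print(gain_reduction_db(-6.0, knee_db=6.0))   # same level, soft knee engaged

Note how a 0dB peak only picks up 5.4dB of gain reduction here, in line with the "no more than 6dB for a natural sound" guideline, while everything below the threshold passes untouched.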
  21. Here are some secrets behind getting those wide, spacious, pro-sounding mixes that translate well over any system By Craig Anderton We know them when we hear them: wide, spacious mixes that sound larger than life and higher than fi. A great mix translates well over different systems, and lets you hear each instrument clearly and distinctly. Yet judging by a lot of project studio demos that pass across my desk, achieving the perfect mix is not easy…in fact, it's very hard. So, here are some tips on how to get that wide open sound whenever you mix. The Gear: Keep It Clean Eliminate as many active stages as possible between source and recorder. Many times, devices set to "bypass" may not be adding any effect but are still in the signal path, which can add some slight degradation. How many times do line level signals go through preamps due to lazy engineering? If possible, send sounds directly into the recorder—bypass the mixer altogether. For mic signals, use an ultra-high quality outboard preamp and patch that directly into the recorder rather than use a mixer with its onboard preamps. Although you may not hear much of a difference when monitoring a single instrument if you go directly into the recorder, with multiple tracks the cumulative effect of stripping the signal path to its essentials can make a significant difference in the sound's clarity. But what if you're after a funky, dirty sound? Just remember that if you record with the highest possible fidelity, you can always mess with the signal later on during mixdown. The Arrangement Before you even think about turning any knobs, scrutinize the arrangement. Solo project arrangements are particularly prone to "clutter" because as you lay down the early tracks, there's a tendency to overplay to fill up all that empty space. As the arrangement progresses, there's not a lot of space left for overdubs. Here are some suggestions when tracking: Once the arrangement is fleshed out, go back and recut tracks that you cut earlier on. Try to play these tracks as sparsely as possible to leave room for the overdubs you've added. Like many others, I write in the studio, and often the song will have a slightly tentative feel because it wasn't totally solid prior to recording it. Recutting a few judicious tracks always seems to both simplify and improve the music. Try building a song around the vocalist or other lead instrument instead of completing the rhythm section and then laying down the vocals. I often find it better to record simple "placemarkers" for the drums, bass, and rhythm guitar (or piano, or whatever), then immediately get to work cutting the best possible vocal. When you re-record the rhythm section for real, you'll be a lot more sensitive to the vocal nuances. As Sun Ra once said, "Space is the place." The less music you play, the more weight each note has, and the more spaciousness this creates in the overall sound. Proofing the Tracks Before mixing, listen to each track in isolation and check for switch clicks, glitches, pops, and the like, then kill them. These low-level glitches may not seem that important, but multiply them by a couple dozen tracks, and they can definitely muddy things up. If you don't want to get too heavily into editing, you can do simple fixes by punching in and out over the part to be erased. 
DAWs may or may not have sophisticated enough editing options to solve particular problems; for example, they'll probably let you cut and paste, but if something like sophisticated noise reduction is not available in a plug-in, this may require opening the track in a digital audio editing program, applying the appropriate processing, then bringing the track back into the DAW. Also note that some recording programs can "link" to a particular digital audio editor. In this case, all you may need to do is, for example, double-click on a track, and you're ready to edit. Equalization The audio spectrum has only so much space, and you need to make sure that each sound occupies its own turf without fighting with other parts. This is one of the jobs of EQ. For example, if a rhythm instrument interferes with a lead instrument, reduce the rhythm instrument's response in the part of the spectrum that overlaps the lead. One common mistake I hear with recordings done by singer/songwriters is that they (naturally) feature themselves in the mix, and worry about "details" like the drums later. However, as drums cover so much of the audio spectrum (from the low-frequency thud of the kick to the high-frequency sheen of the cymbals), and because drums tend to be so upfront in today's mixes, it's usually best to mix the drums first, then find "holes" in which you can place the other instruments. For example, if the kick drum is very prominent, it may not leave enough room for the bass. So, boost the bass at around 800 to 1,000 Hz to bring up some of the pick noise and brightness. This is mostly out of the range of the kick drum, so the two won't interfere as much. Try to think of the song as a spectrum, and decide where you want the various parts to sit, and their prominence (see Fig. 1). I often use a spectrum analyzer when mixing, not because ears don't work well enough for the task, but because it provides invaluable ear training and shows exactly which instruments take up which parts of the audio spectrum. This can often alert you to a buildup of excessive level in a particular region (for a do-it-yourself version, see the short sketch at the end of this article). Fig. 1: Different instruments sit in different portions of the spectrum (of course, this depends on lots of factors, and this illustration is only a rough approximation). Use EQ to distribute the energy from various instruments so that they use the full spectrum rather than bunch up in one specific range. If you really need a sound to "break through" a mix, try a little bit of boost in the 1 to 3 kHz region. Just don't do this with all the instruments; the idea is to use boosts and cuts to differentiate one instrument from another. To place a sound further back in the mix, sometimes switching in a high-cut filter will do the job by "dulling" the sound somewhat—you may not even need to switch in the main EQ. Also, using a low-cut (high-pass) filter on instruments that veer toward the bass range, like guitar and piano, can help trim their low end to open up more space for the all-important bass and kick drum. Compression When looking for the biggest mix, compression can actually make things sound "smaller" (but louder) by squeezing the dynamic range. If you're going to use compression, try applying compression on a per-channel basis rather than on the entire mix. Compression is a whole other subject (check out the article Compressors Demystified), but suffice it to say that many people have a tendency to compress until they can "hear the effect." 
You want to avoid this; use the minimum amount of compression necessary to tame unruly dynamic range. If you do end up compressing the stereo two-track, here's a tip to avoid getting an overly squeezed sound: Mix in some of the straight, non-compressed signal. This helps restore a bit of the dynamics, yet you still have the thick, compressed sound taking up most of the available dynamic range. Mastering Mastering is the Supreme Court of audio—if you can't get a ruling in your favor there, you have nowhere else to go. A pro mastering engineer can often turn muddy, tubby-sounding recordings into something much clearer and more defined. Just don't expect miracles, because no one can squeeze blood from a stone. But a good mastering job might be just the thing to take your mix to the next level, or at least turn a marginal mix into a solid one. The main point of this article is that there is no button you can click on that says "press here for wide open mixes." A good mix is the cumulative result of taking lots of little steps, such as the ones detailed above, until they add up to something that really works. Paying attention to detail does indeed help. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
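On the spectrum-analyzer point above: if you want to see for yourself how a track's energy spreads across the kinds of regions shown in Fig. 1, here is a rough numpy sketch. The synthesized test signal and the band edges are arbitrary stand-ins of my own; feed it a real mono track (as a float array) to analyze actual material.

import numpy as np

sr = 44100
t = np.arange(sr) / sr
# Stand-in "track": a 60 Hz kick-like sine plus low-level broadband noise
signal = np.sin(2 * np.pi * 60 * t) + 0.2 * np.random.randn(sr)

# Windowed power spectrum of one second of audio
spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

bands = [(20, 120, "lows (kick/bass)"), (120, 800, "low mids"),
         (800, 3000, "presence"), (3000, 12000, "air/sheen")]
total = spectrum.sum()
for lo, hi, name in bands:
    frac = spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
    print(f"{name:18} {100 * frac:5.1f}% of energy")

With this test signal nearly all the energy lands in the low band, which is exactly the kind of buildup the analyzer habit is meant to catch.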
  22. Find Your Own Voice with Vocals by Craig Anderton Yes, you already know about equalizing voice, and how to choose the right mic to flatter a singer. But you're an esteemed visitor to Harmony Central...you want more, better, bigger, and further. This baker's dozen of tips will help take your vocals up one more notch. 1. THE COMPOSITE VOCAL FIX You want a doubled vocal part, and have loop-recorded a vocal on multiple tracks so you can pick and choose among the best bits to create two killer tracks. Unfortunately, for one short phrase, only one track has the perfect take -- maybe the others have flaws, or the singer "hit the jackpot" and couldn't duplicate it properly. Don't worry: Copy the perfect part into the other track, shift its pitch a tiny bit, then delay it by 20-35 ms. 2. OPTIMIZE REVERB DIFFUSION Great vocals demand great reverb, so try low diffusion ("density") parameter values. Here's why: Diffusion controls the echo "thickness." High diffusion places echoes closer together, while low diffusion spreads them out. With percussive sounds, low diffusion creates lots of discrete attacks, like marbles hitting steel. But with voice, which is more sustained, low diffusion gives plenty of reverb effect without overwhelming the vocal from excessive reflections. 3. OPTIMIZE REVERB DECAY Many reverbs offer a frequency crossover point, with separate decay times (RT) for high and low frequencies. To prevent too much competition with midrange instruments, use less decay on the lower frequencies and increase decay on the highs. This adds "air" to the vocals, as well as emphasizes some of the sibilants and "mouth noises" that humanize a vocal. Vary the crossover setting to determine what works best for a particular voice. 4. THE BEAUTY OF AUTOMATED PANNING With doubled vocals, panning both to center, or panning one toward the left and the other toward the right, gives a very different overall effect. For example, if background vocals are part of the picture, I almost always put the voice in the center. If I want the voice to cede some of its prominence to the instruments, I'll spread the two tracks out a little bit to "unfocus" the vocal. Use automated panning to set the vocals as appropriate for particular parts of the song. 5. CREEPY VOCALS Remember those creepy, whispery type vocals that Pink Floyd used to do? Try this one on vocals that are more "spoken" than sung, e.g. rap. Plug in your vocoder (software or hardware), use voice as the modulator, and pink noise as the carrier. You may need to reduce the pink noise high frequencies somewhat. Mix it well behind the vocal -- just enough to add a creepy, whispery element. Also try delaying it by some rhythmic value, then adjusting its level as appropriate. 6. SELECTIVE ECHOES I very much like synchronized echo effects added to voice, but only for specific words and passages. You can do this with automated aux send controls; put synchronized delay in an aux bus and turn up the fader when you want delay. This is best if you want to apply the same effect to multiple tracks. Or, cut the parts you want to echo, paste them in another track in the same position, and add synchronized delay to that track. This is preferred if you have a limited number of aux buses. 7. STEP UP TO THE PLATE If your digital reverb has multiple algorithms, try using a plate-based preset for voice. In the "old school" days of recording, plate reverbs were often favored for vocals over chamber reverbs, which were used on instruments. 
"Real" plates have a tighter, somewhat brighter, less diffused sound that works well with vocals. Of course, there's no guarantee your reverb's plate algorithm actually sounds like a real plate, but give it a shot. 8. SHIFTY PITCHES, PART 1 If your studio has digital tape (e.g. ADAT), there's probably a variable speed control. Use this to thicken doubled vocals; when you record the doubled vocal, speed up or slow down the tape a bit so that this vocal has a slightly different timbre when you play it back at the normal pitch. One caution: if you speed up the tape for a lower-pitched sound, the timing of the performance had better be extra good. Slowing the tape down magnifies any timing discrepancies. 9. SHIFTY PITCHES, PART DEUX This trick is as old as the Harmonizer (trademark Eventide), when engineers discovered that shifting pitch downward 10 to 15 cents, and mixing the harmonized signal behind the straight vocal, added a useful thickening effect. You can do this with any digital pitch-shifting processor, hardware or software. If you're planning to triple the vocal, shift up the second pitch shifter by an amount equal to the downward shift. When tripling, you may want to increase the overall amount of shift. 10. GET DOLBY OFF UNEMPLOYMENT At one time, Dolby Noise Reduction units were used in studios to reduce noise with analog tape. But they also were used on a lot of background vocals to give an airy, bright sound by encoding with Dolby (usually type A) while recording, but not decoding on playback. What Dolby did was compress above a certain frequency and add pre-emphasis, which is ideal for souping up a vocal's intelligibility. It's not all that easy to find old Dolby units, but when you do, they tend to be dirt cheap. 11. SAY HELLO TO VOCAL PROCESSORS Vocal processors, by companies such as TC-Helicon, Antares, and DigiTech, provide a whole bunch of vocal effect functions, from harmonies to weird vocal formant shifting that can turn choirboys into crusty blues singers (and vice-versa). The harmony functions are also useful, and few people are aware of what these things do with toms. If you record a lot of vocals, or do voiceover work, these powerhouse processors offer a really deep bag of tricks. 12. MAXIMIZE OR COMPRESS? It's common knowledge that most pop vocals are compressed to some degree. Lately, though, I've been doing very light compression while recording (just enough to smooth out some of the more abrupt level variations), then using loudness maximizer-type processing (e.g., Waves, iZotope Ozone, or Wave Arts processors) on mixdown. To my ears, this gives a more "raw" sound (as opposed to "smooth") than using compression alone. This seems particularly effective on rock vocals. 13. MODULATION ECHOES Okay, we like echoes on voice. A somewhat rare feature in digital-land is the ability to modulate delay time slightly. This "feature" was an inherent part of tape echo, as the tape speed was never perfect. If your delay doesn't offer modulation, you can simulate the same effect by splitting off the delayed sound through a chorus or flanger set for a short delay, with a very slight amount of modulation (try a random modulation source if possible). Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). 
He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
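The arithmetic behind tips 8 and 9 is worth having on hand: a pitch offset in cents corresponds to a frequency (or playback-rate) ratio of 2**(cents/1200). Here is a small numpy sketch of my own, not any plug-in's code, that applies a –15 cent detune by simple resampling. This is essentially the tape-speed trick in digital form, so duration shifts along with pitch; a real pitch-shift plug-in preserves timing.

import numpy as np

def detune(audio, cents):
    # Resample a mono float array; pitch scales by 2**(cents/1200)
    ratio = 2.0 ** (cents / 1200.0)          # -15 cents -> about 0.9914
    n_out = int(len(audio) / ratio)
    positions = np.arange(n_out) * ratio     # where each output sample reads from
    return np.interp(positions, np.arange(len(audio)), audio)

sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # one second of A440
thick = detune(tone, -15)                             # mix this behind the original
print(f"440 Hz detuned -15 cents lands at {440 * 2**(-15/1200):.1f} Hz")

For the tripling trick in tip 9, you'd make a second copy with detune(tone, +15) so the two shifted layers straddle the original.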
  23. Don't Miss Out on the Next Big Thing in Guitar Distortion By Craig Anderton If you're a guitarist and you're not into multiband distortion...well, you should be. Just as multiband compression delivers a smoother, more transparent form of dynamics control, multiband distortion delivers a "dirty" sound like no other. Not only does it give a smoother effect with guitar, it's a useful tool for drums, bass, and believe it or not, program material – some people (you know who you are!) have even used it with mastering to add a distinctive, unique "edge." As far as I know, the first example of multiband distortion was a do-it-yourself project, the Quadrafuzz, that I wrote up in the mid-'80s for Guitar Player magazine. It remains available from PAiA Electronics (www.paia.com), and is described in the book "Do It Yourself Projects for Guitarists" (BackBeat Books, ISBN #0-87930-359-X). I came up with the idea because I had heard hex fuzz effects with MIDI guitar, where each string was distorted individually, and liked the sound. But it was almost too clean; and I wasn't a fan of all the intermodulation problems with conventional distortion, either. Multiband distortion was the answer. However, we've come a long way since the mid-'80s, and now there are a number of ways to achieve this effect with software. HOW IT WORKS Like multiband compression, the first step is to split the incoming signal into multiple frequency bands (typically three or four). These usually have variable crossover points, so each band can cover a variable frequency range. This is particularly important with drums, as it's common to have the low band zero in on the kick and distort it a bit, while leaving higher frequencies (cymbals etc.) untouched. Next, each band is distorted individually (incidentally, this is where major differences show up among units). Finally, each band will usually have a volume control so you can adjust the relative levels among bands. For example, it's common to pull back on the highs a bit to avoid "screech," or boost the upper midrange so the guitar "speaks" a little better. With guitar, you can hit a power chord and the low strings will have minimal intermodulation with the high strings, or bend a chord's higher strings without causing beating with the lower ones. SOFTWARE PLUG-INS The first multiband distortion plug-in was a virtual version of the Quadrafuzz, coded as a VST/DX plug-in by Spectral Design for Steinberg. Although I was highly skeptical that software could truly emulate the sound of the hardware design, fortunately a guitarist was on the design team, and he nailed the sound. The Quadrafuzz was included with Cubase SX, and is currently available from Steinberg as a "legacy" plug-in. But they took it further than the hardware version, offering variable frequency bands (the hardware version is "tuned" specifically for guitar), as well as five different distortion curves for each band, from heavy clipping to a sort of "soft knee" distortion. As a result, it's far more versatile than the original version. A free plug-in, mda's Bandisto, is basic but a fine way to get started. It offers three bands, with two variable crossover points, and distortion as well as level controls for each of the three bands. There are two distortion modes, unipolar (a harsh sound) and bipolar, which clips both sides of the waveform and gives a smoother overall effect. While the least sophisticated of these plug-ins, you can't beat the price. Bandisto is as good a way as any to get familiar with multiband distortion. 
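Before moving on to the other plug-ins, here is the split/distort/recombine idea from "HOW IT WORKS" reduced to a short Python/scipy sketch. The crossover points, drive amounts, and band levels are arbitrary examples I chose for illustration, not the settings of any product named here.

import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100

def band_sos(lo, hi):
    # Butterworth section for one band; a None edge means low/highpass
    if lo is None:
        return butter(4, hi, btype="lowpass", fs=sr, output="sos")
    if hi is None:
        return butter(4, lo, btype="highpass", fs=sr, output="sos")
    return butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")

# Three bands with variable crossovers, like the plug-ins described above
bands = [(None, 250), (250, 2500), (2500, None)]
drives = [4.0, 8.0, 2.0]     # per-band distortion amount
levels = [1.0, 0.8, 0.5]     # per-band output level (pull back the highs)

def multiband_distort(x):
    out = np.zeros_like(x)
    for (lo, hi), drive, level in zip(bands, drives, levels):
        band = sosfilt(band_sos(lo, hi), x)
        out += level * np.tanh(drive * band) / np.tanh(drive)  # soft clipper
    return out

# A three-note "power chord" test signal, normalized to +/-1
chord = sum(np.sin(2 * np.pi * f * np.arange(sr) / sr) for f in (110, 165, 220))
dirty = multiband_distort(chord / 3)   # the processed bands, summed back together

Because each tanh clipper only ever sees its own slice of the spectrum, the low notes can't intermodulate against the high ones, which is the whole point of the technique.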
Ohm Force's Predatohm provides up to four bands, each of which includes four controls to change the distortion's tonality as well as the channel's overall tone and character. Unique to Predatohm is a feedback option that can add an extremely aggressive edge (it's all over my "Turbulent Filth Monsters" sample CD of hardcore drum loops), as well as a master tone section. Wild, wacky, and wonderful, this plug-in has some serious attitude. Under its spell, even nylon-string guitars can become hardcore dirt machines. iZotope's Trash uses multiband distortion as just one element of a comprehensive plug-in that also incorporates pre- and post-distortion filtering, amp cabinet modeling, multi-band compression, and delay. The number of bands is variable from one to four, but each band can have any one of 47 different algorithms. Also, there are two distortion stages, so you can emulate (for example) a fuzzbox going into an overdriven amp (however, the bands are identical for each of the two stages). The pre- and post-distortion filter options are particularly useful for shaping the distortion's tonal quality. This doesn't just make trashy sounds, it revels in them. Sophisticated trash may be an oxymoron, but in this case, it's appropriate due to the complement of highly capable modules. ROLLING YOUR OWN You're not constrained to dedicated plug-ins. For example, Native Instruments' Guitar Rig has enough options to let you create your own multiband distortion. A Crossover module allows splitting a signal into two bands; placing a Split module before two Crossover modules gives the required four bands. Of course, you can go nuts with more splits and create more bands. You can then apply a variety of amp and/or distortion modules to each frequency split. Yet another option is to copy a track in your DAW for as many times as you want bands of distortion. For each track, insert the filter and distortion plug-ins of your choice. One advantage of this approach is that each band can have its own aux send controls, as well as panning. Spreading the various bands from left to right (or all around you, for surround fans!) adds yet another level of satisfying mayhem. In terms of filtering, the simplest way to split a signal into multiple bands is to use a multiband compressor, but set to no compression and with individual bands soloed (most multiband compressors will let you solo or bypass individual bands). For example, with three tracks, you could have a high, middle, and low band from each crossover feeding its own distortion plug-in. Here a guitar track has been "cloned" three times in Cakewalk Sonar, with each instance feeding a multiband crossover followed by an amp sim plug-in (Native Instruments' Guitar Rig). The multiband compressors have been edited to act as crossovers, thus feeding different frequency ranges to the amp sims. AND BEST OF ALL... Thanks to today's fast computers, sound cards, and drivers, you can play guitar through plug-ins in near-real time, so you can tweak away while playing crunchy power chords that rattle the walls. Happy distorting! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  24. Convolution-based reverb offers realism—but what are the tradeoffs? By Craig Anderton Acoustic spaces create the most natural reverb, but there's only one "preset"—and try fitting a concert hall in your project studio. Granted, some people run mics and speakers to a tiled room (e.g., bathroom) for some decent, tight-sounding reverbs. But emulating the classic concrete room sound that was on so many great recordings, let alone other acoustic environments, is not an easy task. OLD SKOOL DSP Before checking out convolution reverb, consider synthesized digital reverb, which has ruled the digital reverb world for several decades (Fig. 1). Fig. 1: Ableton Live's reverb is an example of an algorithmic type that synthesizes the sonic effects of being in a reverberant space. These generally break down the reverb effect into two processes. The first is "early reflections," the initial sound that happens when sound waves first bounce off of various surfaces. Then comes the reverb "tail," which is more of a wash of sound caused by feeding these reflections into an engine that calculates and synthesizes a gazillion reflections, with their various amplitude and frequency response variations. Most algorithmic reverbs also create diffusion, which determines whether the echoes are more blended or discrete. Many, if not most, digital reverbs are not true stereo devices; they sum stereo inputs into mono, and synthesize a stereo space. Unlike "real world" reverb, though, where you hear different reverb effects from different sound sources in a space, DSP produces a "one size fits all" reverb that subjects all sound sources—regardless of location—to the same reverb effect. While most of the time this is okay, for complex orchestral emulations, standard reverb algorithms lack precision. ENTER CONVOLUTION REVERB Convolution reverbs are based on samples rather than synthesis, which produces a highly realistic sound. Technically speaking, convolution is a mathematical operation; convolving two signals in the time domain is equivalent to multiplying their spectra. For reverb, one of these will be the sound source itself, and the other will be an impulse, which is a recording of an acoustic space's characteristics. As an analogy, think of the impulse as a "mold" of a particular space into which the sound is "poured." If the space is a concert hall, then the sound takes on the characteristics of the concert hall. But anything can be used as an impulse. For example, convolving a synthesized guitar patch with an impulse recording of an acoustic guitar body creates a more realistic guitar sound. Impulses exist not just for famous concert halls, clubs, etc., but also for amplifiers, tunnels, resonant structures, spring reverbs, filters, and the like. I've even used drum loops as impulses—wild. The tradeoff has traditionally been the usual sampler vs. synthesizer issue: Lack of parameter control. But just as some companies have figured out how to get "inside the sample," convolution reverbs are getting more flexible as well. Waves broke through with their IR-1, which allowed tailoring the sound produced by the impulse (Fig. 2). Fig. 2: Waves' IR-1 started the trend to adding more editing capabilities to convolution reverbs. Most modern convolution reverbs are quite editable, and as easy to use and understand as standard reverbs. You may notice that changing parameters feels a little slow due to all the calculations being performed, but this isn't a big deal. 
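In code, the "pour the sound into the mold" step really is close to a one-liner. Here is a numpy/scipy sketch; the exponentially decaying noise burst is a crude stand-in of my own for a recorded impulse response, which you would normally load from an audio file.

import numpy as np
from scipy.signal import fftconvolve

sr = 44100
# Dry source: a quarter-second 220 Hz tone with a fade-out
n = sr // 4
dry = np.sin(2 * np.pi * 220 * np.arange(n) / sr) * np.linspace(1, 0, n)

# Fake "room": 1.5 seconds of exponentially decaying noise as the impulse
t = np.arange(int(1.5 * sr)) / sr
impulse = np.random.randn(len(t)) * np.exp(-3.0 * t)

wet = fftconvolve(dry, impulse)         # the convolution (multiplied spectra, via FFT)
mix = 0.7 * np.pad(dry, (0, len(wet) - len(dry))) + 0.3 * wet   # wet/dry blend
print(f"dry {len(dry)/sr:.2f} s -> output with {len(wet)/sr:.2f} s tail")

Swap the noise burst for a real recorded impulse and the same two lines of DSP give you the concert hall, the guitar body, or the drum loop, whichever "mold" you loaded.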
Thanks to today’s faster processors, convolution reverbs have become commonplace—several virtual instruments, like Native Instruments’ Kontakt and MOTU’s Ethno Instrument, include convolution reverbs, as do several DAWs (Fig. 3). Fig. 3: The Open Air reverb included with PreSonus Studio One Pro can open impulses from other sources, like the one from Sonar’s PerfectSpace convolution reverb. As to the impulses, they’re created by recording a space’s reverberant characteristics after “exciting” the room with a set of sweep tones designed for impulse recording, or firing a shot from a starter pistol. The object is to use a trigger that generates sound throughout the entire audible spectrum, to allow capturing the space’s total frequency response. THE REST OF THE STORY Convolution-based processing slurps CPU power, but fortunately, host software “freeze” functions (which apply effects to a hard disk track, then “disconnects” the effects) tends to make that less of a drawback, even with slower CPUs. Another problem is latency, which is added by the convolution process. For reverb, it’s not too serious; think of it as free pre-delay. Early convolution reverbs used to have latencies in the hundreds of milliseconds, but many now hit under 5-10ms with a fast enough CPU. If the delay is problematic, you can bounce just the processed sound to a track, and slip it forward in time. Finally, a convolution reverb is only as good as its impulses: If someone recorded a room impulse with a tinny-sounding mic, you’ll have a tinny-sounding room. You also want a reverb package that supplies a lot of impulses, so you can really discover convolution reverb’s power. However, note that you can download free impulses from www.noisevault.com, and some are quite good. The site also hosts discussions and news about convolution-based reverbs. But whether you use a stand-alone convolution reverb, the one bundled into a host program, or even one included with an instrument, you’ll find convolution offers extremely convincing reverb emulations—and more. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  25. Amp Modeling: It's Not Just for Guitars Any More . . . By Craig Anderton There's a lot of excellent amp modeling software these days for guitar, but amp modeling with drums? Well, AmpliTube isn't just about guitar and bass: As any keyboard player will tell you, a little judicious grit can really add character to sterile synths. And I've had great luck using guitar modeling on drums (the original version of AmpliTube was one of the "secret ingredients" in my "Turbulent Filth Monsters" drum loop sample CD). So, drum roll, please... CLONING TRACKS: A GOOD THING Generally, I use modelers to provide support for an existing drum track rather than to "take over" the sound. The easiest way to do this is to copy your drum track, then insert AmpliTube as an effect in one of the tracks. Varying the level of the straight and processed tracks lets you determine the intensity of the processed sound. WHICH KIND OF AMP WORKS BEST FOR DRUMS? Of course, that's a matter of taste. Overall, distorted presets sound great for nasty applications, but can also add a kind of tonality to the drums by distorting the decays. Go to the Preset window; Complete Rigs > Crunch has a bunch of useful presets. A good place to start is the "Blues and More" preset, as it's crunchy without getting too nasty. If you tuck the copied drum track subtly into the background, you'll get a nice crunch that doesn't overwhelm the drums. On the other hand, if you have a yearning for hardcore techno, be my guest! "Fuzzace2" is the kind of preset that takes your drums back to a Belgian rave in the late '90s. Cleaner presets, while more subtle, can add body and depth. Try the "DarkSoloing" preset under Styles > Jazz for hip-hop type drums; it adds major fullness. THE CABINET One of the most important switches in the amp is the Bypass switch. This allows you to bypass the amp completely, and use only the Cabinet and Mic modeling. These two can add a lot of variety to drums, in a subtle way. For this application, I often use Configuration 2, which creates a parallel chain (Fig. 1). I'll bypass all the effects and the amps, and use two different cabinets and mikings to create two different tonalities. Fig. 1: With a parallel effects chain, you can add even more variations to the sound by using two different cabinets and mikings. The Level control toward the lower right affects whichever module you've chosen, so it's easy to set a balance of the two chains by adjusting the cabinet levels. EFFECTS! COOL!! AmpliTube's stomp box effects can really help spice up the drum sounds in, uh, interesting (some would say perverse) ways. My flat-out favorite is the Envelope Filter, which can sound superb on drums - funky, greasy, and squishy (Fig. 2). The Envelope Filter offers lowpass, bandpass, and highpass filtering; with drums, using LP mode with a 24dB/octave slope creates the most obvious, funky sound, but try the other options as well (what an envelope filter actually does under the hood is sketched at the end of this article). Fig. 2: AmpliTube's Envelope Filter can create some truly funky sounds. For ultra-percussive effects, check out the Noise Gate function. By setting the threshold really high, you can pretty much nuke the lower-level drum sounds, and let through just the loudest peaks. It's fun to add reverb or delay to just these sounds - the overall result is sparser than affecting all the drums. The Pitch Shifter is another goodie on drums, particularly with toms. Move the Coarse control around, and you'll get "talking drum"-type effects. Note that the Level control is kind of a misnomer; it's more of a wet/dry control. 
If you're using the Pitch Shifter in a copied track, turn Level up all the way so that you hear the pitch shifted effect only. Considering how great the Pitch Shifter sounds, you might expect the Harmonator to be even better. Although the Harmonator is indeed more flexible, it isn't really as predictable with drums. But it does do some really bizarre things if you're into more experimental sounds. AUTOMATION I mentioned moving the Pitch Shifter's Coarse control, but of course, you don't want to have to do that every time you play the track. Fortunately, just about everything can be automated using standard VST automation protocols (i.e., set up to record automation, and tweak the control). However, there are a few fine points involving automation. AmpliTube 3 has greatly improved automation and MIDI control compared to older versions, which don't respond directly to MIDI control; in other words, you can't do something like invoke a "MIDI learn" function for a particular parameter, then move an external pedal. The workaround for older versions is that with some hosts, you can tie a MIDI controller to the host's automation. For example, in Sonar Producer Edition's console view, starting with Version 5 there are four sliders for each inserted channel effect that can be assigned to particular parameters, and these sliders can in turn be remote controlled via MIDI. This allows for "hands-free" parameter control, which is important for guitarists. It's also important to note that an effect can be automated once in each of the two "rigs" (A and B). If you insert two instances of the same effect in the same rig, only the first one can be automated. DRUM FUN I could go on, but I'll spare you some even stranger options. There's a lot you can do with guitar processing and drums, and AmpliTube is just as happy bending your rhythms as it is messing with a guitar or bass...check it out. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
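Since the Envelope Filter gets such a strong recommendation above, here is what's going on under the hood, sketched in Python: rectify and smooth the input to get its envelope, then let that envelope push a lowpass filter's cutoff around. This is the generic envelope-filter technique, not IK Multimedia's actual algorithm, and every number is an arbitrary starting point.

import numpy as np

sr = 44100

def envelope_filter(x, f_min=200.0, f_max=3000.0, release_ms=10.0):
    alpha = np.exp(-1.0 / (sr * release_ms / 1000.0))   # envelope smoothing
    peak = max(np.max(np.abs(x)), 1e-9)                 # for cutoff scaling
    env, y = 0.0, 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), alpha * env)        # instant attack, smoothed release
        cutoff = f_min + (f_max - f_min) * (env / peak)
        g = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)    # one-pole lowpass coefficient
        y += g * (s - y)
        out[i] = y
    return out

# Drum-like test hit: a decaying noise burst; the louder the transient,
# the further the filter opens, then it sweeps shut as the hit decays
n = sr // 2
hit = np.random.randn(n) * np.exp(-8.0 * np.arange(n) / sr)
squishy = envelope_filter(hit)

That open-then-shut cutoff sweep on every hit is the "funky, greasy, and squishy" character; swapping the one-pole lowpass for a resonant bandpass gets you closer to the classic auto-wah flavor.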