Everything posted by Anderton

  1. Guitars are cool because they're so amazingly versatile—treat 'em right, and they'll fit right in with today's cutting-edge dance music

By Craig Anderton

I’ve been playing guitar with DJs and electronics-oriented groups for 15 years, and it has always been a great musical experience: DJs have someone with whom they can interact, and I get to play with a cool “rhythm section.” With bands, the guitar’s organic quality complements the perfection of the electronics. For me, the question isn’t “real instruments” vs. “laptops and electronics,” the question is how best to integrate the two—if for no other reason than because it’s fun (which is my favorite reason to do anything).

People ask me whether an “EDM guitar” needs MIDI, onboard electronics, fancy knobs, built-in touch pads, or other technological marvels. My answer is that an “EDM guitar” needs to do only three things:

Look really cool

Be light-colored, so it takes on the color of the lighting (extra credit: mirrored pickguard)

Be comfortable to play, so you can move around (and do freakish things with the guitar)

So it’s probably not a surprise that the Brendon Small Snow Falcon, shown in the picture below, is my current favorite nominee for an “EDM guitar.” But really, what makes an “EDM guitar” doesn’t have much to do with the guitar itself—it’s about how you play it, and how you process it.

SYNC YOUR SOUND TO THE BEAT

Having the guitar’s audio influenced by the music really integrates the guitar into the overall sound; here are some examples of how to do this.

Compressor: Both hardware and software compressors can have a sidechain input. This input controls the compression from external audio, like drums or percussion. For example, having extreme compression happen whenever there’s a snare hit causes the guitar to “pump” or “splash.”

Noise gate: A noise gate lets the input pass to the output if the sidechain signal is above a settable threshold. A common application is feeding kick and snare into this input to “gate” the guitar sound in time with the drums.

Vocoder: No law says you have to use a microphone as a vocoder’s modulation input, and a synthesizer as the carrier input. Try distorted guitar power chords as the carrier, then modulate that with the drum track. This makes the guitar sound like percussive, tuned drums.

MIDI clock control: MPC beat boxes, sequencers (like Ableton Live, Cakewalk SONAR, Apple Logic, etc.), and “workstation” keyboards typically produce clock signals that correspond to tempo. Processors like Roger Linn’s AdrenaLinn (a great processor for EDM) can sync to the master clock via their MIDI inputs. Many computer plug-ins also respond to the host tempo, and can do all kinds of spectacular beat-synched effects (the tempo math behind this is sketched below).

Footpedals: It’s old school, but you can move wah and volume pedals rhythmically. Also note some guitars, like Gibson’s FBX, have hex outputs (i.e., an individual output for each string). This lets you apply different rhythmic plug-ins to different strings, which can produce sounds that are amazingly synthetic but also very “real.”

ADAPT YOUR PLAYING STYLE TO THE BEAT

Highly rhythmic guitar playing often fits in best: strumming muted strings, doing rhythmic “chops,” single-note short arpeggios, and the like. But sustained sounds can work really well, too; there’s a device called an “E-Bow” that can drive a string into continuous sustain. DJs appreciate that I can sustain a note (or notes) while they’re doing transitions.
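As a footnote to the “MIDI clock control” entry above, the tempo math that beat-synched effects rely on is simple enough to sketch. Here is a minimal, hypothetical Python example (the article itself contains no code); it just converts a session tempo into the delay and LFO times a clock-synched processor would land on.

```python
# Convert a session tempo into beat-synched effect times.
# Hypothetical helper for illustration; any tempo-synced delay or
# tremolo does this math internally when it follows the host or MIDI clock.

def beat_ms(bpm: float) -> float:
    """Milliseconds per quarter note at the given tempo."""
    return 60_000.0 / bpm

def synced_times(bpm: float) -> dict:
    """Common note values expressed as delay/LFO times in milliseconds."""
    quarter = beat_ms(bpm)
    return {
        "1/4":         quarter,
        "1/8":         quarter / 2,
        "1/16":        quarter / 4,
        "1/8 dotted":  quarter / 2 * 1.5,
        "1/8 triplet": quarter / 3,
    }

if __name__ == "__main__":
    for note, ms in synced_times(128).items():   # 128 BPM, a typical EDM tempo
        print(f"{note:>11}: {ms:6.1f} ms")
```

At 128 BPM, for example, an eighth note works out to about 234 ms, which is the delay time a synched echo would use.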
I also like to do “sound effects,” like tapping the back of the neck, holding the guitar’s headstock against a speaker cabinet for sustained feedback, pick scrapes, sliding a beer bottle up and down the strings, and even hitting the strings with a drumstick to “trigger” chords. Why be normal?

Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. If you're into mobile music-making, you'll want to check this out By Craig Anderton You’re a musician on the go. You want a keyboard that can go with you, not take up a lot of weight or space, and not look so weird TSA agents will engage you in long conversations...which brings us to CME’s Xkey 25 and Korg’s nanoKEY2. Price: Xkey 25 $100 street, nanoKEY2 $50 Size: Gotta have full-size keys? Read no further: it’s the Xkey (15.27” x 5.31”). The nanoKEY2 has mini-keys, but it’s a lot smaller (12.8” x 3.27”)—and incidentally, fits perfectly between two ribs in the bottom of my suitcase so it’s well protected from the Minions of Satan who handle airline baggage. The Xkey fits in almost any laptop case. Feel: Both use “chiclet”-type 25-key keyboards, so there’s only about 1/8th inch of travel. You have to get used to this, but it works. The CME has a “crisper” feel that’s closer to what you’d expect from a keyboard; the nanoKEY2 keys feel “looser.” Other controls: The nanoKEY2 has button switches for pitch bend up, pitch bend down, transpose up, transpose down, mod wheel, and sustain. The mod wheel and sustain buttons can be set to any controller. However note that these are all switches, so for mod wheel, bend, and sustain, you need to use the computer applet (described later) to set the time required to reach the maximum bend amount or modulation or sustain delay, as well as minimum and maximum values. The Xkey 25 has the advantage of pressure-sensitive pitch bend and mod wheel buttons, so you have much more control—with the caveat that doing nuanced mod wheel motion is a challenge because you need to exert steady pressure. Pitch bending is quite doable and expressive if the range is only a few semitones. The Xkey 25 is definitely more expressive, and much less dependent on accessing the editor. Blinky lights: Korg wins this one—the octave button color shows the amount of transposition. The CME has a single red LED on the back that indicates it’s connected to USB. Bling factor: Tough call, they both look quite cool. The Xkey 25 has an Apple-esque, brushed aluminum silver outside, and is offered in five different finishes. But even the low-cost nanoKEY2 comes in either white or black, which matters if you’re into the whole Star Wars light side/dark side thing. Overall the nanoKEY2 looks cuter, the CME more authoritative. Aftertouch and velocity: They both have velocity, but here’s a major difference: the nanoKEY2 doesn’t have aftertouch, while the CME offers selectable channel aftertouch or polyphonic aftertouch (check out this article for more on polyphonic aftertouch). Seriously—in a $100 keyboard! The good news is that the poly aftertouch response is surprisingly good. The bad news: good luck finding a soft synth that responds to it properly. Arturia’s CS-80V does a great job, but with NI’s Kontakt, each key has to be its own Zone for poly aftertouch to work. Many synths don’t support it at all, and some accept poly aftertouch messages but treat them as channel aftertouch. However, now that there’s a decent, low-cost keyboard with poly aftertouch, I will be relentless (and probably annoying) in pestering software companies to take advantage of it. Computer applet: Both have cross-platform computer applications that let you do things like choose velocity curves and such, but the Xkey applet gives many more customization options (and also supports iOS/Android). 
You can draw your own velocity curves (or swipe with iOS devices), and each key can transmit its own program change, a controller value, etc.

CME Xkey applet

Korg nanoKONTROL2 applet

You can even set individual sensitivity for each key, but there’s a major catch: unlike the nanoKEY2, you can’t save any of these settings for future recall (although any changes you make are saved in non-volatile memory). Given the Xkey applet’s flexibility, let’s hope CME adds a save/load preset ability in a future update. On the other hand, the Korg software is temperamental—on both my laptop and desktop Windows machines I eventually got the MIDI driver and editor working, although I’m not quite sure how. Regardless, even when the MIDI ports were supposedly not working, I had no trouble actually accessing any program with the nanoKEY2—only accessing the Editor applet was an issue. Neither applet is multi-client; you can’t use it while the program connected to the keyboard is running. So neither applet hits a home run. The Korg would if it installed more smoothly and was multi-client so you wouldn’t have to quit the program to change important performance parameters, and the Xkey would hit a grand-slam homer if it could save and load presets. (According to the company, this is planned for a future update.)

Accessories: Both include USB cables, but don’t lose the one that comes with the CME—it needs to have a thin profile connector, and not all micro-USB connectors will fit.

And the winner is: For serious functionality, full-size keys, more control, smoother app installation, solid construction, and the overachievement of poly aftertouch, CME’s Xkey 25 gets the prize. It’s set up next to my desktop computer’s QWERTY keyboard for doing quick patch tests, or catching inspirations. But when I’m going on the road, the nanoKEY2 wins. It’s pretty much indestructible—even if you fly United Airlines—weighs 0.54 lbs. compared to the Xkey’s 1.32 lbs., takes up hardly any space at all, and costs half as much, so if it gets eaten by a TSA agent, I’ll be out a lot less bucks.

Purchase the CME Xkey 25 or Korg NanoKEY2 at B&H

Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  3. There's a reason why Eddie Kramer has endured in a fickle industry, and here are a lot more reasons

by Craig Anderton

If you read Part 1, you know Eddie Kramer is pretty cool. And you know that Eddie knows recording. But his definition of the recording process starts long before he pushes the record button.

“I feel strongly that records are made in pre-production, providing the band knows their craft, the songs are great, and I’ve done my job correctly by rehearsing the band or artist to the point where everybody knows what’s happening. And if you get into the studio and the rehearsed song doesn’t quite work, you have to have rehearsed a backup plan. Maybe the tempo or key is wrong; maybe it shouldn’t be electric or acoustic. At least we will have rehearsed alternatives.

“But you also can have the situation where a great thing happens by chance because a great bunch of musicians knows their instruments. You have to be sensitive to those circumstances as well.”

I of course had to ask about how digital sometimes locks out the chance to do something “wrong” that turns out to be “right,” because digital is normalized to a particular workflow.

“The ‘vintage way’ of recording allows one the flexibility of doing things by accident, of saying ‘Wow, check out that sound, I’ve never heard that before.’ When we first started using a digital reverb on Led Zeppelin, I had it set up wrong and it was feeding back on itself . . . it sounded like it was inverted inside of a long tube, and I could never have gotten that sound in the digital world.

“Analog is very forgiving. You can do bizarre, wacky things, which goes hand-in-hand with rock and roll. It’s not meant to be perfect.”

LET’S GET WRONG

Having covered when wrong is right, it seemed right to give equal time to when wrong is just plain wrong. And Eddie has some strong opinions.

“I must say this: If you look at the top 20 records and you A-B them, it’s all the bloody same. The level is just ridiculous. The record companies have instructed everyone to cut things as hot as possible. There’s no dynamic range; it’s at the point where it sounds horrible. I rue the day when some twit said ‘Yeah, let’s get it all the way up to the maximum.’ That has produced an insatiable demand for more and more level. Every mastering engineer — [Bob] Ludwig, [Bernie] Grundman, [George] Marino — they’re all quite upset about this. On the Hendrix stuff I’ve been working on, I turned things back a bit on the level because I want it to sound like the way it was in the studio.

“We work our balls off to get the sounds to be cool, to make it sound great, to inspire people. And it comes out sometimes like crap, and that’s terrifying.

“And the other thing that’s really terrifying is the sound of MP3s. We are raising a generation of kids who listen to MP3s and have no concept of what a great record should sound like. It’s compressed, it’s rolled off at 8kHz. I don’t know what the answer is; we’re soldiering on regardless. We’ll do the very best we can to serve the music, serve the artist. We are a service industry [laughs].

“Look at the top engineers. Elliot Scheiner, Bruce Swedien, Joe Chiccarelli, Jimmy Douglass, Chuck Ainlay—all of these guys are struggling with the same thing. We’ve been in the business, we’ve seen all the changes, we love to see the public embrace new technology but not at the expense of the music.

“Unfortunately the whole music industry from top to bottom is all about the bottom line. It puts us in a very bad position.
We’re supposed to be in charge of helping the artist be creative, make them feel great about their music. But when it comes out as an MP3, all that work is for naught.”

THE HENDRIX EXPERIENCE

Most interviews with Eddie start off with Hendrix. But that’s arguably wrong, because Eddie is about so much more than just Hendrix. Still, it would also be wrong not to talk about Hendrix, who excelled at making wrong right: He wrote right-handed but played his right-handed guitar left-handed, and strung it like a right-handed version so when he strummed down, he hit the low E first and the high E last. Listening back to his studio albums, there’s a flow that seems to imply a very intuitive artist. Well, Eddie was there, so . . .

“Look at Jimi’s timeline: 4-track, 1/2" 15 IPS going four-to-four-to-four, three times. Recording drums in mono, then stereo, then hitting the US in ’68 and going to 12-track. Then we scrapped that 1" Scully machine and transferred over to 16 tracks. Think about the way we used to record: I always had a [variable speed oscillator] sitting next to me, and I’d always be fooling around with speeding up and slowing down the tape.”

[Editor’s note: The earliest way to implement variable speed was by using a sine wave oscillator feeding a beefy power amp, and driving the capstan motor with the amp’s output. The motor synched to the line frequency, so if the oscillator was set to 60Hz, the motor behaved normally. But if you sped up or slowed down the oscillator, the motor would follow along. — CA.]

“The first time we demonstrated phasing to Jimi [while recording] ‘Axis: Bold as Love,’ he flipped out – ‘I’d heard that sound in my dreams.’ Of course that was a calculated sound, not an accident . . . but when you think about tape flanging, it’s never the same. You never knew exactly how it was going to sound. On ‘Electric Ladyland,’ I actually got a sound appearing behind my head for maybe two seconds — this happened by accident, and I could never re-create it. You can imagine how scary that was!

“It is true that with Hendrix, he could do stuff in one or two takes. He was very well-prepared, and always knew what he was doing at all times. Still, there were times when he would be dissatisfied with his performance, end up doing 40 takes, then come back the next day and say ‘I can do it even better.’ You can’t generalize; each circumstance in the studio is totally different.”

When it comes to latter-day Hendrix, though, Eddie has fallen in love with the potential of the DVD.

“I love DVDs for their flexibility, there is a tremendous amount of freedom I can bring to the table. [When we] finished redoing the Hendrix Woodstock performances, we restored Jimi’s original performance back to its original two hours and mixed it in 5.1 surround; you really feel like you’re sitting in the mud. The only thing that’s missing is the mud!

“In the process, we found all this footage no one ever saw before. That’s creative and exciting, I love being able to do archaeological digs. This is stuff that’s just been sitting around. Here we are, decades later, making the movie sound and look so much better.”

Kramer’s photographic work is impressive, so it seemed natural to ask if he’s crossed over to doing video as well as audio.

“Well, I can’t help but be part of that video process. I mix first in stereo, then in 5.1, then they lock the picture to my mix. In that regard, the digital world is a big help because you used to have to match the sound to the picture.
The beauty of 5.1 is I can place the instruments carefully, and get accurate spatial effects. You can actually hear the delays coming off the towers.” EDDIE’S GEAR ADVENTURES About a decade ago, one of Eddie’s side projects is DigiTech’s Hendrix pedal (since discontinued). It didn’t just model a Strat going through a stack of Marshalls; you wouldn’t need Eddie for that. Instead, it modelled the sound of Jimi’s Strat going through Marshalls recorded to tape, then processed using Eddie’s various engineering/production techniques. DigiTech called it “Production Modelling” because it modelled the entire production chain. Eddie’s doing some more pedals called F-Pedals, and rather than try to describe them, it’s a lot easier just to link to the web site that will tell you all about them. And of course, his Signature Series bundle for Waves is still very much alive and well. You might also want to check out some of his videos, and of course, there's his main site. He’s also working with a bunch of bands, including Xander and the Peace Pirates. But as the interview wound down, it was clear there was more to the story than his projects. Here’s a guy who could rest on his laurels, collect some checks, and kick back in a hot tub on Maui. But he’s remained relevant despite being in this business for decades. Why? “If you don’t learn from each group of musicians you work with, you might as well hang it up. You have to keep your mind open; I’ve learned new techniques, I’ve learned how to work with the digital world, but I’ve found a way to integrate the two worlds to make them compatible with each other . . . to get the best out of both, and make them a unified whole. “When I play the faders, I play them like a piano. I must have that tactile feeling — I want to be connected to the sound with my fingertips.” Which is probably why the music he’s worked on connects with your emotions. Eddie is the first to credit the artist, which is not surprising. But look a little deeper: Eddie has served as an amplifier for the artist’s art — and that’s a type of amplifier no technology can create. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  4. Get the most out of Windows when you’re making music

by Craig Anderton

Windows is a very powerful operating system, but the price of that power is the process of learning how to get the most out of it. If you’re new to Windows, or switching from the Mac, here are five essential aspects of Windows you need to know.

System Restore

Unlike the Mac’s tightly controlled hardware environment, there are endless permutations and combinations of motherboards, graphics cards, memory, software, and the like that make up a Windows machine. Although Windows has become pretty much “plug-and-play,” rogue plug-ins, poorly written drivers, and other issues can be the proverbial straw that breaks the camel’s back. System Restore takes a “snapshot” of the system so you can restore your computer to a previous state; to access it, click on System in the Control Panel, then click on System Protection (Fig. 1).

Fig. 1: System Protection is where you create restore points, and choose a restore point for restoring your computer’s system.

This doesn’t affect data or documents, only your system. For example, before installing some shareware program of dubious origin, set a system restore point. If it hoses your system (or for that matter has spyware), you can return your computer to its state prior to installation. Most programs will set a restore point automatically when installing, but don’t count on it—you may not need to set a restore point manually very often, if at all. But when you do need one, you’ll be glad to have it.

The Task Manager

The Task Manager shows all the processes running on your computer, how much memory and CPU they take, and a brief description (Fig. 2). Right-click on the Taskbar, and select Start Task Manager to open this utility.

Fig. 2: The Task Manager shows what’s happening with your silicon pet brain.

Perhaps more importantly, it can let you end a task so that it doesn’t run. When you open the Task Manager, odds are you’ll see a bunch of processes running that you don’t really need, like iTunesHelper.exe. Click on a process to highlight it, then click on End Process. If you end a process that your computer really needed, simply restart and it will start up again. Another use is that if a program freezes, you can often end it from the Task Manager, then open it again without having to re-boot your computer—a definite time-saver. And if you see a running program called something like “Destructive Spyware” (well, the name wouldn’t be that obvious, but you get my point) right-click on it to find the file location. If a file seems suspicious, enter the name into a search engine. It may be something innocuous, like a graphics card control panel, or it may be something that’s worth uninstalling.

Backup and Restore

There’s a technical term for people who have data they don’t want to lose, but haven’t made a backup: gambler. Or maybe just “naïve and trusting.” The average hard drive will probably last at least three years, but that means that some will last a lot longer—and some will die within days, weeks, or months. You can find a variety of commercial backup programs, but there’s a backup and restore function built into Windows. Type “backup” into the search box, and Windows will show you the way. You can also create a System Image that’s essentially a clone of your system (and external hard drives if you want), as well as a System Repair disc so you can at least boot up if your hard drive explodes.
The Windows backup utility differs from commercial programs in that the latter are often intelligent enough to do “incremental backups”—in other words, they only back up what’s been changed since the last backup, which saves time. But having backup built into Windows means you have no excuse to say “my computer crashed, and all my hard work is gone!!”

Make a Date with Updates

There are two philosophies about updating: one is “if it ain’t broke, don’t fix it,” and leave a working system alone; the other is to grab every update you possibly can. Since you now have your system backed up and know how to do a system restore, you don’t need to fear updates, and there are good reasons for updating (Fig. 3).

Fig. 3: You can access Microsoft Update from the Control Panel, or just type system update into the Start button’s search box.

One is security. If your computer even thinks about going on the internet, you want the latest patches and security updates. However, that’s not all. Go to Microsoft Update and make sure you have all the latest patches, and it’s also crucial to update your graphics card drivers (yes, they make a big difference with audio). But when you update your graphics drivers, don’t install any extras, like a so-called “HD Audio Driver” (it may really compromise performance), or add-ons that overclock your graphics card for higher frame rates with the ever-popular “Let’s Kill Stuff” video game franchise, or provide “an enhanced gaming experience.” You’re not playing games, you’re making music.

A rule of thumb is if your computer has drivers, those drivers may have updates. Also check for drivers for your audio interface, printer...anything. But there are two cautions: First, if you get a pop-up notifying you to install an update, go to the site itself. For example, if you see a box that says you need to update Adobe Flash player, go to the Adobe site and download it from there. Some hackers have been known to create pop-ups that look legit, but take you someplace nefarious. Second, avoid sites that claim to offer drivers and software that will do wonderful things, like scan for viruses. If you can’t get drivers from the manufacturer’s site, stay away.

Turn Off Virus Protection When Installing Programs

Those ever-vigilant virus protection programs get picky about anything that messes with the system, which an installation of a legitimate program might do. As a result, an installation might not be completed properly, and you may not be aware there’s a problem. If you’ve downloaded something, disconnect your computer from the internet, turn off virus protection, and start the installation. If you have to be connected to the web to install (e.g., for verification of ownership, or to check your version), it’s okay to disable virus protection temporarily as long as you go only to the trusted site that provides the download, then re-enable protection when done.

If You Have 64-Bit Windows, Try to Keep Everything 64-Bit

That means your programs, drivers, plug-ins...everything. Microsoft deserves kudos for their attention to backward compatibility, and letting 32- and 64-bit programs live in the same system is very considerate. (I still have a paint program from 1995 that works fine.) Some software (like Cakewalk SONAR) includes “bridging” technology so you can use most 32-bit plug-ins with 64-bit software, and there are also the BitBridge and jBridge accessory programs to add bridging to other programs. But it’s best to stay in the 64-bit family as much as possible for maximum stability.
Bridging is inherently tricky, and there’s a reason why many current 64-bit DAWs shut out the option to use 32-bit plug-ins.

And there’s your basic Windows survival guide. Windows machines can be very powerful and cost-effective, but you’ll always get the best results if you have at least a working knowledge of what goes on “under the hood.”

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
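A small footnote to the Task Manager section of the Windows article above: the same basic information (which processes are running and how much memory they use) can also be pulled up programmatically. This sketch assumes the third-party psutil Python package, which the article doesn’t mention; it’s only an illustration of the kind of data Task Manager shows, not a replacement for it.

```python
# List the processes using the most memory, similar to what Task Manager shows.
# Assumes the third-party "psutil" package (pip install psutil); illustration only.
import psutil

def top_processes(count: int = 10):
    """Return (name, pid, resident memory in MB) for the biggest processes."""
    procs = []
    for p in psutil.process_iter(["pid", "name", "memory_info"]):
        mem = p.info["memory_info"]
        if mem is None:                      # access denied or process vanished
            continue
        procs.append((p.info["name"] or "?", p.info["pid"], mem.rss / (1024 * 1024)))
    return sorted(procs, key=lambda item: item[2], reverse=True)[:count]

if __name__ == "__main__":
    for name, pid, mb in top_processes():
        print(f"{name:<30} pid {pid:>6}  {mb:8.1f} MB")
```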
  5. With a “virtual microphone,” of course . . .

By Craig Anderton

With traditional instruments, one of the usual rituals is placing the mic to capture the sound in the best possible way for your recording medium of choice. While most people think of this as “good mic technique,” what you’re really doing is making a judgment about what type of processing you want to apply to the signal. For example, a condenser mic will give a brighter sound than a dynamic mic; and moving the mic further away from the sound source to pick up more room ambience is like adding an ambience effect.

Another important element is the addition of room mics, because part of a studio’s sound is the sound of its room. A room with a suitable balance of reflective and absorptive surfaces, and good diffusion characteristics, can enhance the sound of whatever’s recorded in it because it wraps the sound with a sense of space. However, as many instruments are close-miked and these mics do not pick up much of the room sound, it’s common to set up mics within the room itself that record the signals reflecting off the walls, ceiling, and floor. The signals from these mics are mixed in at low levels so you get a “sense” of the room, rather than “hear” it.

When tracking a virtual instrument, it can be a constructive exercise to think how you would mic it if it were a “real” instrument, and process it accordingly. Many times, you’ll be rewarded with a more realistic, satisfying sound. For example . . .

STEREO PLACEMENT

With instruments like piano, part of mic placement involves creating a convincing stereo image. While this can be simulated for virtual instruments with reverb and delays, one of the simplest ways to create stereo is by modulating panning with keyboard note position (Fig. 1): The lower the note, the more the image shifts to the left, and the higher the note, the more the image shifts to the right.

Fig. 1: Keyboard notes are about to be assigned to modulate pan in Cakewalk’s Rapture, thus creating a stereo spread as you play from left to right. This option is available in many software and hardware synthesizers.

In some cases, this may be too “clean” of a spread — with a real miking situation, the left mic will pick up some sound from the right mic, and vice-versa. This tends to create a bit of a build-up in the center, as it “monoizes” the signal somewhat. To circumvent this, add a little ambience (see the next section) and pan it toward the center. This shifts the overall image a bit more toward the center, but without altering the position of the notes themselves. Another way to add a slight artificial stereo spread is to boost the treble subtly in the right channel, and similarly boost the bass in the left channel. This can be effective for instruments like guitar and piano where the higher notes appear more toward the right of the stereo image (this assumes panning from the audience’s perspective).

ADDING VIRTUAL ROOM MICS

Don’t overlook the value of adding some ambience, even if it’s artificial. This isn’t about an effect like hall reverb, which is a whole other subject; what you want is a very tight ambience, mixed well in the background. Good options include a small room reverb set for minimum room size, or a multi-tap delay with the taps set in the 20-30 ms range (a little delay feedback might help too, especially if the feedback can be cross-channel — in other words, the feedback path bounces between channels). You can even create your own ambience effects with a few stereo delays (Fig. 2).
Fig. 2: The Drum Room Ambience FX Chain in Cakewalk SONAR provides four different reflections, each with a level control, along with some other options.

Furthermore, remember that room mics are invariably compressed. You can simulate this effect by adding a lot of compression to the ambience, and rolling off a bit at the frequency response extremes. The more you compress the ambience, though, the more important it becomes to mix it in the background.

CONDENSER OR DYNAMIC?

Condenser mics tend to sound brighter than dynamic mics, so when you “mic” your virtual instrument, think about what type of mic you would use in the real world. For example, loud sound sources are often recorded through a dynamic mic, which tends to accommodate higher sound pressure levels. So, if your virtual instrument has some added distortion, consider lowering the high frequency response just a tad to give that dynamic flavor. Trimming the high frequency response can also make it sound like you’ve moved the mic a bit further away from the sound source—a trick that can help an instrument “sit” better in the mix. To tweak the response, consider using a shelving filter to very gently raise or lower the very highest frequencies to simulate a condenser or dynamic mic, respectively.

IS THIS SILLY OR . . . ?

Sure, we know we’re not dealing with real mics. But sometimes, thinking about how you’d mic an instrument can give you some clues about “accessorizing” your virtual instrument for the most seamless integration into your mix—and it can make a difference.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
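To make the “virtual room mic” idea from the article above concrete, here is a rough numpy sketch of a tight multi-tap ambience with taps in the 20-30 ms range, mixed well under the dry signal. The tap times and gains are made-up illustration values, not settings from any particular plug-in, and numpy is assumed only because it keeps the example short.

```python
# Minimal "virtual room mic" ambience: a few short delay taps (20-30 ms)
# mixed well under the dry signal. Illustration only, using numpy;
# tap times and gains are arbitrary values, not any specific plug-in's settings.
import numpy as np

def add_room_ambience(dry: np.ndarray, sample_rate: int = 44100) -> np.ndarray:
    """Return the dry mono signal with a tight, low-level ambience added."""
    taps_ms   = [21.0, 24.5, 27.0, 29.5]      # early-reflection style delays
    tap_gains = [0.12, 0.10, 0.09, 0.08]      # keep the "room" in the background
    out = dry.copy()
    for ms, gain in zip(taps_ms, tap_gains):
        delay = int(sample_rate * ms / 1000.0)
        out[delay:] += gain * dry[:-delay]     # delayed copy, attenuated
    return out

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr) / sr
    dry = np.sin(2 * np.pi * 220 * t) * np.exp(-4 * t)   # plucked-string-ish test tone
    wet = add_room_ambience(dry, sr)
    print("peak dry:", round(float(np.max(np.abs(dry))), 3),
          "peak wet:", round(float(np.max(np.abs(wet))), 3))
```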
  6. There are as many ways to mix as there are people who mix. But you have to start somewhere—here’s a method that has worked well for me By Craig Anderton A mix should be well-balanced over a full range of frequencies, but note that a lot of instruments tend to bunch up around the midrange. One way to maintain clarity is to use EQ to “carve away” parts of one instrument’s range that step on another instrument. For example, if a piano and guitar part conflict, cut the lower mids a bit on the guitar to thin out the sound and emphasize strum/pick noise, while boosting the piano’s lows to give it more “meat.” As you mix, don’t throw compression on the entire mix (called “bus compression”). Sure, this makes the sound jump out more, and gives higher perceived loudness. But concern yourself with getting the best possible blend of instruments, and save the mastering for later. Also, mix at low levels. Your ear starts to compress naturally at higher levels, and besides, you won’t mix as well if your ears are fatigued. As you get toward the end of the mixing process, crank it at some point to make sure the mix sounds right at high as well as low levels; but remember that if it sounds great at low levels, it’ll probably sound fantastic when you pour on the volume. Okay—let’s mix. Keep your master output close to 0, and reduce gain as needed at the channels themselves. Step 1: Set the master fader to 0. If you need to reduce overall levels during the mixing process, temporarily group all your channel faders and reduce them rather than lower the master level. This makes best use of your system’s headroom. Also, don’t let signals go right up to 0: Leave a few dB of breathing room. You can always flirt with 0 dB during the mastering process. If you start with the faders at a relatively low level, you'll have the room to bring them up as the mix progresses. Step 2: Set all channel panpots to mono, and channel fader levels to some nominal setting, like –10dB. Why mono? Because there’s no quicker way to find out which instruments conflict with each other. As you play the song, start adjusting levels for the best balance. Step 3: Decide which is the most important element of the song, and work on getting the best sound possible for that track—effects and all. Often, this is vocals, but it might be solo piano in an instrumental. Don’t work on each track in isolation, with the goal of getting the fullest, biggest, most bitchin’ sound for each track: When you combine them, you’ll have a big, muddy mess. Get a truly compelling solo sound first, then make sure that the other tracks support it. Compressor and EQ settings that work well with my voice when recording with a dynamic mic. Step 4: Use EQ subtly to give each instrument its own section of the frequency spectrum. Do this while listening to all the tracks (although there will likely be times that you’ll solo two problematic instruments and get those squared away before moving on). You’ll almost certainly need to re-adjust the level faders during this process, because EQ will change a track’s level. Also remember that it’s often better to cut than to boost, as this cedes more space to other instruments. Step 5: Now start moving those panpots, and go for a full stereo field. If your mix sounded cohesive before you started panning, it will sound even better once each track has its own stereo placement and frequency band. Keep tweaking the stereo placement, EQ, and levels until you have a rich, clean, spacious mix. 
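One footnote on Step 1’s advice about keeping channel faders low and leaving a few dB of breathing room: it boils down to simple dB arithmetic. Here is a small hypothetical sketch (not from the article) showing the worst-case peak you would get if several tracks ever peaked at the same instant.

```python
# Rough headroom check: convert per-track peak levels (dBFS) to linear,
# sum them as a worst case, and convert back to dBFS. Illustrates why it pays
# to start channel faders low. The track levels below are made-up examples.
import math

def db_to_linear(db: float) -> float:
    return 10 ** (db / 20.0)

def linear_to_db(x: float) -> float:
    return 20.0 * math.log10(x)

def worst_case_mix_peak(track_peaks_db):
    """Upper bound on the mix bus peak if all track peaks ever line up."""
    return linear_to_db(sum(db_to_linear(p) for p in track_peaks_db))

if __name__ == "__main__":
    eight_tracks = [-10.0] * 8                       # eight tracks peaking at -10 dBFS
    print(f"worst case: {worst_case_mix_peak(eight_tracks):+.1f} dBFS")  # about +8 dBFS
    quieter = [-18.0] * 8
    print(f"worst case: {worst_case_mix_peak(quieter):+.1f} dBFS")       # about 0 dBFS
```

In practice peaks rarely line up exactly, so this is an upper bound, but it shows how quickly eight healthy-looking tracks can eat the headroom on the master bus.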
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  7. Keep your silicon sweetie happy by Craig Anderton As musicians, we depend more and more on computers (and lots of hardware instruments have computers inside, too). If mine is down for a few hours, I’m in trouble—and if it’s down for even a few seconds on stage, I’m hosed! So while it’s important to learn about fixing computers, it’s even more important to keep them from getting sick in the first place...and these tips will help you do just that. Ventilate. Heat is the primary enemy of electronic devices because it can shorten the lifetime of components, which may eventually cause a string of failures. It’s imperative that all vents are always unblocked; and leave plenty of room for air to circulate around the computer. Similarly, keep computer gear out of direct sunlight. Fan and filter. If the cooling fan has a filter, check it periodically and clean or replace if necessary—a clogged filter can increases the machine’s internal temperature. Also, the fan itself is often a dust magnet because it’s pulling lots of air over those blades. Wipe the blades periodically with a damp cloth to keep dust from getting into the machine. Furman's F1000-UPS maintains power to your computer during brief power interruptions. Use an uninterruptible power supply. These provide a smooth, regulated, spike-free voltage source that lets you power down elegantly in the event of power problems. Think of a UPS as providing your computer with a diet of gourmet-quality electricity. The first time someone kicks out the plug or a falling tree takes out a power line, you’ll really appreciate what a UPS is all about; this modest investment could end up saving you thousands of dollars. Dust and debris. Dust has two nasty qualities: It interferes with proper contact between moving parts, and forms a layer of insulation on parts, which prevents heat from dissipating. When it’s time for computer “spring cleaning,” take the cover off, go outside, and carefully blow out the dust with a can of compressed air. And if you’re still using a CRT-based monitor, remember that dust and cobwebs can wreak havoc with high-voltage circuits. Unplugging then re-plugging connectors can help clean them, but be very gentle. Connector connections. While the cover’s off, gently unplug then plug in any easily accessible connectors. This helps prevent oxidation from building up on the contacts, and sometimes can even solve pesky intermittent problems. If any internal peripheral boards are installed, lift them up slightly from the motherboard, then re-seat them to clean the contacts. Support your cables. Video cables can weigh quite a bit, and pull downward on delicate connectors. Support your cables so that as little weight as possible pulls on these connectors. Also, dress your cables to make sure nothing can catch on them (or roll over them) accidentally. Don't plug or unplug from ports, particularly FireWire ports, without powering-down first. Don’t hot-swap peripherals. In theory, you should be able to hot-swap (i.e., plug and unplug without turning off power) Firewire and USB peripherals. However, there have been isolated reports of damage to motherboards and peripherals from this practice—better safe than sorry. For laptops, use a USB cable extender for dongles and USB memory sticks. Having a long object poking out of your computer is asking for trouble. A cable extender still sticks out, but nowhere near as far. Add a surge suppressor to your phone/cable modem/T1/DSL line. 
Even if you use an uninterruptible power supply and turn off the computer when there’s a thunderstorm in the area, any line going from the outside world into your computer can provide a “back door” to electricity from nearby lightning strikes. Quickie keyboard maintenance. Disconnect the keyboard, take it outside, hold it upside down, and shake gently. Then blow into the spaces between the keys and shake gently again. This will get rid of at least some of the dust. Power down before transport. If you put your laptop inside a computer bag, power it down first—standby isn’t good enough. If the computer gets turned on by accident, it will be sitting in a space with no ventilation that’s likely to get jiggled or bumped. No smoking. Smoke does not make computers happy. Make a special AC cord if you’re going to work on your computer. Use this even if you’re just doing something simple, like changing a card. You don’t want to leave your computer plugged in if you take the cover off, but you do want to keep the case grounded. To do this, buy an IEC-type AC cord and cut off the two AC prongs flush with the plug (file them if necessary to make sure they don’t stick out)—but leave the ground plug. Wear a grounding strap if you open up your computer. A grounding strap discharges any static electricity from your body. If you don’t have a grounding strap, at least touch something metal before doing anything inside your computer. Motherboard batteries (which are typically small and round) last a long time, but if they don't last forever--so check the status periodically, especially with older computers. Battery problems. Most computers use batteries to back up functions such as date and time settings. Although today’s batteries are pretty leak-proof, leakage can still happen—which lets corrosive chemicals loose inside your computer. Check the battery periodically for leakage, and replace it when it starts to reach the end of its useful life. Never touch cable pins. Always handle a cable by the casing. Some pins might connect to sensitive parts of a device that could be damaged by static electricity charges (such as what you accumulate by walking across a rug on a low-humidity day). Also, it’s generally good practice to turn off power before connecting cables. Avoid powering a computer off and on in quick succession. Turn-on transients put a strain on components. If you’re going to take an hour break, leave the computer on and turn the monitor brightness down. Some people insist you’re better off just leaving the machine on all the time, but the jury’s still out as to whether that’s better in the long run. Don’t move a hard drive while it’s spinning. Hard drives like to sit in one place; moving them around while spinning places excessive stress on the drive. Follow these tips, and your computer will thank you for it. May your machine never go down in the middle of a crucial session! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  8. MP3 and the Pro Studio

Are MP3s of any use in pro studios? You might be surprised...

by Craig Anderton

As engineers and/or musicians, we want the best-sounding recordings possible. Okay…but then the MP3 format came along, which some call data compression, but is actually data omission. At lower bit rates all those nuances we slaved over have been impacted, trampled by a mercilessly efficient coding algorithm. What’s more, the reduced file size encouraged the downloading phenomenon that re-shaped the record industry—for better or for worse. But does the MP3 format have anything to offer those involved in professional recording - especially now that with lower storage costs and higher bandwidth, FLAC (which seems to be Microsoft's choice of music formats) is making its move? Read on…

MUSIC TO GO: THE REALITY TEST

We’ve all heard of the musicians who don’t sign off on a recording until they’ve heard it through a car radio. This makes sense: not only does it test a mix’s real-world transportability, but road noise obscures any subtleties, so that you find out what truly stands out in the mix. For example, you may find out that the guitar figure you could hear perfectly over the studio monitors needs to come up a bit in level to match the other instruments in the mix. MP3s exhibit a similar phenomenon at low bit rates. Below 96kbps (stereo) in particular, if your mix can survive the MP3 torture test, it can probably survive anything. There seems to be a correlation between mixes that can hold up at low bit rates, and their ability to sound good over a variety of systems. Although Fig. 1 shows the main difference as relating to high frequencies, low-level information also takes a hit with MP3.

Fig. 1: Note the spectra for a standard 44.1kHz stereo WAV file (top) and an MP3 version encoded to 64kbps stereo (bottom). Above 8 kHz or so, response in the MP3 file falls off a cliff.

ARE YOUR SONGS IN ORDER?

Smart phones (a/k/a portable music players!) provide a great way to test out song orders. When you’re assembling an album (yes, there are ways to disseminate music other than singles!), do a rough assembly into your smart phone or other MP3 player—then listen to it when walking around, doing yard work, exercising, food shopping (added bonus: it can drown out the lame background music in your local supermarket), or whatever. Repeated listenings can reveal flaws in song orders that you might not catch otherwise. While a lot of players will play back WAV and AIF files, the size reduction for MP3s makes them well-suited to smart phones, where you’re often competing for memory with various apps and data. And because your smart phone is with you most of the time, you’ll have instant access to your music.

CREATING MP3 FILES

Programs that can “rip” audio files to MP3 are cheap and plentiful—including iTunes, Windows Media Player, digital audio editors, many DAWs, and the like. If you want the ne plus ultra of conversion for pro applications (MP3 as well as AAC, surround, and other formats), check out Sonnox Fraunhofer Pro-Codec (Fig. 2). It’s expensive, but use it for a while and you’ll find out why. It’s not just about simultaneous conversion to multiple formats, but also about signal analysis and comparison.

Fig. 2: The Sonnox Fraunhofer Pro-Codec has multiple talents in addition to basic conversion.

MP3 RECORDERS: NOT JUST TOYS

The MP3 format has even worked its way into recording, with small, hand-held devices (such as those from TASCAM, Roland, etc.).
These perform no-moving-parts recording to memory cards like SD or microSD, and most offer recording at 320 kbps. The quality is virtually indistinguishable from WAV or AIF files, but you can pack a lot more data on that memory card. These types of recorders have several possible applications: Field recording. With a quality mic, the results can be usable even in pro situations. Sound effects are usually layered sufficiently in the background so that the data omission isn’t as problematic as it would be for critical musical recording. Notepad. Because of the compact size, these small recorders are easy to carry around for capturing any inspiration you might have. For some instruments, the quality is good enough so that if you capture something really incredible, it can be brought over to your DAW and used. Most people will probably not recognize you’ve slipped an MP3 into the mix. A “record everything” box. These recorders are so easy to set up and use that it’s a no-brainer to just hook the thing up to your mixer’s stereo outs and then record rehearsals, jam sessions, the songwriting process, whatever. An accessory when playing live. Any MP3 playback device can store long samples—pads, sound effects, spoken word sections, drones, etc.—which can be played back and mixed into the set at strategic times. This is particularly good for “groove” type applications where you can mix what’s playing in and out of a tune, although of course, the material has to be something that doesn’t require synchronization. I used to cart around a sampler for this sort of playback, then I downsized to Minidisc, and now it’s a TASCAM DR-22WL. Of course, a huge advantage of solid-state playback is there are no worries about the constant vibration of subwoofers woofing and people dancing. THE RODNEY DANGERFIELD OF PRO AUDIO? MP3s and other data compression formats don’t get much respect from pro audio types, because…well, because they simply aren’t “CD-quality,” despite what marketing weasels would like you to believe. Yet, in the pre-digital days, studios routinely ran off cassette copies for band members to carry around with them and play in their portable players. Data-compressed files are just the latest version of that concept, and if you make peace with their limitations, they still have uses in today’s high-res world. - HC - ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
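A practical footnote to the low-bit-rate “torture test” described in the MP3 article above: you can batch the encode yourself before listening on different systems. This sketch assumes the ffmpeg command-line tool with the libmp3lame encoder is installed; neither is mentioned in the article, so treat it purely as one possible way to do it.

```python
# Encode a mix to 64 kbps MP3 (the "torture test" bit rate discussed above)
# and decode it back to WAV for easy A/B comparison against the original in a DAW.
# Assumes the ffmpeg command-line tool with libmp3lame is installed;
# that's my assumption, not something the article specifies.
import subprocess

def mp3_torture_test(wav_in: str, bitrate: str = "64k") -> str:
    mp3_path = wav_in.rsplit(".", 1)[0] + f"_{bitrate}.mp3"
    wav_back = wav_in.rsplit(".", 1)[0] + f"_{bitrate}_roundtrip.wav"
    subprocess.run(["ffmpeg", "-y", "-i", wav_in,
                    "-codec:a", "libmp3lame", "-b:a", bitrate, mp3_path], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", mp3_path, wav_back], check=True)
    return wav_back   # line this up against the original mix and listen

if __name__ == "__main__":
    print(mp3_torture_test("my_mix.wav"))   # "my_mix.wav" is a placeholder file name
```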
  9. It's About More than Just Stereo Placement

By Craig Anderton

Panning controls affect the stereo placement of signals, allowing you to create a soundstage with width - you can pan some instruments to the center, some left, some right, and anywhere in between. This gives a more realistic mix, because in the real world, sounds do emanate from different locations (think about a live performance; the drums are almost always in the middle of the band). However, panning isn't only about realism; it's also about keeping instruments from interfering with each other, as well as adding special effects. Here are some tips designed to help further your skills in the art of stereo.

FORGET ABOUT STEREO

Well, at least temporarily. When starting a mix, some engineers start with all the channels panned to the center to create a mono mix. This makes it easy to tell which sounds are "stepping on" each other. To take an extreme example, if you pan bass all the way to the left channel and the kick drum all the way to the right, you'll have no trouble separating them. But when mixed in mono, these two bass signals might blend, and muddy each other. When listening in mono, it's possible to separate the instruments by other techniques, primarily equalization, so that they become even more distinct when spread in stereo.

AUDIENCE PERSPECTIVE OR PERFORMER PERSPECTIVE?

As you set up stereo placement for instruments, think about your listener's position. For example, for a drummer the high-hat is on the left, and the toms on the right. For the audience, it's the reverse (Fig. 1).

Fig. 1: FXpansion's BFD Eco has an option to flip the stereo perspective between audience and drummer perspective.

I generally go for the performer's perspective, unless the object is to emulate a concert experience. For concerts, the audience perspective makes more sense because that's how the music would be experienced.

FREQUENCY RESPONSE AND PANNING

Low frequencies are fairly non-directional, whereas highs are very directional. As a result, pan low frequency sounds (kick drum, bass) toward the center of a mix, and higher frequency instruments (shaker, tambourine) further out to the left and right.

DELAY AND PANNING

Placing the delays from a delay effect in the same spatial location as the sound being delayed may cause an indistinct sound. One "fix" is to weight your instrument to one side of the stereo spread, and the delayed sound (set to delayed only/no dry signal) to the opposite side. If you're using stereo delay on a lead instrument that's panned to center, you can get some lovely results by panning one channel of echo toward the left, and one toward the right. If the echoes are polyrhythmic, this can also give some ping-pong type effects. Of course, this can sound gimmicky if you're not careful, but if the echoes are mixed relatively low and there's some stereo reverb going on, the sense of spaciousness can be huge. Another option: Filter the echoes so they have more midrange or highs than the sound being delayed.

PLAN AHEAD

Sure, you can just move panpots around arbitrarily until things sound good. But consider drawing a diagram of the intended "soundstage," much like the way theater people draw "marks" for where actors are supposed to stand. When it's time to mix, this diagram can be a helpful "map."

BIGGER GUITARS AND PIANOS

Here's a tip from Spencer Brewer (Laughing Coyote Studios) regarding an effect that Alex de Grassi uses a lot on his guitars to create a wider stereo image with two mics.
However, note that this effect also works well with piano. Pan the right mic track full right. Pan the left mic track full left. Copy the right mic and left mic tracks. Pan the duplicated tracks to center. Bring the duplicated tracks down about 5-6dB (or to taste). This "fills in" the center hole that normally occurs by panning the two main signals to the extreme left and right. CREATING WIDER-THAN-LIFE SOUNDS WITH DELAY Many signal sources are still essentially mono (voice, vintage synths, electric guitar, etc.), but there are ways to "stereoize" sounds. The easiest option is to copy a track and "slip" it ahead or behind the original track to create a slight delay between the two, then pan the two tracks somewhat oppositely (Fig. 2). Fig. 2: Two copies of the same vocal track in PreSonus Studio One Pro. The upper track is delayed by about a 32nd note; note that the upper track is panned toward the left, while the lower track is panned toward the right, to create a stereo effect. In some cases, it's most effective to slip the original track ahead of the beat and the copy a little late, so that the two end up "averaging out" and hit in the pocket. But you can also use slipping to alter the feel somewhat. To "drag" the part a bit, keep the original on the beat and slip the copy a little later. For a more "insistent" feel, slip the copy ahead. How much slip to add depends on the instrument's frequency range. If the delay is too short, the two signals may cancel to some extent and create comb filtering effects. This can result in a thin sound, much like a flanger stuck on a few milliseconds of delay. Lowering the copied signal's level can reduce these negative effects, but then the stereo image will be correspondingly less dramatic. If the delay is too long, then you'll hear an echo effect. This can also be useful in creating a wider stereo image, but then you have to deal with the rhythmic implications-do you really want an audible delay? And if the delay is long enough, the sound will be more like two mono signals than a wide stereo signal. Thankfully, it's easy to slide parts around in your DAW and experiment. Just be sure to check the final result in mono; if the sound ends up being thin or resonant, increase the delay time a tiny bit until both the stereo and mono sounds work equally well. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
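Since the slip-delay widening trick in the panning article above lives or dies by what happens in mono, here is a minimal numpy sketch of the idea: copy the track, delay the copy a few milliseconds, pan the two apart, and then fold the result to mono to check for comb-filter thinness. The delay and pan amounts are arbitrary examples, and numpy is assumed only to keep the illustration short.

```python
# "Stereoize" a mono track: pan the original one way, a slightly delayed copy
# the other way, then fold to mono to check for comb-filter thinness.
# numpy-only illustration; the delay and pan amounts are arbitrary examples.
import numpy as np

def widen(mono: np.ndarray, sample_rate: int = 44100, delay_ms: float = 18.0):
    """Return (left, right) where the right channel is a delayed copy."""
    delay = int(sample_rate * delay_ms / 1000.0)
    delayed = np.zeros_like(mono)
    delayed[delay:] = mono[:-delay]
    left, right = mono * 0.9, delayed * 0.9        # original left, delayed copy right
    return left, right

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr) / sr
    track = np.sin(2 * np.pi * 110 * t)            # stand-in for a mono source
    left, right = widen(track, sr)
    mono_fold = 0.5 * (left + right)               # what a mono listener hears
    print("stereo peak:", round(float(np.max(np.abs(left))), 2),
          "mono peak:", round(float(np.max(np.abs(mono_fold))), 2))
```

If the mono fold-down comes back thin or resonant, nudge the delay time up a little, exactly as the article suggests.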
  10. It's easy to add imaging to bass -- just do it virtually By Craig Anderton It is not true Congress passed a law in 1957 forbidding the use of bass in stereo; that’s just a vicious internet rumor. You can do anything you please—well, at least nowadays. One reason why bass loves mono traces back to the days of vinyl, when music was reproduced by dragging a rock through yards and yards of plastic (I’m not making this up). It was difficult for a phonograph needle to track different bass waveforms in opposite channels; it was much safer just to keep the bass in mono. Synth basses have broken the mold somewhat, but there’s a great technique for stereo bass using amp sims—stereo stacks. Guitarists have known about this for years: They split the signal to two different amps, which become their two channels. SHE TALKS IN STEREO Vinyl aside, there’s another reason why bass is usually in mono: It has more strength and power that way. Stereo sound broadens the bass, but diffuses it somewhat as well. So, think of stereo as another way to add a different type of dynamic to a song, where you can dial in whether you want the bass to lead the song, or follow. Another option is to “split the difference”—pan the bass slightly right and slightly left of center. For example, don’t use stereo bass throughout a song, but throw it in during the Big Chorus when the guitar is playing power chords—then fold it back into mono when the verse hits. You can change a song’s emotional character significantly by what you do with two bass amp sims and a couple of panpots. MAKE IT SO! The “universal” way to set up stereo bass is to copy the track, pan the two bass tracks left and right of center, the process them individually. However, many of today’s amp sims make it easy to put amps in parallel, then pan them as desired in the stereo field. IK Multimedia AmpliTube: You can select two parallel paths by clicking on routing #2, which splits the signal into two independent paths. If you want a wide stereo image, pan the Cabinet and Rack for each channel oppositely; to pull things in a bit but still get some spread, leave the Cabinet pans centered, and pan the two Racks left and right. Line 6 POD Farm: Click on the Dual button to create two separate chains. The panpots are located in the Mixer View. Native Instruments Guitar Rig: Use the Split module to create two parallel chains (remember to pan the two Split Mix panpots oppositely). There’s only one bass amp but three bass cabs; however note that the Jazz Amp works well as a second channel. Overloud TH2: The signal path is inherently split into two paths (Fig. 1). Fig. 1: Overloud’s TH2 splits the input into two parallel paths. Peavey ReValver: The Signal Splitter module works like the Split in Guitar Rig. ReValver's main bass amp is the bass channel in the Basic 100 amp—but split it using different cabs, and you can get a very wide bass sound. Waves G|T|R: You can’t really set up a true parallel chain without copying the track and using two instances of G|T|R, but you can come really close by using the Stereo Amp (Fig. 2). You have seven bass amp models and six bass cabinets, so you can split the amp sound into two different cabinets, and pan them oppositely. Experiment with the virtual mic placement, too; this can make a huge difference. Fig. 2: The Amplifier module in Waves’ GTR offers stereo and panning capabilities for each channel’s cabinet. Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  11. Now that Apple has firmly established itself as the titan of mobile technology, iOS-compatible devices for making music are only getting better and better. Perhaps thought of as a fad not too long ago, high-quality mobile A/V recording is very much here to stay and is getting better with every technological advance that brings more power to the iPhone. A case in point is the new iTrack Pocket from Focusrite. Roughly the size and shape of a harmonica, this Lightning-equipped interface truly lives up to its name, offering musicians and video creators the ability to meld the iPhone 5*’s HD video recording with audio captured from its stereo microphones and instrument input, then easily share it via YouTube, all through Focusrite’s easy and powerful Impact app. Totally Tube-ular For musicians, YouTube has been a boon in several ways. Bands use it to promote themselves. Entertainers use it to make a living. Players use it to share their ideas with others. Instructors use it as a tool to teach with. In some cases, people simply use it to show off their chops. With all the uses musicians have for YouTube, it makes sense to have a dedicated interface like the iTrack Pocket for capturing and uploading your creations. As with the iPhone 5 itself, using the iTrack Pocket couldn’t be easier. There’s a slot that holds your iPhone in the landscape position, with a small grille covering the stereo microphones. On one end is a ¼” input for your guitar; on the other is a Level control. There’s a small connection on the back of the iTrack Pocket for the included Lightning cable. Once connected, install the Impact app and you’re ready to start recording and sharing. Maximum Impact The iTrack Pocket gives you different modes to work with depending on what you’re recording. You can use the onboard stereo room mics, the guitar input, or the mics in mono mode with your guitar plugged in. There’s built-in amp modeling with Twin, Brit and V-AC virtual amps at your disposal. Each model has gain and reverb settings, giving you a wide variety of tones to work with. Using the iPhone 5’s headphone jack, you can easily monitor with virtually no latency and get your sound dialed in prior to recording. The Impact app is as intuitive as the iPhone itself, meaning there are no complicated menus or functions that remove you from your creative space. A few taps and swipes and everything is ready to go. Impact also has a few post-production options for getting your videos just right. You can edit the start/stop points, and optimize the audio with effects organized by recording type: acoustic guitar, electric guitar, guitar and vocal, female vocal and male vocal. Once you select a preset, you can adjust the Enhance, Reverb and Compress settings to further fine-tune your recording. It’s impressive how much you can do with these few settings; Focusrite made them all powerful and versatile to suit a variety of performance types. Premium sound The iTrack Pocket’s stereo room mics are great for recording voice and acoustic instruments. I was fortunate to have the iTrack Pocket around when I purchased a mid-’70s Alvarez 12-string acoustic, and wanted to share my new acquisition with some friends. It was as easy as setting my iPhone on the iTrack Pocket, pulling up Impact, tapping record and playing. The room mics captured the harmonic details of the guitar that made me want to buy it in the first place, which I thought was really cool for such a compact device.
Vocals also came through quite well, with warmth and presence that I wasn’t expecting. When using the mics together with the guitar input, everything blended together nicely, without one overpowering the other. Best of all, because the iTrack Pocket captures audio so well up front, it really cuts down on having to do multiple takes to get things sounding right. Capture, edit, share, repeat With the iTrack Pocket, Focusrite has created a way for iPhone 5 users to set up and record HD video and audio for countless types of content, be it a podcast, instructional video, performance piece, or even a short, sweet lullaby to your little ones while you’re traveling. It’s also great for capturing those ideas that always seem to arrive at the most inopportune times; in short, the iTrack Pocket is a mobile music-making companion for whenever inspiration strikes. It’s a seamless merging of compact design and fantastic sound quality that iPhone 5 users should definitely add to their holiday wish list. *Officially supports iPhone 5, iPhone 5C and iPhone 5S. Resources Learn more about the iTrack Pocket at focusrite.com Pricing and purchase information on the iTrack Pocket at musiciansfriend.com
  12. Ignorance of the law is no excuse—in this case, panning laws By Craig Anderton The idea of panning seems pretty obvious, right? You turn a panpot (real or virtual) to place a sound somewhere in the stereo field. However, the end result depends not just on the stereo position, but also on whatever panning laws your DAW or hardware mixer follows. These laws govern exactly what happens when a monaural sound moves from left to right in the stereo field, which can be different for different pieces of software. In fact, not knowing about panning laws can create some significant issues if you need to move a project from one host to another. Panning laws may even account for some of the online foolishness where people argue about one host sounding “punchier” or “wimpier” than another when they load the same project into different hosts. It’s the same project, right? So it should sound the same, right? Well, not necessarily…keep reading. ORIGINS OF PANNING LAWS Panning laws originated in the days of analog mixers. If there was a linear gain increase in one channel and a linear gain decrease in the other channel to change the stereo position, at the center position the sum of the two channels sounded louder than if the signal was panned full left or full right. To compensate for this, it became common to use a logarithmic gain change response to drop the signal 3 dB RMS at the center. You could do this by using dual pots for panning with log/antilog tapers, but as those could be hard to find, you could do pretty much the same thing by adding tapering resistors to standard linear potentiometers. Thus, even though signals were being added together from the left and right channels, the apparent level was the same when centered because they had equal power. But this “law” was not a standard. Some engineers preferred to drop the center level a bit more, either because they liked the signal to seem louder as it moved out of the main center zone, or because signals that “clumped up” around the center tended to “monoize” the signal. So, dropping their levels a little further created more of an illusion of stereo. And some of the people using analog consoles had their own little secret tweaks to change the panning characteristics. PANNING MEETS THE DIGITAL AUDIO WORKSTATION With virtual mixers we don’t have to worry about dual ganged panpots, and can create any panning characteristic we want. That’s a good thing, because it allows a high degree of flexibility. But it also adds a degree of chaos that we really didn’t need. For example, Steinberg Cubase has five panning laws in the Project Setup dialog; you get there by going Project > Project Setup and choosing the Stereo Pan Law drop-down menu (Fig. 1). Fig. 1: Cubase’s documentation states that the program’s default pan law is the classic -3 dB setting. Setting the value to 0dB eliminates constant-power panning, and gives the old-school, center-channel-louder effect. Since we tried so hard to get away from that, it’s not surprising that Cubase doesn’t default to it. You can also choose to drop the center by -4.5dB or -6dB if you want to hype up the extremes somewhat, and make the center a bit more demure. Fair enough; it’s nice to have options. However, note that if you use RME’s Hammerfall DSP audio hardware, you should set the card’s preferences to -3 dB. Adobe Audition has two panning options.
L/R Cut Logarithmic is the default, and pans to the left by reducing the right channel volume, and conversely, pans to the right by reducing the left channel volume. As the panning gets closer to hard left or right, the channel being panned to doesn’t increase past what its volume would be when centered. The Equal Power Sinusoidal option maintains constant power by amplifying hard pans to left or right by +3dB, which is conceptually similar to dropping the two channels by -3dB when the signal is centered. Cakewalk SONAR takes the whole process further with six different panning options (Fig. 2), which means you can not only choose the panning law you want for SONAR, but the odds of being able to match another host’s panning law are very good. You can access these by going Preferences > Audio. Fig. 2: SONAR lets you choose from six different pan laws. In the descriptions below, “taper” refers to the curve of the gain and doesn’t have too radical an effect on the sound. The six options are: 0dB center, sin/cos taper, constant power. The signal level stays at 0dB when centered, and increases by +3dB when panned left or right. Although this is the default, I don’t recommend it because of the possibility of clipping if you pan a full-level signal off of center. 0dB center, square root taper, constant power. This is similar, but the gain change taper is different. -3 dB center, sin/cos taper, constant power. The signal level stays at 0dB when panned right or left, but drops by -3 dB in each channel when centered. This is the same as the Cubase default panning law. -3 dB center, square root taper, constant power. This is similar, but the gain change taper is different. -6 dB center, linear taper. The signal level stays at 0dB when panned left or right, but drops by -6 dB when centered. This is for those who like to hype up the sides a bit at the expense of the center. 0dB center, balance control. The signal level stays constant whether the signal is in the left channel, right channel, or set to the middle. Fig. 3 shows what happens when a mono signal of the same level feeds a fader pair, and each pair is subject to different panning laws. Note the difference in levels with the panpot panned to one side or centered. The tracks are in the same order as the descriptions in SONAR’s panning laws documentation and the listing in preferences. Although the sin/cos and square root versions may seem to produce the same results, the taper differs across the soundstage between the hard pans and center. Fig. 3: How panning laws affect signal levels. SO WHICH IS THE BEST LAW TO CHOOSE? As laws go, this particular one is pretty unspecific. In fact, if you compare the three programs mentioned above, they all default to a different law! This can become a real problem when you move a project from one host sequencer to another—unless the selected panning laws match, look out. I often wonder if when some people say a particular host sounds “punchier” than another, the “punchy” one boosts the level when signals are panned hard left or right, while the “unpunchy” one uses the law that drops the level of the center instead. For example, suppose you move a SONAR project to Cubase. It’s going to sound softer, because Cubase drops the center channel to compensate, while SONAR raises the left and right channels to compensate. Conversely, if you move a Cubase project to SONAR, you might have to deal with distortion issues because signals panned hard left and hard right will now be louder. 
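If you want to see the numbers behind all of this, here's a minimal Python sketch (my own illustration; the function and law names are made up, and no DAW necessarily implements its pan laws exactly this way) that computes left/right channel gains for three of the pan laws described above:

```python
import math

def pan_gains(pan, law="-3dB_sincos"):
    """Return (left_gain, right_gain) for pan from -1 (hard left) to +1 (hard right).

    Illustrative curves only; real hosts may differ in detail.
    """
    theta = (pan + 1) * math.pi / 4          # 0 = hard left, pi/2 = hard right
    if law == "-3dB_sincos":                 # constant power, -3 dB at center
        return math.cos(theta), math.sin(theta)
    if law == "0dB_center_constant_power":   # center at unity, hard pans at +3 dB
        g = math.sqrt(2)
        return g * math.cos(theta), g * math.sin(theta)
    if law == "-6dB_linear":                 # linear taper, -6 dB at center
        return (1 - pan) / 2, (1 + pan) / 2
    raise ValueError("unknown pan law")

def db(x):
    return 20 * math.log10(x) if x > 0 else float("-inf")

for law in ("-3dB_sincos", "0dB_center_constant_power", "-6dB_linear"):
    center_left, _ = pan_gains(0.0, law)   # centered signal
    _, hard_right = pan_gains(1.0, law)    # hard-panned signal
    print(f"{law}: center channel {db(center_left):+.1f} dB, hard pan {db(hard_right):+.1f} dB")
```

Running it shows why the same project can land several dB apart in different hosts: under a 0 dB-center constant-power law, both centered and hard-panned mono tracks come out 3 dB hotter than under the classic -3 dB law, which is consistent with the softer-or-louder behavior described above when moving projects between SONAR and Cubase.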
But where these laws really come into play is with surround, because here you’re talking about spatial changes among more than just two speakers. Bottom line: Be consistent in the panning law you use, and document it with the file if a project needs to be moved from one platform to another. Personally, I go for the tried-and-true “-3dB down in the center” option. I designed analog mixers to have that response, and so I’m more than happy to continue the tradition within the various sequencer hosts. Also, this is one option that just about every host provides, whereas some of the more esoteric ones may not be supported by other hosts. SO WHAT DOES IT ALL MEAN? We can’t sign off without mentioning one more thing: The pan law you choose isn’t just a matter of convenience or compatibility, although I’ve stressed the importance of being compatible if you want to move a project from one host to another. The law you choose can make a difference in the overall sound of a mix. This is less of an issue if you use mostly stereo tracks, as panning in that case is really more of a balance control. But for many of us, “multitrack” still means recording at least some mono tracks. I tend to record a mono source (voice, guitar, bass) in mono, unless it’s important to capture the room ambience – and even then, I’m more likely to capture the main sound in mono, and use a stereo pair of room mics (or stereo processing) that go to their own tracks. And if you pan that mono track, you’re going to have to deal with the panning laws...but at least now you know how not to break them. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. Get the most out of this rock-solid digital audio editor By Craig Anderton Sony Sound Forge has been around for a long time, and there’s a reason: It does the job and then some, thanks to a clean user interface and a wealth of features. However, it also has lots of shortcuts and other ways to get more out of the program; following are some of my favorites. CHANGE TIMELINE CALIBRATIONS Right-click in the timeline, or in the Length selection time field at the file window’s lower right. Choose the calibration from the context-sensitive menu. CUT PREVIEW Select what you want to cut. Type Ctrl-K to initiate a pre-roll before the cut and a post-roll after the cut (you won’t hear the selection itself). To change pre-roll or post-roll times, go Options > Preferences > Previews and change the values under Cut Preview Configuration. CREATE A FAVORITE COLLECTION OF PLUG-INS Go FX Favorites > Organize. Click on folders in the left pane to show the included plug-ins in the right pane. Drag plug-ins from the right pane into the FX Favorites folder. Adding to the FX Favorites folder does not remove the plug-in from its source folder. CUSTOMIZE TOOLBARS Go Options > Preferences > Toolbars and click on the toolbar you want to customize. Click on the Customize button. Click on a tool in the left pane, then click on Add; this adds the desired tool to the toolbar. To remove a tool from the toolbar, click on the tool in the right pane and click on Remove. SET UP CUSTOM VIEWS Go Options > Preferences > Toolbars and enable the Views toolbar. Make the file exactly the way you want to see it, with the desired zoom and selection. In the Views toolbar, click on Set, then click a number that will correspond to the view. You can create up to 8 views. Click on the number in the toolbar to jump immediately to that view. CONVERT CHANNELS, SAMPLE RATE, OR BIT DEPTH The lower right of the main Sound Forge window shows fields for sample rate, bit resolution, and channel configuration (mono/stereo/surround/etc.). Right-click on any of these to bring up a corresponding context menu, then change the parameter. Or, go View > File Properties, and make your changes there. Note: To add dithering or noise shaping when decreasing bit depth, go Process > Bit-Depth Converter instead. For more control over channel conversion, go Process > Channel Converter. SCRUB AUDIO The scrub tool is in a file window’s lower bar, just to the right of the mini-transport controls. Drag left or right to scrub. Or, use the keyboard letters J (reverse), K (pause), or L (forward). LOCATE AUDIO EVENT This is similar to scrubbing, but lets you jump to anywhere in a file and play back a section of audio at normal speed. Click and drag in the Overview bar just below the file header. When you stop dragging, a selection of audio plays back and loops as long as the mouse button remains held down. Edit the audio selection duration by going Options > Preferences > Previews and setting the Loop Time parameter under Audio Event Locator. QUICK ZOOMING SHORTCUT To zoom in, press the keyboard’s Up Arrow key. Each press zooms in further. Press and hold to zoom in continuously. Use the Down Arrow key similarly to zoom out. INSERT MARKERS DURING PLAYBACK OR RECORDING Type M wherever you want a marker. To play back audio starting at a marker, type the marker number from the QWERTY keyboard (don’t use the numeric keypad), or choose Edit > Go To and choose the marker from the drop-down menu. CREATE TWO ZOOM LEVEL PRESETS Go Options > Preferences > Display tab.
Choose the desired zoom levels for Custom Zoom Ratio 1 and 2. Click Apply, then OK. Make sure Num Lock is on for your numeric keypad, and use the 1 and 2 keys to choose the associated zoom level. CHANGE LEVEL CALIBRATION Right-click on the level ruler to the left of the waveform window and select Label in Percent or Label in dB. SET SELECTION START/END DURING PLAYBACK During playback, type I at the desired selection start, and O at the desired selection end. Type Q to make this a loop. SNAP TO ZERO CROSSING SHORTCUT When snapping is enabled, type Ctrl+B to toggle snap to zero crossings on and off when making a selection. FIND CLIPS THAT HIT 0 DB Select the audio you want to analyze, then go Tools > Detect Clipping. I usually choose the preset Detect All 0 dB Clipping. Click on OK. Sound Forge will place markers at all 0dB clips lasting more than 3 samples. You can then use the Clipped Peak Restoration function (go Tools > Clipped Peak Restoration) to restore the clipped peaks you find. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  14. If you don't know how to tweak your sounds, you're missing out on a lot of what multieffects have to offer - but they're easier to program than you might think By Craig Anderton Today’s multieffects are woefully underutilized, partly because there is so much potential to explore, but also partly because these devices intimidate many people. The familiar knobs and switches of yesteryear have been replaced by LCDs and little buttons, which demand a new way of looking at effects. For example, few musicians seem to realize that contemporary multieffects allow for using a pedal, MIDI sequencer, and/or footswitch to vary virtually any aspect of a sound (overall level, number of echoes, reverb depth, distortion intensity, etc.) in real time. Many recording engineers don’t exploit the fact that MIDI-controlled multieffects offer a highly sophisticated, yet inexpensive, type of automated mixdown. However, learning how to do these tricks can be confusing—unless you know a few basic bits of knowledge that demystify what multieffects are all about. Let’s look into how digital effects work, then describe a generic approach to programming multieffects that will help you get the sounds you want. UNDERSTANDING PROGRAMMABILITY The concept of programmability was introduced to musicians when synthesizer players became fed up with trying to change sounds rapidly on stage. Early, non-programmable synthesizers had so many knobs and switches they looked somewhat like a jetliner cockpit, and trying to call up a new sound in time for the next song drove many a player nuts. Programmable synthesizers let you edit a particular sound, then press a button to store the control setting information as a program in memory. Reselecting that program at a later date produces the same sound as when you laboriously adjusted the parameter values in the first place. As signal processors became more complex, they also became more difficult to adjust in “real time.” Once again, programmability provided a solution. Now we have multieffects with literally hundreds of parameters, but you can recall all their settings at the touch of a button. However, programmability requires some adjustments in your thinking. Musicians and engineers are used to immediate gratification—bend a string, move a fader, or flick a switch, and the results are immediately apparent. And even with non-programmable rack-mount effects it doesn’t take too terribly long to, say, turn up the feedback for a more intense sound. The downside of programmable effects is that the convenience of instant recall comes at the cost of time-consuming programming up front to get the effect you want. What makes programmability possible is the computer-type chip at the heart of every digital multieffects. Each effect or combination of effects is the result of a computer program that tells the computer how to create chorus, reverb, distortion, and/or other effects. This program was written by a real live human being, but you edit it. When you’re getting the sound you want out of a multieffects device, you’re actually entering data into a computer program to change what the program will do. For example, echo consists of delaying a signal, feeding some of the delayed output back to the input to create additional echoes, and mixing some echoed sound in with the straight sound.
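Conceptually, that recipe is tiny. Here's a minimal Python sketch of it (purely illustrative; no multieffects actually runs Python, and the parameter values are just examples):

```python
import numpy as np

def echo(dry, sample_rate, delay_ms=210, feedback=0.4, mix=0.35):
    """Simple feedback echo: delay the signal, feed some of the delayed
    output back to the input, and blend some echoed signal with the dry sound."""
    delay_samples = int(sample_rate * delay_ms / 1000)
    buf = np.zeros(delay_samples)        # circular delay buffer
    out = np.zeros_like(dry)
    idx = 0
    for i, x in enumerate(dry):
        delayed = buf[idx]               # read the echo
        out[i] = (1 - mix) * x + mix * delayed
        buf[idx] = x + feedback * delayed  # recirculate for additional echoes
        idx = (idx + 1) % delay_samples
    return out

# Example: run a single click through the echo at 44.1 kHz
sr = 44100
signal = np.zeros(sr)   # one second of silence...
signal[0] = 1.0         # ...with a click at the start
echoed = echo(signal, sr)
```

Change the delay, feedback, or mix values and the character of the echo changes; that's all you're really doing when you edit those parameters from a front panel.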
So, a multieffects’ echo program would tell the computer to “delay a signal for X milliseconds, feed Y% of the delayed signal back to the input, and mix in Z% of echoed signal.” The nature of the echo sound will change according to what data we put in for X, Y, and Z. A larger value of X means more milliseconds, thus a longer delay. If we feed back a small amount of signal (small Y value), we’ll hear only a few echoes; larger Y values feed back more of the signal, creating more echoes that take a longer time to fade out. What we enter for Z determines the straight/processed mix. Upon entering the data necessary to get the sound you want, you’ve created a variation, which is also called a program (yes, jargon can be confusing). Generally, when we talk about a unit that “stores 100 programs,” we don’t mean the computer program the design engineer wrote but rather the edited versions you’ve created. From now on when we say program, we’ll mean your variations. These are also called patches or presets, and just to get you used to the real world, we may use these terms as well. ABOUT PARAMETERS Each adjustable element of an effect, whether analog or digital, is called a parameter. For example, a delay’s variable parameters might include initial delay time, feedback, modulation depth, etc. Before digital electronics took over the world, an effects box had one control (switch or knob) per parameter, so changing parameter values was a relatively easy process. But knob-based effects also had problems: Changing a sound (which required a lot of knob-twisting) took time, and if you came up with a great sound, trying to get it back later could be difficult. Also, knobs and switches have always been some of the most expensive components in effects. Digital electronics largely eliminates knobs. Remember our example above where we described an echo sound with X, Y, and Z parameters? With a digital effects unit, each parameter would be given a unique name or number (so you could identify it for editing), and be quantized into a series of discrete steps (Fig. 1). For example, delay time, instead of being continuously variable and selected by a knob, might be quantized into 1 millisecond steps and selected by keying in a three-digit number (e.g., 000 to 999 milliseconds) with a keypad. Fig. 1: Analog controls are continuously variable. A digitally-controlled parameter is like a knob that has been divided (quantized) into a series of discrete steps. The reason for quantizing parameter values is that once they’re identified numerically, the same computer doing all the other tasks we described earlier can turn its attention to storing these values in its memory. This lets us call up a particular program at any time. For example, suppose we told the computer that our echo program’s X parameter was 210 milliseconds, Y parameter 40% feedback, and Z parameter 35% echoed signal. The computer can remember this group of numbers as a program; once you give the program itself a number, like 26, the computer will file all this information in its little brain under “26” so that next time you ask it for program 26, all parameters will be reset exactly as you specified. Sounds good so far… now it’s time to figure out how to access all these parameters. ACCESSING PARAMETERS As mentioned earlier, many computerized musical devices don’t have knobs that you can twist to change sounds; instead, you need to find individual parameters and alter their values, usually by a process of button-pushing.
Fortunately, there are only so many ways to accomplish a given task. If you’re creating a sound from scratch or editing an existing sound, you’re almost always going to use the same basic procedure for any device: 1. Specify the program (patch) to be edited. This reserves a memory location that temporarily holds the parameter edits. 2. Select the program’s structure, which is called an algorithm. The algorithm will determine the sound’s overall character. You may have a choice of several fixed algorithms (e.g., compressor > distortion > chorus > EQ > reverb) or you may be able to choose the order and type of effects. Fig. 2 shows a couple different algorithms that define the effect’s structure. Fig. 2: Two different algorithms. Each one creates a different type of effect, and has variable parameters so you can alter the sound. 3. Specify a parameter within the algorithm that you want to change (echo time, amount of distortion, noise gate threshold, etc.). 4. Enter a new parameter value and listen to what effect this has on the sound. 5. Repeat steps 3 and 4 until all the parameters have been adjusted to give the type of sound you want. The most common data entry tools are a calculator-style keypad for entering numbers, and/or scrolling or “arrow” keys to help locate the different parameters (we’ll see how these work in a little bit). SELECTING DIFFERENT PROGRAMS A very basic function on all units is calling up different programs. When you turn on a digital multieffects, odds are you’ll be greeted with either the last program you selected or a default program (e.g., program #01). Depending on the unit, to select a new program you might punch in a certain program number with the keypad, or scroll through the different programs with the arrow keys. You can think of the programs as forming a list, with a window that scrolls over the list (Fig. 3). The up and down arrow keys move the window over the list to select a particular program. For example, if you’re on program 14, pressing the up arrow key selects program 15; pressing the down arrow calls up program 13. Some units may use a knob to select programs instead. Then again, some devices may arrange their programs “horizontally” instead of in a vertical list, and use right/left arrow buttons to move from one program to another. In any case, the basic principle remains the same. Fig. 3: Using up/down buttons to select a program. PARAMETER SELECTION AND EDITING Now that we have a program, it’s time to select and edit parameters. Each unit has a slightly different way of doing things, but here’s a typical real-world example based on a generic multieffects. Suppose a multieffects has two displays (left display for program number, right for other parameter values), two sets of up/down buttons, and one set of left/right buttons. You would begin by selecting a program, as shown on the left display, with the first set of up/down buttons (Fig. 4). The up button selects the next higher-numbered program, and the down button, the next lower-numbered program. Fig. 4: Program and algorithm selection with a generic multieffects. Upon calling up a program, the right display might then show the name, number, or even a block diagram of the algorithm used in the selected program (remember, each algorithm represents a particular combination of different effects). If you wanted to choose a different algorithm, you could do so with the second set of up/down buttons. Each algorithm has an associated “list” of parameters. 
As we used the right-hand set of up/down buttons to select an algorithm, it follows that we’ll use the left/right buttons for the next step—parameter selection (Fig. 5). Fig. 5: Each algorithm will have several parameters whose values you can change. As you press the left or right button, the display identifies the selected parameter. To change the parameter value, use the right hand set of up/down buttons; the display will show the parameter’s value. After editing the value, press the left or right button again to select the next parameter on the list (Fig. 6). Fig. 6: In this example, the display shows several parameters; other multieffects may show only one parameter per screen, or more parameters, or may have more sophisticated displays with better graphics. You select a parameter for editing with the left/right buttons; the one being edited has an underline (cursor). In this case, it’s chorus depth, which has a value of 35. The up/down buttons change the parameter value. This process illustrates two important points: • There is a definite order for parameter editing. You must first choose the program, and if applicable, the desired algorithm before you can choose a parameter whose value you want to change. • Note that with many multieffects, the display anticipates your needs. If you press one of the right hand up/down buttons with an algorithm number showing, the display knows that you want to edit algorithms. If you press one of the left/right buttons with an algorithm number showing, the display knows that you want to edit the parameters within that algorithm. If you have a parameter selected and you press an up/down button, the display knows you want to edit values. This demonstrates the good news/bad news of digital effects: if you know what you’re doing, editing flows in a logical fashion. If you don’t know what you’re doing, and you press the wrong button at the wrong time, you may get lost in the program and not be sure what you’re adjusting (or how to get back to a familiar reference point). Different units use variations on a theme. Some boxes have dedicated buttons for turning individual effects in an algorithm on and off (“effects select” buttons). If you wanted to edit one of these effects, you might press an “edit” button to select a parameter editing mode, which would then change the effects select buttons into effect edit buttons. For example, pressing the compressor on/off button while in edit mode would select the compressor for editing. Each successive press of the compressor button would access another compressor parameter, and up/down arrow buttons would set the value. Although the specifics are different from the example given above, you still: • Select a program • Select an algorithm or effect • Specify a parameter • Change the parameter’s value. No multieffects unit (or synthesizer) strays too far from this basic concept. Once you figure out how your multieffects performs these steps, you’re on your way to being a programming expert. SHORTCUTS Because button-pushing is tedious, manufacturers often include shortcuts. For example, scrolling through 99 programs with up/down arrows can take some time. So, one unit might increase the scrolling rate the longer you hold the button down, while another might double the scrolling rate if you press the unused arrow button while holding down the desired arrow button. 
And because even little buttons cost money, a manufacturer might use a “shift” button (like the shift key on a computer keyboard or typewriter) that changes the function of a set of buttons so that five buttons and a shift button can do the work of ten buttons. Any shortcuts should be documented in the manual. Parameter-controlled effects may be confusing at first, but don’t give up. You have a lot more power at your fingertips, and greater repeatability. Sure, it takes more time to program or tweak a sound initially, but once you find a great sound and store it in memory, you won’t have to find it again. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  15. Profiles of popular effects types - what they are, their main controls, annoying habits, and hot tips By Craig Anderton Effects are to recording or live performance as spices are to cooking—they can really enhance whatever’s already there, although a little goes a long way. Yet a lot of people aren’t really that familiar with their effects; they just dial up a preset and hope for the best. If you understand how these boxes or plug-ins tick, you can use them much more effectively. The following roundup of common effects clues you in not only to what they are, but to their crucial parameters, annoying quirks, and some of the most popular applications. COMPRESSOR/LIMITER Profile. A compressor/limiter (C/L for short) evens out dynamic range variations by amplifying soft signals to make them louder, and/or attenuating loud signals to make them softer. The result is less level difference between soft and loud signals. How it works. Once a signal exceeds a user-settable threshold, compression occurs: increasing the input signal no longer increases the output level by an equivalent amount. For example, with a compression ratio of 2:1, every additional 2 dB of input level results in only 1 dB of additional output level. Crucial parameters. Threshold sets the level above which signals will be compressed or limited. Signals below the threshold are not processed. Ratio selects how the output level changes in relation to the input once the input exceeds the threshold. The higher the ratio, the greater the amount of compression, and the more “squeezed” the sound. Extremely high ratios put an absolute “ceiling” on the signal, which is called limiting. Output adds gain to offset the lower level caused by restricting the dynamic range. Attack sets the reaction time to input level changes. A longer attack time “lets through” more of a signal’s original dynamics before the compression kicks in. For example, adding a bit of attack time retains the initial “thwack” of a kick drum. Release or Decay determines how long it takes for the C/L to return to its normal state after the input goes under the threshold. With short release times, the C/L tracks very slight level changes, which can produce a “choppy” sound. An Auto or Program Dependent switch, if enabled, sets the Attack and Decay times automatically and re-adjusts these settings as needed for different program material. Softube's FET Compressor plug-in Annoying habits. Over-compressing results in a thin, unnatural sound, and brings up noise. Don’t add any more compression than needed. Also, controls interact—for example, changing the ratio can change the threshold. Hot tips. When used with other effects, if possible place the compressor early in the chain so that it doesn’t bring up the noise from previous stages. • If it seems like there’s been a sudden increase in compression but you didn’t increase the compression amount, then the input signal going to the compressor may have increased. • Some music from the 60s featured a drum sound that sounded like it was “sucking” and inhaling. To create this effect, apply lots of compression with an extremely short release time. For more information: “Compressors De-Mystified” “Stompbox Compressors in the...Studio?” DISTORTION Profile. Distortion mimics the way an amplifier behaves when overloaded, so it’s a popular effect for guitar. However, distortion can also spice up drums, synthesizers, and even vocals. How it works. Not all types of distortion (tube, transistor, digital, etc.) sound the same.
Some devices include a tube stage or other analog distortion circuit that can be modified under computer control. Others use DSP to emulate particular types of distortion. Most musicians prefer “soft” clipping, where the output signal becomes progressively more distorted as the input signal level increases. With hard clipping, the output signal remains undistorted up to a certain point, then becomes extremely distorted as the input increases past that point. This sounds harsher. An undistorted signal compared to soft- and hard-clipped versions. Crucial parameters. Sensitivity, Drive, or Input determines the amount of signal level needed for the onset of distortion. Maximum sensitivity gives the most distortion. Output. Since distortion often adds a great deal of amplification, the output parameter trims the effect’s output level to something reasonable. Tone controls. Some distortion effects include tone controls. Distortion adds harmonics to the signal, which increases the high frequency content; pulling back on the highs reduces shrillness, while boosting the bass gives more depth. IK Multimedia's AmpliTube is one of many popular amp sim plug-ins. Annoying habits. Because of their high gain, distortion boxes can generate a lot of hiss. Also, because many distortion devices are designed for guitar, it’s hard to find stereo models for mixing applications. Hot tips. Patch a distortion unit into a mixer’s aux bus, and bring the returns back to the mixer. To add some “bite” to a channel, turn up its aux bus send to taste. • A little distortion can really increase the punch of drum and bass sounds. • Distortion can make a synthesizer sound a lot more “rock and roll.” Add some crunch to organ patches that use rotating speaker effects, or to that classic Yamaha DX7 that’s sitting around feeling neglected. For more information: “How to Avoid Hidden Distortion in Amp Sims” “Six Amp Sim Programming Tips” “Reduce Amp Sim Harshness with De-Essing” “How to Make Amp Sims Sound More ‘Analog’” "Stompbox Distortion in the...Studio?" "Create Dual-Band Distortion with Guitar Rig" "The Guitarist's Guide to Multiband Distortion" EQUALIZERS Profile. An equalizer emphasizes (boosts) and/or de-emphasizes (cuts) certain frequencies to change a signal’s timbre. The amount of boosting or cutting is expressed in decibels (dB). How it works. Equalizers use filter circuits that pass certain frequencies and reject others. The four most common filter types are lowpass (passes all frequencies below a certain cutoff frequency), highpass (passes frequencies above the cutoff frequency), bandpass (passes only those frequencies around its center or resonant frequency, while rejecting higher and lower frequencies), and notch (all frequencies around the notch frequency are rejected, while frequencies higher and lower than the notch frequency pass through to the filter output). The range of frequencies affected by the boost or cut is called the bandwidth. There are several types of equalizers. Shelving equalizers boost or cut a fixed amount over a range of high or low frequencies. The graphic equalizer uses multiple bandpass filters to split the audio spectrum up into a number of bands, with an individual boost/cut control for each band. A parametric equalizer is a sophisticated form of tone control. Unlike the graphic equalizer, which can boost/cut only at fixed frequencies, a parametric can boost or cut over a continuously variable range of frequencies. In addition, the bandwidth is variable, from broad to sharp.
Note that there are also quasi-parametric (also called pseudo-parametric) equalizers that include frequency and boost/cut controls but no bandwidth control. The three main parameters of a parametric equalizer, and how they relate to level and frequency. Crucial parameters. Frequency sets the specific part of the audio spectrum where the boosting or cutting occurs. Boost/cut determines the amount of equalization at the selected frequency. Bandwidth, resonance, or Q. This control determines the sharpness of the boosting or cutting action. Narrow bandwidth settings affect a very small part of the audio spectrum, while broad settings process a broader range. Equalizer responses, from left to right, in Cakewalk’s QuadCurve EQ: Steep highpass; shallow, wide notch; slight high-frequency shelf boost; narrow high frequency notch. Annoying habits. Some equalizers don’t include bypass switches, making it difficult to compare equalized and unequalized versions of a sound. Also, some equalizers have a fixed bandwidth, which always seems too narrow or too broad for the intended application. Hot tips. You’ll have more headroom if you cut rather than boost. For example, it’s often better to cut the midrange than boost the treble and bass. • Frequently compare the equalized and non-equalized sounds. You don’t want to get into a situation where you boost the treble a lot, which makes the bass seem thin so you boost that, which then makes the midrange seem weak so you boost that, and so on. • Always use the minimum amount of equalization necessary. Just a few dB of change can make a big difference. • Suppose you’re playing a rhythmic piano part behind a vocalist, but since the piano and voice occupy a similar frequency range, they conflict. The solution: pull back on the piano’s midrange somewhat to make room for the vocal frequencies. For more information: “10 Guitar EQ Tips for Live Performance” “Bass EQ and Sweet Spots” “What Those Other Filter Responses Mean to Recording” “Making Equalization Work for You” TIME DELAY: FLANGING, CHORUS, ECHO Profile. Time delay produces effects including flanging, echo, chorusing, tapped delay, stereo simulation, and others. Some devices provide dedicated effects for each function; others simply include a general purpose time delay effect that is flexible enough to provide these different effects. How it works. Time delay effects stuff the input signal into digital memory, then read it out a certain amount of time later. Feeding some of the output back to the input recirculates the delayed sound, thus creating a repeating echo effect. Modulation, which varies the delay time over a particular range, produces an animated kind of sound as the delay time sweeps back and forth between a maximum and minimum value. Crucial parameters. Initial delay sets the amount of delay time. With echo, this is the time interval between the straight sound and the first echo. With flanging and chorusing, modulation occurs around this initial time delay. Some devices let you synchronize the delay time to MIDI song tempo. Another option is a tap function, where hitting a switch or button twice sets the delay time interval. Balance, Mix, or Blend. This parameter adjusts the balance between the dry and delayed signals. Flanging typically uses an equal blend of dry and delayed signals, while chorusing uses more dry than delayed sound. Feedback, Recirculation, or Regeneration. This parameter determines how much of the output feeds back into the input.
With echo, minimum feedback gives a single echo; more feedback increases the number of echoes. With flanging, adding feedback increases the effect’s sharpness, much like increasing a filter’s resonance control. Sweep Range, Modulation Amount, or Depth determines how much the modulation section (also called LFO, or sweep) varies the delay time. For example, a delay with a 2:1 sweep range can sweep over a 2:1 time interval (e.g., 5 ms to 10 ms, or 100 ms to 200 ms). A wide sweep range is most important for dramatic flanging effects; chorus and echo don’t need much sweep range to be effective. With longer delays, adding a little bit of modulation provides chorusing, but too much modulation will cause detuning effects. Modulation type. The modulation usually comes from periodic waveforms such as triangle or square waves, but some devices include randomized waveforms and/or envelope followers (where the modulation tracks the incoming signal’s dynamics). Modulation Rate sets the modulation frequency. Typical rates are 0.1 Hz (1 cycle every 10 seconds) to 20 Hz. With flanging and chorusing, modulation causes the original pitch to go slightly flat, return to the original pitch, go slightly sharp, then return to the original pitch and start the cycle all over again. Three time-based effects loaded into Native Instruments’ Guitar Rig 5: Chorus/Flanger, Tape Echo, and Delay Man (a stompbox echo emulator). Annoying habits. The delay readouts on older hardware models are not always 100% accurate. Also, changing delay times via MIDI usually results in burping and belching as the device flushes its memory and refills. Hot tips. For vibrato, set a short initial delay (5 ms or so), monitor delayed sound only, and modulate the delay with a triangle or sine wave at a 5 to 14 Hz rate. • To create a “comb filter,” mix a straight signal with the same signal passing through a short, fixed (unmodulated) delay. Try an initial delay of 1 to 10 ms, minimum feedback, no modulation, and an equal blend of processed and straight sound. • For mono to pseudo-stereo conversion, set a stereo chorus depth parameter to maximum and rate to minimum (or off). This creates a stereo spread without the motion that would result from having a higher modulation rate. • To calibrate the echo repeat time to a particular rhythmic value, such as an eighth or quarter note, the following formula translates beats per minute (tempo) into milliseconds per beat (echo time): 60,000/tempo = time (in ms). For example, at 120 BPM, 60,000/120 = 500 ms per quarter note. For more information: "Exploring Time-Based Effects (Part 1)" “A Better Chorus for Avid’s Eleven Rack” "’Through-Zero’ Flanging with Native Instruments Guitar Rig” "Tighten Your Timing with Delay Effects" PITCH TRANSPOSER Profile. The pitch transposer synthesizes a harmony line from an input signal. Simple pitch transposers are limited to parallel harmonies, while more sophisticated models produce “intelligent” harmonies if you specify a key and mode (major, minor, etc.). How it works. A pitch transposer essentially cuts a signal into little pieces, then glues them all back together—in real time, except for a few milliseconds of processing time—so that they take up less time (shifts pitch up) or more time (shifts pitch down). Crucial parameters. Transposition sets the harmony line interval, typically in semitones but with an additional fine tuning control. Blend or Mix sets the balance of dry and transposed signals.
Feedback, Regeneration, or Recirculation feeds some of the output back to the input to create stepped harmonies and other special effects. Intelligent harmony settings consist of key and scale data so the pitch transposer generates harmonies based on the rules of harmony for the specified scale. Waves’ UltraPitch generating a harmony from a vocal track. Annoying habits. It takes a lot of processing power to do pitch transposition, and the sound sometimes suffers. For example, there might be a fluctuating tremolo effect, or occasional glitches. The greater the degree of transposition, the more objectionable the sonic problems. Hot tips. Even if your transposer doesn’t offer “intelligent” harmonization, you can often change the transposition amount via MIDI by using continuous controllers as you play. • For glissando effects, set the transposed pitch very slightly higher than normal (a few cents), then advance the regeneration control. This recirculates and pitch shifts each note, thereby initiating a stepped, upward glissando effect (the harmony pitch control sets the step interval). • Pitch transposers can give excellent flanging/chorusing effects. Set the pitch control for a very slight amount of transposition (1 to 20 cents or so) and add regeneration to taste. For more information: "Transparent Vocal Pitch Correction" NOISE GATE Profile. The noise gate helps remove noise and hiss by shutting off the audio whenever the input signal drops below a certain threshold. As a bonus, some noise gates can also provide special effects. How it works. The presence of a loud musical signal masks hiss, which becomes audible only during quiet parts when the music is not playing. Setting the threshold just above the hiss level will allow the signal to pass if its level exceeds the threshold, but will block the output if the signal level drops below the threshold and consists solely of hiss. Crucial parameters. Threshold or Sensitivity determines the reference level above which the gate opens. High threshold levels are useful for special effects, such as removing substantial amounts of an instrument’s decay to make a more percussive or gated sound. Attenuation. Some noise gates feature adjustable attenuation for the gate-off state. With less attenuation, the gate doesn’t shut down all the way so that some of the signal can still pass through. Decay time sets a fadeout time for the audio when the signal goes under the threshold. Attack time works in reverse: when a signal exceeds the threshold, the noise gate fades in over a specified period of time. Key Input or Sidechain Input allows an external audio signal to open and close the gate. Focusrite Gate, part of their Scarlett suite of plug-ins. Annoying habits. Sometimes the gate drops out some signals that you do want to hear. Also, noise gates work best on signals that don’t need to be cleaned up too much. Eliminating high noise levels also means nuking substantial portions of the signal. Hot tips. If possible, avoid noise gates for noise reduction since they tend to destroy low-level dynamics. • The key input is very cool for special effects. For example, gate a sustained chord with a kick drum beat to “chop” the chord into rhythmic slices. • For a huge drum sound, mic the drums so they include a lot of room sound, compress the signal, then gate it with a high threshold. This lets through bursts of room sound, but eliminates the reverberant decay.
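To picture what that key input is actually doing, here's a minimal Python sketch of a keyed (sidechained) gate. It's a conceptual illustration only, with made-up parameter values; it's not code from any of the plug-ins mentioned here, and real gates add refinements like hysteresis and hold times.

```python
import numpy as np

def keyed_gate(audio, key, sample_rate, threshold=0.1, attack_ms=1.0, release_ms=50.0):
    """Open the gate whenever the 'key' (sidechain) signal exceeds the threshold,
    rather than looking at the input itself; attack/release smooth the gain so
    the gate doesn't click. Both inputs are float arrays in the -1..1 range."""
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000))
    gain = 0.0
    out = np.zeros_like(audio)
    for i in range(len(audio)):
        target = 1.0 if abs(key[i]) > threshold else 0.0   # gate open or closed?
        coeff = attack if target > gain else release       # fast open, slower close
        gain = coeff * gain + (1 - coeff) * target
        out[i] = audio[i] * gain
    return out
```

Feed a kick drum track into key and a sustained chord into audio, and you get exactly the rhythmic "chopping" effect described in the hot tip above.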
For more information: “Noise Gates Don’t Have to Be Boring” “Gate Your Way to Tighter Bass Grooves” REVERBERATION Profile. Reverberation simulates the sound of audio reflections bouncing around inside an acoustic space (e.g., large hall or auditorium). Digital reverb can also create spaces that don’t exist in nature. How it works. Digital reverb processes digital audio through an algorithm that creates a series of delays with filtering, similar to the reflections that would occur by sound waves bouncing off acoustical surfaces. Crucial parameters. Type or Algorithm determines the kind of reverb to be emulated: room, hall, plate, spring (the classic “twangy” reverb sound used in guitar amps), etc. Room Size determines the room’s volume. Changing this parameter often changes other parameters, such as low and/or high frequency decay. Early Reflections level. Early reflections are closely spaced discrete echoes, as opposed to the later “wash” of sound that constitutes the reverb’s tail. This parameter determines the level of these initial, discrete echoes. Predelay sets the amount of time before the first group of reflections or room reverb sound begins, and is usually 100 ms or less. A longer predelay setting gives the feeling of a larger acoustical space. Decay time adjusts how long it takes for the reverb tail to decay to a particular level (usually -60 dB). Note that there may be separate decay times for different frequency bands so you can more precisely tailor the room’s characteristics. Crossover Frequency applies only to units with separate decay times for high and low frequencies. This parameter determines the “dividing line” between the highs and lows. For example, with a crossover frequency of 1 kHz, frequencies below 1 kHz will be subject to the low frequency decay time, while frequencies above 1 kHz will be subject to the high frequency decay time. High Frequency Rolloff or Damping. In a natural reverberant space, high frequencies tend to dissipate more rapidly than lows. High frequency rolloff helps simulate this effect. Mix, Balance, or Blend. Sets the mix between the reverberated and straight signals. Diffusion is a “smoothness/thickness” parameter. Increasing diffusion packs the early reflections closer together, giving a thicker sound. Decreasing diffusion spreads the early reflections further apart. Some reverb units call this Density, and some diffusion controls affect all reflections, not just the early ones. Universal Audio’s emulation of the classic, and rare, EMT 250 reverb. Annoying habits. Even the best digital reverbs don’t really sound like clapping your hands in a cathedral. An acoustic space remains the best way to do reverb. Hot tips. Different instruments can sound better with different reverb settings. For example, low density settings can be problematic with percussive sounds, since the first reflection could sound more like a discrete echo than part of the reverb. Increasing the density solves this. However, low density settings can work very well with voice to add more fullness to the overall sound. • To create a “bigger” sound, set the low frequency decay longer than the high frequency decay. For a more ethereal sound, do the reverse. For more information: “Understanding Digital Reverb Parameters” "Exploring Time-Based Effects (Part 2)" “Re-Thinking Reverb” “Convolution Reverb Basics” "Wild and Wacky Reverb Effects" TREMOLO Profile. This provides a periodic amplitude change so that the sound seems to “pulsate.” How it works. 
A modulation source, such as a triangle or sine wave, controls amplitude. Crucial parameters. Modulation Amount, or Depth determines how much the modulation section varies the amplitude. Modulation Rate sets the modulation frequency. Modulation Type. Some tremolos include different modulation waveforms. The tremolo from Line 6’s POD Farm Elements Annoying habits. The tremolo in old guitar amps can’t sync its modulation frequency to incoming tempo data. Hot tips. Tremolo is the driving sound behind surf music, but it was also used on vocals back in the 60s, when people were so stoned they thought it actually sounded good. EXCITER Profile. The traditional exciter increases brightness at higher frequencies without necessarily adding equalization; however, the term also applies to adding brightness to lower frequency ranges. The result is a brighter, “airier” sound without the stridency that can sometimes occur from simply boosting the treble. How it works. Different processes vary, but one popular model adds subtle amounts of high-frequency distortion. Sometimes phase changes will also factor into the sound. Crucial parameters. Exciter Frequency sets the frequency at which the “excitation” starts to kick in. Exciter Mix or Amount varies how much “excited” sound gets added to the dry sound. The multiband Exciter module in iZotope’s Ozone 5 can be very effective for mastering. Annoying habits. People usually turn these up too much, and ruin otherwise perfectly good-sounding songs. Hot tips. Processing an entire mix through one of these boxes can be overkill. Instead, consider feeding the exciter with an aux bus and adding in subtle amounts for various channels, as needed. VOCODER Profile. A vocoder primarily creates “talking instrument” effects, but can also be used to modulate one signal with another (e.g., modulate a sustained keyboard pad with drums). How it works. A vocoder has two inputs: the carrier input for an instrument, and the modulator input for a microphone or other signal source. Talking into the microphone superimposes vocal effects on whatever is plugged into the instrument input by opening and closing filters that process the instrument sound according to the frequencies present in the human voice. Crucial parameters. Carrier Input level sets the level of the carrier signal (duh). Modulator Input level adjusts the modulator signal level. Balance sets the blend of mic with vocoded sound. Highpass Filter adds some high frequencies from the mic channel directly into the output to increase intelligibility. Propellerhead Software’s Reason includes an excellent vocoder, the BV512, and the patching options to take advantage of it. Annoying habits. The filters are so sharp, it’s easy to overload them and get distortion. Hot tips. Vocoders are good for much more than talking instrument effects. For example, play drums into the microphone input instead of voice, and use this to control a keyboard playing sustained chords. • For best results, the instrument being processed should have plenty of harmonics. This is why distorted guitar works well with vocoding. For more information: "Spice Up Your Tracks with a Vocoder" "How to Use Reason's Combinator Function" Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany).
For more information:
"Spice Up Your Tracks with a Vocoder"
"How to Use Reason's Combinator Function"

Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
16. Yes, Eddie Kramer is a part of history — but what he’s doing today will be tomorrow’s history

By Craig Anderton

About 20 seconds into the interview, Eddie says the mic sound from my hands-free phone adapter thingie is “…shall we say, not of the best quality.” So I adjusted the mic placement until Eddie was happier with the sound. Nit-picky prima donna? Absolutely not. He’s just a very nice guy who’s unfailingly polite and helpful. Articulate, too. And that’s about 80% of what you need to know about Eddie: He really, really cares about sound, even if it’s just an interviewer’s mic.

Which is probably what helps account for the other 20% you really need to know: He’s been behind the boards for some of the most significant musicians of our time, including Jimi Hendrix, Led Zeppelin, Buddy Guy, Kiss, Peter Frampton, the Beatles, AC/DC, the Rolling Stones, Carly Simon, Traffic, Joe Cocker, David Bowie, Johnny Winter, Bad Company, Sammy Davis Jr., the Kinks, Petula Clark, the Small Faces, Vanilla Fudge, NRBQ, the Woodstock festival, John Mayall, Derek & the Dominos, Santana, Curtis Mayfield, Anthrax, Twisted Sister, Ace Frehley, Alcatraz, Triumph, Robin Trower, and Whitesnake. And let’s talk versatility: country act the Kentucky Headhunters, and classical guitarist John Williams. (For more information, there’s a short-form bio on Wikipedia.) And that’s just the tip of the iceberg, as he’s documented much of what he’s done with an incredible body of work as a photographer. You can find out a lot more about Eddie, including his latest F-Pedals project, at his web site.

Given his history, if you think he lives in the past, you’d be one-third right. Another third lives in the present, and the remaining third in the future. During the course of an interview, you can find yourself in 1968 one minute, and 2016 the next. Cool.

Eddie has enough savvy to know when it’s important to just go with the flow. Like that famous moment in “Whole Lotta Love” where you hear Robert Plant’s voice way in the background during the break. Print-through? An effect they slaved over for days?

“The time of that particular mix was 1969, and this all took place over a weekend at A&R studios in New York. Imagine [mixing] the entire Led Zeppelin II on 8 tracks in two days! As we got into “Whole Lotta Love,” I actually only ended up using seven tracks because tracks 7 and 8 were two vocal tracks. I think I used the vocal from track 7. We’d gotten the mix going, I believe it was a 12-channel console with two panpots.

“During the mixdown, I couldn’t get rid of the extra vocal in the break that was bleeding through. Either the fader was bad, or the level was fairly high — as we were wont to do in those days, we hit the tape with pretty high levels. Jimmy [Page] and I looked at each other and said “reverb,” and we cranked up the reverb and left it in. That was a great example of how accidents could become part of the fabric of your mix, or in this case, a part of history. And I always encourage people today not to be so bloody picky.”

Eddie has this wacky idea that the music is actually the important part of the recording process, not editing the living daylights out of it. To wit: “We’re living in the age of [computer-based programs like] Pro Tools, where we can spend hours, days, even weeks on end fixing all the little ‘mistakes.’ And by that time, you’ve taken all of the life out of the music.
I don’t want to come off as trashing [Digidesign], but I feel that Pro Tools — which is a wonderful device — has its limitations in certain aspects.” And what might that main limitation be? “The people using it! And it becomes a sort of psychological battle . . . yes I can stretch that drum fill or performance [with bad timing], or I can effectively make a bad vocal sound reasonably decent, but what the hell is the point? Why didn’t the drummer play it right in the first place? Why didn’t the singer sing it right in the first place? “And that begs the question, do we have too many choices . . . and when we do, we sit there thinking ‘we can make it better.’ But for God’s sake, make it better in the performance! I want musicians who will look each other in the face, eyeball to eyeball, and I want interaction. I want these guys to be able to play their instruments properly, and I want them to be able to make corrections on the fly. If I say ‘In the second chorus, could you double up that part?’ I don’t want the guitarist giving me a blank look. “Learn your bloody craft, mates! The way we’re recording today does in fact give a tremendous amount of freedom to create in an atmosphere of relaxed inspiration. The individual can record in very primitive circumstances — bathrooms, garages, hall closets. Unfortunately for a lot of people, this means doing it one track at a time, which I think makes the final product sound very computerized and not organic. The other side of the coin is that many bands can think in terms of ‘let’s find a fairly decent acoustic space, set up mics, look each other in the eyes, and hit record.’” MIXED OUT, MIXED DOWN . . . OR MIXED UP? Ah, the lost art of mixing. If all you do is tweak envelopes with a mouse, that’s not mixing — that’s editing. If you think Eddie Kramer is a fix-it-in-the-mix kinda guy, you haven’t been paying attention. But there’s more. “One of the most exciting things as an engineer is to create that sound as it’s happening; having a great-sounding board, set of mics, and acoustic environment can lead one to a higher plane . . . when you hear the sound of the mixed instruments — not each individual mic — and get the sound now, while it’s happening. I don’t want to have to bugger around with the sound after the fact, other than mixing. There’s a thrill in getting a sound that’s unique to that particular situation. “The idea of mixing ‘in the box’ is anathema. It defeats the purpose of using one’s hand and fingers in an instinctive mode of communication. I am primarily a musician at heart; the technology is an ancillary part of what I do, a means to an end. I want to feel like I’m creating something with my hands, my ears, my eyes, my whole being. I can’t do that solely within the box. It’s counter-intuitive and alien. However, I do use some of the items within the box as addenda to the creative process. It lets me mix with some sounds I would normally not be able to get.” So do you use control surfaces when you’re working with computers, or go the console route? “Only consoles. I love to record with vintage Neve, 24-track Dolby SR [noise reduction] at 15 IPS, then I dump it into Pro Tools or whatever system is available, and continue from that point. If the budget permits, I’ll lock the multitrack with the computer. I’d rather mix on an SSL; they’re flexible and easy to work. I like the combination of the vintage Neve sound with the SSL’s crispness. And then I mix down to an ATR reel-to-reel, running at 15 IPS with Dolby SR. 
“With the SSL, I’m always updating, always in contact with the faders. I always hear little things that I can tweak. To me, mixing is a living process. If you’re mixing in the moment, you get inspired. I just wish I could do more mixes in 4-5 hours instead of 12, but some bands want to throw you 100 tracks. Sometimes I wish we could put a moratorium on the recording industry — you have three hours and eight tracks! [laughs] I’m joking of course, but . . . “On ‘Electric Ladyland,’ ‘1983’ was a ‘performance’ mix: four hands, Jimi and myself. We did that in maybe one take. And the reason why was because we rehearsed the mix, as if it was a performance. We didn’t actually record the mix until we had our [act] together. We were laughing when we got through the 14 minutes or so. Of course, sometimes I would chop up two-track mixes and put pieces together. But those pieces had to be good.” So do you mix with your eyes closed or open? “The only time I close my eyes when mixing is when I’m panning something. I know which way the note has to flip from one side to the other; panning is an art, and you have to be able to sense where the music is going to do the panning properly.” (to be continued)
  17. Spreadsheets Are Good for More than Doing Your Finances By Craig Anderton Yes, it’s a singles world—but people still buy CDs, and musicians still make them. There’s something about 30-70 minutes of music that allows for kicking back and getting absorbed into the experience. After all, the album with the most staying power on the Billboard charts is Pink Floyd’s Dark Side of the Moon—and if that isn’t the poster boy for putting songs into an album instead of releasing them one at a time, I don’t know what is. Nor are song orders just about albums. When putting a set together for a live performance, a lot of the concepts are the same: the right pacing, variety, creating a cohesive listening experience, and so on. Determining an album's song order is never easy, partly because if you want to know for sure whether the order works or not, you need to listen to the entire project from start to finish. Only then do you realize there are some minor problems—like the first four songs all end in fadeouts, or you have three consecutive songs that feature the same vocalist. So you try another order, and listen again... But there's a quicker way to come up with a possible song order: Use a spreadsheet to create a matrix that lists as many song parameters as possible (not just tempo and key) to help sort out what might give the best flow and coherence. Of course, different types of music require very different parameters, but the point of this article is to present a general approach. Hopefully, you can adapt these principles to your own music. CHOOSING THE PARAMETERS The more accurately you can quantify a song's characteristics, the easier it is to come up with a meaningful matrix. Fig. 1 shows how I used Open Office's spreadsheet program to create the matrix; let's discuss the parameter descriptions. Fig. 1: A spreadsheet can help you get an overview of your songs, making it easier to determine an album's song order. Note the use of color to differentiate the start and end of the CD's two halves. Title, Tempo, and Key are self-explanatory. Attitude describes, however inadequately, the song's main emotional qualities. This parameter is needed mostly to avoid bunching up too many songs with the same kind of feel, but also gives an idea of the basic emotional "road map." Main Lead describes what provides the main lead in the piece. Some of my tunes use actual vocals, some use vocal samples arranged to form a sort of lead line, while others have an instrumental lead (e.g., guitar). Guitar indicates the degree to which various tunes feature guitar (my primary instrument). For example, I didn't want all the songs that featured guitar solos to run together. Intro is how the song starts. I included this parameter because I once came up with a song order where two songs in a row started with sustained guitar fading in; separating the two worked much better. Out is how the song ends. For example, you don't want all songs that fade out to occur right after another. But also, by looking over the Out and its subsequent Intro, you can get a feel for how the songs hang together. Also note that the 1st and 7th songs are in blue, and the 5th and 11th songs in red. This is because I still tend to think of a CD as having two distinct parts, so blue indicates the "start" of a "side" while red indicates the "end." 
This isn't just a throwback to the days of vinyl; by giving each half its own identity, I think it's a lot easier to listen to a CD all the way through, because the experience is more like listening to two shorter CDs back-to-back. On this CD, an "intermission" separates the two halves. This instrumental transition has no real tempo and consists primarily of long, dreamy lead guitar lines, so it's a good place to "reset" the rhythmic continuity and start over. The second half has a nice climb from 102 to 110 to 130, then a brief dip down to 125 before closing out at a more neutral 101.

TESTING THE ORDER

Here are four useful tools for testing song orders.

If you have a portable player like a smartphone, hand-held recorder, iPod, etc., transfer the tunes to it and create various playlists. Listen to them and live with them for a while to determine which ones you like best.

A similar idea is to burn a CD with all the tunes, and use a CD player that lets you program a particular song order.

Create one huge sound file with all the cuts, then open this up in a digital audio editor capable of creating a playlist. Use the playlist to try out different orders. You can usually audition the playlist transitions, often with a user-settable pre- and post-roll time.

Most CD-burning programs make it easy to arrange songs in a particular order, then play through them. Generally, it will also be easy to listen to the transitions between songs. And of course, once you get the order right, you can burn a CD and play it in those cars that still have CD players.

WHICH SPREADSHEET?

It doesn't really matter what spreadsheet you use. Microsoft's ubiquitous Excel is an obvious choice, but the spreadsheet part of Open Office works just fine too, and it's free. In fact, you don't really have to use a spreadsheet at all; a word processor will often do the job, or for that matter, paper and pencil.

SETTING PRIORITIES

This may seem like an overly clinical way to determine song order, but think of it as an idea-starter, not a dictator. At the very least, it will probably help indicate which pairs of songs work well together. The matrix also provides a point of departure, which is always easier than just starting with a "blank page." The final arbiter of a good order is your ears, but check out this approach and see if it's as helpful to you as it has been to me.
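If you'd rather have the computer do the eyeballing, the same matrix can live in a few lines of Python instead of a spreadsheet. This is just a sketch; the field names and the "no more than so many in a row" rule are arbitrary examples, and you'd fill in the list from your own songs.

# Each dictionary is one row of the matrix; add fields (attitude, intro, etc.) as needed.
songs = [
    {"title": "Song A", "tempo": 102, "main_lead": "vocal",  "out": "fade"},
    {"title": "Song B", "tempo": 110, "main_lead": "guitar", "out": "cold"},
    {"title": "Song C", "tempo": 130, "main_lead": "vocal",  "out": "fade"},
    {"title": "Song D", "tempo": 125, "main_lead": "vocal",  "out": "fade"},
]

def flag_repeats(order, field, max_run=2):
    """Print a warning when more than max_run consecutive songs share a value."""
    run = 1
    for prev, cur in zip(order, order[1:]):
        run = run + 1 if prev[field] == cur[field] else 1
        if run > max_run:
            print(f"{cur['title']}: {run} songs in a row with {field} = {cur[field]!r}")

flag_repeats(songs, "out", max_run=1)         # e.g., back-to-back fadeouts
flag_repeats(songs, "main_lead", max_run=2)   # e.g., the same lead three times running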
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

18. Is Celemony's Melodyne really better than compression in some cases? Try it, and find out...

by Craig Anderton

Although most people think “pitch correction” when you say “Melodyne,” there’s much more to the program than that. And interestingly, the Percussive algorithm can be a fantastic tool for creating more uniform vocal levels as an alternative to slamming vocals with a compressor, or going through clips and adjusting the gain manually. If a singer loses steam on some words, wanders from the mic, or can’t maintain a level when reaching the limits of the vocal range, this is an ideal place to start repairing the track.

The procedure is quite simple:

1. Open up the vocal that needs fixing in Melodyne, then choose the Percussive algorithm.
2. The “blobs” represent individual words or, in some cases, phrases. Grab the Amplitude tool.
3. Click on a blob; drag higher to raise the level, or lower to decrease the level (the blob in red is having its level raised).
4. Here’s the result of editing—a smooth vocal line with consistent levels.

Yes, it really is that simple. Really. And don’t forget you can split blobs if you need more control; for example, if just the end of a word needs to increase. You can keep Melodyne open, but if the vocal is as you want it, you might as well just render the clip so you can cross “fix uneven vocal levels” off your list. If you then want to compress or limit the vocal, you won’t need to use as much processing, and the effect won’t be as obvious.

Now, about the downside: Hmmm...actually, I can’t think of any. Just don’t try to increase the levels beyond your available headroom, and be aware that some blobs might be a breath inhale or plosive; you don’t want to raise those, so listen while you adjust.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  19. Once again, it’s time to play “stompbox reloaded” by Craig Anderton [Note: For a related topic, see the article Stompbox Compressors in the...Studio?] Distortion on guitar is the sound of rock and roll. Although some guitarists get distortion simply by turning up their amps, many use distortion stompboxes to get their “sound,” and even overdrive amps with stompboxes to increase distortion. There are various types of distortion, because different distortion elements give different sounds. For example, germanium diodes clip at a lower voltage than silicon diodes, while red LEDs—the basis of the Quadrafuzz multi-band distortion I invented in the 80s—clip at a fairly high voltage, but also change frequency response based on how hard you drive them. (Boston guitarist Tom Scholz is also a fan of red LEDs for distortion.) Another option is to use CMOS or FET-based distortion elements, which can sound very much like tubes—and of course, some distortion boxes use real, physical tubes. Most stompboxes have at least controls for gain, output level, and some kind of tone control. Famous distortion boxes include the Arbiter Fuzz Face, Pro Co RAT, Electro-Harmonix Big Muff Pi (Fig. 1), and Ibanez Tube Screamer. Variations include Roger Mayer’s Octavia (which produced a distorted tone an octave above), and octave dividers like the Mu-Tron Octave Divider, which was designed with Dan Armstrong and includes his “Green Ringer” circuit. Fig. 1: Electro-Harmonix’s Big Muff Pi was introduced over forty years ago, and is still in production. It’s been used by top players like Santana, Jack White, Jimi Hendrix, The Edge, and Dave Gilmour (you’ve heard the Big Muff Pi in the solo for “Comfortably Numb”). APPLICATIONS RELOADED Drums. You probably don’t want to use too much gain, but guitar stompboxes can really “toughen up” analog drums—add a little distortion to TR-808 sounds, and you’ll be amazed how that polite drum sound turns into a monster. Parallel processing combines the full, natural sound of the drum with the distortion, but another option is to use guitar distortion as a send effect, which is particularly good if your drums have multiple outputs. Even a little bit of distortion can add a great edge to drums, including loops of acoustic drums and percussion. Bass. Distortion on bass usually gives a thin sound because of all the harmonics that distortion generates. As with many other effects for bass, it’s generally best to patch the distortion in parallel with your bass signal. With a hardware synthesizer or bass, split the output; feed one output directly into an interface or amp input, and the other output through distortion into a second interface or amp input. With a virtual synthesizer or recorded track, many DAWs have the option to use spare interface ins and outs, along with an additional bus, to treat hardware stompboxes like plug-ins. Bass seems to sound best with relatively low gain distortion settings, as this gives more of a deep “growl” that cuts well through a song, and adds an aggressive effect. Too much distortion starts to compete with the guitar sound, so it becomes difficult to tell the two apart. Vocals. Nine Inch Nails and hardcore/industrial groups add distortion to vocals for a dirty, disturbing effect. Guitar stompboxes are excellent for this because the “voicing” for guitar also works well with vocals. Keyboards. 
Classic B3 organ sounds often took advantage of overdriving a rotating speaker’s preamp to create distortion; adding stompbox distortion to synthesizer B3 sounds can give extra “dirt” that adds character. Keep the gain fairly low, as you don’t want a “fizzy” sound. Like bass and drums, a parallel connection usually gives the best results.

EMULATING STOMPBOX DISTORTION WITH MODERN GEAR

Fortunately, many modern amp simulator plug-ins have excellent distortion effects—it’s much easier to emulate the characteristics of a stompbox than the more complex characteristics of a preamp, amp, and cabinet. But distortion was the product of experimenting, so experiment! For example, Cakewalk Sonar Producer’s ProChannel includes a Saturation module (Fig. 2)—and it’s not the only host to include a distortion processor.

Fig. 2: Cakewalk Sonar can insert a Saturation module and a Tube distortion module in its ProChannel channel strip. Although these are generally intended to provide relatively subtle tube saturation effects, you can turn up their input controls to maximum and feed them with as high-level a signal as possible to really “crunch” the sound—that’s rock and roll.
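To show what a clipping stage actually does to the waveform, here's a small Python sketch. It isn't a model of any particular pedal or of the ProChannel modules; the threshold simply plays the role of a diode's forward voltage (lower for germanium, higher for LEDs), and the tanh curve stands in for the rounder knee of tube- or FET-style saturation.

import numpy as np

def hard_clip(x, drive=20.0, threshold=0.3):
    """Boost the signal, then clamp everything beyond the clipping threshold."""
    return np.clip(drive * x, -threshold, threshold) / threshold

def soft_clip(x, drive=20.0):
    """Tube/FET-style saturation: a smooth curve instead of a sharp corner."""
    return np.tanh(drive * x)

# A 100 Hz test tone run through both flavors
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 100.0 * t)
buzzy = hard_clip(tone)     # squared-off and bright, more "fuzz"
smooth = soft_clip(tone)    # rounder, more "overdrive" than "fuzz"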
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

20. Maybe you don't want your guitar to sound all clean and crispy...

By Craig Anderton

When mixers, audio interfaces, direct boxes, and all-in-one recording “workstations” started including high impedance inputs for guitar, it seemed like a good idea. After all, high impedance—a characteristic of tube amp inputs—avoids loading your guitar’s pickups. This loading can dull the tone and reduce volume.

Well, as Newton said, “For every action, there is an equal and opposite reaction.” And as I say, “For every action, the Law of Unintended Consequences may appear.” In this case, it turns out that some guitarists prefer the loading effects of a low impedance input, as found in many solid-state amps and effects boxes. The high frequency reduction can contribute to a smoother, rounder sound when feeding distortion and overdrive boxes. If you find your fi a little too hi, don’t fret—reclaim your duller, vintage solid-state tone with this cheapo do-it-yourself box.

As the parts list and schematic show, it’s ridiculously simple and doesn’t even use power. As long as you know which end of a soldering iron to hold, you can probably build the Lo-Fi Converter. Note, however, that this box only applies to standard, passive pickups. It won’t have any effect with active (preamplified) pickups.

CONSTRUCTION

Drill three 3/8” holes in the box: two for the jacks and one for the switch, a 2-pole, 6-position type. We’ll use only one of the poles, which connects to each jack’s hot connection. Leave one of the switch throws unconnected. Connect one end of a resistor to each of the other throws, then run a ground wire from the other resistor ends to the jack grounds. Done!

USING IT

Plug your guitar into either jack (the two jacks are interchangeable), and patch the other jack to the high impedance input you’d like to degrade. With the switch set to the unconnected throw, the circuit doesn’t affect your signal because the input jack connects directly to the output jack. Different resistors load the signal by different amounts, with the 4.7k resistor causing the greatest loading; a rough calculation of the effect appears after the parts list. When using the lower-valued resistors, you’ll probably need to turn up the overall volume to compensate for the loss in level. Experiment to find out which amount of loading works best for you. Even if you’re a fan of hi-fi sound, you might be pleasantly surprised at how much a little loading can smooooth out your sound when feeding distortion.

PARTS LIST (numbers indicate Radio Shack stock numbers)

S1 2-pole, 6-position switch (#275-034)
R1 4.7K resistor (#271-1124)
R2 10K resistor (#271-1126)
R3 22K resistor (#271-1128)
R4 47K resistor (#271-1130)
R5 100K resistor (#271-1131)
J1, J2 1/4" phone jack (#274-312)
Misc. Small metal case (#270-238), knob (#274-416), wire, solder
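For a rough idea of how much each switch position loads a passive pickup, here's a back-of-the-envelope Python calculation. It treats the pickup as a DC resistance in series with an inductance feeding a simple voltage divider; the 8k/2.5 H values are ballpark guesses rather than measurements, and real pickups also have cable capacitance and a resonant peak that this ignores. Even so, it shows why the lower resistor values pull down the highs more than the lows.

import numpy as np

R_pickup = 8e3   # ohms: assumed DC resistance of a typical passive pickup
L_pickup = 2.5   # henries: assumed pickup inductance
loads = [4.7e3, 10e3, 22e3, 47e3, 100e3, 1e6]   # switch positions, plus a 1M "hi-Z" input

for R_load in loads:
    for f in (100.0, 5000.0):                    # compare a low and a high frequency
        Z_source = R_pickup + 1j * 2 * np.pi * f * L_pickup
        gain = R_load / (Z_source + R_load)      # simple voltage divider
        print(f"{R_load/1e3:6.1f}k load at {f:6.0f} Hz: {20*np.log10(abs(gain)):6.1f} dB")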
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

21. But it’s not just about guitars—get vintage tape flanging with any signal

by Craig Anderton

In the psychedelic 60s, there weren't digital delays or DSP, so many effects - like echo - were done with tape, and this includes flanging. It wasn't real time, because signals had to be recorded first and then pass the playback head, but this had an advantage: the processed signal could not only be delayed relative to the dry signal, it could also move ahead of it. As it moved through the point where there was zero delay between the dry and processed signals, they typically cancelled, producing what was called “through-zero” flanging.

Although this isn't possible with a purely digital delay because there's always going to be some latency, if you're willing to add a tiny bit of delay to the dry signal, you can indeed get through-zero flanging with plug-ins. This example uses Native Instruments' Guitar Rig, but the same principle applies to other plug-ins as well. (Note that you don't have to have the full version of Guitar Rig 4; this even works with Guitar Rig 4 LE, as shown in the screen shots.) Also note that flanging sounds most dramatic when preceded with a signal that has lots of distortion, like a fuzz or distorted amp.

Start by inserting Guitar Rig in the audio track you want to process. Click on the Components button, then the Categories tab, then open up the Tools section so you can select the Split module (Fig. 1).

Fig. 1

Next, drag the Split module into the rack (Fig. 2). We need two signal paths so we can delay one against the other, with the "dry" path also getting a slight amount of delay.

Fig. 2

Under Components, open the Modulation section. Drag one Chorus/Flanger between Split A and Split B, then drag another Chorus/Flanger between Split B and the Split Mix module (Fig. 3).

Fig. 3

Now adjust the Chorus/Flanger control settings for both modules (Fig. 4):

Choose Pitch Mod Mode
Set the Intensity controls up halfway
Set the Width controls up all the way
Turn Speed to minimum (0.10Hz) for one Chorus/Flanger, and the other's Speed to 0.15Hz
At the Split Mix module, set the Crossfader halfway, both pans to center, and click on the +/- button to throw one of the paths out of phase. This gives the most authentic tape flanging sound.

Fig. 4

And there you have it: light some incense, put on your love beads, and enjoy the sound of vintage tape flanging! Of course, you don't need to stop there...there are plenty of options for experimentation. Vary the Intensity controls to change the overall effect; for a less-defined, more "swimming" sound, set one or both of the Chorus/Flangers to Flanger instead of Pitch Mod. You can also get some wild psycho-acoustic effects by panning each Split oppositely in the Split Mix, although you'll lose the flanging effect.
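The same trick is easy to try outside Guitar Rig if you want to hear exactly why the dry-path delay matters. Here's a minimal Python sketch, with arbitrary starting values: the dry path gets a fixed couple of milliseconds of delay, the wet path's delay sweeps around that value, and one path is polarity-flipped, so the cancellation sweeps right through the zero-difference point just like two tape machines drifting past each other.

import numpy as np

def through_zero_flange(x, sr, center_ms=2.0, depth_ms=2.0, rate_hz=0.15, invert=True):
    """Dry path: fixed small delay. Wet path: delay swept by a slow sine LFO
    around the same value, so it passes both behind and ahead of the dry path."""
    n = np.arange(len(x), dtype=float)
    dry_delay = center_ms / 1000.0 * sr
    lfo = depth_ms / 1000.0 * sr * np.sin(2 * np.pi * rate_hz * n / sr)
    wet_delay = dry_delay + lfo                                 # sweeps from 0 up to 2 * center
    dry = np.interp(n - dry_delay, n, x, left=0.0, right=0.0)   # fractional delays via
    wet = np.interp(n - wet_delay, n, x, left=0.0, right=0.0)   # linear interpolation
    if invert:
        wet = -wet     # the out-of-phase path gives the deep through-zero cancellation
    return 0.5 * (dry + wet)

# Distorted or broadband material shows the sweep most clearly
sr = 44100
noise = 0.1 * np.random.randn(4 * sr)
flanged = through_zero_flange(noise, sr)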
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

22. Meet the ghost in your machine

By Craig Anderton

Musicians are used to an instant response: Hit a string, hit a key, strike a drum, or blow into a wind instrument, and you hear a sound. This is true even if you’re going through a string of analog processors. But if you play through a digital signal processor, like a digital multieffects, there will be a very slight delay called latency—so small that you probably won’t notice it, but it’s there. Converting an analog signal to digital takes about 600 microseconds at 44.1kHz; converting back into analog takes approximately the same amount, for a “round trip” latency of about 1.2 milliseconds. There may also be a slight delay due to processing time within the processor. Because sound travels at about 1 foot (30 cm) per millisecond, the delay of doing analog/digital/analog conversion is about the same as if you moved a little over a foot away from a speaker, which isn’t a problem.

However, with computers, there’s much more going on. In addition to converting your “analog world” signal to digital data, pieces of software called drivers have the job of taking the data generated by an analog-to-digital converter and inserting it into the computer’s data stream. Furthermore, the computer introduces delays as well. Even the most powerful processor can do only so many millions of calculations per second; when it’s busy scanning its keyboard and mouse, checking its ports, moving data in and out of RAM, sending out video data, and more, you can understand why it sometimes has a hard time keeping up.

As a result, the computer places some of the incoming audio from your guitar, voice, keyboard, or other signal source in a buffer, which is like a “savings account” for your input signal. When the computer is too busy elsewhere to deal with audio at that instant, the audio waits in the buffer until the computer can make a “withdrawal” and catch up. The larger the buffer, the less likely the computer will run out of audio data when it needs it. But a larger buffer also means that your instrument’s signal is being diverted for a longer period of time before being processed by the computer, which increases latency. When the computer goes to retrieve some audio and there’s nothing in the buffer, audio performance suffers in a variety of ways: You may hear stuttering, crackling, “dropouts” where there is no audio, or worst case, the program might crash.

The practical result of latency is that if you listen to what you’re playing after it goes through the computer, you’ll feel like you’re playing through a delay line, set for processed sound only. If the delay is under 5 ms, you probably won’t care too much. But some systems can exhibit latencies of tens or even hundreds of milliseconds, which can be extremely annoying. Because you want the best possible “feel” when playing your instrument through a computer, let’s investigate how to obtain the lowest possible latency, and what tradeoffs will allow for this.

MINIMIZING LATENCY

The first step in minimizing delay is the most expensive one: Upgrading your processor. When software synthesizers were first introduced, latencies in the hundreds of milliseconds were common. With today’s multi-core processors and a quality audio interface, it’s possible to obtain latencies well under 10 ms (and often less) at a 44.1kHz sampling rate.

The second step toward lower latency involves using the best possible drivers, as more efficient drivers reduce latency.
Steinberg devised the first low-latency driver protocol specifically for audio, called ASIO (Audio Stream Input/Output). This tied in closely with the CPU, bypassing various layers of both Mac and Windows operating systems. At that time the Mac used Sound Manager, and Windows used a variety of protocols, all of which were equally unsuited to musical needs. Audio interfaces that supported ASIO were essential for serious musical applications. Eventually Apple and Microsoft realized the importance of low latency response and introduced new protocols. Microsoft’s WDM and WASAPI in exclusive mode were far better than their previous efforts; starting with OS X, Apple gave us Core Audio, which was tied in even more closely with low-level operating system elements. Either of these protocols can perform as well as ASIO. However, for Windows, ASIO is so common and so much effort is put into developing ASIO drivers that most musicians select ASIO drivers for their interfaces.

So we should just use the lowest latency possible, yes? Well, that’s not always obtainable, because lower latencies stress out your computer more. This is why most audio interfaces give you a choice of latency settings (Fig. 1), so you can trade off between lowest latency and computer performance. Note that latency is given either in milliseconds or samples; while milliseconds is more intuitive, the reality is that you set latency based on what works best (which we’ll describe later, as well as the meaning behind the numbers). The numbers themselves aren’t that significant other than indicating “more” or “less.”

Fig. 1: Roland’s VS-700 hardware is being set to 64 samples of latency in Cakewalk Sonar.

If all your computer has to do is run something like a guitar amp simulator in stand-alone mode, then you can select really low latency. But if you’re running a complex digital audio recording program and playing back lots of tracks or using virtual software synthesizers, you may need to set the latency higher. So, taking all this into account, here are some tips on how to get the best combination of low latency and high performance.

If you have a multi-core-based computer, check whether your host recording program supports multi-core processor operation. If available, you’ll find this under preferences (newer programs are often “multiprocessor aware” so this option isn’t needed). This will increase performance and reduce latency.

With Windows, download your audio interface’s latest drivers. Check the manufacturer’s web site periodically to see if new drivers are available, but set a System Restore point before installing them—just in case the new driver has some bug or incompatibility with your system. Macs typically don’t need drivers as the audio interfaces hook directly into the Core Audio services (Fig. 2), but there may be updated “control panel” software for your interface that provides greater functionality, such as letting you choose from a wider number of sample rates.

Fig. 2: MOTU’s Digital Performer is being set up to work with a Core Audio device from Avid.

Make sure you choose the right audio driver protocol for your audio interface. For example, with Windows computers, a sound card might offer several possible driver protocols like ASIO, DirectX, MME, emulated ASIO, etc. Most audio interfaces include an ASIO driver written specifically for the audio interface, and that’s the one you want to use. Typically, it will include the manufacturer’s name.

There’s a “sweet spot” for latency.
Too high, and the system will seem unresponsive; too low, and you’ll experience performance issues. I usually err on the side of being conservative rather than pushing the computer too hard.

Avoid placing too much stress on your computer’s CPU. For example, the “track freeze” function in various recording programs lets you premix the sound of a software synthesizer to a hard disk track, which requires less power from your CPU than running the software synthesizer itself.

MEASURING LATENCY

So far, we’ve mostly talked about latency in terms of milliseconds. However, some manufacturers specify it in samples. This isn’t quite as easy to understand, but it’s not hard to translate samples to milliseconds. This involves getting into some math, so if the following makes your brain explode, just remember the #1 rule of latency: Use the lowest setting that gives reliable audio operation. In other words, if the latency is expressed in milliseconds, use the lowest setting that works. If it’s specified in samples, you still use the lowest setting that works.

Okay, on to the math. With a 44.1kHz sampling rate for digital audio (the rate used by CDs and many recording projects), there are 44,100 samples taken per second. Therefore, each sample is 1/44,100th of a second long, or about 0.023 ms. (If any math wizards happen to be reading this, the exact value is 0.022675736961451247165532879818594 ms. Now you know!) So, if an audio interface has a latency of 256 samples, at 44.1 kHz that means a delay of 256 X 0.023 ms, which is about 5.8 ms. 128 samples of delay would be about 2.9 ms.

At a sample rate of 88.2 kHz, each sample lasts half as long as a sample at 44.1 kHz, so each sample would be about 0.011 ms. Thus, a delay of 256 samples at 88.2 kHz would be around 2.9 ms. From this, it might seem that you’d want to record at higher sample rates to minimize latency, and that’s sort of true. But again, there’s a tradeoff because high sample rates stress out your computer more. So you might indeed have lower latency, but only be able to run, for example, half the number of plug-ins you normally can.
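If you'd rather not do the division in your head, the whole calculation fits in a few lines of Python. The round-trip figure here simply doubles the one-way buffer time; it ignores the converter and driver overhead discussed earlier, so treat the numbers as approximations.

def buffer_latency_ms(samples, sample_rate, round_trip=False):
    """Convert a buffer size in samples to milliseconds of latency."""
    ms = samples / sample_rate * 1000.0
    return 2.0 * ms if round_trip else ms

for rate in (44100, 88200):
    for size in (64, 128, 256):
        one_way = buffer_latency_ms(size, rate)
        both = buffer_latency_ms(size, rate, round_trip=True)
        print(f"{size:4d} samples at {rate} Hz: {one_way:4.1f} ms one-way, ~{both:4.1f} ms round trip")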
SNEAKY LATENCY ISSUES

Audio interfaces are supposed to report their latency back to the host program, so it can get a readout of the latency and compensate for this during the recording process. Think about it: If you’re playing along with drums and hear a sound 6 ms late, and then it takes 6 ms for what you play to get recorded into your computer, then what you play will be delayed by 12 ms compared to what you’re listening to. If the program knows this, it can compensate during the playback process so that overdubbed parts “line up” with the original track.

However, different interfaces have different ways to report latency. You might assume that a sound card with a latency of 5.8 milliseconds is outperforming one with a listed latency of 11.6 ms. But that’s not necessarily true, because one might list the latency a signal experiences going into the computer (“one-way latency”), while another might give the “round-trip” latency—the input and output latency. Or, it might give both readings. Furthermore, these readings are not always accurate. Some audio interfaces do not report latency accurately, and might be off by even hundreds of samples. So, understand that if an audio interface claims that its latency is lower than another model, but you sense more of a delay with the “lower latency” audio interface, it very well might not be lower.

WHAT ABOUT “DIRECT MONITORING”?

You may have heard about an audio interface feature called “direct monitoring,” which supposedly reduces latency to nothing, so what you hear as you monitor is essentially in real time. However, it does this by monitoring the signal going into the computer and letting you listen to that, essentially bypassing the computer (Fig. 3).

Fig. 3: TASCAM’s UH-7000 interface has a mixer applet with a monitor mix slider (upper right). This lets you choose whether to listen to the input, the computer output, or a combination of the two.

While that works well for many instruments, suppose you’re playing guitar through an amp simulation plug-in running on your computer. If you don’t listen to what’s coming out of your computer, you won’t hear what the amp simulator is doing. As a result, if you use an audio interface with the option to enable direct monitoring, you’ll need to decide when it’s appropriate to use it.

THE VIRTUES OF USING HEADPHONES

One tip about minimizing latency is that if you’re listening to monitor speakers and your ears are about 3 feet (1 meter) away, you’ve just added another 3 ms of latency. Monitoring through headphones will remove that latency, leaving only the latency caused by using the audio interface and computer.

MAC VS. WINDOWS

Note that there is a significant difference between current Mac and Windows machines. Core Audio is a complete audio sub-system that already includes drivers most audio interfaces can access. Therefore, as mentioned earlier, it is usually not necessary to load drivers when hooking an audio interface up to the Mac. With Windows, audio interfaces generally include custom drivers you need to install, often on a CD-ROM included with the interface. However, it’s always a good idea to check the manufacturer’s web site for updates—even if you bought a product the day it hit the stores. With driver software playing such a crucial part in performance, you want the most recent version.

With Windows, it’s also very important to follow any driver installation instructions exactly. For example, some audio interfaces require that you install the driver software first, then connect the interface to your system. Others require that you hook up the hardware first, then install the software. Pay attention to the instructions!

THE FUTURE AND THE PRESENT

Over the last 10 years or so, latency has become less and less of a problem. Today’s systems can obtain very low latency figures, and this will continue to improve. But if you experience significant latencies with a modern computer, then there’s something wrong. Check audio options, drivers, and settings for your host program until you find out what’s causing the problem.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
23. Loop abuse is not against the law, so feel free to bend, fold, staple, and mutilate

by Craig Anderton

Although much of the work involved with loops is to try to make them sound as good as possible when stretched, there’s something to be said for experimenting with twisting the loop sound beyond all recognition. Here’s a technique that works with Cakewalk Sonar and Sony Acid Pro that processes loops in totally freakazoid ways—from sci-fi to electro. It’s all based on deliberately mis-setting some of the looping parameters to create effects unobtainable by any other type of processing.

These processed loops can work very well when layered with the original loop, which should be set to normal loop settings. They also make great breakbeats when you drop out the original loop, as well as make some sounds in their own right. Start off with drum loops, but feel free to try other variations on this madness.

FREAKAZOID LOOPS IN SONY ACID PRO

Click on the loop, then choose View > Clip Properties. Under the General tab, set Pitch Shift to 24 semitones for now (you’ll want to experiment with this later). Then click on the Stretch tab, and set the parameters as shown in Fig. 1.

Fig. 1: Acid Pro's Clip Properties window is where you can find the freakazoid action.

Number of Beats = the number of beats in the loop
Stretching Method = Looping Segments
Transient Sensitivity = 0
Timing Tightness = the main rhythm for your loop; 16th notes usually works well
Stretch Spacing = 32nd notes

Click the Clip Properties window’s Play button, then try different Stretch Spacing values (this makes the biggest difference), and different pitches under the General tab. The sound becomes less interesting if you go much below 12 semitones, but there are still useful sounds at pretty much anything other than 0 transposition—especially if you slow way down, and choose a large rhythmic value for Stretch Spacing.

FREAKAZOID LOOPS IN CAKEWALK SONAR

Double-click on an audio clip to open up the Loop Construction window (Fig. 2). Turn on Looping, then specify the number of beats in the clip. The default number of beats should be correct, but edit this parameter if necessary (the most common glitch is detecting twice as many beats).

Fig. 2: Sonar's Loop Construction window allows for a variety of loop processing options.

You’ll see slice markers that indicate individual rhythmic segments overlaid on the waveform in the window. Working with this window, set the following parameter values:

Pitch = 24
Trans Detect slider = all the way to the left (0%)
Slicing slider = 32nd notes

Now click on Play or Preview to start the loop playing, and experiment with the Slicing slider. A 32nd note value gives the most robotic/metallic effect, but also try 16th, 8th, etc. Each slice setting produces a different type of freakazoid effect. A pitch parameter of +24 is a somewhat “magic” value, but +12 also produces useful effects. –12 and –24 give weirdly pitched, slowed-down effects that also sound fabulous layered with the original loop. Note that you can often simplify the loop beats by setting the Trans Detect slider to a low value, like 10%.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany).
He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
24. Make your loops more interesting with slicing, dicing, offsetting, and detuning

by Craig Anderton

Loop-based music is much more than the province of dancing and DJs. I’ve used loop music for three movie soundtracks, two industrial videos, a radio commercial, several remixes, and for rock as well as dance-oriented music. But using loops is also a friendly way to get into music, as the process is more like collage—you don’t need great technical chops to put together a satisfying musical experience. However, you do need good source material, and most importantly, the ability to move beyond the constraints associated with this method of making music. The tips presented in this article, coupled with a reasonable amount of time spent editing, will hopefully help you add a more creative, humanized element to your loop-based music.

First of all, avoid letting loops repeat ad infinitum. You need to slice, dice, and otherwise modify them to maintain interest. The following examples show screen shots from Cakewalk Sonar, but apply to other programs as well.

PERSUASIVE PERCUSSION

A lively percussion loop can complement a drum part, but the point of these instruments is to add variations. To keep the loops from getting too repetitive, cut them into pieces. Rearrange the pieces in various orders, but maintain some level of repetitiveness to “anchor” the part (for example, try always repeating the same quarter note fragment at the beginning of every measure or two). Fig. 1 shows a tambourine loop before and after slicing and dicing.

Fig. 1: The last two measures of this four-measure loop have been "sliced and diced" to create a different part compared to the original loop. A couple of the louder hits have been repeated to accent the part, and some of the smaller hits have added 32nd-note flourishes.

BREAK THAT BEAT

An important element in some dance-oriented music is the breakbeat, where the sound “thins out” dramatically just before a figure repeats. The breakbeat provides the element of tension/variation in the “tension/release” equation. For example, suppose you have a two-measure, repeating drum loop. Let it go for seven measures, then cut the 8th measure. This throws the spotlight on whatever is playing in the background, such as a bass part. Conversely, you could cut out the last measure of bass, and let the drums carry the piece by themselves; or cut both the bass and drums, and stick in a drum fill that’s different from the main drum loop. You may even be able to use the slice and dice technique mentioned previously to create a fill variation out of the drum loop. In addition to a major breakbeat effect during the eighth measure, I’ll sometimes throw in a slight variation on the fourth measure, like erasing the second half of the measure and repeating the first half in its place.

EXITING THE BREAKBEAT

As you come out of the breakbeat, adding a few kick drum hits can be a really effective lead-in to the next measure. As many dance-oriented drum loops start with a kick, if you draw in just the first sixteenth note of the loop, you’ll hear a short kick. To lead in, set snap to 16th notes, then draw two 16th notes just before the beginning of a measure. Also, try lowering the first kick’s volume a bit to provide some dynamics leading into the next kick hit. If you can’t find a loop with a suitable kick at the beginning, you can always drag in a one-shot kick drum and trim as needed (Fig. 2).
Fig. 2: The kick that leads off a loop (highlighted in yellow) has been copied twice and inserted as 16th-note lead-ins. Note the red clip gain envelopes; the two kicks get louder as they lead into the loop.

PART SPLICING

Sometimes a continuing part, like a rhythm guitar, can get really b-o-r-i-n-g as it loops and loops and.... It really helps to cut a small piece from a similar or related part, and splice it in to break up the monotony and add a useful accent. For example, if there’s a funky guitar part doing most of the work, add a wah-wah flourish from a different loop at the end of the measure.

REPAIRING PAD LOOP SEAMS

No, “pad loop” has nothing to do with Thai food. Rather, pads are sustaining sounds, like string beds or drones. Pads are difficult to loop, whether in samplers or digital audio programs, and a common fix is to add a short fade to the beginning and/or loop end so that there aren’t any clicks when the end transitions back to the beginning. Unfortunately, this causes the sound to drop out momentarily, thus negating the loop’s continuous nature. The simplest way to fix a gap is to duplicate the looped pad, and offset the copy so its peak occurs during the original loop's volume drop, thus masking it (Fig. 3). For the best masking, pan the two loops to the same point in the stereo image.

Fig. 3: Two tracks of the same pad loop (loop repeat points highlighted in yellow for clarity), offset with respect to each other to cover a loop transition.

DETUNING

Here’s another wonderful anti-boredom tool, particularly for drum loops. Suppose a drum loop plays during the intro to a verse, and during the verse itself. Detune the portion behind the verse by around a half-step or so; this adds a timbral difference that supports the change in the song from intro to verse. Detuning is also great with cymbals, as you can use it to turn one cymbal into a family of cymbals. If you define the cymbal as a one-shot instead of a loop, changing the tuning also changes the duration. In fact, you can get a gong-like effect by layering two cymbal sounds. Detune one by about a half-dozen semitones; detune the other by a much greater amount, like 20 or more semitones.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.