Everything posted by Anderton

  1. Sometimes all you need to do to fix an older piece of digital gear is to check the connectors by Craig Anderton So you decided to revisit that older piece of digital gear you had sitting around, plugged it in, and—oops, nothing! Although this could mean you’ll have to cough up the bucks to get it serviced, if it’s out of warranty anyway, there are a few tricks you might try to get it working again. First, you have to unplug the unit (seriously, some people forget to do this!) and disassemble it. You may get lucky and find a service manual online, but be careful—you want to fix any damage, not create more. Use your cell phone to take pictures of what went where so you won’t have any trouble putting everything back together again. Get a small plastic container or cup to hold any screws; if it seems you can’t get the bottom off because of a hidden screw, check to see if it’s under a sticker or label on the unit’s underside. And please—if you don’t have the right size screwdriver, get one that fits. Stripping the head of a Phillips-head screw before it’s out completely means you probably won’t be able to remove it at all. Second, very carefully unplug and re-plug any ribbon connectors, ICs, and Molex connectors. You don’t have to take the plugs all the way out; just take them out far enough so that pushing them back in will wipe the oxidation off the contacts. You may need to do this a couple of times if there’s any serious oxidation or even corrosion, and remember, some connectors may have tabs that have to be pried away from the connector in order to remove it. With socketed integrated circuits, it’s worth getting an IC puller and gently rocking the IC back and forth to move the pin contacts within the socket (they don’t need to move much). If you pull the IC out, be extremely careful when re-inserting it so you don’t bend any pins, or end up bending a pin underneath the IC. 
After doing this, reassemble the device, plug it in, cross your fingers, and prepare for the “smoke test.” These procedures aren’t always the solution, but you might be surprised at just how often you’ll end up with a working unit—and not have to send it in for servicing after all. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. By Andrew Lentz, photography by Rick Malkin (originally published in the January 2012 issue of DRUM! Magazine) Drummers might pat themselves on the back after mastering complicated licks, odd-time signatures, and multi-accessory speed fills, but technical razzle-dazzle won’t amount to a hill of beans if no one is around to hear it. That’s the first, and apparently only, lesson Patrick Carney got after being forced into the drum chair in the earliest stages of his band’s career, and it’s paid off handsomely. When Carney and guitarist Dan Auerbach began bashing out their primitive blooz-rock in an Akron, Ohio basement 15 years ago, the duo never dreamed they would one day go double platinum, have their songs featured on TV’s top-rated shows, and, in 2010 alone, snag three Grammys. “I wanted to be a guitar player,” confesses Carney in a sleepy voice from a hotel room in Seattle. Good thing he’s in the city that invented the triple latte — any drummer who starts an interview in our magazine with that sentence needs to smell the coffee. The occasion is a one-off performance celebrating the opening of a Microsoft store, and then it’s off to Europe for some promotional craziness. Carney has come a long way from the Rubber City, not to mention his teenage dreams of six-string heroism. The Black Keys’ steadily upward-sloping career trajectory is apt to spike again with the release of El Camino. Here’s the thing, though: Carney admits he still can’t play a very consistent beat. He also doesn’t care. Besides being the Keys’ seventh and best album, El Camino proves that chops not only aren’t everything, they’re oftentimes the enemy of success. So how did a rudiment-shunning scofflaw not sweat the technique? By transcending it. In case we haven’t been 100 percent clear, the nuggets of wisdom herein are part of The Black Keys’ story. By no means is Carney saying his approach is orthodox or sanctioned or anything close to proper. 
However, if it works for a rock and roll loving Everydude from a broke-down Midwest burg, maybe it can work for you, too. 1. Be Entertaining When Carney and Auerbach first started making records, Carney could barely keep time. The minimal, simple drum parts that he played were nothing more than pure imitation of his and the guitarist’s musical favorites, which ran from the cave-man art-rock of Captain Beefheart to, on an even more simplistic level, Wu-Tang Clan. “The samples that RZA would pull were so basic,” Carney says. “It’s so accessible for drummers that are just starting out.” He eventually got it together enough to be able to perform in front of an audience, the occasion of which was so momentous the date of its occurrence, March 20, 2002, has been seared into his brain. There were probably 15 people who showed up, about a third of whom were aspiring rocker friends of his. “I remember getting off stage and just expected my friends who were musicians to make fun of me. But they all said it was a good show,” he recalls, still surprised. “So I slowly started to not really worry about whether or not I was as good as another drummer, but rather worry about if I thought my parts were interesting enough. We learned through the process that technical proficiency almost never plays a role in how good something is, except for singing. I think singing is the most important. If you can’t sing you’re not going to have a band. But if you can’t play the drums that well or the guitar that well, you can still have a band.” It’s a strikingly black-and-white declaration from a guy whose whole aesthetic is more or less fudging it when you don’t know what to do. Singing drummers are often a necessary evil for two-piece bands since they must do more with less, but to this day Carney refuses to do backing vocals. “I can’t sing ... or I won’t,” he clarifies. “If I could really sing maybe I’d consider it. Also, there are very few times I want to see a drummer sing.” 2. 
Surrender To The Power If you’ve seen The Black Keys live or watched their YouTube videos, you know that Carney is the proverbial hard hitter. He wasn’t always that way, which is evident in any clip prior to 2003. The turning point came when The Black Keys were invited to London for a label showcase on the eve of the release of their second album Thickfreakness. “I was just completely terrified,” he recalls of that night at the Camden Barfly. “For some reason I decided right then and there that it would be in my best interest to hit the drums a lot harder. So that show, I just decided to do it. And then I just never was able to not do it.” Any instructor worth his 25 bucks an hour will tell you that power should not originate from nervousness. Strength should come from a relaxed and controlled place. But once Carney got over the stage fright and established a preference for the hard-hitting mode, he was able to see beyond the adrenaline. “There’s something that happens when you hit the drums harder other than, obviously, the drums are louder,” he says, pointing out that by the time you reach a certain-size venue it doesn’t matter how hard you hit the drums because the P.A. is doing most of the work anyway. “The thing that I realized that happens is you’re exerting yourself a lot more, and you start thinking about your surroundings a lot less. And because of that, for me at least, I end up thinking more about the music and the nuances within the music. It creates a much larger dynamic range.” Ah, the “D” word. Now that’s vintage Keys: rollercoastering volume, cymbal crescendos, fluctuating time, flubbed licks, imperfect but awesome accents — all the sumptuous nooks and crannies of Carney’s approach. “We’re a two-piece band,” he continues. “You’re not going to have really high ups or really low downs. It’s in that in-between then that I think you’re at an advantage. So then, once I figured out where we lived, it all kind of made sense to me the way I play.” 3. 
Use Your Muse Another epiphany for Carney came when he first viewed a concert film of Black Sabbath playing the Olympia Theater in Paris in 1970 and watched Bill Ward work his magic on that 4-piece Ludwig. “If somebody could’ve shown me that when I was 15, I would’ve thought a lot differently about the drums,” he says. “I would’ve given up on the guitar a lot sooner. You see certain people play and I think it puts the drums in a whole different light. It isn’t about keeping time and it isn’t about just being there for necessity. It’s part of the band. It can be a lead instrument.” The Keys’ onstage dynamic — a musical interlocking of horns — is a dirty-funk telepathy. So it’s surprising to learn that despite the duo sharing a few favorite bands, Auerbach didn’t know Led Zeppelin or any of the British blues that Carney had dug since his teens. Both dudes’ listening habits have broadened in their years as a band and that is reflected in the music. Take the El Camino track “Sister,” which has a driving beat so clean and wide you could drive a truck through it. “Every time I hear it I think of ‘Billie Jean,’” he says. The beat Ndugu Chancler supplied on Michael Jackson’s massive hit will always be cool, but Carney admits that even “Eye Of The Tiger” by one-hit-wonder Survivor was a kind of inspiration. “My favorite thing lately is, like, kind of rock revivalism in the ’70s, which was basically what glam-rock was,” he continues, naming T. Rex, The Stooges, and Bowie. “That’s basically ’50s rock with a much more prominent beat.” Buddy Holly glasses and Ludwig reissue kit notwithstanding, Carney’s approach is not about being retro. It’s about cherry-picking from the best grooves of the last 60 years and discarding the rest. “Like prog and metal drummers,” he says. “I don’t listen to any of that music. I don’t care about how fast anyone can hit a kick drum or any of that crap. 
In my mind there are one or two drummers that are absolutely amazing and it’s, like, Bill Ward and John Bonham. After that there’s no point to try to be the best, because the best has already been done. It’s just a matter of trying to be as interesting as you can without getting in the way of the song.” 4. Let The Music Be Your Guide Just when you thought the bare-bones Carney couldn’t get any barer, he has stripped down his approach to near-naked levels these days. His seat-of-the-pants, rough-hewn, sloppily coloring-outside-the-lines style was a work-in-progress that climaxed in 2010. “Brothers was me being able to unload whatever I wanted to on the record, always kind of going for a minimal-but-also-funky, I guess, type of style. Something that felt loose but tight. And this new record, it was the complete opposite. It’s much more straight up and down. “In a lot of ways, I think that this record has some of the most simplistic beats we’ve ever used. So it was like a challenge for me to play as simply as possible but to try to make it interesting. And make it something that’s enjoyable as a drummer. Because, you know, lots of drummers want to really play whenever they get a chance.” Now that he had made the decision to play straighter and cleaner than on previous records, his natural pace and sense of swing didn’t jibe with the new songs in their early stages. As a matter of fact, on El Camino’s demos, the drums sounded like they were played in half time, so Carney had to subdivide and play faster to accommodate the new vibe. “My natural tempo, like, when I want to record drums, it’ll lie between 85 beats per minute and 110, you know? But usually 92 is kind of where I like to play. And most of the stuff is around 125–130. We wanted it to be faster paced. You know, more of a rock album. Something less moody. On Brothers I think there’s a lot of that kind of slow dramatic kind of stuff. 
But we wanted it to move right along and be the kind of record that you can put on at a party.” 5. Don’t Let The Drums Play You When The Black Keys first started, Carney was just using a hi-hat, snare, kick, and a crash. In 2002 he finally added a floor tom while recording debut The Big Come Up. On that album there were two cover songs — blues standard “Leavin’ Trunk” and Junior Kimbrough’s “Do The Rump” — that he wanted to approach using the floor tom like hats. Problem was, using his right hand on the floor tom open-handed didn’t feel right. “Trying to do eighth-notes and different patterns was really difficult, so I moved it on the other side,” he explains. “Then it felt more like a hi-hat than a floor tom.” After a couple of months of playing that way, the left hand effectively became the floor-tom hand and the catalyst for enticing beat novelty. “It’s a completely different feel because my right hand is more connected to my right foot I guess, so if I played the floor with my right hand it will be more on top of the beat.” This way of thinking about drums and switching hands can really mix up the feels. On “Howlin’ For You” he crosses over with his right hand to play the left floor tom like a hi-hat. On “No Trust” from Thickfreakness, however, he’s open-handed, so that the pulse is more or less eighth-notes divided between the left hand on the left floor and the right hand whacking the snare hoop (but the snare proper on the 2). “It’s more off,” he says. “More along with the snare than it is with the kick, so it’s a weirder pattern.” The larger point is that too few drums have never been an obstacle to tasty beats. Matter of fact, Carney didn’t start using a mounted tom until 2004, after the Keys put out their third record Rubber Factory. It’s been an excess of drums ever since. “About four years ago I bought one of the Ludwig ‘Bonham’ reissue kits, so I had two floor toms, so I just put the 18" on my right-hand side just because I had it. 
I actually use it very, very little. I only play it on three or four songs, and just for brief moments.” Note that stylistic evolution is possible despite — pffft, because of — having fewer drums and cymbals in one’s face. Don’t be surprised if in the future Carney pares down the setup even more. “There’s almost no rack tom on the whole record. The only song I think I really used the rack on is a couple fills on ‘Dead And Gone’ and ‘Little Black Submarine,’ and maybe ‘Mind Eraser.’ But for the most part I’m only really using hi-hat, kick, snare, crash, and floor tom.” 6. Learn To Unlearn This type of negative capability is usually prescribed for players whose creativity has been stifled by chronic sight reading and institutional dogma. But it also applies to a blood ‘n’ guts player such as Carney, who became more solid by letting go of the compulsion to overcompensate for a lack of chops — a tendency exacerbated by the fact that the Keys are a two-piece and he is responsible for filling 50 percent of the songs’ space without the benefit of backing vocals. “It’s just ten years of playing and finally getting comfortable enough and confident enough to play something that isn’t trying to be flashy or trying to be different or all dramatic.” The hopped-up vibe of El Camino called for mostly clean, straightforward beats. Problem was straight eighth-notes were counterintuitive for a player who thrived on spontaneity and emotion. “When I sit down with a drum set and want to just play just to play, it’s similar to something like “Go Getter.” That’s kind of more where I play from, stumbl-y and stutter-y. Like a s__tty version of ’70s African funk. That’s what I play like when I’m just playing, not recording or in the band. So then it just so happens that when we made this record, it’s like the first time I actually was capable of playing drums like a traditional rock band would have.” The uptempo propulsion of “Sister” is a perfect example of this cut-and-dry approach. 
“That’s a really, really basic drum beat,” he says. “I mean, it’s probably appeared in, like, 25,000 songs. Doing something like that, you try to make it your own if you can, maybe move into the bridge and into the chorus in a way that breaks up the familiarity of the beat.” Most of El Camino’s beats are immediate and concrete, whether it’s the no-fuss chop of “Mind Eraser,” the boogaloo of “Stop Stop,” or the percolating train on “Gold On The Ceiling.” In many ways the increasingly spare yet rich and full-sounding style of the drumming has been a process of subtraction. In The Black Keys’ earlier days the slower, bluesier stuff was something Carney approached with his gut, not worrying about whether it was correct or not. “Lonely Boy”’s straight shuffle, which Carney admits he could never truly play before, demonstrates his joy at a rediscovery of the essentials. “We have, like, 110 songs or something since we’ve been a band and I’ve never used that beat before. And it’s one of the easiest beats to play. I think that for a lot of things we are almost like a band in reverse.” 7. DIY Till You Die We asked the self-taught Carney whether there was anything about his approach that he would like to improve. He mentions a few types of fills and time signatures, and when pressed, that he would not be opposed to taking a lesson. But you can tell he’s not serious and that he’ll do no such thing. “I’m kind of stubborn and have this attention-deficit problem to the point where I find that I learn the best if I just kind of figure it out on my own, you know? Listen to what other people are doing and pay attention to that and just kind of try to break the code myself. I have to learn hands-on.” That philosophy is applied broadly to The Black Keys’ career: deciding what label to sign with, when to put out records, how to record, and so on. 
“That’s the most fun being in a band, actually, is when you are completely in control of everything as far as the creative side and the decision making and stuff,” he says. “It’s kind of a trip.” The refreshing thing about speaking with Carney is that you get straight talk instead of glib statements about practicing six hours a day and committing Stick Control to memory. “I’m not a very good drummer,” he says as we wrap up our hour together. “But that’s the thing: Kids that were in my high school jazz band were focused on being good. They weren’t focused on being creative, and none of those guys ever became musicians. And then all of my friends who are musicians, they were never focused on being the best.” The Keys’ drummer never claimed to be a role model and never will. But something about the institutionalization of the rhythmic urge and drum pedagogy in general rubs him the wrong way. “Sometimes when you read magazines like Guitar Player or whatever and a lot of the musicians are focused on being the best and going to the Berklee School Of Music and doing all these things, that’s one route,” he says. “But the teach-yourself, have-fun route is also a viable route. I recommend that route.”
  3. Turn your bass into its own "power trio" $430.69 MSRP, $279.95 "street," www.fishman.com by Craig Anderton So you play bass in a power trio. The drummer is flailing away, the guitar player is windmilling power chords, the vocal harmonies sound amazing, and the crowd is going nuts. Then the guitar player starts soloing. Uh-oh... What happened to that giant wall of sound? If this sounds familiar, let's talk about the Fishman Fission Bass. It's designed to transfer that wall of sound over to you, so your bass can roar with harmonies and fill in the hole that appeared when the guitarist switched from rhythm to lead. Sure, it can do more than that; any time bass needs a power trip, Fission Bass is there. But it was made for a specific purpose, and as a result, hits that target like a laser. HARMONY CENTRAL I couldn't resist the reference, because it makes sense: Fission Bass's heart is harmony generation, as called up by three footswitches. The main one switches between effect off and one octave higher. The second adds in a parallel harmony a fourth below the octave, while the third switch adds another parallel harmony a fifth above the octave. Pressing both the second and third switches simultaneously creates a power chord—octave higher, with fourth and fifth above that—just like the one the guitarist was playing before the solo started, and with a touch of formant processing to sound more guitar-like. You can do much more than switch harmonies in and out. There are four controls: Overdrive (for harmonies only; it doesn't affect the straight bass sound), a Tone control that pulls back highs, Noise Gate, and Effect Level control. The latter works with the two 1/4” output jacks—Mix Output blends the dry bass (going through a buffered, analog path to retain the bass tone) with the processed sound, whose level is set by the Effect Level control. Effect Output has no dry sound, but processed sound only—with the Effect Level control again setting level. 
(By the way, special props for the stomp-friendly knobs; they're metal, low-profile, and easy to adjust, but hard to move accidentally.) You'll also find a Trim control for matching levels to your bass, as well as LED indicators for harmony status, signal present, and a dual-purpose clip/low battery warning. Fission Bass can run off a 9V battery (housed in a quick-release, recessed battery compartment), but I measured a current drain of 27mA—pretty hefty, so you'll probably want to buy the optional “global” (100-240V/ 50/60Hz) AC adapter. ON THE RIGHT TRACK(ING) The concept is solid; in “power chord” position, it really does sound like there's some type of synchronized guitar playing along, and the Overdrive adds a wonderful growl. Had this been around during the prime of Emerson, Lake, and Palmer, I bet they would have used it in their live act . . . and been grateful for its existence. Anyway, the most important hands-on aspects relate to tracking and tone. Sound quality when transposing up is always difficult to pull off, but Fission Bass acquits itself well. There's a little bit of warble on the octave higher note, but you have to listen for it. The fourth and fifth work really well. Although pitch shifters generally prefer single-note lines, I was pleasantly surprised that Fission Bass didn't get too anti-social if you accidentally hit more than one note at a time, and actually handled fifths fairly elegantly. As to tracking, there's no problem with slides, bends, and slaps, which to me is crucial. You'll hear a very slight “scoop” as the unit homes in on the pitch tracking, but I wonder if perhaps this was done on purpose, as it helps differentiate the harmony tone from the bass just a little bit more. And, don't overlook the Effect Output jack's value. 
For the stage, run it through a guitar amp for the power chord thang, and in the studio, it's just begging to go through parallel amp sims—I split the signal into two chains, one with a guitar amp, the other with a 4 x 10 bass cab emulation. It rocked. CONCLUSIONS I like effects that go boldly where no one has gone before, but I really like effects that are especially clever, and succeed at what they do. Fission Bass definitely delivers. Within 10 seconds of plugging it in, I had the fourth churning away, and a strange urge to play Swedish death metal. But that faded away as I checked out the power chord option, and started going down the power trio road—sparse notes, serious sustain, and big, muscular sounds. This effect isn't for everyone, but I think everyone it's intended for will totally dig it. The Overdrive and Tone controls are a great addition, as you can get more sounds out of the box than you might think at first. Fishman gets an “A” for effort, but they also get an “A” for execution.
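The harmony scheme the review describes (an octave up, plus a harmony a fourth below that octave and another a fifth above it) maps onto simple equal-tempered pitch-shift ratios. Here's a minimal Python sketch of that math; the function and the example frequencies are ours for illustration only, and say nothing about Fishman's actual DSP:

```python
# Equal-tempered pitch-shift ratios for the intervals described above.
# "A fourth below the octave" is +12 - 5 = +7 semitones; "a fifth above
# the octave" is +12 + 7 = +19 semitones. Illustrative only.

def semitones_to_ratio(semitones: float) -> float:
    """Frequency ratio for a shift of the given number of semitones."""
    return 2.0 ** (semitones / 12.0)

HARMONIES = {
    "octave up": 12,           # main footswitch
    "fourth below octave": 7,  # second footswitch
    "fifth above octave": 19,  # third footswitch
}

if __name__ == "__main__":
    root = 110.0  # open A string, in Hz
    for name, semis in HARMONIES.items():
        print(f"{name:>20}: {root * semitones_to_ratio(semis):7.2f} Hz")
```

Engaging all three at once stacks these ratios on the dry note, which is why the result lands on the root-fourth-fifth voicing of a guitar power chord.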
  4. Control Rack Parameters Onscreen, or with External Control Surfaces by Craig Anderton One of the great things about analog hardware is that controls are obvious: If there’s a function, there’s a physical control to go along with it. And if you want to alter that function, you just move the control. What could be simpler? Well, I can definitely tell you what’s more complex—software where all the controls are virtual, especially when you want to adjust one control out of a sea of parameters, or want to change multiple parameters with just one simple control. Fortunately, the same digital technology that gave us all those confusing parameters has, in some cases, also given us the means to make those parameters less confusing. With Ableton Live, you can isolate the parameters you use the most in a typical rack, then bring them out to a consolidated set of Macro Controls. In case you’re not familiar with what Live calls “racks,” the program lets you create “virtual racks” of audio processors (you can also create Drum, MIDI, and Instrument racks). As these can be quite complex, even encompassing parallel effects chains, it’s easy to get “lost in the parameters,” which is why being able to create Macro Controls for real-time tweaking or MIDI control is so helpful. Macros can also control multiple parameters at once, so for example, you could assign EQ to pull back on the high frequencies a little bit while increasing the amount of overdrive. Providing real-time control over specific parameters allows for greater expressiveness, but also, parameters mapped to Macros can cover a specific minimum and maximum range. This is very useful if you use a device like a footpedal for control, as you can limit its physical range to the parameter range you want to control. Assuming you already have a rack set up, let’s go through the steps required to create Macro Controls. 1. Click on the rack’s Show/Hide Macro Controls button. 2. 
When the Macro section appears, click on the Map Mode button. 3. After clicking on Map Mode, any parameter that can be assigned to a Macro Control will be highlighted. Click on an effect parameter that you want to assign to a Macro Control (in this screen shot, it’s Drive). Note that the selected parameter’s highlight will have small square brackets in the corners to indicate it has been mapped. Even better, you can assign multiple parameters to a Macro Control. After assigning a parameter, click on the next parameter, then click on the same Macro Control’s Map button. 4. Click on a Macro Control’s Map button. The parameter selected in Step 3 is now linked to the Macro Control. 5. When in Map Mode, a list of Macro Mappings is visible. You can set a parameter’s minimum and maximum range as desired. 6. Click on the Map Mode button again to exit map mode. The Macro Control defaults to showing the target parameter’s name, and the target parameter has a small green dot in the upper left to indicate that it’s linked to a Macro Control. 7. To control a Macro Control with an external MIDI control surface, click on the MIDI Map Mode button in the upper right. MIDI-assignable parameters will be highlighted. 8. Click on the Macro Control you want to link to MIDI, then move the physical hardware control you want to assign. As soon as you exit MIDI Map Mode, the hardware control will change the Macro Control’s setting. Again as with step 3, note that the selected parameter’s highlight will have small square brackets in the corners.
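The min/max range mapping in step 5, driven from a hardware controller as in steps 7 and 8, boils down to scaling a 7-bit MIDI CC value into the parameter range you chose. Here's a small Python sketch of that idea; the function name and the Drive percentages are hypothetical, and this mirrors the observable behavior rather than Live's internal code:

```python
# Sketch of the range mapping described in step 5: an incoming MIDI CC
# value (always 0-127) is scaled linearly into the min/max range set
# for the mapped parameter. Names and ranges here are hypothetical.

def cc_to_param(cc_value: int, param_min: float, param_max: float) -> float:
    """Linearly map a 7-bit MIDI CC value onto a parameter range."""
    if not 0 <= cc_value <= 127:
        raise ValueError("MIDI CC values must be 0-127")
    return param_min + (cc_value / 127.0) * (param_max - param_min)

# e.g. a footpedal limited to sweep a Drive knob between 20% and 60%:
print(cc_to_param(0, 20.0, 60.0))    # heel down -> 20.0
print(cc_to_param(127, 20.0, 60.0))  # toe down  -> 60.0
```

This is also why limiting the mapped range works so well with a footpedal: the pedal's full physical travel ends up covering only the slice of the parameter you actually want to control.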
  5. If you want a song mastered correctly, first make sure it's mixed correctly By Craig Anderton If there’s one thing in the world a mastering engineer doesn’t want to see, it’s a mix that looks more like a sausage than audio (Fig. 1). Fig. 1: Mixes like this are no fun to master, and they can’t be mastered to reach their full potential. This is usually due to someone who straps a maximizer-type dynamics processor across the master mix bus for a “loud” sound, without realizing that it ties the hands of the mastering engineer (who likely has better tools for making audio loud anyway). However, lately I’ve been getting something more disturbing: mixes that look a lot like Fig. 1, but upon closer examination, have clipping issues. Fig. 2: The peaks circled in red are clipped. In Fig. 2, you can see that the waveforms are “flat-topped,” which causes clipping distortion. One reason this happens is because the mix engineer doesn’t realize that with digital, “going into the red” almost invariably generates clipping, so they don’t get too bothered when the overload light goes on. But another issue is personal taste: Some people like the sound of digital distortion, and figure a little clipping won’t hurt. I don’t agree, but hey, there’s no accounting for taste. In either case when this file goes to the mastering engineer, remember that mastering puts a sort of magnifying glass up to the audio. Once digital distortion is “baked into” a mix, there’s almost nothing the mastering engineer can do to remove it. The end result is a sort of fuzzy, harsh quality that robs definition and causes ear fatigue. If you really want digital distortion, it’s better to do a mix without it, then tell the mastering engineer that you’d like the sound pushed somewhat into distortion. The mastering engineer will most likely hold his or her nose, but “the customer is always right,” and they’ll do their best to give you what you want. Fig. 
3: The peaks circled in red have been severely compressed, but still sound better than being clipped. Fig. 3 also shows a mix that has virtually no headroom, but it’s due to excessive amounts of compression and limiting, not clipping. The waveform peaks aren’t flat-topped, but simply reduced in level. Although this file is still far from ideal from a mastering standpoint, and won’t let mastering reach its full potential, at least it’s better than clipping distortion. LET THE MASTERING ENGINEER MASTER! The solution is simple: Don’t use any processors across the stereo bus (like maximizers, compressors, EQ, and the like—remember, the mastering engineer probably has better tools than you do for the task). Having effects on individual tracks is fine, of course; you just don’t want to apply effects to the entire mix. Also, set levels so there’s plenty of headroom when mixing—I generally don’t let peaks go much above –10 to –6dB max in my mixes. Mastering can always make it loud, but it can’t get rid of distortion or most effects that you added to a mix. Some engineers balk at leaving this much headroom, because they like to push the output, and say that a squashed effect is part of their “sound.” In that case, sure, throw a maximizer across the stereo out and mix away—but bypass it before exporting the final mix. Then include a note to the mastering engineer saying you want a really loud, squashed mix; they’ll add the maximizer during the mastering stage, which will lead to better results. If it doesn’t, then it’s easy to just take the mix and add more or less dynamics processing. If that dynamics processing is already part of the mix, there’s nothing that can be done to “undo” it. When you’re mixing, get the mix and the balance right—but don’t mix too hot and introduce distortion, or you’ll just end up with a frustrated mastering engineer. What’s worse, the recording will never be able to reach its full sonic potential. 
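To make the "flat-topped" idea concrete, here's a small Python sketch that flags runs of consecutive full-scale samples (a common rough test for clipping) and reports a mix's peak level in dBFS, the figure the –10 to –6dB headroom advice above refers to. The threshold and run length are our own assumptions for illustration, not an industry standard:

```python
import math

# Rough clipping check: flat-topped peaks show up as runs of consecutive
# samples pinned at (or very near) digital full scale. Threshold and
# minimum run length below are illustrative assumptions.

def count_clipped_runs(samples, threshold=0.999, min_run=3):
    """Count runs of >= min_run consecutive samples at/above threshold."""
    runs, current = 0, 0
    for s in samples:
        if abs(s) >= threshold:
            current += 1
        else:
            if current >= min_run:
                runs += 1
            current = 0
    if current >= min_run:
        runs += 1
    return runs

def peak_dbfs(samples):
    """Peak level of a normalized (-1.0 to 1.0) signal, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

clean = [0.5, 0.45, -0.4, 0.3, -0.48, 0.2]  # healthy headroom
clipped = [0.5, 1.0, 1.0, 1.0, 1.0, -0.3]   # four samples at full scale

print(count_clipped_runs(clean))    # 0
print(count_clipped_runs(clipped))  # 1
print(round(peak_dbfs(clean), 1))   # -6.0
```

A severely limited mix like Fig. 3 would show a peak near 0 dBFS but few or no pinned runs, while a clipped mix like Fig. 2 shows both.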
Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
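The flat-topped waveforms described in the mastering article above can even be spotted programmatically. As a rough illustration (the function name and thresholds here are my own, not from any particular metering tool), counting runs of consecutive samples stuck at or near full scale flags hard clipping:

```python
def count_clipped_runs(samples, threshold=0.999, min_run=3):
    """Count runs of min_run or more consecutive samples whose absolute
    value sits at or above threshold -- a telltale sign of flat-topping."""
    runs = 0
    run_len = 0
    for s in samples:
        if abs(s) >= threshold:
            run_len += 1
        else:
            if run_len >= min_run:
                runs += 1
            run_len = 0
    if run_len >= min_run:
        runs += 1
    return runs

# A rounded sine peak vs. a hard-clipped (flat-topped) one
clean = [0.7, 0.9, 0.98, 0.9, 0.7]
clipped = [0.7, 1.0, 1.0, 1.0, 0.7]
print(count_clipped_runs(clean))    # 0
print(count_clipped_runs(clipped))  # 1
```

Real meters use more sophisticated detection, but even this crude check shows why clipping is easy to catch before a mix goes out for mastering.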
  6. Ready to record electric guitar? Before you put your pick on a string, make sure these basics are in place. By Craig Anderton Recording electric guitar is simple, right? You just plug the guitar into an interface’s guitar input, go through a DI box into a mixer, or stick a mic in front of an amp. But there are a few other basics you need to consider first. THE INSTRUMENT The type of guitar, choice of pickups, string material, and tone control settings make a huge difference in the overall recorded sound. Get as close as you can to the sound you want by working with these options first, then start experimenting with the mic and amp. Check intonation prior to the session (and whenever you change a string), and check tuning constantly: It’s virtually impossible to fix an improperly tuned or intonated guitar in the mix. MINIMIZING EMI Power supplies, transformers, dimmers, and other sources of EMI (electromagnetic interference) can get into your guitar’s pickups. Turn all dimmers full on or full off. Turn off any gear that isn’t being used. Experiment with the guitar’s orientation with respect to other gear, and choose the position that gives minimum interference. MICROPHONE TYPE Dynamic mics (like the Shure SM57) are the “old standby,” as they can handle high sound pressure levels and have a naturally “warm” tone that complements amps. Large-diaphragm condenser mics are also popular, typically with any pad switch engaged, because they handle extremely high SPLs less gracefully than dynamics do. They also tend to deliver brighter highs and deeper lows. Fig. 1: A Royer ribbon mic close-miking an amp (photo courtesy Royer Labs) Newer ribbon mics (Fig. 1 shows a Royer ribbon mic placed close to a cabinet for more bass buildup) are gaining popularity for miking amps, because they’re not as fragile as older ribbon types. They tend to pick up more room ambience, but remember that excessive levels can damage older ribbon mic elements. 
MIC POSITION The relationship of the mic to the amp speaker has a major effect on the sound. Pointing the mic directly at the speaker gives more highs than angling the mic, as a mic’s off-axis response tends to pick up fewer high frequencies. However, where you point the mic also matters; for example, pointing toward the outside of the speaker may give a “tighter” sound than pointing at the center. Also, if a cabinet has multiple speakers, try each one—not all speakers, even ones from the same production run, are identical. The distance from the amp also makes a difference. Placing the mic farther away from the speaker picks up more room sound and ambience. For best results, listen in the control room while someone else adjusts the mic; or, adjust the mic yourself while saying what you’re doing (“Mic pointing at cone, 2 inches away”). Listen back to hear which position sounds best, then re-create the setting you described. MIC LOW-CUT FILTERS Engaging a mic’s low-cut (highpass) filter, if present, can “tighten up” the sound, as it removes frequencies below the range of the guitar. This reduces hum and room rumble but doesn’t alter the guitar tone. RECORDING DIRECT To preserve your guitar’s high-frequency response and output level, record into an input with a high impedance (at least 100, and preferably 220, kilohms). Many audio interfaces have an “instrument input” for this purpose; Fig. 2 shows the two front-panel guitar inputs for Avid's Mbox (3rd generation). Standard passive direct boxes may not be suitable. If you’re using stomp boxes or other effects prior to your audio interface, the interface’s input impedance is not an issue: Impedance matters only for the first device “seen” by the guitar. COMBINING DIRECT AND MIKED SOUNDS The sound coming from the mic will be delayed compared to the direct sound (approximately 1 millisecond of delay for each foot of distance between the mic and speaker). 
Combining the direct and miked sounds may sound “thin” due to the comb filtering caused by this time difference. In a DAW, temporarily pan both signals to center and nudge the miked sound forward (earlier) in time to compensate; compare the mix of the two sound sources until you achieve the best tone. MULTIPLE CABINETS It’s common to set up two (or more) cabinets and mic both. Variations in the cabinets and miking create a convincing stereo spread when one cabinet is panned more toward the left and the other panned more toward the right. MULTIPLE MICS There are two main applications for multiple mics. One is to have a mic (or mics) close to the amp, and one or more mics in the room to pick up ambience and reflections. The other is to place two mics on a single amp and vary the blend between them to create a particular sound. In the latter situation, it’s common to use very different mics (for example, a dynamic and a condenser). During mixdown these can be set to different levels, have one thrown out of phase compared to the other to give “pseudo-EQ” effects, and possibly even have different processing, to create a sound that’s very different compared to using a single mic. BIG AMPS VS. LITTLE AMPS Using a big amp and cranking it to get your “sound” will also put lots of reflections into the room where you’re recording, and when these are picked up, they may give an overly diffuse effect. You may obtain better results by cranking a small amp, which may sound the same to the mic but not be as loud. This may also be a necessary solution if you’re recording in a location where noise can bother others. GETTING SUSTAIN WITH RE-AMPING Re-amping involves recording a dry guitar sound to a DAW, then running it through an amp or amp simulator plug-in on mixdown to allow choosing the perfect tone for the track. However, it also makes sense to play through an amplifier and record both the amp and dry guitar sounds. 
The amplifier can give more sustain, or even controlled feedback; and if you like the sound, you can always use it and not have to bother with re-amping. An added bonus is that the amp sound will often make a good complement to the re-amped sound, and facilitate creating a stereo image. RECORDING WITH THE LINE 6 VARIAX The Variax is great for recording, as it can provide so many different types of sounds. But pay attention to the amp it’s going through: For example, the Les Paul model gives a more “iconic” sound through a Marshall stack than through a Fender Champ. Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine.
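The “1 millisecond of delay per foot” rule of thumb from the section on combining direct and miked sounds translates directly into the number of samples to nudge the miked track. A minimal sketch, assuming a speed of sound of roughly 1130 feet per second at room temperature (the helper name is illustrative, not from any DAW):

```python
def mic_delay_samples(distance_ft, sample_rate=44100, speed_ft_per_s=1130.0):
    """Samples to nudge the miked track earlier so it lines up with
    the direct (DI) track; roughly 1 ms of delay per foot of distance."""
    delay_seconds = distance_ft / speed_ft_per_s
    return round(delay_seconds * sample_rate)

print(mic_delay_samples(1))  # 39 samples, about 0.9 ms
print(mic_delay_samples(3))  # 117 samples
```

Even a one-foot miking distance is enough comb filtering to thin out the combined sound, which is why the nudge is worth the trouble.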
  7. Go ahead—shake the floor (and the rafters)! by Craig Anderton There’s a saying that “You can never be too thin or too rich.” That may work for models, but it’s only half-right for keyboard synth bass: Rich is good, but thin isn’t. Looking for a truly corpulent bass sound that’s designed to dominate your mix? These techniques will take you there. LAYERS GOOD, LAYERS BAD A common approach to crafting a bigger sound is to layer slightly detuned oscillators. However, that can actually create a thinner sound, because slight detunings cause not only volume peaks but also volume valleys. This not only diffuses the sound, but often makes it hard for a bass to sit solidly in the mix because of the constant sonic shapeshifting. Following are some layering approaches that do work. Three oscillators with two pitch detunings. Pan your main oscillator to center, and set it to be the loudest of the three by a few dB. Pan the other two oscillators somewhat left and right of center, and detune both of them four cents sharp. Yes, this will skew the overall sound a tiny bit sharp; think of it as the synth bass equivalent of “stretch” tuning. Dual oscillators with detuning. You can get away with detuning more easily if there are only two oscillators, as the volume peaks and valleys are more predictable. Pan the two oscillators slightly left and right of center, set them to approximately the same level, tune one oscillator four cents sharp, and tune the other four cents flat. If that’s still too diffuse, pan them both to center, tune one to pitch, detune the other one eight cents sharp, and reduce the level of the detuned oscillator by 3 to 6dB. Three oscillators with multiple detunings. If you must shift one oscillator sharp and one flat in a three-oscillator setup, consider mixing the two shifted oscillators somewhat lower (e.g., -3 to -6dB) than the on-pitch oscillator panned to center. This will still give an animated sound, but reduce any diffusion. 
Three oscillators with layered octaves. This is one of the most common Minimoog bass patches (Fig. 1), and yes, it sounds very big. Adding a slight amount of detuning to the lowest and highest oscillators thickens the sound even more, as this simulates the drift of a typical analog synthesizer. Fig. 1: This Arturia Minimoog V shot shows an archetypal Minimoog patch, with three oscillators set an octave apart via the Range controls. Note that the lowest and highest oscillators are tuned a bit off-pitch to add more sonic animation. Two oscillators with layered octaves. While this doesn’t sound quite as huge as three oscillators with layered octaves, removing the third oscillator creates a tighter, more “compact” sound that will cede some low-end territory to other instruments (e.g., kick drum). Sub-bass layer. Drum ’n’ bass fans, this one’s for you! Layer a triangle wave one octave below any other waveforms you’re using. (You can also try a sine wave, but at that low a frequency, a little harmonic content helps the bass cut through a mix better.) For a really low bass end, layer three triangle waves with two tuned to the same octave (offset one by +10 cents), and the third tuned one octave lower and offset by -10 cents. Sub-bass patches also are excellent candidates for added “punch,” which provides the perfect segue to . . . PUNCH! There are two main ways to add punch to a synth sound. Percussive punch. This requires adding a rapid amplitude decay from maximum level to about 66% of maximum level over a period of 20-25ms (Fig. 2, top). Fig. 2: The upper envelope generator picture from Cakewalk’s Rapture shows a quick percussive decay that adds punch. The lower envelope setting achieves a more sustained punch effect by kicking the envelope full on for a couple dozen milliseconds. To emphasize the percussiveness even further, if a lowpass filter is in play, give its cutoff a similarly rapid decay. 
However, for the filter, bring the envelope down from maximum to about 50% of maximum over about 20-25ms. Sustained punch. This emulates the characteristics of the Minimoog’s famous “punchy” sound. (Interestingly, the amplitude envelopes in Peavey’s DPM-3 produced the same kind of punch; after I described why this phenomenon occurred in Keyboard magazine, Kurzweil added a “punch” switch to their keyboards to create this effect.) Sustained punch is simple to create with most envelope generators: Program an amplitude envelope curve that stays at maximum for about 20-25ms (Fig. 2, bottom). This is too short for your ear to perceive as a “sustained sound,” but instead comes across as “punch.” PSEUDO-HARD SYNC If your soft synth doesn’t do hard sync, there’s a nifty trick that gives a very similar sound—providing you can add distortion following the filter section. Fig. 3 shows Rapture set up for a hard sync sound on one of its elements. Fig. 3: Feeding a lowpass filter with a reasonable amount of resonance through distortion can create a sound that resembles hard sync. Note the setting of the Cutoff, Reso(nance), and BitRed controls (Bit Reduction is set for a Tube distortion effect, as shown in the label above the control). The envelope shown toward the bottom sweeps the filter cutoff from high to low. As it sweeps, the filter’s resonant frequency distorts, producing a hard-sync-like sound. The crucial parameter here is resonance; too little and the effect disappears, too much and the effect becomes overbearing . . . not that there’s necessarily anything wrong with that . . . CHOOSING THE RIGHT WAVEFORM For most big synth bass sounds, a sawtooth wave passed through low-pass filtering (to tame excessive brightness) is the waveform of choice. As a bonus, if you kick the lowpass filter up a tad more, it brings in higher harmonics that add a “brash” quality. For a “rounder” sound that’s more P-Bass than Synth Bass, try a pulse waveform instead (Fig. 4). Fig. 
4: Remember Native Instruments’ Pro-53? It's one of many soft synths that provides pulse waveforms. Better yet, a pulse width control determines whether the pulse is narrow or wide. I prefer narrow pulses (around 10-15% duty cycle), but wider pulse widths can also be effective. The same layering techniques mentioned earlier work well with pulse waves, but also experiment with layering a combination of pulse and sawtooth waves. This produces a timbre somewhere between “tough” and “round.” Triangle and sine waves have a hard time cutting through a mix because they contain so few harmonics. If you want a very muted bass sound, use a waveform with more harmonics, like sawtooth or pulse, then close a lowpass filter way down to reduce the harmonic content. This provides a rougher, grittier sound due to the residual harmonics that remain despite the filtering. However, while triangle waves aren’t necessarily great solo performers, they’re excellent for layering with pulse and sawtooth waveforms to provide more low-end girth. THE ALL-IMPORTANT MOD WHEEL Just because you’re playing in the lower registers doesn’t let you off the hook for adding as much expressiveness as possible. Some programmers get lazy and do the default move of programming the mod wheel to add vibrato, but that’s of limited use with bass. If you want vibrato, tie it to aftertouch, and reserve the mod wheel for parameters where you need more control over the sound. Creative use of modulation could take up an article in itself, but these quick tips on useful modulation targets will help you get started. Filter cutoff. This lets you control the timbre easily. If the filter is being modulated by an envelope, assigning the mod wheel to filter cutoff (Fig. 5) can also create a more percussive effect when you lower the cutoff frequency. Try negative modulation, so that rotating the mod wheel forward reduces highs. Fig. 
5: Cubase’s Monologue synth makes it easy to assign the mod wheel to filter cutoff as one of the filter modulation sources (circled in red). Volume envelope attack. To transform a sound’s character from percussive and punchy to something “mellower,” edit the mod wheel to increase attack time as you rotate it forward. Layer level. Assign the mod wheel to bring in the octave-lower layer of a sub-bass patch. This pumps up the level and really fills out the bottom end. Distortion. Yeah, baby! Kick up the distortion for a bass that cuts through the mix like a buzzsaw, then pull back when the sound needs to be more polite. Resonance. I’m not a fan of highly resonant synth bass sounds (they sound too “quacky” to me), but tying resonance to the mod wheel provides enough control to make resonance more usable. Craig Anderton is Editor Emeritus of Harmony Central.
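The cent offsets used throughout these layering recipes map to frequency by the standard ratio 2^(cents/1200), where 100 cents equals one semitone. A quick sketch (the function name is my own):

```python
def detune(freq_hz, cents):
    """Shift a frequency by a number of cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return freq_hz * 2 ** (cents / 1200)

# The four-cents-sharp layer from the dual-oscillator recipe, on A2 (110 Hz):
print(round(detune(110.0, 4), 3))   # 110.254
print(round(detune(110.0, -4), 3))  # 109.746
print(detune(440.0, 1200))          # 880.0 (exactly one octave up)
```

Note that the ±4-cent pair beats at about half a hertz in this register, which is exactly the slow animation these patches rely on.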
  8. You’ve Recorded the Vocal—But Don’t Touch that Mixer Fader Quite Yet By Craig Anderton As far as I’m concerned, the vocal is the most important part of a song: It’s the conversation that forms a bond between performer and listener, the teller of the song’s story, and the focus to which other instruments give support. And that’s why you must handle vocals with kid gloves. Too much pitch correction removes the humanity from a vocal, and getting overly aggressive with composite recording (the art of piecing together a cohesive part from multiple takes) can destroy the continuity that tells a good story. Even too much reverb or EQ can mean more than bad sonic decisions, as these can affect the vocal’s emotional dynamics. But you also want to apply enough processing to make sure you have the finest, cleanest vocal foundation possible—without degrading what makes a vocal really work. And that’s why we’re here. THE GROUND RULES Vocals are inherently noisy: You have mic preamps, low-level signals, and significant amounts of amplification. Furthermore, you want the vocalist to feel comfortable, and that too can lead to problems. For example, I prefer not to sing into a mic on a stand unless I’m playing guitar at the same time; I want to hold the mic, which opens up the potential for mic handling noise. Pop filters are also an issue, as some engineers don’t like to use them but they may be necessary to cut out low-frequency plosives. In general, I think you’re better off placing fewer restrictions on the vocalist and having to fix things in the mix rather than having the vocalist think too hard about, say, mic handling. A great vocal performance with a small pop or tick trumps a boring, but perfect, vocal. Okay, now let’s prep that vocal for the mix. REMOVE HISS The first thing I do with a vocal is turn it into one long track that lasts from the start of the song to the end, then export it to disk for bringing into a digital audio editing program. 
Despite the sophistication of host software, with a few exceptions (Adobe Audition and Samplitude come to mind), we’re not quite at the point where the average multitrack host can replace a dedicated digital audio editor. Once the track is in the editor, the first stop is generally noise reduction. Sound Forge, Adobe Audition, and Wavelab have excellent built-in noise reduction algorithms, but you can also use stand-alone programs like iZotope’s outstanding RX 2. The general procedure is to capture a “noiseprint” of the noise, then the noise reduction algorithm subtracts that from the signal. This requires finding a portion of the vocal that consists only of hiss, saving that as a reference sample, then instructing the program to subtract anything with the sample’s characteristics from the vocal (Fig. 1). Fig. 1: A good noise reduction algorithm will not only reduce mic preamp hiss, but can help create a more “transparent” overall sound. This shot from iZotope RX (the precursor to RX 2) shows the waveform in the background that's about to be de-noised, and in the front window, a graph that shows the noise profile, input, and output. There are two cautions, though. First, make sure you sample the hiss only. You’ll need only a hundred milliseconds or so. Second, don’t apply too much noise reduction; 6-10dB should be enough, especially for reasons that will become obvious in the next section. Otherwise, you may remove parts of the vocal itself, or add artifacts, both of which contribute to artificiality. Removing the hiss makes for a much more open vocal sound that also prevents “clouding” the other instruments. DELETE SILENCES Now that we’ve reduced the overall hiss level, it’s time to delete all the silent sections (which are seldom truly silent) between vocal passages. If we do this the voice will mask hiss when it’s present, and when there’s no voice, there will be no hiss at all. 
Some programs offer an option to essentially gate the vocal, and use that as a basis to remove sections below a particular level. While this semi-automated process saves time, sometimes it’s better (albeit more tedious) to remove the space between words manually. This involves defining the region you want to remove; from there, different programs handle creating silence differently. Some will have a “silence” command that reduces the level of the selected region to zero. Others will require you to alter level, like reducing the volume by “-Infinity” (Fig. 2). Fig. 2: Cutting out all sound between vocal passages will help clean up the vocal track. Note that with Sound Forge, an optional automatic crossfade can help reduce any abrupt transition between the processed and unprocessed sections. Furthermore, the program may introduce a crossfade between the processed and unprocessed section, thus creating a less abrupt transition; if it doesn’t, you’ll probably need to add a fade-in from the silent section to the next section, and a fade-out when going from the vocal into a silent section. REDUCE BREATHS AND ARTIFACTS I feel that breath inhales are a natural part of the vocal process, and it’s a mistake to get rid of these entirely. For example, an obvious inhale cues the listener that the subsequent vocal section is going to “take some work.” That said, though, applying any compression later on will bring up the levels of any vocal artifacts, possibly to the point of being objectionable. I use one of two processes to reduce the level of artifacts. The first option is to simply define the region with the artifact, and reduce the gain by 3-6dB (Fig. 3). This will be enough to retain the essential character of an artifact, but make it less obvious compared to the vocal. Fig. 3: The highlighted section is an inhale, which is about to be reduced by about 7dB. The second option is to again define the region, but this time, apply a fade-in (Fig. 4). 
This also may provide the benefit of fading up from silence if silence precedes the artifact. Fig. 4: Imposing a fade-in over an artifact is another way to control a sound without killing it entirely. Speaking of fade-ins, they're also useful for reducing the severity of "p-pops" (Fig. 5). This is something that can be fixed within your DAW as well as in a digital audio editing program. Fig. 5: Splitting a clip just before a p-pop, then fading in, can minimize the p-pop. The length of the fade can even control how much of the "p" sound you want to let through. Mouth noises can be problematic, as these are sometimes short, “clicky” transients. In this case, sometimes you can just cut the transient and paste some of the adjoining signal on top of it (choose an option that mixes the signal with the area you removed; overwriting might produce a discontinuity at the start or end of the pasted region). PHRASE-BY-PHRASE NORMALIZATION A lot of people rely on compression to even out a vocal’s peaks. That certainly has its place, but there’s something else you can try first: Phrase-by-phrase normalization. Unless you have the mic technique of a K. D. Lang, the odds are excellent that some phrases will be softer than others—not intentionally due to natural dynamics, but as a result of poor mic technique, running out of breath, etc. If you apply compression, the lower-level passages might not be affected very much, whereas the high-level ones will sound “squashed.” It’s better to edit the vocal to a consistent level first, before applying any compression, as this will retain more overall dynamics. If you need to add an element of expressiveness later on that wasn’t in the original vocal (e.g., the song gets softer in a particular place, so you need to make the vocal softer), you can do this with judicious use of automation. 
Unpopular opinion alert: Whenever I mention this technique, self-appointed “audio professionals” complain in forums that I don’t know what I’m talking about, because no real engineer ever uses normalization. However, no law says you have to normalize to zero—you can normalize to any level. For example, if a vocal is too soft but part of that is due to natural dynamics, you can normalize to, say, -6dB or so in comparison to the rest of the vocal’s peaks. (On the other hand, with narration I often do normalize everything to as consistent a level as possible, as most of the dynamic variation in narration occurs within phrases.) Referring to Fig. 6, the upper waveform is the unprocessed vocal; the lower waveform shows the results of phrase-by-phrase normalization. Note how the level is far more consistent in the lower waveform. Fig. 6: In the lower waveform, the sections in lighter blue have been normalized. Note that these sections have a higher peak level than the equivalent sections in the upper waveform. However, be very careful to normalize entire phrases. You don’t want to get so involved in this process that you start normalizing, say, individual words. Within any given phrase there will be a certain amount of internal dynamics, and you definitely want to retain this. ARE WE PREPPED YET? DSP is a beautiful thing: Now our vocal is cleaner, of a more consistent level, and has any annoying artifacts tamed—all without reducing any natural qualities the vocal may have. At this point, you can start doing more elaborate processes like pitch correction (but please, apply it sparingly!), EQ, dynamics control, and reverb. But as you add these, you’ll be doing so on a firmer foundation. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. 
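Phrase-by-phrase normalization to a target other than zero, as described above, amounts to finding a phrase's peak and scaling the whole phrase toward a chosen dBFS level. A minimal sketch (the function name and sample values are illustrative, not from any editor):

```python
def normalize_phrase(samples, target_db=-6.0):
    """Scale one phrase so its peak lands at target_db (dBFS).
    Applied phrase by phrase, this evens out levels before compression."""
    peak = max((abs(s) for s in samples), default=0.0)
    if peak == 0.0:
        return list(samples)  # silence: nothing to scale
    gain = 10 ** (target_db / 20) / peak
    return [s * gain for s in samples]

quiet_phrase = [0.05, -0.12, 0.09]             # peak well below -6 dBFS
boosted = normalize_phrase(quiet_phrase)
print(round(max(abs(s) for s in boosted), 4))  # 0.5012, i.e. -6 dBFS
```

Because the whole phrase gets one gain value, the internal dynamics within the phrase survive untouched, which is the point of the technique.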
  9. Use amp sims to create stacks that would be difficult, or even impossible, to do in the physical world by Craig Anderton When living in the world of guitars and amps, few things are more impressive than standing in front of a stack of cabinets. It’s not just about the visuals; multiple cabinets can add a tonal quality that’s impossible to duplicate in any other way. However, reality often intrudes in the form of how many physical stacks you can actually carry, hook up, and record (or play through live). Fortunately, these days we don’t always have to live in the real world: We can live in a virtual one, and use amp sims to do our bidding. After all, one of the great advantages of amp sims is you can try out sounds that would be a hassle to set up physically—like stacking two (or more) different amps and cabinets, with different effects, and spreading them out in stereo. If you record through a plug-in amp sim in your computer (in this case the track itself is dry, and the final sound results from the amp sim processing the dry track), you can duplicate the dry track and add another amp sim in parallel to stack the sound. But that means you don’t hear the stacked sound until after you’ve played your part, and it’s more fun to play through the stack, as it influences your playing. LIVING IN A PARALLEL UNIVERSE You’ll want to split your guitar into at least two different paths to feed the different “stacks.” You can do this by inserting amp sims into two different tracks and setting each track’s input to the channel carrying the guitar, then monitoring the input signal through the computer (this function is typically called something like “input echo” or “live monitor”). This lets you hear the effects of any plug-ins. But that’s not always necessary; many amp sims can create parallel signal paths (that you can pan anywhere in the stereo field) all by themselves. Here are some screen shots that show how various programs handle parallel processing. 
With IK Multimedia’s AmpliTube series, there are 8 routing options; routing 2 creates two separate, parallel chains. Line 6’s POD Farm has a Dual button that creates two different signal chains, which essentially puts two POD Farms in parallel. Peavey’s ReValver Mk III and Native Instruments’ Guitar Rig both offer “splitter” modules for their “virtual racks.” These let you split the input signal into two paths, where you can insert whatever amps, speakers, etc. you want. Then, the splits go into an output mixer for mixing and panning. (However, note that Guitar Rig lets you put splits within splits, whereas ReValver Mk III is limited to one split module per rack.) The above setup uses Guitar Rig to emulate the sound of a guitar being split into two different amps and cabinets. The Split module sends the guitar through two chains, each of which contributes a different sound. Note how the Split Mix output can crossfade between the two channels and adjust the pan. Also, the B split has a phase switch. Waves’ G|T|R has stereo amps, which provide the same basic function as stacked amps. However, if you want a parallel path where you can add effects and such independently to the two amps, then you’ll need to use two tracks, and two instances of G|T|R. HAVING FUN WITH STACKS Here are some ways to use stacking in the studio. When mixing, a stereo rhythm guitar with the channels panned oppositely opens up a huge space in the center for bass. It’s almost like having two guitars, but with the simplicity of a single guitar part. Use a tempo-synched effect like tremolo, but set different rhythmic values in the two chains. You can get some wild stereo effects bouncing around. Try three stacks, with power chord sounds left and right, and a bright, chorused acoustic-type sound up the center. Add bass and drums, and you won’t need anything else—the sound can be huge. 
If there’s a complementary instrument like keyboard or rhythm guitar, pan one channel of your guitar to center, and the other right or left. This “weights” the guitar toward one side of the stereo field. Similarly, weight the other instrument oppositely in the stereo field. Now both instruments take up a decent amount of space, but don’t tread on each other. Splitting isn’t just about amps, but also effects. If you want some great flanging effects, put a vibrato effect set for a slow speed in each split (processed sound only). When you sum the outputs together in mono, the delay variations between the two splits will rock your world. Craig Anderton is Editor Emeritus of Harmony Central.
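The parallel "virtual stack" routings described above boil down to splitting one dry signal into two chains and panning each chain into a stereo pair. A toy sketch in plain Python, with trivial stand-ins for the amp chains (all names here are illustrative; real amp sims do vastly more per sample):

```python
import math

def pan_gains(pan):
    """Equal-power pan law: pan in [-1.0, 1.0] -> (left_gain, right_gain)."""
    angle = (pan + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)

def parallel_stack(dry, chain_a, chain_b, pan_a=-0.7, pan_b=0.7):
    """Feed one dry signal through two parallel chains, then pan each
    into a stereo mix -- the splitter-module routing described above."""
    a = [chain_a(s) for s in dry]
    b = [chain_b(s) for s in dry]
    la, ra = pan_gains(pan_a)
    lb, rb = pan_gains(pan_b)
    left = [x * la + y * lb for x, y in zip(a, b)]
    right = [x * ra + y * rb for x, y in zip(a, b)]
    return left, right

# Toy "amps": one clean pass-through, one soft-clipping drive
left, right = parallel_stack([0.1, 0.5, -0.5],
                             chain_a=lambda s: s,
                             chain_b=lambda s: math.tanh(3.0 * s))
```

Because the two chains receive identical input but different processing and panning, the left and right outputs differ, which is what creates the stereo spread the article describes.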
  10. By Brad Schlueter Published January 2, 2012 In this video review, DRUM! product tester Brad Schlueter checks out a US Fusion X-Plus maple drum kit with Pro bass drum pedal and hardware from Natal Drums. To read Brad’s more detailed product test, be sure to pick up the February 2012 issue of DRUM! Magazine at your local drum shop. Contents © 2012 DRUM!. Used with permission. All rights reserved.
  11. Keyboard controller series with advanced hardware-to-software parameter mapping 25-key $329.99 MSRP, $249.99 street 49-key $449.99 MSRP, $349.99 street 61-key $499.99 MSRP, $399.99 street www.novationmusic.com By Craig Anderton It’s not exactly like we’re experiencing The Great Keyboard Controller Shortage of 2012—from basic models with keys ’n’ wheels to sophisticated control surfaces, never has so much been available, from so many, for so little. Yet Novation has jumped into the fray with a new line of keyboard controllers, and they think they can bring something new to the party . . . so let’s see if they’re right. BASICS The Impulse series consists of 25-, 49-, and 61-key USB keyboard controllers. All are functionally equivalent, except the 25-key model doesn’t have room for the full nine-fader control surface, but does have a single fader. They all have eight assignable “endless” knob encoders, eight backlit drum pads with velocity and pressure, transport controls, pitch and mod wheels, USB and 5-pin DIN MIDI I/O (props to Novation for remembering that 5-pin DIN still matters), and jacks for a sustain switch and expression pedal. All units are bus-powered. Not only do you not need an AC adapter, you can’t add an AC adapter. As a result, those with laptops may need to power their computer with an AC adapter if the batteries are running low. If the Impulse is serving as a stand-alone controller, you can use any USB power adapter. LET'S SEE ACTION As soon as I started playing the keys, I immediately noticed that the action has been improved—the semi-weighted keyboard action has a little more resistance than the average synth keyboard, but not so much as to detract from the “fly across the keys” appeal of typical synth keyboards. There’s also predictable channel aftertouch and velocity—if I used the same amount of pressure or dynamics, I heard the same results. The LCD has large and readable characters, and doesn't suffer from the lack of a contrast control. 
The blue, backlit LCD is large and readable, and the knobs and pads have a positive feel. Two fader caps were a little close to the panel and I could feel some friction; pulling up slightly on the cap solved that. (Note: Upon reading about the issue I had with the two faders—which really was minor enough that I almost didn't mention it—Novation nonetheless took it quite seriously, and said their project manager will work with the factory to make sure this is tested more rigorously.) The control surface on the 49- and 61-key models includes nine faders, typically for channels and master. Several of the 20 presets are loaded with factory defaults (Basic MIDI Control, Reason, GarageBand, MainStage, Kontakt, FM8, and a few others) but of course, you can create, save, and load your own. REGARDING SELF-CONTROL Impulse supports Mac OS X 10.6.8 (32/64-bit) and Lion 10.7.2 or higher, as well as Windows XP SP3 (32-bit) and 7 (32- or 64-bit). In theory, Vista isn’t supported, yet when I checked out the system with 64-bit Vista, everything worked as expected. Of course I’m not suggesting you go against Novation’s recommendations (and if you do you’re on your own), but this indicates to me that they’re pretty conservative in how they spec system requirements. The keyboard is class-compliant so you don’t need drivers, but Novation’s software is necessary to run Automap, which automatically and intelligently correlates hardware controllers to virtual effect, instrument, and DAW parameters. Although Impulse includes the Automap 4 application on its bundled DVD-ROM, I of course checked the web site for a newer version, and found Automap 4.2. Installation was painless; just click and go—the latest version also updated the firmware automatically. 
When setting up the software you can choose templates for any of the following programs that are installed on your computer: Cubase 6, Pro Tools, Live, Sonar X1, Reason, Logic, or “advanced,” which involves general purpose MIDI control for programs like Studio One Pro or Reaper. However, note that Impulse is HUI-compatible (but not Mackie Control-compatible), so you can vary level, solo, mute, etc. with programs that accept HUI messages. After selecting the VST effects path, I chose setup for Sonar X1. Using Impulse with Ableton Live offers some additional mojo, as you can launch clips with the percussion pads. In this mode, the pads glow either yellow, green, or red depending on whether a clip is available, playing, or recording respectively. The lights flash if Live is waiting for the specified quantization timing before firing the clip. AUTOMAP 4 Automap creates a “wrapped” version of your plug-ins (VST, AU, RTAS, and TDM, but not DirectX) so the program can read and edit their parameters. The setup program walks you through setting up your DAW with Automap, and Novation makes the process transparent and automatic. Automap can correlate the eight rotary encoders to processor and instrument parameters. Note the transport controls below the knobs. When I started using Automap with various effects, Novation had apparently already created logical mappings between controls and parameters; they claim they’ve already developed mappings for many effects, so that’s not too surprising. Of course, you can also come up with your own custom mappings, as well as exchange mappings with other users. The Impulse LCD shows an abbreviated name of the parameter being controlled. With instruments, as there are only eight encoders there can be dozens of scrollable parameter pages to cover all available parameters, but note that you can edit mappings to move your most-used parameters to the first couple pages for easy access. 
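Under the hood, a mapping like the ones Automap creates boils down to scaling a 7-bit MIDI continuous controller value into a parameter's range. Here's an illustrative sketch of that idea in Python—the function name and parameter ranges are hypothetical, not Novation's actual code or API:

```python
# Hypothetical sketch of controller-to-parameter mapping: scale a 7-bit
# MIDI CC value (0-127) onto a parameter range, optionally inverting the
# response. Names and ranges are made up for illustration.

def map_cc(cc_value, lo=0.0, hi=1.0, invert=False):
    """Map a MIDI CC value (0-127) onto [lo, hi]."""
    norm = max(0, min(127, cc_value)) / 127.0   # normalize to 0.0-1.0
    if invert:
        norm = 1.0 - norm                       # reversed control response
    return lo + norm * (hi - lo)

# An encoder assigned to a filter cutoff restricted to 200-2000 Hz:
print(map_cc(127, lo=200.0, hi=2000.0))              # full clockwise -> 2000.0
print(map_cc(0, lo=200.0, hi=2000.0, invert=True))   # inverted: 0 -> 2000.0
```

The same scaling is what's happening when you restrict a control's range or flip its response while editing a mapping.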
Although many of the mappings make logical sense, you can re-assign parameters as needed. Note in the lower screen shot that you can also modify a parameter's range, as well as invert the control's response. Furthermore, the Automap 4 user interface is really slick, and shows mappings on-screen; sometimes it’s easier to use this to find particular pages, especially for lesser-used parameters. Also, a small pop-up balloon shows the parameter being controlled by the hardware—helpful, although you can disable this if it’s distracting. The pop-up in the lower right confirms the parameter currently being adjusted. ARPEGGIATOR AND ROLL The pads are also where you control some of the arpeggiator's characteristics. The arpeggiator is very cool, as you can use the pads to alter how the arpeggiator plays—drop out notes, insert notes, and even use pad velocity to affect note levels. Other options include gate time, note quantize (sync), pattern type (up, down, random, etc.), octave range, sequence length, and swing. You can also set the pads to create rolls, for example, repetitive drum hits. Both these functions can follow tempo, or tap tempo. CONCLUSIONS Those are the main “sexy” features, but there are also the expected ones—transmit program changes, split the keyboard into four independent MIDI zones, set four velocity curves (or full velocity) for the keyboard and three curves (or again, full velocity) for the pads, local control on/off, sys ex dump (save settings with your DAW project), and more. There’s even a help menu whose scrolling messages give hints on how particular functions work. The Impulse series is priced competitively, while offering several features you won’t find elsewhere. Of these, Automap 4 is the most significant; the program has matured to the point where using a controller soon becomes a natural part of your workflow. As a bonus, new wizards make it easier to set up than previous versions, and it’s also easier to tweak. 
Wizards simplify setting up Automap by showing which DAWs are installed on your computer; after specifying a particular DAW, Automap takes over the setup and optimization process. And while the templates are welcome, because the control surface generates MIDI continuous controller data, you can always use the target program’s learn function to create custom mappings. Throw in a generous software bundle (including Ableton Live Lite, the Novation BassStation plug-in, and a bunch of samples and loops) and while there’s certainly no lack of keyboard controllers available, Novation has indeed brought something new to the party. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  12. They're intended for "music on the go"—but that may not be their only application nanoPAD2 $75 MSRP, $59.99 street nanoKONTROL2 $75 MSRP, $59.99 street nanoKEY2 $65 MSRP, $49.99 street www.korg.com by Craig Anderton It’s great that laptops are powerful enough to accommodate actual music-making, but they were never designed for interfacing with software like a musical instrument. There’s the "QWERTY-keyboard-as-trigger" option, embodied in Tanager Audioworks’ clever Chirp program and included as a control option in Reason, but forget about velocity, easy playing, faders, or a good feel. Korg’s original nano series was intended to address that need, but now they’re back with their second generation nano controllers—which aren’t just for laptops any more. Really? Well, it seems space is always at a premium with desktop setups, too. You’re tied to the computer monitor, mouse, and keyboard, and finding the space to put an AGO keyboard in front of your computer keyboard is not always possible . . . so you set up a keyboard to the side, but it may or may not include pads, faders, and other controllers. But these little guys do. There are three nanoSERIES2 controllers—keyboard, pad controller, and fader box—all available in black or white, and all of which work with USB (they come with a 43” USB cable). You can swap them out as needed as you go through a project, and not have to move from your computer. Of course, they’ll work with laptops; I presume Korg considers that the primary application. Nonetheless, in the course of doing this review, I’ve found the nanoSERIES2 controllers very convenient for punching out a quick soft synth or drum part, or setting up a mix. GETTING STARTED As usual, I first went to Korg’s web site to look for software, updates, etc. 
You need to go there anyway if you want to take advantage of the free, bonus software offer (lite versions of Korg’s M1 soft synth, Toontrack’s EZ Drummer, and Lounge Lizard, along with a discount coupon for various versions of Ableton Live). I found two drivers, each of which said it was the most recent version (check the dates—the April 2011 one is the droid you’re looking for), an updater for each controller, a controller editor, and a USB-MIDI driver that needs to be installed for Windows or Mac prior to installing the other elements. I had a false start on Windows; installing the USB-MIDI Driver is not the same as installing the driver itself, but rather, it installs the program that lets you install the driver. Once I had that figured out, it was smooth sailing. I did the recommended updates, and was good to go. THE KONTROL EDITOR First things first: What you see in all these devices is not necessarily what you get, because you can do a lot of customization via the cross-platform Korg Kontrol Editor software. For example, I found the nanoKEY2 velocity response predictable, but there are four possible velocity curves (light, normal, heavy, and constant velocity, where you can specify a fixed value) so you can accommodate a different touch if needed. Furthermore, with the nanoKONTROL2, you can assign the faders and knobs to any continuous controller parameter, and the buttons to notes as well as continuous controllers. Although in general you could actually ignore the editor as the defaults are sensible and work fine, the editor lets you take the degree of control much further. However, note that the Kontrol Editor will compete with your DAW for MIDI I/O. For example, if your DAW has MIDI in and out assigned to the nanoKEY2, and you then call up the Kontrol Editor, the DAW will take priority and the Kontrol Editor won’t be able to write changes to the nanoKEY2. 
The workaround is simple: de-select the MIDI I/O in your DAW, write your changes using the Kontrol Editor, then re-select your DAW’s MIDI I/O to continue using the nanoKEY2 as a controller. A very useful Kontrol Editor feature is a drop-down menu for choosing particular parameters; it shows the values for all applicable controls simultaneously so you don’t have to call up each button individually to see its assignment. Note that Korg also includes the factory presets so you can always get back to square one if your editing gets out of hand. Finally, there’s a multiple-level undo function (not just the last change), and you can save particular setups for the various controllers. This would let you, for example, save a separate setup for the nanoKONTROL2 where the solo and mute buttons generate notes, so you could kick out a quick bass line or trigger drum sounds without having to switch over to a different control device. nanoKEY2 This handy little keyboard has 25 keys, and is 5/8” thick so it’s definitely low profile. Keyboard action is subjective, but the nanoKEY2 turned out better than expected. I say “turned out” because of course, these aren’t real keys, and you have to get used to a somewhat different playing style. Once I did, though, I didn’t find it hard to do single-note lines, and not much more of a hassle to play chords. There are six buttons. Octave up and down are obvious, but a cool feature is that the ranges are color-coded—no color for default, green for an octave up or down, orange for two octaves, red for three octaves, and flashing red for four octaves. I would have preferred a slower flash rate, but no big deal as four octaves off default is something I rarely use. As you might expect, there’s no pitch wheel or ribbon controller. But there are pitch up and pitch down buttons, and the implementation is quite clever. 
With the Editor, you can specify a rise/fall time for the pitch bend, so pressing the button doesn’t necessarily produce an instantaneous change, but rather, your choice of rise and fall times: instantaneous, slow (about 750ms), normal (about 100ms), or fast (about 50ms). Note that this generates the full pitch bend range of values, so if you want to restrict the range, you’ll need to do so with the target instrument. From left to right: Slow, normal, and fast pitch bend times. The Mod and Sustain buttons are not limited to those designations—with the Kontrol Editor you can change the controller number, choose momentary or latching mode, set max and min controller values, and choose a “switch speed” as with the pitch bend. nanoPAD2 The 16 pads are obvious candidates for triggering drums, but they can also send program changes and MIDI continuous controller switch values. However, the grooviest feature here is the X-Y controller—it’s a great addition that adds a lot more options. Pad assignments are comprehensive. Each can be set to its own channel, alternate between toggle or momentary modes, and have the arpeggiation options enabled or disabled. There’s also a global velocity curve option (same as the nanoKEY2, including the constant velocity option). As to X-Y options, you can assign the X and Y axis to independent continuous controllers, with choice of normal or reverse polarity. These assignments apply if you drag across the touchpad. Simply touching the pad can send out a continuous controller with variable on and off values. However, as an alternative to just sending controller messages, there are various gate, scale, and arpeggiation features. In Touch Scale mode, you can trigger notes with constant note-on velocity, based on one of 16 user-selectable scales (including a user-defined scale), simply by dragging across the pad along the X-axis. The Y-axis sends out a continuous controller of your choice. 
If you also enable Gate Arp, whatever notes you play will be repeated at the arpeggiation rate, which you can set from 1/48th notes to half-notes. With this mode the Y-axis changes the gate time. With only Gate Arp enabled, the playing technique involves triggering notes with the pads, and altering those notes with the touch pad, where the X-axis controls the arpeggiation rate and the Y-axis controls velocity. You can change the range of notes assigned to the touch pad to cover a lesser range with more precision, as well as the octave, but note that the arpeggiation will not jump octaves—gating remains on the selected note pitch until changed. There’s also a Hold button that maintains the last X-Y pad value prior to releasing your finger from the pad. Finally, given the versatility, it’s helpful that you can store four individual “scenes” that are presets of all the parameters you’ve saved. So, you could have one scene dedicated to traditional drum triggering, another optimized for notes, another with the X-Y touch pad as a controller, and another with the X-Y touch pad doing its gated arpeggiator thing. You can sync the tempo to the host, run from an internal tempo, or do tap tempo. nanoKONTROL2 This has two modes—a Mackie Control-compatible DAW controller, or a general-purpose MIDI control surface. For Mackie control, templates (which you load by holding down particular buttons during nanoKONTROL2 power-up) are included for Cubase, Digital Performer, GarageBand, Logic, Live, Pro Tools, and Sonar. There’s also a general-purpose template for DAWs where specific templates are not included. Note that when you load a template, the Kontrol Editor can’t make changes; it’s for control surface applications only. The Mackie Control mode worked perfectly in Sonar, including bank-switching with more than eight tracks. 
It’s really quite cool to have all that control in such a tiny control surface, and while the 30mm faders have a pretty short throw, they’re perfectly usable. As a general-purpose control surface, you can create mappings for virtual instruments, signal processors, whatever. For example, with Sonar it was a piece of cake to have the nanoKONTROL2 serve as an ACT device for plug-in parameter control. As with the other nanoSERIES devices, the Kontrol Editor gives you pretty much total freedom for assigning the strips. They can be on different MIDI channels, and you can even set minimum and maximum values for the continuous controller messages assigned to each knob and fader (no, the faders aren’t motorized . . . we’re not in the 22nd century yet). Also, note that any button, including transport buttons, can generate notes. CONCLUSIONS I can’t comment on how these would hold up over time, but so far, the construction quality seems considerably improved over the original nano series controllers. I assume the nanoKEY2 and nanoPAD2 would be particularly durable because of their low profile; of course, the nanoKONTROL2 has knobs that protrude above the surface, so you wouldn’t want them to catch on something, but aside from that, it seems an unlikely candidate for damage. The overall playing feel is good, too; as you’d expect the buttons have a little “wobble,” but their action is positive. The software is also a major plus. Although the controllers can serve as standard, generic devices, the latency seemed very low, and I assume that’s because of the Korg MIDI-USB drivers. The Kontrol Editor increases flexibility dramatically, and although it took a little effort to get everything up, running, and updated on Windows, it wasn’t an onerous task and everything is working smoothly. Of the three it’s hard to pick a favorite, although the nanoPAD2’s inclusion of the X-Y touch pad and arpeggiation options is very cool if you’re into programming beats. 
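If you do program beats with those arpeggiation rates, it helps to know what a given rate means in time at a given tempo. A quick back-of-the-envelope sketch in plain Python—my own arithmetic, not anything Korg ships:

```python
# Convert an arpeggiator rate (expressed as a fraction of a whole note)
# to seconds per step at a given tempo. The nanoPAD2's Gate Arp range
# runs from 1/48th notes to half-notes.

def step_seconds(bpm, note_fraction):
    """Seconds per arpeggiator step; note_fraction is relative to a whole
    note (0.5 = half note, 1/48 = 48th note)."""
    whole_note = 240.0 / bpm    # a whole note spans four beats at bpm quarter-notes/min
    return whole_note * note_fraction

print(step_seconds(120, 0.5))              # half note at 120 BPM -> 1.0 s
print(round(step_seconds(120, 1/48), 4))   # 48th note at 120 BPM -> 0.0417 s
```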
Then again, having a Mackie Control-compatible device that fits just about anywhere has definite merit . . . and if you need a keyboard, it’s hard to beat the nanoKEY2 for under $50 street. All in all, these are a step up and logical evolution from Korg’s original nanos, but to loop back to the beginning, don’t overlook these as adjuncts to a desktop studio. While obviously intended for musicians on the go, they’re useful for any situation with limited space. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. How a simple hand-held recorder can change your perspective—and help you make better music By Ronan Chris Murphy The concept was born from a joke made during an intense week of pre-production for the first full-length album by the Italian rock band, Riaffiora—an idea that would eventually have us singing at midnight in the piazzas of Venice, recording a pipe organ in an ancient church, recording guitars in the loggia of the Palazzo Ducale, using a 600-year-old palazzo as a reverb chamber, and more . . . We were hunkered down in a 15th-century wine cellar-turned-rehearsal space in the center of the medieval walled city of Cittadella, 40 minutes west of Venice. Influenced by great creative energy and copious amounts of local Prosecco, someone suggested that we record the guest accordion player while he flew over the piazza of Cittadella suspended by wires like a TV superhero. As the laughter (and perhaps the Prosecco) subsided, I could not get the idea of recording in this beautiful piazza out of my head. I began to explore the idea, and the more I thought about it, the more I understood that the music we were working on really was drawn from our surroundings—and I became obsessed with the idea of finding ways to actually integrate beautiful places in the Veneto region of North Italy into the album. My eventual vision was to create an album that incorporated the sounds, spirit, and experiences of the ancient spaces, but still had the size and power of a major label rock album. We settled into Abnegat Studio outside of Vicenza, Italy, where engineer Jean Charles Carbone and I recorded the basic tracks for the album. We recorded to Pro Tools HD primarily through the studio's beautiful vintage EMT console. 
All of the drums, bass, and a lot of the electric guitars were recorded at the studio, but with a few exceptions, the majority of the rest of the album (vocals, guitars, accordion, strings, organs, piano, and more) was recorded in historic remote locations around North Italy. Bonus Video! See a 12-minute documentary on the making of the album ARTIST: RIAFFIORA ALBUM: LA MARSIGLIESE LABEL: DISCHI SOVIET STUDIO An essential part of making this happen was that as luck would have it, Avid had recently released Pro Tools 9, allowing long-time Pro Tools users like myself to bounce easily between multiple systems and be extremely mobile. This made a lot of the remote recording very easy, but I still had aspirations of being even more mobile and capturing spaces where even the bulk and setup time of a laptop rig would be difficult. “RUN AND GUN” RECORDING I needed some kind of hand-held portable solution to be able to do the audio equivalent of "run and gun" (a filming technique of getting a shot fast without any tech set up or break down), so I looked into the available options. After some extensive head scratching and research I realized that I already had a great solution in the Zoom H4N portable recorder that belonged to Riaffiora’s keyboard player. We had used it in pre-production and got some decent recordings of the band using the built-in mics, but what made me settle on the H4N as the recorder was not only the sound quality, but its rudimentary (yet still very usable), built-in 4-track recorder. This feature would allow me to do the remote recordings, and save potentially days of synching and editing work after the fact. I made some initial tests for audio quality, and to my surprise I found the onboard condenser mics very usable. Even though some outboard mics and mic pres would likely have improved the quality somewhat, I decided that the increased flexibility, speed, and ease of recording with the onboard mics would outweigh the small loss in audio quality. 
At no point did I ever regret that decision. NO SYNC? NO WORRIES! The solution for integrating the H4N, which has no kind of synch or time code, turned out to be quite simple. I made simple mono “in the box” mixes of the basic tracks on my laptop that had a short stick click sample at the beginning of each mix, and recorded that mix on to the first track of the 4-track session in the H4N. This left us with three tracks for overdubbing for each song. Using a simple headphone splitter, both the musician and I could hear the music, and I could do a simple mix of basic tracks and overdubs. The only downside to this approach is that we were committed to the mix of the basic tracks. If the musician needed more or less of a certain instrument in the headphones I was not able to do that for them, but once the musicians understood that limitation, no one had any problem with it. The H4N records onto SD cards, saving each of the four tracks as a single WAV file that starts at zero, and these could be imported from the SD card into their respective Pro Tools sessions. Once in Pro Tools I “grouped” the four tracks from the Zoom and aligned them visually so that the stick click from track 1 (the basic tracks mix) and the original stick click in the Pro Tools session lined up. The overdub tracks were now perfectly aligned with the tracks recorded direct into Pro Tools. BAND ON THE RUN The decent audio quality, as well as the speed and simplicity of recording with the hand-held recorder, allowed for recording opportunities that could not have been realized with a traditional setup. As our entire “recording studio” could fit in my coat pocket, we were able to move fast, and set up fast. This let us go into various locations without attracting attention to ourselves and start recording literally within seconds of finding a space in which we wanted to record, and be on the run within seconds of getting chased out of any given space. 
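The click alignment described under "No Sync? No Worries!" was done by eye, but the same job can be automated: find the offset where a known click waveform best matches the recording, then shift the tracks by that many samples. Here's a hypothetical sketch in Python using brute-force cross-correlation—the sample data is made up, and real audio would of course come from the WAV files:

```python
# Hypothetical sketch: locate a known click in a recording by brute-force
# cross-correlation, returning the sample offset with the highest match.
# The waveforms below are tiny made-up examples, not real audio.

def best_lag(reference, recording):
    """Return the offset (in samples) where reference best matches recording."""
    best, best_score = 0, float("-inf")
    for lag in range(len(recording) - len(reference) + 1):
        # Dot product of the reference against this slice of the recording
        score = sum(r * recording[lag + i] for i, r in enumerate(reference))
        if score > best_score:
            best, best_score = lag, score
    return best

click = [0.0, 1.0, -1.0, 0.0]                  # idealized stick click
take = [0.0, 0.0, 0.0, 0.0, 1.0, -1.0, 0.0]    # the click appears 3 samples in
print(best_lag(click, take))   # -> 3
```

For real session audio you'd run this on a short window around the expected click position; brute-force correlation over whole takes gets slow, which is one reason FFT-based correlation exists.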
In addition to the hand-held recorder tracks, we also used a laptop and Digi 003 (with various converters and mic pres) to record in locations where we had a little more time for setup—and less chance of being chased away by the cops. The locations included several rooms in a 600-year-old palazzo (whose four-story marble stairway also served as a reverb chamber), the 200-year-old Teatro Sociale in the center of Cittadella, and the 160-year-old pipe organ in the church of Fontaniva. The recording of the pipe organ ended up being a combination of recording directly to Pro Tools and the Zoom. We had only a limited time for setup in the church, as a funeral service was set to start soon after our arrival. I put a Shure KSM32 (medium diaphragm condenser) on the balcony close to the organ pipes and an AKG 414 (large diaphragm condenser) about 50 yards back in the middle of the church. I really wanted to capture the ambience of the church, so I set up the H4N 100 yards back in front of the altar on the far side of the church. As the Zoom was not synched to the Pro Tools session in any way, I clapped my hands in the church before each take so that we would have a way to visually align the clap on the Zoom tracks with the clap on the tracks recorded directly to Pro Tools (you can hear the clap we left in at the beginning of the song “L’imputato” on the album). When it came time to mix, I never used the AKG 414 track from the middle of the room, but instead used the close sound of the KSM32 with the ambience from the Zoom and, in some cases, used only the Zoom, which provided a surprisingly good overall sound for the organ when I didn’t need much direct impact. These sessions became some of the most enjoyable recording moments of my life, and the level of excitement and joy of all involved was something I’ve rarely witnessed in my over two decades of recording. 
And yes, we finally did record the accordion player in the beautiful piazza of Cittadella (although, sadly, not flying through the air like a superhero), tracked acoustic guitars in the salon of a renaissance palazzo, and, as the highlight, used the magical city of Venice, Italy, as our live room: In the middle of the night we recorded vocals in the middle of Saint Mark’s Square and in tiny hidden alleys by the canals, we recorded guitars in the loggia of the Doge’s Palace, violins by the gondolas docked in the Venice lagoon, and church bells throughout the city. BACK HOME IN THE STUDIO Because we never had exclusive use of any of the outdoor spaces, every recording situation was filled with extraneous noises, such as boats and Vespas roaring by, people talking (or yelling at us to be quiet outside their windows in the middle of the night), or unexpected blasts of wind. If I felt something added to the spirit of the event I kept it, but I could clean up almost all the troublesome noises with a combination of iZotope RX and BIAS SoundSoap noise reduction software. We never “lost” a performance because of noise issues. There were a few musical parts that we recorded both in the controlled studio setting through all the high-end gear and in the remote locations with the hand-held recorder, and much to my surprise, in almost every instance I chose the H4N version in the final mix. Although the studio recordings were technically superior, there was something special about the character of the sounds and a spark in the performances of the remote tracks that always seemed to be more exciting in the mix. I mixed and mastered the album at my studio, Veneto West, in Los Angeles. 
After completing all the technical work of aligning and cleaning up the “hand-held” tracks, the process of mixing was no different from any other album I might mix—but in this case, we had unique elements and a bit of magic that helped make the album special and an experience of a lifetime for all of us involved. At its best, new technology allows us to expand our creative palette. I’ve worked on hundreds of albums, in some of the top studios around the world—but a simple hand-held recorder that cost a few hundred dollars opened up creative opportunities that would have never been available to us even in the most expensive recording studio in the world. And now, the whole world can be our live room. Producer/Mixer Ronan Chris Murphy has worked with artists such as King Crimson, Steve Morse, Tony Levin, Chucho Valdes, Nels Cline, and Ulver; he's also the founder of Recording Boot Camps™, a new kind of school that teaches musicians what they really need to know to make better recordings.
  14. Check out the latest advances and techniques for mixing with DAWs by Craig Anderton The best mics, recording techniques, and players don’t guarantee great results unless they’re accompanied by a great mix. But the face of mixing has changed dramatically with the introduction of the DAW, both for better and worse. Better, because you don’t need to spend $250,000 for a huge mixer with console automation, but worse because we’ve sacrificed hands-on control and transparent workflow. Or have we? Today’s DAWs have multiple options—from track icons to color-coding to configurable mixers—that help overcome the limitations of displaying tons of tracks on a computer monitor. While this can’t replace the one-function/one-control design of analog gear, some tasks (such as grouping and automation) are now actually easier to do than they were back in the days when analog ruled the world. As to hands-on control, controller-related products keep expanding and offering more possibilities, from standard control surfaces with motorized faders, to FireWire or USB mixers, to pressing keyboard workstations (such as Yamaha’s Motif XS/XF series or Korg’s M3 or Kronos) into service as controllers. These all help re-create “the analog experience.” Although we’ll touch a bit on gear in this article, it’s only to illustrate particular points—the main point of interest here is techniques, not features, and how those techniques are implemented in various DAWs. And speaking of DAWs, if you’ve held off on upgrading your DAW of choice, now might be the time to reconsider. As DAW feature sets mature, more companies focus their efforts on workflow and efficiency. While these kinds of updates may not seem compelling when looking over specs on a web site, in practice they can make the recording and mixing process more enjoyable and streamlined. And isn’t that what we all want in the studio? So pull up those faders, dim the lights, and let’s get started. 
GAIN-STAGING The typical mixer has several places where you can set levels; proper gain-staging ensures that levels are set properly to avoid either distortion (levels too high) or excessive noise (levels too low). There’s some confusion about gain-staging, because the way it works in hardware and software differs. With hardware, you’re always dealing with a fixed, physical amount of headroom and dynamic range, which must be respected. Modern virtual mixers (with 32-bit floating point resolution and above) have almost unlimited dynamic range in the mixer channels themselves—you can go “into the red” yet never hear distortion. However, at some point the virtual world meets the physical world, and is again subject to hardware limitations. Gain-stage by working backward from the output: you need to make sure that the output level doesn’t overload the physical audio interface. I also treat -6 to -10dB output peaks as “0.” Leave some headroom to allow for inter-sample distortion (Fig. 1); it also seems converters like to have a little “breathing room.” Fig. 1: SSL’s X-ISM metering measures inter-sample distortion, and is available as a free download from solidstatelogic.com. Remember, these levels can—and usually will—be brought up during the mastering process anyway. Then, set individual channel levels so that the mixed output’s peaks don’t exceed that -6 to -10dB range. CONFIGURABLE MIXERS One of the most useful features of virtual mixers is that you can configure them to show only what’s needed for the task at hand, thus reducing screen clutter (Fig. 2). Fig. 2: This collage outlines in red the toolbars that show/hide various mixer elements (left Steinberg Cubase 5, middle Cakewalk Sonar 8.5, and right Ableton Live 8). Mixing often happens in stages: First you adjust levels, then EQ, then stereo placement, aux busing, etc. 
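Looping back to gain-staging for a moment: the arithmetic behind treating -6 to -10dB output peaks as "0" is simple to check, since dB figures convert directly to the linear peak values you see on a DAW meter. A quick sketch of that conversion—the function names are mine, not from any DAW:

```python
import math

# Sketch of the headroom arithmetic: convert a dBFS figure to a linear
# peak level (1.0 = full scale), and check whether a peak stays under a
# -6 dB ceiling. Function names are made up for illustration.

def db_to_linear(db):
    """Convert decibels to a linear amplitude ratio."""
    return 10 ** (db / 20.0)

def linear_to_db(amplitude):
    """Convert a linear amplitude ratio back to decibels."""
    return 20.0 * math.log10(amplitude)

def within_headroom(peak, ceiling_db=-6.0):
    """True if a linear peak stays at or under the dBFS ceiling."""
    return peak <= db_to_linear(ceiling_db)

print(round(db_to_linear(-6.0), 3))   # -6 dBFS is roughly half of full scale: 0.501
print(within_headroom(0.4))           # True: comfortably under the ceiling
print(within_headroom(0.9))           # False: hotter than -6 dBFS
```

In other words, mixing with peaks at -6dB means leaving roughly half of full scale untouched for inter-sample peaks and mastering moves.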
Granted, you’ll go back and forth as you tweak sounds—for example, changing EQ might affect levels—but if you save particular mixer configurations, you can recall them as needed. Here are some examples of how to use the configurable mixer feature when mixing. The meter bridge. This is more applicable to tracking than mixing, but is definitely worth a mention. If you hide everything except meters (and narrow the mixer channel strips, if possible), then you essentially have a meter bridge. As software mixers often do not adjust incoming levels from an interface when recording (typically, the interface provides an applet for that task), you can leave the “meter bridge” up on the screen to monitor incoming levels along with previously-recorded tracks. Hiding non-essentials. Visual distractions work against mixing; some people even turn off their monitors, using only a control surface, so they can concentrate on listening. While you might not want to go to that extreme, when mixing you probably don’t need to see I/O setups, and once the EQ settings are nailed, you probably won’t need those either. You may want to adjust aux bus sends during the course of a mix, but that task can be relegated to automation, letting you hide buses as well. Channel arrangement. With giant hardware mixers, it was common to re-patch tape channel outs to logical groupings on the mixer, so that all the drum faders would be adjacent to each other; ditto vocals, guitars, etc. With virtual mixers, you can usually do this just by dragging the channels around: Take that final percussion overdub you added on track 26, and move it next to the drums. Move the harmony vocals so they’re sitting next to the lead vocal, and re-arrange the rhythm tracks so they flow logically. 
And while you’re at it, think about having a more or less standardized arrangement in the future—for example starting off with drums on the lowest-numbered tracks, then bass, then rhythm guitars and keyboards, and finally moving on to lead parts and “ear candy” overdubs. The less you need to think about where to find what you want, the better. Track icons. When I first saw these on GarageBand, I thought the concept was silly—who needs cute little pictures of guitars, drums, etc.? But I loaded track icons once when I wanted to make an article screen shot look more interesting, and have been using them ever since. The minute or two it takes to locate and load the icons pays off in terms of parsing tracks rapidly (Fig. 3). Coupled with color-coding, you can jump to a track visually without having to read the channel name. Fig. 3: Acoustica’s Mixcraft 5 is one of several programs that offer track icons to make quick, visual identification of DAW tracks. Color coding. Similarly, color-coding tracks can be tremendously helpful if done consistently. I go by the spectrum mnemonic: Roy G. Biv (red, orange, yellow, green, blue, indigo, violet). Drums are red, bass orange, melodic rhythm parts yellow, vocals green, leads blue, percussion indigo, and effects violet. When you have a lot of tracks, color-coding makes it easy to scroll to the correct section of the mixer (if scrolling is necessary, which I try to avoid if possible). WHY YOU NEED A DUAL MONITOR SETUP If you’re not using two (or even three) monitors, you’ll kick yourself when you finally get an additional monitor and realize just how much easier DAW-based mixing can be—especially with configurable mixers. Dedicate the second monitor to the mixer window and the main monitor to showing tracks, virtual instrument GUIs, etc., or stretch the mixer over both monitors to emulate old-school hardware-style mixing. 
Your graphics card will need to handle multiple monitors; most non-entry-level cards do these days, and some desktop and laptop computers have that capability “out of the box.” However, combining different monitor technologies can be problematic—for example, you might want to use an old 19” CRT monitor along with a new LCD monitor, only to find that the refresh rate has to be set to the lowest common frequency. If the LCD wants 60Hz, then you’re stuck with 60Hz (i.e., flicker city!) on the CRT. If possible, use matched monitors, or at least matching technology. CHANNEL STRIPS Several DAWs include channel strips with EQ and dynamics control (Fig. 4), or even more esoteric strips (e.g., a channel strip dedicated to drums or vocals). Fig. 4: Cakewalk Sonar X1 (left) and Propellerhead Reason (right) have sophisticated channel strips with EQ, dynamics control, and with X1, saturation. However, also note that third-party channel strips are available—see Fig. 5. Fig. 5: Channel strips, clockwise from top: iZotope Alloy, Waves Renaissance Channel, Universal Audio Neve 88RS. If there are certain settings you return to frequently (I’ve found particular settings that work well with my voice for narration, so I have a vocal channel strip narration preset), these can save time compared to inserting individual plug-ins. Although I often do make minor tweaks, it’s easier than starting from scratch. Even if you don’t have specific channel strips, many DAWs let you create track presets that include particular plug-in configurations. For example, I made a “virtual guitar rack” track preset designed specifically for processing guitar with an amp sim, compression, EQ, and spring reverb. 
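Under the hood, a channel strip or track preset is just a saved, ordered chain of processors. As a purely illustrative sketch (the processors here are made-up gain stages, not any DAW's actual plug-in API), the idea can be modeled like this:

```python
# Toy model of a track preset as an ordered processor chain.
# Processor names and behavior are illustrative assumptions only.

def make_gain(db):
    """A 'processor' that applies a fixed gain in dB to a buffer."""
    factor = 10 ** (db / 20)
    return lambda samples: [s * factor for s in samples]

def chain(*processors):
    """Compose processors in order, like plug-ins saved in a track preset."""
    def run(samples):
        for proc in processors:
            samples = proc(samples)
        return samples
    return run

# A "virtual guitar rack"-style preset: stages applied in series.
# A boost followed by an equal cut should round-trip the signal.
guitar_rack = chain(make_gain(+6.0), make_gain(-6.0))
print(guitar_rack([0.5]))  # approximately [0.5] again
```

The point is simply that order matters and the whole chain recalls as one unit—which is what makes presets like the "virtual guitar rack" a time-saver compared to inserting plug-ins one by one.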
BUSING There are three places to insert effects in a typical mixer:

- Channel inserts, where the effect processes only that channel
- Master inserts, where the processor affects the entire mix (e.g., overall limiting or EQ)
- Buses, where the processor affects anything feeding that bus

Proper busing can simplify the mixing process (Fig. 6), and make for a happier CPU. Fig. 6: Logic Pro’s “Inspector” for individual channels shows not only the channel’s level on the left, but also, on the right you’ll see the parameters for whatever send you select (or the output bus). In the days of hardware, busing was needed because unlike plug-ins, which you can instantiate until your CPU screams “no more,” a hardware processor could process only one signal path at a time. Therefore, to process multiple signals, you had to create a signal path that could mix together multiple signals—in other words, a bus that fed the processor. The most common effects bus application is reverb, for two reasons. First, high-quality reverbs (particularly convolution types) generally use a lot of CPU power, so you don’t want to open up multiple instances. Second, there’s an aesthetic issue. If you’re using reverb to give a feeling of music being in an acoustic space, it makes sense to have a single, common acoustic space. Increasing a channel’s reverb send places the sound more in the “back,” and less send places it more in the “front.” A variation on this theme is to have two reverb buses and two reverbs, one for sustained instruments and one for percussive instruments. Use two instances of the same reverb, with very similar settings except for diffusion. This is because you generally want lots of diffusion with percussive sounds to avoid hearing discrete echoes, and less diffusion with sustained instruments (like vocals or lead guitar) so that the reverb isn’t too “thick,” thus muddying the sustained sound. 
You’ll still have the feeling of a unified acoustic space, but with the advantage of being able to decide how you want to process individual tracks. Of course, effects buses aren’t good only for reverb. I sometimes put an effect with very light distortion in a bus, and feed in signals that need a little “crunch”—for example, adding a little grit to kick and bass can help them stand out more when playing the mix through speakers that lack bass response. Tempo-synched delay for dance music cuts also lends itself to busing, as you may want a similar rhythmic delay feel for multiple tracks. GROUPING Grouping is a way to let one fader control many faders, and there are two main ways of doing this. The classic example of old-school grouping is a drum set with multiple mics; once you nail the relative balance of the individual channels, you can send them to a bus, which allows raising and lowering the level of all mics with a single control. With this method, the individual fader levels don’t change. The other option is not to use a bus, but assign all the faders to a group (Fig. 7). Fig. 7: In PreSonus Studio One Pro, the top three tracks have been selected, and are about to be grouped so edits applied to one track apply to the other grouped tracks. In this case, moving one fader causes all the other faders to follow. Furthermore, with virtual mixers it’s often possible to choose whether group fader levels move linearly or ratiometrically. With a linear change, moving one fader a certain number of dB raises or lowers all faders by the same number of dB. When using ratiometric changes, raising or lowering a fader’s level by a certain percentage raises or lowers all grouped fader levels by the same percentage, not by a specific number of dB. In almost all cases you’ll want to choose a ratiometric response. 
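Group moves, level creep, and output headroom all come down to the same dB arithmetic, so a small converter helps when reasoning about fader math. A minimal sketch (the example values are illustrative):

```python
import math

def db_to_gain(db):
    """Convert a level change in dB to a linear amplitude gain factor."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Convert a linear amplitude gain factor to dB."""
    return 20 * math.log10(gain)

print(round(db_to_gain(-6.0), 3))  # 0.501: -6 dB is roughly half amplitude
print(round(gain_to_db(2.0), 2))   # 6.02: doubling amplitude adds about 6 dB
```

Handy, for example, when deciding how far to pull a temporary group down to tame level creep.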
Another use for grouping is to fight “level creep” where you raise the level of one track, then another, and then another, until you find the master is creeping up to zero or even exceeding it (see the section on Gain-Staging). Temporarily group all the faders ratiometrically, then bring them down (or up, if your level creep went in the opposite direction) until the output level is in the right range. CONTROL SURFACES Yes, I know people mix with a mouse. But I highly recommend using a control surface not because I was raised with hardware mixers, but because a control surface is a “parallel interface”—you can control multiple aspects of your mix simultaneously—whereas a mouse is more like a serial interface, where you can control only one aspect of a mix at a time. Furthermore, I prefer a mix to be a performance. You can add a lot more life to a mix by using faders not just to set static levels, but to add dynamic and rhythmic variations (i.e., moving faders subtly in time with the music) that impart life and motion to the mix. In any event, you have a lot of options when it comes to control surfaces (Fig. 8). Fig. 8: A variety of hands-on controllers. Clockwise from upper left: Behringer BCF2000, Novation Nocturn, Avid MC Mix, and Frontier Design AlphaTrack. One option is to use a control surface, dedicated to mixing functions, that produces control signals your DAW can interpret and understand. Typical models include the Avid Artist Series (formerly from Euphonix), Mackie Control, Cakewalk VS-700C, Behringer BCF2000, Alesis Master Control, etc. The more advanced models use motorized faders, which simplify the mixing process because you can overdub automation moves just by grabbing faders and punching in. If that option is too expensive, there are less costly alternatives, like the Frontier Design AlphaTrack, PreSonus Faderport, Cakewalk VS-20 for guitarists, and the like. 
These generally have fewer faders and options, but are still more tactile than using a mouse. There’s yet another option that might work even better for you: An analog or digital mixer. I first got turned on to this back in the (very) early days of DAWs, when I had a Panasonic DA7 digital mixer. It had great EQ and dynamics that often sounded better than what was built into DAWs, as well as motorized faders and decent hardware busing options. It also had two ADAT cards so I could run 16 digital audio channels into the mixer, and I used the Creamware SCOPE interface with two ADAT outs. So, I could assign tracks to the SCOPE ADAT outs, feed these into the DA7, and mix using the DA7. Syncing the motorized fader moves to the DAW allowed for automated mixes. This had several advantages, starting with hands-on control. Also, by using the DA7’s internal effects, I not only had better sound quality but lightened the computer’s CPU load. And it was easier to interface hardware processors with the DA7 compared to interfacing them with a DAW (although most current DAWs make it easy to treat outboard hardware gear like plug-ins if your audio interface can dedicate I/O to the processors). Finally, the DA7 had a MIDI control layer, so it was even possible to control MIDI parameters in virtual instruments and effects plug-ins from the same control surface that was doing the mixing. While the DA7 is long gone, Yamaha offers the 01V96VCM and 02R96VCM digital mixers, which offer the same general advantages; also check out the StudioLive series from PreSonus. However, that’s just one way to deal with deploying a control surface. You can use a high-quality analog mixer, or something like the Dangerous Music 2-BUS and D-BOX. Analog mixing has a somewhat different sonic character compared to digital mixing, although I wouldn’t go so far as to say one is inherently better than the other (it’s more like a Strat vs. Les Paul situation—different strokes for different folks). 
The main issue will be I/O limitations, because you have to get the audio out of the DAW and into the mixer. If you have 43 tracks and your interface has only 8 discrete outs—trouble. The workaround is to create stems by assigning related tracks (e.g. drums, background vocals, rhythm guitars, etc.) to buses, then sending the bus outputs to the interface. In some ways this is a fun way to mix, as you have a more limited set of controls and it’s harder to get “lost in the tracks.” Today’s FireWire and USB 2.0 mixers (M-Audio, Alesis, Phonic, Mackie, etc.) can provide a best-of-both-worlds option. These are basically traditional mixers that can also act as DAW interfaces—and while recording, they have enough inputs to record a multi-miked drum set and several other instruments simultaneously. Similarly, when it’s time to mix you might have enough channels to mix each channel individually, or at least mix a combination of individual channels and stems. SCREEN SETS Different programs call this concept by different names, but basically, it’s about being able to call up a particular configuration of windows with a simple keyboard shortcut or menu item (Fig. 9) so you can switch instantly among various views. Fig. 9: Logic Pro 9’s Screensets get their own menu for quick recall and switching among views. Like many of today’s DAW features (track icons, color-coding, configuring mixers, and the like) it requires some time and thought to create a useful collection of screen sets, so some people don’t bother. But this initial time investment is well worth it, because you’ll save far more time in the future. Think of how often you’ve needed to leave a mixer view to do a quick edit in the track or arrange view: You resize, move windows, change window sizes, make your changes, then resize and move all over again to get back to where you were. 
It’s so much simpler to have a keyboard shortcut that says “hide the mixer, pull up the arranger view, and have the piano roll editing window ready to go” and, after doing your edits, another shortcut that says “hide all that other stuff and just give me the mixer.” DIGITAL METERING LIMITATIONS And finally . . . they may be digital, but you can’t always trust digital metering: As just one example, to indicate clipping, digital meters sometimes require that several consecutive samples clip. Therefore, if only a few samples clip at a time, your meters may not indicate that clipping has occurred. Also, not all digital gear is totally consistent—especially hardware. In theory, a full-strength digital signal where all the bits are “1” should always read 0 dB; however, some designers provide a little headroom before clipping actually occurs—a signal that causes a digital mixer to hit -1dB might show as 0dB on your DAW. It's a good idea to use a test tone to check out the metering characteristics of all your digital gear. Here are the steps:

1. Set a sine wave test tone oscillator to about 1 kHz, or play a synthesizer sine wave two octaves above middle C (a little over 1 kHz).
2. Send this signal into an analog-to-digital converter.
3. Patch the A/D converter's digital out to the digital in of the device you want to measure.
4. Adjust the oscillator signal level until the indicator for the device being tested just hits -6dB. Be careful not to change the oscillator signal level!
5. Repeat step 3 for any other digital audio devices you want to test.

In theory, all your other gear should indicate -6dB; if not, note any variations in your studio notebook for future reference. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
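As a numerical footnote to the metering test described above: the procedure can be mimicked in software to see what a -6 dB test tone looks like as numbers. This sketch assumes a 48 kHz sample rate and an ideal signal path (both assumptions for illustration only; real converters and meters are exactly what the hardware test is probing):

```python
import math

SAMPLE_RATE = 48000  # assumed sample rate for illustration

def sine(freq_hz, level_dbfs, seconds=0.1):
    """Generate a sine test tone at a given peak level in dBFS."""
    amp = 10 ** (level_dbfs / 20)  # -6 dBFS -> peak amplitude ~0.501
    n = int(SAMPLE_RATE * seconds)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def meter_dbfs(samples):
    """A naive peak meter: report the buffer's peak level in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

# ~1 kHz, as in the article (two octaves above middle C is ~1046 Hz)
tone = sine(1000.0, -6.0)
print(round(meter_dbfs(tone), 1))  # -6.0
```

In the real-world version, any device whose meter reads something other than -6 dB with this tone is revealing its own headroom conventions—which is exactly the variation worth writing down in your studio notebook.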
  15. Can’t afford recording school? Then let Alan Parsons be your personal tutor 3-DVD set $149 MSRP, $134.10 street. Also available as 24 downloads ($99), individual section downloads ($4.99), and individual section stream ($1.99), www.artandscienceofsound.com by Craig Anderton You definitely don’t want to ask me to review an instructional video. Why? Because I make instructional videos, and while I wouldn’t say I’m hypercritical...well actually, I would say I’m hypercritical. So I also don’t review a lot of instructional videos, because I don’t want it to look like I’m dissing fellow travelers. We all do the best we can, and I’m sure people have learned a lot from videos I’ve considered not all that wonderful. Yet I’m reviewing this one, because it is, in a word, outstanding. ALAN PARSONS' PROJECT Much of what elevates this video above the norm is Alan Parsons himself. He doesn’t come across as “Alan Parsons the Respected Engineer,” but more like a neighbor who heard your band practicing, and came over to introduce himself and shoot the breeze about music. Or at least, a neighbor who knows a helluva lot about sound and recording. But, it’s not a solo effort by any means. Parsons enlists many people who may not be household names, but are on the same level as he is in their respective fields—for example, I was pretty blown away to see Chris Pelonis talking about studio design. You may not know who he is, but I guarantee you’ve heard music made in studios he designed. The pacing of the video is relaxed—it’s not cut for the attention deficit generation—but there isn’t any wasted time, either, and the flow is consistent. Which brings up another reason why this video excels: the production. 
From the scripting, to the audio quality (not unexpected, I guess!), to judicious use of computer graphics to get points across, to subtitles that emphasize certain points, the overall approach blends sound, stills, animation, and live footage to create a well-oiled approach to learning. Yes, there are the occasional attempts at gratuitous humor, but surprisingly, quite a few of them succeed. One more point before getting into the content: This isn’t just practical tips, or just philosophy, but a combination of the two. As a result, almost everything is given a context that’s bigger than just the topic being discussed. For example, when talking about how to optimize a space for a studio, it’s in the context of sound as much as it is recording, while referencing back to great studios of the past. As a result the video is friendly to beginners, while containing enough practical techniques to be useful to veterans. WHAT’S IN IT There’s a ton of information about what’s on the video at the home website for the project, www.artandscienceofsound.com. I checked over their descriptions of the various chapters (scroll to the bottom of the page), and they are both accurate and hype-free—which is in keeping with the nature of the project itself. So, while I’ll cover the topics that are covered, it will be more from evaluating how they’re done than simply describing them . . . the website covers that well. There are three DVDs that take about six hours to watch—and yes, I watched every minute. (You can also stream individual chapters à la carte, or stream all of them at a reduced price, from the ASSR website.) This wasn’t just out of a sense of journalistic duty because I was writing this review, but because it’s really interesting and educational (one great tip: track lighting is good in the studio because it minimizes the number of holes that go into the outside world). 
Sure, I know plenty about recording—but never thought about the subject in some of the ways presented in the video. For example, while I know you want to minimize sound leakage in a studio, the video expressed this as imagining your studio filled with water, and looking for where it leaks. I’d never thought of it that way. DVD 1 (TOOLS) After the introduction and a bit of history, the first “practical” chapter deals with studio acoustics. It combines theory with practical implementation, and manages to pack a lot of useful, and very specific, advice into a compact presentation. Even though you might not be building a studio from scratch, some of the concepts discussed here are suitable for retrofitting an existing space, down to recommending bookshelves packed with books as excellent diffusers. An acoustic expert from Auralex is brought in to discuss acoustics, but doesn’t plug his products once—you wouldn’t know he was from Auralex had he not been identified as such. Does the video substitute for a complete book on acoustics? No, but it comes a lot closer to that goal than you might think, as it concentrates on those elements and principles that are most important. After acoustics, Parsons goes into microphones—including the usual nod to history, and explaining how we got to the current state of the art—while explaining the different types, and how they work. This is relatively basic, but of course, essential for those getting into recording. Given how many people now record direct into computers, it’s probably a useful refresher course. From there, he gets into mic placement and mic technique, but also includes a lot of information on miking philosophies—from ambience, to how much time you should spend auditioning mics—then proceeds to mic pres. Next up: Consoles and controllers. Parsons jumps right into analog vs. digital consoles, without advocating any particular agenda. 
This leads into a discussion of automation (including a cameo from Jack Joseph Puig, who appears several times in the video), with a nod toward in-the-box compared to an analog console. Again, Parsons presents both sides, without either the “digital fanboy” or “I’m-better-than-you-because-I-do-analog” vibe. He even gets into the DAW vs. analog summing controversy by setting up an A-B comparison. While he doesn’t talk explicitly about how to do A-B tests, he does exactly that and, with his actions, demonstrates good practice for conducting this type of test. You then hear both in-the-box and analog summed mixes, and Parsons invites you to decide for yourself if you can hear a difference. (If you really want to get into it, full bandwidth, hi-res mixes are available online. I still think how good a chorus you write has a lot more influence on how people perceive the sound, but that’s my opinion!) Then Parsons examines the mixer, going through buses, EQ, aux sends, dynamics control, pre-fade listen, solo, mute, pre/post-fader switching, and so on. This is very much “overview”-type material, so if you’re into recording, you won’t learn much from this section but again, for those on a more elementary level this will tie together a lot of useful concepts. This segues into meters, and given that they don’t get the attention they deserve, the explanation here is both clear and definitive. Although he doesn’t get into inter-sample distortion, or the relationship in levels between channel faders and the master, he does explain that you need headroom, and that you shouldn’t treat digital zero as your reference “zero,” but rather a level somewhat below it. He also gets into one of my favorite topics—why you want a control surface with DAWs. DAWs come next, and Parsons gives equal credit to Windows (his traditional choice) and Mac, which he also uses. This is mostly about the components that make up a system—computer, interface, and software. 
He then goes into digital interconnections, and even touches on dongles and authorizations before getting into how all the pieces work together, then closes out the section with the story behind software and touches very briefly on mastering (he goes to a pro, implying that maybe you should too). Next comes monitoring, and Parsons digs into it, including headphones, near-fields, directionality, and also considers critical listening. This section will help de-mystify these subjects for a lot of people; the “man on the street” interviews are hilarious. DVD 1 closes out with MIDI, and Parsons covers the subject with reasonable depth—a good move, considering that many people watching this DVD were probably born after MIDI appeared, and thus missed the original flurry of books and articles. DVD 2 (TIPS) This moves from theory to practice, and starts with EQ. Parsons covers what EQ is, then different types of EQ, and moves into strategically important frequencies for a variety of instruments—bass, drums, guitar, wind, percussion, strings, piano, vocals, etc. He also dips into a Q&A format at one point about general EQ questions. He then gets into specific problems that can be solved with EQ, like hum removal, making particular sounds stand out or recede more in the background, etc. So what comes after EQ? Dynamics, of course. Here Parsons gets into parameters and controls, as well as hardware vs. software, but also covers the application of compression to various instrument categories. Limiting, de-essing, and multiband dynamics are covered as well. Of course, no modern treatment of dynamics can avoid the subject of the loudness wars. But Parsons, always the gentleman, does so in an objective way that sheds more light than heat on the topic. While it’s clear where his personal philosophy lies, it’s also clear that he has an open mind about those who think otherwise. 
Next comes noise gating, and Parsons digs a little deeper than you might expect, covering sidechaining for both gating and noise reduction, using attack times to add swells, and ways to eliminate leakage between drums. The latter is done with a fairly extensive hands-on demo, which I’m sure those who haven’t used noise gates to reduce leakage would find instructive. Of course, gated reverb gets its due, too, as does using sidechaining with noise gates to create tremolo effects. As to reverb, Parsons covers everything from springs, to halls, to digital, and delves into when you’d use pre- and post-settings, as well as several other considerations with send and insert applications. He also talks about which instruments are best-suited to reverb, and typical settings. From there it’s a short hop to delay, from tape to digital, but he also mentions tape loops, includes demos of delay-based effects including flanging, phasing, and chorusing, and gets into the calculations involved in keying delay times to rhythm. The section closes out with applications and tips relating to delay. With processors out of the way, the DVD “zooms out” and gets into the process of tracking a band in a conventional studio session. This includes planning the session (where to position people, cabling, mics, separation, who goes in the isolation booths, etc.), considerations about arrangement and production, and shows the personal dynamics of a band doing live recording. This section is one of the few that drags a little bit—okay, the players are playing, we get it—but some of the comments about the value of playing together as a band, and the virtues of recording a live performance, are indeed worth mentioning. Also, probably quite a few people who buy this DVD series will have never had the opportunity to play sessions in a pro studio with pro studio musicians, and it might be a revelation for explaining why this approach produced such classic recordings. 
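The delay-time arithmetic mentioned above (keying delay times to rhythm) is worth spelling out, since it comes up constantly with tempo-synched effects: a quarter note lasts 60,000 / BPM milliseconds, and other note values scale from there. A minimal sketch (the tempos and note values are illustrative):

```python
def delay_ms(bpm, note=1/4, dotted=False):
    """Delay time in milliseconds for a note value at a given tempo.

    note is a fraction of a whole note: 1/4 = quarter, 1/8 = eighth, etc.
    A dotted note lasts 1.5x the plain note's duration.
    """
    quarter_ms = 60000.0 / bpm           # one beat (quarter note) in ms
    ms = quarter_ms * (note / (1 / 4))   # scale relative to a quarter note
    return ms * 1.5 if dotted else ms

print(delay_ms(120))                         # 500.0 ms quarter note at 120 BPM
print(delay_ms(120, note=1/8, dotted=True))  # 375.0 ms dotted eighth
```

Set a delay to one of these values and the repeats lock to the groove instead of smearing it.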
Next up is vocals, and this section covers a lot about the importance of capturing the magic. This involves preparation (like determining whether the singer sings softly or loudly, making sure you’ve chosen the right key for the singer, whether the singer has warmed up sufficiently, and the like). Part of it also covers how to make vocals more expressive, which of course has nothing to do with gear. Then comes the process of choosing a mic, part of which involves John McBride talking about which mics work well and where to put the pop screen; it even gets a bit into optimal processing for different mics. One of the cool stories (and a good example of the kind of insights sprinkled throughout the DVD) is when Sylvia Massey talks about a blind test they did with Billy Corgan of Smashing Pumpkins with about $30K worth of mics, and they ended up deciding the SM58 was the best. There’s some discussion of what makes a vocal chain, and managing dynamics with vocals—with an emphasis on using good mic technique (arguably the best way to manage dynamics, if the singer is good at what they do). And in case you wondered—Alan Parsons seldom goes beyond 10dB of limiting with vocals. The vocal section closes out with issues about headphone mixes with singing. The second DVD ends with coverage of internet recording. Parsons is honest about the limitations of latency and such, and mentions that this kind of technology is in its infancy. Still, it’s a worthwhile inclusion to give an idea of where things are going. As with the section on tracking a session, this wasn’t tremendously interesting to me, but I would think that for those who haven’t worked with producers or done typical sessions, having a glimpse into the workflow and process would be illuminating. 
DVD 3 (TECHNIQUES) The third DVD deals with techniques involving specific instruments, and is broken out into nine sections: drums, keyboards, bass, guitar, acoustic guitar with voice, recording a choir, approaches to live recording, mixing, and dealing with disasters. British drummer Simon Phillips is the go-to guy for the drums section, and talks about recording his instrument of choice with a personal, yet authoritative, style. Taylor Hawkins and Sylvia Massey pitch in with additional insights. The drums section covers kick, snare, toms, hi-hat, cymbals, etc., and includes really useful information about making drums sound good before you even stick a mic in front of them—although of course, it then gets into miking (not just individual drums, but overheads as well). If you’d never recorded drums before and watched this section of the video, you’d know enough to get by. Although a lot of Phillips’ material is based on his personal opinions, they’re informed opinions that have been proven over the years. Next up: keyboards, with a lot devoted to recording MIDI vs. recording audio, as well as a discussion of virtual vs. vintage instruments. There are some interesting comments from Dave Smith about why there’s a resurgence in analog synths. Naturally there’s info on miking acoustic piano, synthesizers, sampling, and the like. Bass follows, which starts with a cool little story from Carol Kaye (I won’t spoil it for you). It then goes into DI recording, but there’s a lot of emphasis on playing technique, picks, and strings. There’s also discussion of bass in a mix and recording context, like the “kick vs. bass” discussion. After bass comes acoustic and electric guitars. There’s some interesting material about single-coil vs. humbucking pickups, discussion of amps/cabs/speakers, and the like. Parsons explains why he prefers condensers for miking amps over dynamics, as well as miking positions. He even covers hardware devices like the SansAmp and amp sims. 
Guitarist Tim Pierce talks about modelers and, interestingly, about the “harsh frequencies” that have always bugged me, and that I’ve explained how to fix in articles (maybe someone can introduce me to him at NAMM, and I can tell him how to get his amp sims to sound right!). Pierce also talks about issues involving pedalboards, and seems to have the same affection for dotted eighth note delays that I do. One interesting comment is how guitar solos fell out of fashion after Nirvana hit the scene, but are starting to return thanks to “Guitar Hero”-type games. The series has lots of little tidbits like this sprinkled throughout, but this one in particular struck me as worth mentioning. Of course they cover tuning, but some of the viewpoints relate to how to accommodate the fact that guitars can never be in tune.

Interestingly, the next section is devoted to recording acoustic guitar with vocal—a common scenario, but I didn’t expect it to get its own treatment. This section is pretty compact, but helpful. Next up is . . . choirs, which of course are not exactly something we record every day. But really, this section is more about live recordings of groups, as you might find in schools and churches. Parsons also uses it as a way to get into mid-side recording, as well as other miking techniques for the piano and various parts of the choir. If you’re tempted to skip this part, don’t—it’s loaded with useful information on various aspects of live miking, including room mics; the section on how he mixed the various mics and the choices he made, along with additional info on mid-side recording, is both interesting and very real-world. The next section digs into live recording in more detail, including issues like the backline, how to deal with separation, the difference between mixing for the audience and the monitor mix, and the like. The next to last section is about mixing. 
It covers more than just mixing, though—there’s quite a bit about the current practice of transporting projects, and using other or even multiple mix engineers. Thankfully, Parsons brings up the topic of people who receive material to mix without notes; a small consolation I learned from this section is that it drives other people as crazy as it drives me (I just got an 85-track project in for remixing, with the only notes being the track names—ugh). Other topics include mixing in the box, automation, analog mixers, the order of adding sounds together (which of course is a matter of preference), stereo placement, processing, grouping, and the like. As befits the subject matter, this is a fairly lengthy section with lots of good advice; there’s even some time devoted to mixing to tape. The section concludes with the final mix (and video) of “All Our Yesterdays,” a song that was written specifically for this project, and used throughout for examples. The final “Bonus Track” is called Dealing with Disasters, which covers some studio horror stories. Fun stuff. The one curious omission is that mastering is not addressed at all, either as a discipline unto itself or in terms of how to mix so as to give the best possible options to the mastering engineer. (Presumably this omission exists so that people will have to see my six "Mastering in Studio One" videos instead.)

CONCLUSIONS

There’s no doubt this is a highly educational video that’s loaded with useful info and insights, put together in a way that somehow manages to be both casual and professional at the same time. It really is a comprehensive overview, and aside from the mastering aspects, I don’t feel anything truly significant is overlooked or missing. But there’s something else about this video that provides a sort of hidden value. 
Many who are currently working “in the box” at home studios have never had the “big studio” experience—the collaboration among multiple talented people, using no-compromise gear, with a talented engineer steering the ship. ASSR gives an insight into that process, and if the DVD is an advertisement for anything, it’s an advertisement for the power of collaboration. Parsons practices what he preaches, bringing in luminaries not just for their names, but for their insights. All of them come across as real and sympathetic, not didactic people who are impressed with themselves and “know better than you do.” Just like Alan Parsons, and just like this video. They got it right.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  16. Ozone 5 is a sophisticated mastering tool—but these five tweaks are both simple and effective

by Craig Anderton

Mastering is a complex art that, above all, requires sensitive ears and sophisticated tools. While Ozone 5 definitely raises the bar compared to Ozone 4, that doesn't mean you have to use all the new features—sometimes the tried-and-true, basic mastering techniques described in this article are all you need to make a major improvement in the sound of a stereo mix.

1 NARROW THE BASS

While many people think of stereo imaging processors as a way to expand the stereo image, with Ozone’s implementation you can also use them to narrow a frequency band’s stereo image by pulling its slider negative. This is particularly useful with bass, which you usually want centered in a mix (and always want centered if you’re cutting to vinyl). Drag the band splitter so the lowest band covers 20Hz to about 50-100Hz, depending on what you’re mastering. Then, pull down the band’s slider—I usually bring it down all the way to -100.0% so that the bass is really glued to center.

2 REDUCE THE MUD

One popular mastering tweak is to apply what’s nicknamed the “smile” curve—a high-frequency lift to increase definition, and some bass for power. Typically, the tool of choice is shelving EQ. While this can be effective, in many cases what you really want isn’t so much to boost the bass and treble as to cut the lower mids. Many instruments have energy in the 200-500Hz range, which can create a buildup that “muddies” the sound. A broad, shallow cut in this range emphasizes the highs and lows, which can create a more effective curve than boosting them with shelving.

3 TAME THE HIGHS

Musicians sometimes hear a “brittleness” in digital recordings. 
Rather than being caused by any inherent issues with digital technology, this can be due to things like high-frequency artifacts from digital instruments (samplers, drum machines, etc.), or unintentional distortion caused by not allowing enough headroom while recording. You can often “warm up” a master by applying the lowpass filter (in flat mode). Set the filter to the highest possible frequency and steepest slope, then start lowering the frequency. Usually there will be a “sweet spot” that reduces the brittleness, but doesn’t dull the sound. (By the way, don’t use Ozone 5’s brickwall or resonant filter responses—a standard rolloff seems to work best for this.)

4 TRY THE “ALTERNATIVE EQ”

Don’t always reach for EQ first to bring out vocals and add midrange definition. While the Harmonic Exciter’s usual function is to give a sweet high frequency lift, applying this effect to the upper midrange can increase intelligibility and definition dramatically for vocals and other midrange instruments. However, note that a little goes a long way—apply the Harmonic Exciter subtly, otherwise it can add harshness.

5 HIT THE HALFWAY MARK

One of the most important considerations to remember about mastering is that even small changes can have a major impact. If you add one dB of midrange boost to, say, a drum or vocal track, you won’t hear too much difference—but add one dB of boost to a stereo mix, and you’ve done the equivalent of adding one dB of boost to every single track. For example, a common mastering technique is to find resonances by sweeping a parametric stage, set for a narrow Q and high gain, over the full frequency range. Sometimes particular frequencies will “jump out,” and cutting a little bit at a given frequency will bring the resonance into balance with the rest of the track. Or, you might want to boost the treble a bit for a brighter sound, use limiting to squeeze the dynamic range, alter stereo imaging, or try any combination of techniques to improve the sound. 
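As an aside, that resonance-hunting sweep is easy to prototype offline if you want to see why it works. Here's a minimal sketch in plain NumPy using a generic RBJ-cookbook peaking filter; the function names are my own, and this is an illustration of the technique, not Ozone's actual algorithm:

```python
import numpy as np

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ-cookbook peaking EQ; returns normalized (b, a) coefficients."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def apply_biquad(b, a, x):
    """Run one biquad over a signal (transposed direct form II)."""
    y = np.zeros(len(x))
    z1 = z2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y[n] = yn
    return y

def find_resonance(x, fs, gain_db=12.0, q=8.0):
    """Sweep a narrow, high-gain boost across the spectrum; the center
    frequency that raises the RMS level the most is the likeliest
    resonance candidate (which you'd then cut slightly, not boost)."""
    candidates = np.geomspace(100.0, 8000.0, 20)
    rms = [np.sqrt(np.mean(apply_biquad(*peaking_biquad(fs, f0, gain_db, q), x) ** 2))
           for f0 in candidates]
    return candidates[int(np.argmax(rms))]
```

In a real mastering session you'd do this by ear, slowly sweeping the boosted band; the code simply demonstrates why the offending frequency "jumps out" when the narrow boost passes over it.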
For those getting into mastering, I usually advise making a change that sounds right, but then cutting that amount in half. For example, if you think something needs 3dB of boost at 4kHz, change it to 1.5dB. Ozone has sliders for each processor, as well as a master slider, which can make proportional adjustments of all parameters simultaneously; still, I'd recommend doing the "halfway" adjustment after adjusting a single parameter. It takes a while for your ears to become acclimated to changes in the sound, so live with the lesser setting for a while before deciding you actually need more. This also prevents “EQ creep,” where you increase the treble so now the bass seems low, whereupon you increase the bass but now the midrange seems low, and so on.

Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
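Incidentally, the band narrowing in tip 1 is, under the hood, just mid/side math applied to one frequency band. Here's a toy illustration of the concept in NumPy; this is my own sketch with a crude one-pole crossover, not iZotope's implementation, and `width=0.0` is meant to correspond to the -100% slider:

```python
import numpy as np

def narrow_low_band(left, right, fs, cutoff=80.0, width=0.0):
    """Collapse the stereo image below `cutoff` Hz toward mono.
    width=0.0 gives fully mono lows (like pulling the slider to -100%);
    width=1.0 leaves the image untouched."""
    a = np.exp(-2.0 * np.pi * cutoff / fs)  # one-pole lowpass coefficient

    def lowpass(x):
        y = np.zeros(len(x))
        state = 0.0
        for n, xn in enumerate(x):
            state = (1.0 - a) * xn + a * state
            y[n] = state
        return y

    low_l, low_r = lowpass(left), lowpass(right)
    high_l, high_r = left - low_l, right - low_r   # everything above the band
    mid = 0.5 * (low_l + low_r)                    # what the channels share
    side = 0.5 * (low_l - low_r) * width           # shrink what they don't
    return high_l + mid + side, high_r + mid - side
```

Feed it a mix with bass panned off-center and the lows snap to the middle while the rest of the spectrum keeps its width, which is exactly what you want before cutting to vinyl.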
  17. If you're a guitar player, it's not just an iPhone world

By Craig Anderton

Despite the iPhone juggernaut, Android smartphones are not exactly also-rans, and are becoming increasingly popular. Although there may not be as many apps as there are for the iPhone—at least not yet—there are plenty of useful musical programs available for free, often upgradeable to additional functionality for a nominal fee. Your phone can record your riffs, help tune your guitar, and even provide chord patterns to play against. Unlike the closed iPhone, though, there are many different Androids, and the operating system version is not the same across all of them—some can’t even support the latest OS versions. As a result, not all apps work on all phones. Luckily, if you read the comments sections for various programs in the Android market, users are often considerate enough to mention which phone they’re using when saying that a particular program does or doesn't work. These programs were all tested on a V1.5 Motorola Backflip, so if a program works with an Android that ancient and quirky, it will almost certainly work with more modern versions; if it doesn’t work, you can always uninstall it. But also note that when updating, sometimes the update will appear to be non-functional. In many cases, simply uninstalling and re-installing the program will solve the problem (sort of like trashing a preferences folder on the Mac). Now, let's check out the apps. Click on the app name to go to a web site with more information, or the appropriate page in the Android market; except for the first two apps, you can click on the image to enlarge it.

gStrings

Since installing this, my acoustic guitar is never out of tune. gStrings is not only a useful and accurate tuner, but also provides a “pitch pipe” function. 
You can optimize the response for specific instruments, change the base tuning reference to something other than A=440, select from a number of alternate tunings, tweak precision and responsiveness, and adjust mic sensitivity. Extra credit: There’s an accessibility option that provides audible feedback for blind musicians.

Solo Lite

What attracts casual users to this program is that you can choose chords, and “strum” them on the screen’s virtual guitar (switchable to lefty, too). That’s fun, but check out the Chord Library page, which is like having one of those “1001 Chords” theory books sitting in your phone. Can’t remember the fingering for an E9b5? No problem. You can choose a chord, see the fingering on a virtual fretboard, and if you tap the chord, it plays.

Hertz

Most Android audio recording apps aren’t really “hi-fi” because they’re designed to record phone conversations, or be more of a memo-taking program, and record to the sonically compromised 3GPP format. But this no-frills app can record 44.1kHz WAV files to a built-in SD card without data compression; the quality is outstanding compared to the usual apps. For electric guitar, I tested this with Peavey’s Ampkit LiNK—it works great as an Android audio interface (although you need to adjust the input level on the guitar itself, and of course, Android can’t use the associated iPhone amp sim app). However, if you want to save memory, you can ratchet the sample rate all the way down to 8kHz.

Mobile Metronome

This elegant app offers tap tempo, support for just about any time signature, choice of various metronome sounds, beat division, visual beat counter, first beat accent, and the option to change sounds. The timing is solid, too; it's a great little "practice assistant" to have sitting in your phone.

Chordbot Lite

Here’s another stellar practicing tool: Create a chord progression, choose the tempo, then play back the results with various instrument sounds. 
Each chord change “step” lasts one measure (although you can add steps to lengthen the number of measures), with 60 different chord types and 16 different time signatures, which can be different per step. The full version ($5) lets you randomize progressions, as well as export them as WAV or MIDI files.

Robotic Guitarist

Like Solo, this lets you “strum” chords on the touch screen. Seven chords are available, as selected by buttons toward the left; you can choose from 13 different chord types. But go into the options menu, and you can select three different sounds (acoustic guitar, electric guitar, and piano), 9 chord presets of 7 chords that work well together, and various preferences (e.g., lefty or righty, whether the chord pattern is superimposed on the strings, etc.). You can also call up a convenient metronome and tuner.

Guitar Chords Lite

This is a chord library of over 400 chords, with variations, displayed in standard tab. It resembles the Chord Library page of Solo Lite, but has the limitation of not letting you actually play and hear the chord—it only displays it. In that respect, it’s more like a book; but the option to see variations is very useful. Think of it as a replacement for a chord book, and you’ll dig it.

Ethereal Dialpad

There’s more to life than guitars, so these last two apps are more general, “fun” music-making programs. With Ethereal Dialpad, you drag your finger around the touch screen, which plays melodies using whatever pitch-quantized scale you’ve selected. There are additional options, like delay and flanger effects, and four graphic “looks.” This is your basic “automatic new age noodling” program that can be, among other things, a pretty good stress reliever.

MusicGrid

This step sequencer was inspired by ToneMatrix and the Yamaha Tenori-On. It’s simple, but fun and addictive; the developer says more updates are on the way, so it will be interesting to see how this app develops. 
Meanwhile, when stuck on the tarmac while waiting for a plane to take off, this will keep you occupied for at least a few minutes. If you like these, consider donating or buying the full versions that offer more features—encourage these people to keep developing cool apps!

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  18. By Jared Cobb

Originally Published in the April 2007 issue of DRUM! Magazine

Down and out, dead in the water, nowhere to run. Bay Area metal band Machine Head carries a back-breaking metaphorical suitcase full of Been There Done That Got Screwed dirty laundry — most of which had the band teetering on the brink of extinction — through every rehearsal, every recording session, and every concert they play. The blistering speed, the violent rage, the wicked words are all testament to the tumultuous times the Machine has endured. Most bands would break, all bands would bend. But this band is no band at all — they’re a monstrous mechanical middle finger spitting boiling blood in the face of adversity. And then begging for more.

Gluttons For Punishment

Why else would a controversial metal band with a rocky past, yet on the cusp of a potential platinum resurgence, open up their highly anticipated new album with a 10-minute, 27-second Civil War—themed grand proclamation? The Blackening welcomes listeners with the ferocious sprawl of “Clenching The Fists Of Dissent,” one of four tracks on the eight-song album exceeding nine minutes in length. The track has more ingredients than a street vendor hotdog, more changes than Michael Jackson’s profile. Yet the album’s welcoming mat, like The Blackening as a whole, somehow works. It could be the colossal song structures or the infuriating guitar play or the bloodthirsty vocals that separate this Machine from the countless other screaming bags of hate available on today’s metal market. But, eh, we don’t think so. Just a quick listen and Machine Head’s distinction jumps out and kicks you square in the face. It’s the drums, man. It’s the drums. Dave McClain is the Hemi engine under the hood of the Machine. All piss and pistons, he’s a beastly basher with venomous speed and thunderous power yet enough creative deceit to effectively juggle these razor blade drum parts without so much as a paper cut finger. 
Crisp and strong, complex and blinding fast, McClain is a determined woodshedder armed to the teeth with talent. And his work on The Blackening could certainly be his best yet, as he and his bandmates felt a newfound freedom while taking on the album. “We went through so much stuff as a band for our last album,” sighs McClain, articulate and humble. Quieter than you’d think. “I think we went through more stuff during a six-month period than most bands go through their entire career. There was a time we didn’t have a record label, we just lost a guitar player, we didn’t have anything. What are we going to do? Well, we could fold it all in right then and there, or we could keep going. [Expletive] it, let’s keep going. So we just started writing stuff. “It was almost a freedom, not having to worry about songs going too long or making a single or all that stuff. And Through The Ashes Of Empires was received so well, like a rebirth almost. So we wanted to take that same approach and carry it over to this album. It felt good, it was fun, and we wanted to take it to a new level.” The band will usually spend about a year writing songs for an album, and The Blackening was no different. While singer/guitarist/producer Robert Flynn is the undeniable driver of the Machine, the creative process is nonetheless a highly collaborative effort. “Everybody brings in riffs,” McClain explains. “I play guitar too and write stuff, so I’ll bring in some ideas, along with everybody else. We have a dry erase board in the practice room, and we start naming riffs. Like we had this one riff called “Iron Russian” because it sounded kind of Russian and like an Iron Maiden riff. So we give them stupid names and go from there, trying to put the songs together. Over the year of songwriting, we definitely go through a lot of demo processes — probably three or four times.” Even McClain’s drum parts are open for collaboration. 
It takes a true pro to welcome suggestions from fellow band members (especially from guitar players), but McClain considers these suggestions essential as well as comical. “Rob and I are always working on the drum stuff. He thinks he’s a drummer. He’s kind of a closet drummer, and sometimes he’ll have these ideas that are kind of, in a way, dumbed down. He’ll suggest something really simple that I would probably never do on my own. But he’ll make me try it and sometimes it actually sounds pretty cool. Most of the parts he suggests are cool to people who don’t really understand drumming. “One thing he always says is, ’Dude, do it backwards!’ So I’ll have to do things like go up the toms instead of down the toms. We’ll record it that way, then I’m like, great, now I have to play it like that every night. He gets a kick out of it.”

Studio Speed

They say you can tell a man by his drumming. While this may not work as a blanket theory, it certainly applies in the McClain study. When discussing his approach and his drumming philosophy regarding his studio recording sessions, it boils down to the two traits most prevalent in his ideology as well as his technique: efficiency and meticulousness. “The last couple weeks of writing, before we head into the studio, Rob and I really work hard on the drum parts. I hate getting into the studio and not being prepared for stuff. Just sitting there. So once we get into the studio, I’m ready to fly through the drum parts. I recorded these eight drum tracks in two days. The first day we did six tracks, then two tracks the next day. I leave the tougher songs for the end. Then we spent the next two days just trying different things — changing up fills or adding different layers. “Every album up until this one, we’ve used a full-band scratch track. But this time, we got in there and we did it with just Rob and myself. It was just the two of us for the drum pre-production the two weeks prior, so we thought we’d see how it went. 
We don’t use a click track or anything; we just go in there and rock it out. And Rob’s the person I feed off the most when we’re playing. So we did it this way and it was awesome. It worked out great. “Taking two other guitar players out of the equation cuts down so much of the noodling factor. Normally it takes so long just to start a song. You have three guitar players there and one person plays a little noise, then the other guitar player starts doing it, and it just goes on and on. So this way we cut out the five minutes of waiting from when we’d say we were starting a song and when we actually started it.” Goodbye to guitar player noodling? Does life get any better than that? Yes it does, and it starts when you hear the drum sounds on The Blackening. On an album like this, it’s so easy for the drums to get swallowed in the bloody mud of screaming vocals and guitars. Not the case here. Snare hits like a stick of dynamite in a catcher’s mitt. Tom tones like church bells. Bass that makes your scrotum ache. “It took us longer to get drum tones than it did to actually record the drums. We tried everything. We tried different drums, different heads, different mike placements. There were a couple things that were really surprising to me. “We switched out my maple drum set for my birch drum set. I never realized how different the two sets could sound. It was amazing. And then switching up the heads really opened up the drums and it sounded so good. I just wanted to sit there in the studio and just play my drums all day. It blew me away how much better the birch kit sounded. It just opened everything up. The tone had more attack and it rang out more. A complete 180 from the maple kit. So I’m sticking with birch when we hit the road. Pearl is making me another birch kit right now. “I had been using the same kind of drumheads for probably ten years. But we decided to try some different head combinations. 
We started using Emperors for the top heads and Ambassadors for resonant heads, and the sound we got from that was amazing. We tried a couple snares, but mostly I stuck with my Pearl piccolo snare because I know how to tweak it out and make it rock.”

Double Kick Pistons

All this talk of power and efficiency and tone would be simply wasted words if McClain couldn’t provide that one delicious ingredient essential to all meaningful metal: a devastating double bass. He lays it down thick and fast throughout The Blackening — from running speed rolls to tight uppercut fills — and takes pride in his efforts to become one of the most skilled double bass players in today’s metal scene. “I practice a lot just by myself. When we’re home, I practice at least three days a week, usually four or five. I’ll play our songs or geek out and play solos or just work on things. I’ve had this double bass breakthrough in the last couple of years. I started doing heel-toe doubles on the kick drum. On our last album, Rob wanted me to do this drum part that was just super fast. ’Dude, how am I going to play that?’ Every once in a while I could get it, but not with any real consistency. “Then before we went out for that tour I was messing with my drums and realized something: I could bounce my drumsticks on my drumhead, so why couldn’t I bounce my bass drum beater on my bass drumhead? So I started doing it and it took some time, but I’m to the point now where I can do full-on double-stroke rolls with my feet — and do it super fast. “Another thing I did that got my speed way beyond what I considered to be the fastest period in my life is I lowered my drum throne. For a long time I sat really high, almost as high as the throne would go, and I started having problems. During a tour I started noticing that some of the easiest double bass parts were getting a little harder. So I went back home to my practice room and messed with everything.” That’s right, more woodshedding. 
“I made every possible adjustment to my pedal and my technique, and things weren’t getting any better. Then I remembered that I used to sit really low — like Tommy Aldridge low — so I practiced a couple days with my throne lowered. My foot speed came back, and then some. I couldn’t believe how fast I was going. Now I’m faster than I’ve ever been. I can probably play single strokes with my feet as fast as I used to play doubles.”

Becoming The Machine

The hard work started early for McClain. You don’t become such a technically proficient drummer without starting quick and playing often. As a young kid growing up in Texas, McClain used idols and influences as headphone instructors, but did incorporate some formal lessons early on. “I took lessons for about a year when I was 11, learning how to sight read and that stuff, but for the most part I learned drums from playing along to Rush and Judas Priest albums. After my first year of playing drums, I was pretty much playing in some kind of band from that point on. I think playing in bands early on was definitely a key to my growth as a drummer. When you play with a band you’re kind of thrown in the fire. You’re not playing to a record anymore. You are the record. “One of the first ’gigs’ I can remember, I was probably 13 years old and my friend’s sister had a backyard Halloween party for her 13-year-old friends. We played Judas Priest songs and Kiss songs, Thin Lizzy, all that stuff. I think there were like five kids in the backyard, but it was the big time for us. We used my friend’s room as the dressing room. We were rock stars.” Indeed. Luckily those early bands avoided the all-too-common candy corn overdoses of their time and McClain drummed his way into his twenties, and out to the Left Coast. “I moved to L.A. when I was 20 and went through the whole thing of playing in bands and trying to make it. Eventually I got the call from the Sacred Reich guys. That was the first time I could really support myself with music. 
They were signed and touring and all that. I thought I had made it, that I was at the finish line. Little did I know I was just at the starting gate.” While the Sacred Reich gig was a huge step for McClain, it truly was just the beginning for him. The Reich had the fury and cult following his talents craved, yet they were unable to match McClain’s drive and determination. The invaluable experience was a dream come true, but eventually that dream just wasn’t enough. Then the phone rang. “I was with Sacred Reich and basically got hooked up with Machine Head through a mutual friend who knew they were looking for a drummer. When Rob first called me, Chris Kontos was already out of the band and they were on their second replacement drummer. He said they were auditioning drummers, and I kind of went along with it. Then I called him back and said I wasn’t really interested in doing it and that I was going to stick it out with Sacred Reich. “A couple weeks went by and I was like, ’What am I doing?’ Machine Head was a totally hard-working band that had this drive. They just had this thing. So I called Rob back, flew out to Oakland, and auditioned. And that was it.” 1997’s The More Things Change marked McClain’s Machine Head debut. Now, a decade later — through all the pitfalls of abuse and turnover, with the occasional highlight tour or release — he literally deflates at the thought of doing anything else with his life. The wind leaves him, and he can’t even think about, much less discuss, his life without metal and without drums. The best he can do is take away the metal component. Drums stay. “If I wasn’t playing metal, I’d definitely like to be a country drummer. You know, the cool country stuff. The style is so cool and different. I love those train beats and that stuff. And to me that cool country stuff has the same attitude as metal. 
It sounds weird, but I can get the same feeling listening to some types of country music that I can get listening to metal.” Once an outlaw, always an outlaw. From trick-or-treat kid shows to sold-out metal festivals, the pistons pumping the Machine seem to never stop. We’ve talked about efficiency and attention to detail, but enough cannot be said about the amount of determination it takes to succeed as a drummer in the metal world. Underpaid, under-appreciated, and overworked, McClain wouldn’t have it any other way. And if this hazardous highway sounds to you like a yellow brick road, he offers his advice. “The most important thing is just sticking with it, no matter what. It’s a hard thing to do. Just don’t stop doing what you’re doing, because a lot of people just give up. I hate to even think about where I’d be right now if I had given up. I can’t imagine not having music as my life and not playing drums. I’m completely stoked about my life. Every day I think about how lucky I am that I’m doing for my job what I love to do. So don’t give up. “Really, if you just love playing the drums, then everything else will fall into place.”

McClain Massacres Machine Head

Dave McClain’s blisteringly fast, intricate footwork perfectly propels Machine Head’s heavy guitar riffs. We get a glimpse into his fierce style of playing on “Clenching The Fists Of Dissent” off of Machine Head’s new album, The Blackening. This ten-and-a-half-minute metal opus begins with syncopated cymbal chokes and a very speedy sextuplet fill down his toms that leads into more of the same and a longer, full-measure sextuplet fill. We get to catch our breath during a short break at the 2/4 bar before McClain launches into his double-time double bass groove. It sounds like he’s playing the &’s on his crash, but the cymbal is placed back in the mix. 
In the second bar of this section McClain plays a cymbal accent pattern of 3 e ah 4 & ah, and a couple bars later plays a tasty sixteenth-note fill from his snare to his ride cymbal. At the 2:47 mark, he changes his pattern and plays 1 e & ah 2 e on his kick drums and crashes. Later, he plays the same bass drum pattern, but this time with a half-time feel created by the snare pounding 3. DRUM! Notation Guide

McClain’s Setup

DRUMS (Pearl Masters Birch, Custom Camo Finish):
22" x 20" Bass Drum
14" x 3" Snare
10" x 8" Tom
12" x 9" Tom
16" x 16" Floor Tom

CYMBALS (Zildjian):
14" New Beat Hi-Hats
18" Z Custom Rock Crash
8" Splash
10" A Splash
17" Z Custom Rock Crash
21" Mega Bell Ride
20" China High

PERCUSSION:
Pearl Tambourine

Dave McClain also uses Pearl hardware, Remo heads, Shure microphones, and ddrum AT modules for triggering kicks and toms.

McClain’s Selected Discography
1993 Independent (Sacred Reich)
1995 Hempilation: Freedom Is NORML (Various Artists)
1996 Heal (Sacred Reich)
1997 The More Things Change (Machine Head)
2000 Year Of The Dragon: Japan Tour Diary (Machine Head)
2001 Supercharger (Machine Head)
2003 Hellalive (Machine Head)
2003 Through The Ashes Of Empires (Machine Head)
2005 Roadrunner United: The All-Star Sessions (Various Artists)
2006 The Blackening (Machine Head)
2011 Unto The Locust (Machine Head)
  19. Once Again, We Ask the Question: Why Be Normal? by Craig Anderton Many synthesizers and samplers, whether hardware or software, combine digital sample-based oscillators with synthesis techniques like filtering and modulation. These synthesis options can turn on-board samples into larger-than-life acoustic timbres, impart expressiveness to static sounds, and create entirely new types of sounds—but only if you know how to do a little editing. Don’t believe the hype that editing a synth preset is difficult. All you really need to know is how to select parameters for adjustment, and how to change parameter values. Then, just play around: Vary some parameter values and listen to what happens. As you experiment, you’ll build up a repertoire of techniques that produce sounds you like. When it comes to using oscillators creatively, remember that just because a sample says “Piano” doesn’t mean it can only make piano sounds. As with so many aspects of recording, doing something “wrong” can be extremely right. Such as . . . 1. BOMB THE BASS Transpose bass samples up by two octaves or more, and their characters change completely: So far I’ve unearthed great dulcimer, zither, and clavinet sounds. Furthermore, because transposing up shortens the attack time, bass samples can supply great attack transients for other samples that lack punch (although it may be necessary to add an amplitude envelope with a very short attack time so that you hear only the attack). Also, bass samples sometimes make very “meaty” keyboard sounds when layered with traditional keyboard samples. 2. THE VIRTUAL 12-STRING Many keyboards include 12-string guitar samples, but these are often unsatisfying. As an alternative, layer three sets of guitar multisamples (Fig. 1). The first multisample becomes the “main” sample and extends over the full range of the keyboard. 
Transpose the second set of multisamples an octave higher, and remember that the top two strings of a 12-string are tuned in unison, not octaves. So, limit the range of the octave higher set of multisamples to A#3. Detune the third multisample set a bit compared to the primary sample, and limit its range to B3 on up. (You may want to fudge with the split point between octave and unison a bit, as a guitarist may play the doubled third string higher up on the neck.) Fig. 1: A simple 12-string guitar patch in Reason’s NN-XT sampler. The octave above samples are colored red for clarity, while the unison samples are colored yellow. (This example uses a limited number of samples to keep the artwork at a reasonable size.) If you can delay the onset of the notes in the octave above and unison layers by around 20 to 35ms, the effect will be more realistic. 3. THE ODD COUPLE Combining samples with traditional synth waveforms can create a much richer overall effect, as well as mask problems that may exist in the sample, such as obvious loops or split points. For example, mixing a sawtooth wave with a string section sample gives a richer overall sound (the sawtooth envelope should mimic the strings’ amplitude envelope). Combining triangle waves with nylon string guitars and flutes also works well. And to turn a sax patch into a sax section, mix in some sawtooth wave set for a bit of an attack time, then detune it compared to the main sax. Sometimes combining theoretically dissimilar samples works well too. For example, on one synth I felt the piano sample lacked a strong bottom end. Layering an acoustic bass sample way in the background, with a little bit of attack time so you didn’t hear the characteristic acoustic bass attack, solved the problem. 
Sometimes adding a sine wave fundamental to a sound also increases the depth; this worked well with a Chapman Stick sample to increase the low end “boom.” Try other “unexpected” combinations as well, such as mixing choir and bell samples together, or high-pitched white noise and choir. 4. FUN WITH INTERGALACTIC COSMIC EXPLOSIONS Transpose percussion sounds (cymbals, drums, tambourines, shakers, etc.) way down—at least two octaves—for weird sound effects and digital noises. If this adds quantization noise or grunge to the sound, you may want to keep it; if not, consider closing the lowpass filter down a bit to take out some of the high frequencies, where any artifacts will be most noticeable. For truly massive thunder effects, spaceship sounds, and exploding galaxies (which are always tough to sample!), choose a complex waveform, transpose it down as far as it will go, and close the filter way down . . . then layer it with a similar sound. 5. GENTLEMEN, START YOUR SAMPLES Changing the start point of a sample (a feature available on most synths and samplers) can radically affect the timbre and add dynamics. Move the start point further into the sample (Fig. 2) until you obtain the desired “minimum dynamics” sound, then tie the start point time to keyboard velocity so that more velocity moves the start point closer to the beginning of the sample (this usually requires negative modulation, but check your manual). Fig. 2: The green line indicates the initial sample start point (minimum velocity). Hitting higher velocities moves the start point further to the left, toward the beginning of the sample, so the sound picks up more of the attack. The red part of the waveform is the area affected by velocity. This seems to work best with percussive sounds, as changing the start point dynamically can cause clicks that are obvious with sustained sounds, but blend in with percussion. 
An alternative is to use two versions of the same sample, with one sample’s start time set into the sample and the other left alone; then use velocity switching to switch from the altered sample to the unaltered one as velocity increases. 6. DETUNING: WHO SAYS SUBTLE IS GOOD? Detuning isn’t just about subtle changes. When creating an unpitched sound such as drums or special effects, use two versions of the same sample for the two oscillators, but with their pitches offset by a few semitones to thicken the sound. You may need to apply a common envelope to both of them if the transposition is extreme enough that one sample has a noticeably longer decay than the other. 7. THE REVENGE OF HARRY PARTCH Microtonal scales (17-tone, 21-tone, exotic even-tempered scales) are good for experimental music, but they’re also useful for special effects. After all, car crashes are seldom even-tempered, and you may want a somewhat more “stretched” sound—either higher or lower—than what the sample provides. To get these kinds of scales (or even a 1-tone scale where all notes on the keyboard play at the same pitch), assign note position (keyboard) as an oscillator modulation source. Adjusting the degree of modulation can “stretch” or “compress” the keyboard so that an octave takes up more or fewer keys than the usual 12. Note that you may need to adjust the tuning so that the “base” key of a scale falls where you want it. 8. CROSSING OVER Use waveform crossfading to cover up samples with iffy loops. For example, one keyboard had a very realistic flute sound, but the manufacturer assumed you’d be playing the flute in its “normal” range, so the highest sample was looped and stretched to the top of the keyboard. 
This flute sound actually was very usable in the upper ranges, except that past a certain point the loop became overly short and “tinny.” So, I used the flute sample for one oscillator and a triangle wave for the other, and faded out the flute as it hit the looped portion, while fading in the triangle wave (Fig. 3). Fig. 3: As the natural flute loop fades out, a looped triangle wave fades in to provide a smoother looped sound for the decay. The flute sample gave the attack, and the triangle wave a smooth and consistent post-attack sound. Similar techniques work well for brass, but you’ll probably want to crossfade with a sawtooth wave or other complex waveform. 9. BETTER LIVING THROUGH LAYERING Try layering two samples, and assigning velocity control to the secondary sample’s amplitude so that hitting the keys harder brings in the second sample. This can be very effective in creating more complex sounds. One option for the second sample is to bring in a detuned version, so that playing harder brings in a chorusing effect; or you can use variations on the same basic sound (e.g., nylon and steel string guitars) so that velocity “morphs” through the two sounds. 10. TAKE THE LEAD WITH GUITAR “FEEDBACK” With lead guitar patches, tune one lead sample an octave higher than the other lead sample and tie both sample levels to keyboard pressure. However, set the initial volume of the main sample to maximum level, with pressure adding negative modulation that lowers the level; the octave-higher sample should start at minimum level, with pressure adding positive modulation that increases the level. Pressing down on the key during a sustaining note brings in the octave-higher “feedback” sound and fades out the fundamental. For a variation on this theme, have pressure introduce vibrato and perhaps bend pitch up a half-tone at maximum pressure. 
Also experiment with other waveforms and pitches for the octave-higher sound; a sine wave tuned an octave and a fifth above the fundamental gives a very convincing “feedback” effect. Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
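The keyboard “stretching” in tip 7 comes down to changing how many keys span a doubling of frequency. Here is a minimal sketch of the math; the function name and A440 reference are my own illustration, not any particular synth’s implementation:

```python
def stretched_pitch(key, base_key=69, base_hz=440.0, keys_per_octave=12):
    """Frequency of a MIDI key when an octave spans keys_per_octave keys.
    12 is standard tuning; 17 or 21 yields the microtonal scales mentioned
    in tip 7; a very large value approaches a "1-tone" keyboard where
    every key plays nearly the same pitch."""
    return base_hz * 2 ** ((key - base_key) / keys_per_octave)

print(stretched_pitch(81))                                # 880.0 -> normal octave above A440
print(round(stretched_pitch(81, keys_per_octave=17), 1))  # 717.7 -> a flatter, "stretched" octave
```

The second call shows why re-tuning the “base” key matters: with 17 keys per octave, twelve keys up from A440 no longer lands on a musical octave.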
  20. Find out how data compression works, and how to convert files to MP3 for free By Craig Anderton Remember the dot-com boom? The web was supposed to be the force that democratized the music industry—physical distribution would be obsolete, as you could post your music on the web and be exposed to an audience of millions, who would cheerfully download high-quality music while paying a reasonable, and fair, fee. Amazingly, significant parts of that have come true, and we’re not just talking iTunes: Bands set up web sites where fans can hear new tunes, MP3s make it easy to post songs for critiques, and snippets of music help sell physical CDs. It’s not hard to get your music on the web, but unless you’re Trent Reznor, you’ll probably have to settle for posting it in a data-compressed format in order to conserve server space, as well as save time for those downloading your masterpiece. Although there’s a lot of criticism of the MP3 format, not all of it is justified, as there are ways around some of its limitations. THE DATA DIET PROGRAM The MP3 format is based on data compression algorithms that reduce the amount of data needed to reproduce music. (Note that this has nothing to do with audio dynamic range compression, as used in recording.) Actually, these are data omission algorithms, because they do not work like StuffIt or Zip data compression algorithms, which restore the original file when uncompressed. Instead, a process like MP3 throws away “unneeded” data. For example, if there’s a lot of high-level sound going on, the algorithm might assume you can’t hear lower-level material, and decide that for those sections you only need 24dB of dynamic range. This requires only 4 bits of resolution—25% of the amount of data required by 16-bit resolution. Unfortunately, it’s difficult to retain quality with data-compressed music (video and images are much more easily compressed). 
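The bit-depth arithmetic in that example is easy to verify: linear PCM gives roughly 6dB of dynamic range per bit, so 4 bits covers about 24dB and needs a quarter of the data of 16-bit audio. A quick sketch (the function names are mine, for illustration):

```python
# Rough dynamic range of linear PCM: about 6.02 dB per bit.
def dynamic_range_db(bits):
    return round(6.02 * bits, 1)

# Fraction of data needed relative to full 16-bit resolution.
def data_fraction(bits, reference_bits=16):
    return bits / reference_bits

print(dynamic_range_db(16))   # 96.3 -> full 16-bit dynamic range
print(dynamic_range_db(4))    # 24.1 -> roughly the 24dB in the example
print(data_fraction(4))       # 0.25 -> 4 bits is 25% of the data of 16 bits
```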
One workaround is to use a lossless algorithm, such as FLAC, or the lossless options offered by Microsoft and Apple for their audio formats. However, these don’t result in particularly svelte files; with complex music, the size reduction may only be 10-20%. Although there are many data compression algorithms for audio, only a few are common:

MP3. This allows several levels of encoding, so you can generate just about any size audio file—with greater fidelity loss as the files get smaller. There are many free or shareware MP3 players (e.g., iTunes and Windows Media Player); for MP3 encoding, you can use iTunes, most digital audio editors, and many digital audio workstations.

AAC. As the iPod’s native file format, this format is pretty popular—and to most ears, sounds better than MP3 for a given file size. iTunes can convert files to AAC.

Windows Media Audio. Being part of Windows has helped establish WMA as a player, but it’s not as common as MP3 or AAC, and few musicians post their music as WMA format files. At low bit rates, the quality is generally much better than MP3. Although Microsoft no longer offers a WMA player for the Mac, the utility Flip4Mac (a free version is available) allows playing Windows Media formats on the Mac.

Ogg Vorbis. While rare (and no one can figure out why, unless it’s the weird name), this format also sounds better than MP3 for a given bit rate—and unlike MP3, the encoding tools are free to developers. Ogg Vorbis files haven’t gotten much traction with the public, but are popular with tech-savvy users.

FLAC. This popular lossless compression format isn’t supported by many portable music players, but musicians often use FLAC to send files back and forth when collaborating, because the compression loses no sound quality.

When creating audio content for posting, even though MP3 doesn’t necessarily provide the best quality, all the players read it, there’s a ton of supporting software, and people can load the files into portable MP3 players. 
CHOOSING THE RIGHT MP3 SETTINGS When encoding a file to MP3, always give the encoder high-quality, uncompressed material so it can make the best decisions on how to apply the compression algorithm. Then, choose the compression parameter values carefully. When saving to MP3, you can typically choose from a range of bit rates (the number of bits that get transferred in a second), from 320 kbps stereo (excellent quality, but largest resulting file size) down to 8 kbps mono (good enough for dictation). Compressing a standard 28MB WAV audio file to 320kbps stereo MP3 results in a 6.4MB file; compressing to 8kbps mono yields a 0.16 MB file—a data reduction ratio of 175:1. In addition to fixed rates, there are variable bit rate (VBR) options that dynamically optimize the bit stream according to the material being played back. This is not as universally compatible, so it’s usually preferable to use constant bit rates. For best results, save a file using a variety of bit rates and sampling frequencies, in mono and stereo, and determine which combination gives the best sound at the smallest file size. Note that mono will usually have higher fidelity than stereo for a given file size. For example, with an MP3 128kbps file, the mono version “spends” that bandwidth on a single, high-quality file. Stereo generates two 64kbps streams—one for each channel—and 64kbps doesn’t sound as good as 128kbps. However, this means you give up stereo, which you probably don’t want to do. CONVERTING WITH iTUNES Although there are plenty of ways to convert files to the MP3 format, iTunes is free, readily available, and works for both Mac and Windows. If you don’t have iTunes already, download it from www.apple.com and follow the instructions for installation. Then, convert using the following procedure. 1. Open iTunes, then drag the files you want to convert into the main iTunes window. 2. From the menu bar, choose iTunes > Preferences, then click on the General tab (Fig. 1). Fig. 
1: Click on the General tab in iTunes to get started, then click on Import Settings to set up the MP3 file format characteristics. 3. Click on the Import Settings button (Fig. 2). Fig. 2: The Import Settings dialog box is where you can choose the file format, and custom settings for that format. 4. With the Import Using pop-up menu, choose MP3. The other choices are relevant only if you’re ripping from a CD. 5. Choose the MP3 settings from the Settings pop-up menu. If you want to keep things simple, you can choose one of the default data rate settings of 128kbps, 160kbps, or 192kbps. The higher the data rate, the better the fidelity. But you can also choose Custom, which gives you multiple options (Fig. 3). Fig. 3: Customize the quality and size of your MP3 file with the Custom Settings dialog box. Stereo Bit Rate (data rate). This is variable from 16kbps to 320kbps. The higher the rate, the higher the fidelity and the more space taken up by the file. Use Variable Bit Rate Encoding. This varies the number of bits used to store the file, based on the needs of the program material. Although it can create smaller file sizes, VBR files are not compatible with every single MP3 player, so it’s probably best to leave this unchecked. If you do select this, another pop-up menu lets you specify the level of quality. Sample Rate. Selecting Auto chooses the same sample rate as the source material, which is usually the best choice. Choosing a lower sample rate than the source creates a smaller file size with the tradeoff being reduced fidelity; choosing a higher sample rate than the source creates a bigger file size, but gives no audible benefit. Channels. Auto will create a mono file from a mono source, and a stereo file from a stereo source, so this is usually the best option. If you want to halve a stereo file’s file size, you can choose mono although of course, you’ll lose any stereo effects. Stereo Mode. This is available only if you choose Stereo for channels. 
At bit rates under 160kbps, the Joint Stereo option can improve sound quality by not devoting unneeded bandwidth to redundant material. Smart Encoding Adjustments. This causes iTunes to analyze the encoding settings and source material and make the appropriate adjustments. Uncheck this if you’re going to do custom settings. Filter Frequencies Below 10Hz. I recommend always leaving this on, because even if your source material does contain frequencies below 10Hz, very few transducers can play back frequencies that low. Therefore, there’s no need to waste bandwidth on encoding what are essentially subsonic frequencies. Now that the encoding parameters are set up, let’s encode your file. In the main iTunes window, right-click (ctrl-click) on the name of the track you want to convert, and select Create MP3 Version. Wait a few seconds for the conversion, and iTunes will create an MP3 copy of your file. As there’s no obvious indication of file format in iTunes, if in the future you aren’t sure which is the original and which is the MP3 copy, right-click (ctrl-click) on the name and select Get Info (Fig. 4). You’ll see the format, sample rate, bit rate, channels, and other info. And that's all there is to it! Fig. 4: Find out a file's characteristics with the Get Info option in iTunes. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
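The file sizes quoted in the settings discussion above follow directly from bit rate times duration. A sketch of that arithmetic, assuming the 28MB source is a 16-bit/44.1kHz stereo WAV:

```python
# A constant-bit-rate MP3's size depends only on bit rate and duration;
# the uncompressed WAV size fixes the duration.
WAV_BYTES = 28_000_000               # the 28MB file from the example
PCM_BYTES_PER_SEC = 44_100 * 2 * 2   # 16-bit stereo at 44.1kHz

duration_s = WAV_BYTES / PCM_BYTES_PER_SEC   # about 159 seconds of audio

def mp3_bytes(kbps, seconds):
    """Size in bytes of a constant-bit-rate MP3 (kilobits/s -> bytes)."""
    return kbps * 1000 / 8 * seconds

print(round(mp3_bytes(320, duration_s) / 1e6, 1))  # 6.3 -> the article's 6.4MB, within rounding
print(round(mp3_bytes(8, duration_s) / 1e6, 2))    # 0.16 -> the 0.16MB mono file
# The roughly 175:1 reduction is simply the ratio of the two data rates.
```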
  21. Let these tips help you get the most out of this popular processor By Craig Anderton Guitar Rig 4 is an extremely capable processor that can also host studio effects (like Native Instruments' Vintage Compressors) as well as guitar components. But it got its start as an amp sim, and these tips can help maximize its potential. BETTER SOUND QUALITY Enable the HI-Q button in the upper right corner to increase sound quality dramatically, especially with distortion and amp effects. This increases the power required from your CPU, but it's worth it. SET THE NOISE GATE AUTOMATICALLY Double-click on the Noise Gate knob to activate a “learn” function, and after double-clicking, don’t play your guitar for a few seconds. Guitar Rig will sense the incoming level with no playing, and set the threshold just above it. SET LEVELS AUTOMATICALLY Overloading the internal signal path produces “nasty” digital distortion, unlike the pleasing distortion created within amp modules. There are two places in the signal path where Guitar Rig can optimize levels automatically to prevent this. Amplifier modules: Once the Input level is optimized, to prevent overloading an amp module click on the amp cabinet's Learn button, then play at the loudest possible level. Preset Volume: The Master Volume retains its existing level when you switch presets. The Preset Volume module Learn function adjusts the Preset level (which controls the level of individual Presets) automatically for a consistent output level. As with the Amp modules, click on the Preset Volume module Learn button, then play at the loudest possible level. PREVENT SPIKES Engage the Output limiter to tame any possible spikes or transients. To do this, click on the Limit button to the right of the Out meter. However, this does not substitute for proper level-setting as described above. MINIMIZE AND MAXIMIZE BUTTONS If a module shows a + (maximize) button, click on it to reveal the primary set of controls. 
If there’s also a downward arrow button, click on it to reveal additional parameters for tweak fans. The screen shot shows the top Citrus amp totally minimized, the middle module has been maximized to show the main controls, and the bottom module has been extended to show all additional parameters. EASY PARAMETER TWEAKING Can't play guitar and edit sounds at the same time? Show the Pre Tape Deck, record your playing, then loop what you played so you can adjust controls while you listen to your guitar. Note: Set the Play switch to “At Input” so you record the input signal, which on playback, goes through the rack effects. TUNE TO ALTERNATE TUNINGS In addition to Guitar, Bass, and Chromatic, the tuner provides Open D, Open G, Open A, Open E, and DADGAD tunings. Use the drop-down menu to the right of the Tuner label, located in the module's upper left. PRACTICE TO ALTERNATE TIME SIGNATURES The Metronome module has 28 different time signatures with different accents. Access them from the Metronome's “Sig.” drop-down menu. FUNCTION KEY SHORTCUTS F1: Toggles between standard and Live view. F2: Show/hide the “sidekick” (the left section with Browser, Components, and Options). F3: Show/hide Rig Kontrol in standard view. F4: Hides everything except GR, and stretches it to fit the full available vertical space. This is great for Live view. ASSIGN PARAMETER TO MIDI CONTROLLER Right-click on the parameter, select Learn, then move the controller you want to assign (physical, or virtual—e.g., the virtual Rig Kontrol pedal). Assignments are saved with the Preset. WHERE'S THE ENVELOPE FOLLOWER? Under Components, open the Modifier section. The modifier called “Input Level” provides the same function as an envelope follower. Open the module, then drag one of the fields to the parameter you want to control (the screen shot shows the Pro Filter cutoff being controlled). PARALLEL EFFECTS Use the Split component (found in the Tools section). 
Drag the component or components (amp, cab, or effect) for one parallel chain between the Split A and Split B sections. Drag component(s) for the other parallel chain between the Split B and Split Mix sections. The Mix section lets you pan the chains, as well as crossfade between them. In this example, two different amp/cab combinations are panned to opposite sides of the stereo field to give a wide stereo image. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  22. Get your synth bass to dominate By Craig Anderton I love synth bass—there’s something about its massive sound that’s hard to resist. And I also like the versatility, from a weapon of mass distortion that provides the pit bull snarl behind a hardcore dance track, to a sweetly attacking rubber-band bass that lopes around a tender ballad. We’ve already covered the Top 10 Tips for Mixing Bass Guitar, so in a spirit of equal time for the keyboard crowd, here are ten tips designed to help you get even more out of your synth bass. 1. Keep it centered. With electric bass, you’re usually dealing with a mono signal, so panning it to the center is a no-brainer. But with synth bass, there’s more of a tendency to get creative with stereo imaging and layering. Bass frequencies are much less directional than high frequencies, so pan the bassiest elements to center. Spread any additional layers a bit to left or right, but don’t go too far away from the middle. And if you’re planning a release on vinyl, it’s imperative to keep the bass centered. 2. “A” is for “Ampeg.” Put your synth through an Ampeg bass amp, and you’ll have the classic bass amp and cabinet that powered a zillion hits, from rock to R&B. Don’t have an Ampeg amp? IK Multimedia’s Ampeg SVX plug-in nails that sound, but you can also get great Ampeg emulations from the Line 6 POD Farm Flip Top bass amp, Guitar Rig 4 Bass Pro model, and Waves G|T|R Mo’Town bass amp model (Fig. 1). Fig. 1: A collage of three bass amp sims. Clockwise from top: Line 6 Flip Top, Waves Mo’Town, and IK’s Ampeg-approved SVX (click to enlarge). 3. Sometimes more is less. If one bass sound is good, layering a second one is better . . . right? Not necessarily. What makes similarly-tuned layers sound rich is how their frequencies add and subtract as they go in and out of tune, which is fine for a string patch, but can weaken the low end. 
If you’re using layers, keep the lowest, primary layer at maximum level, and reduce the doubled layer by at least 6dB. However, layers set to different octaves are a whole other matter. Create a prominent main layer, add in a sub-bass layer an octave lower, then add an octave-higher layer with most of the highs taken off for a huge bass sound. But again, avoid detuning unless it really works well. 4. Mod wheels are good for more than vibrato. Really, how often do you need to add vibrato to a bass line? So use that mod wheel to bring in a sub-bass layer, or tie it to filter frequency so that moving the wheel away from you cuts the highs somewhat. That way, with something like a sawtooth wave-based patch, you can do a quick change from raw and rude to smooth and round. 5. The punch factor. A Minimoog sounds punchy because the signal doesn’t decay immediately once the attack is complete, but hangs at the maximum level for around 20-30ms before decaying. If your envelope generator has a hold function or can do rate/level envelopes, bingo: You’ve got punch (Fig. 2). Fig. 2: In Cakewalk’s Dimension Pro, the amplitude envelope has been edited to add a 45ms hold time (click to enlarge). 6. Live in a parallel universe. Want to add effects to your synth bass sound, like distortion or wah? To avoid thinning out the bass sound, copy the audio track (or if you’re using a virtual instrument, render it to audio), apply your effects to the copied track, and mix it behind your main track. 7. Use limiting to tame filter peaks. It’s common knowledge that electric bass loves compression. Synths don’t need compression as much because the sustain can be pretty constant, and there’s no neck with “dead spots.” However, if you’re using filtering there can be resonant peaks, and with layering, sudden peaks as notes add or subtract. The solution is a limiter (Fig. 3), as it will tame the peaks while leaving the rest of the signal relatively unscathed. Fig. 
3: A limiter can keep bass dynamics under control, even in the face of filter resonances and detuned layers (click to enlarge). 8. A little EQ can really help. Don’t just rely on your synth’s filters; a slight low end bump (around 80Hz) rounds out the bottom, while another slight boost in the upper midrange (2-3kHz) accentuates bite. And while we’re on the subject of EQ, roll off the low end of instruments that don’t have any signal in the bass range. There can still be low frequency components (including sub-sonic ones) that may interfere with the bass. 9. Let it slide. Bass and kick often share notes that hit at the same time. To emphasize the bass, slide the MIDI or audio clip slightly ahead of the kick—just a few milliseconds will work. Conversely, to emphasize the kick, slide the bass a few milliseconds after the kick. Whichever track is ahead will sound slightly louder on playback, even if you haven’t touched the volume. 10. Be careful about ambience. Reverb doesn’t like low frequencies, and besides, diffusing the sound takes away some of the bass’s force. But what can work is to add five or six very tight delays (in the 15-30ms range), mixed very subtly in the background (stereo will work because the level is so low). Multitap delays are good candidates, as are chorus effects that have multiple voices. Don’t add any modulation to these delays, as they’re designed to simulate the early reflections you get from a room. If they don’t sound good, then they’re mixed too high.
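The tight-delay trick in tip 10 can be sketched as a few fixed, unmodulated taps mixed quietly behind the dry signal. This is a minimal mono sketch; the tap times, tap count, and gain are illustrative values chosen within the ranges suggested above:

```python
# Fake "early reflections": five tight delay taps (15-30ms), no modulation,
# mixed well behind the dry signal.
SR = 44_100                      # assumed sample rate
TAPS_MS = [15, 19, 23, 26, 30]   # tap times within the suggested 15-30ms range
TAP_GAIN = 0.12                  # "very subtly in the background"

def add_early_reflections(dry, sr=SR, taps_ms=TAPS_MS, gain=TAP_GAIN):
    taps = [round(ms * sr / 1000) for ms in taps_ms]   # ms -> samples
    out = list(dry) + [0.0] * max(taps)                # room for the tail
    for delay in taps:
        for i, sample in enumerate(dry):
            out[i + delay] += sample * gain
    return out

# Feeding in a single impulse shows the tap pattern directly.
wet = add_early_reflections([1.0] + [0.0] * 100)
```

Running both channels of a stereo track through the same taps keeps the effect mono-compatible; slightly different tap times per channel would widen it.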
  23. Double your pleasure, double your fun By Craig Anderton If you haven’t explored the ReWire protocol, you’re missing out on a tremendous way to improve workflow and capitalize on the strengths of different programs. Basically, ReWire is a software protocol that allows two or more software applications to work together like one integrated program. That sounds simple enough, but the implications are significant. Suppose you create a kickin’ rhythm track in Propellerhead Software’s Reason, but would love to add some vocals, guitars, and piano as overdubs in a DAW that accepts standard VSTs. Without ReWire, you’d need to export the rhythm track, import it into the DAW, and try to match the DAW’s tempo with the Reason file’s tempo. And if you wanted to make a change in a Reason instrument, you’d have to make the change, export the file, import, and so on all over again. It’s doable, but clumsy. Instead, you can ReWire Reason as the client (also called the synth application) with a ReWire-compatible host program (also called the mixer application) like Sonar, Cubase, Live, Digital Performer, Logic, Pro Tools, Studio One Pro, etc. Reason will pump its instrument outputs into the host’s mixer, and follow the DAW’s existing tempo while you lay down your audio tracks. Any ReWire-compatible application is either a host, a client, or both (but not simultaneously—you can’t ReWire a client into a host, then ReWire that into another host). Although there can only be one host, you can sometimes ReWire multiple clients into that host. There are five main ReWire aspects (see Fig. 1). Fig. 1: ReWire sets up relationships between the host and client programs. The client’s audio outputs stream into the host’s mixer. The host and client transports are linked, so that starting or stopping either one starts or stops the other. Setting loop points in either application affects both applications. 
MIDI data recorded in the host can flow to the client (excellent for triggering soft synths). Both applications can share the same audio interface. COMPUTER REQUIREMENTS ReWire is a software-based function that’s built into ReWire-compatible programs—no drivers or special hardware are needed. Although there’s a misconception that ReWire requires a powerful computer, ReWire itself doesn’t need many resources—it’s simply an interconnection protocol. However, you’ll be using two programs together, so your computer needs enough power to run them both comfortably. This means a decent amount of RAM and a reasonably fast processor. APPLYING REWIRE A client can stream up to 256 individual channels into the host’s mixer with the current ReWire version (ReWire2; early versions were limited to 64 channels). You will likely have the option when rewiring to stream only the master mixed (stereo) outs, all available outs, or your choice of outs. ReWire2 can also stream 255 MIDI buses (with 16 channels per bus) from the client to the host, as well as have the host query the client for information—like instrument names, to allow for automatic track naming. If you choose all available outs, then instruments or tracks can ReWire into channels individually, and be processed individually. For example, if a drum module has eight available outs to which you can assign its various drums and you ReWire these individually into a DAW, you can process, mix, insert plug-ins, and automate channel parameters for each output. Another aspect of ReWire is that you usually need to open the host first, then any clients (and close the programs in the reverse order). You won’t break anything if you don’t, but you’ll likely need to close your programs, then re-open them in the right order. Also note that although many hosts try to launch the client automatically once you’ve selected it for rewiring, if that doesn’t work you’ll need to launch the client manually. 
REWIRE IMPLEMENTATIONS

Each program implements ReWire a little differently. The Propellerhead web site has tutorials on using ReWire with specific host programs, so look there for details. However, here’s the general principle.

In the host, you’ll have an option to insert a ReWire device. This may be included as part of the process of inserting any virtual instrument, or be its own category (see Fig. 2).

Fig. 2: In Sonar, you insert a ReWire device just as you would insert a MIDI track, audio track, or virtual instrument.

Before you insert the ReWire device, or as you insert it, you’ll likely be presented with a menu that lets you specify which channels you want to enable for streaming. Once you do this, the selected channels will appear in the host mixer and be identified in some way; perhaps they’ll say “ReWire channels,” or include the ReWire device in the track name.

With a client like Reason that includes MIDI instruments, the MIDI output menus for the host’s MIDI tracks will include those instruments as possible MIDI data destinations (see Fig. 3).

Fig. 3: Reason appears in Sonar’s list of tracks (circled), and any MIDI track output can trigger the Reason instruments. In this example, the MIDI drum track is triggering Reason 5’s Kong drum module.

AND BEST OF ALL...

ReWire is free, fun, effective, and works reliably thanks to years of refinement. Why settle for one program when you can turn two programs into a single, integrated entity?

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  24. How low can you go? Use an octave divider, and find out!

By Craig Anderton

Octave dividers aren’t just for guitar players: they also rock for bass, whether you’re getting mega-low sounds from the lower strings, or playing high up on the neck for very cool 8-string bass effects. It’s easy to do octave division with amp sims and DAWs, but there are some definite tricks involved.

SPLIT YOUR SIGNAL

As when adding many other types of effects to bass, it’s best to create a second track in parallel with the main bass sound, and dedicate the second track to the octave divider. This lets you mix in the precise amount of octave sound, but more importantly, you may need to condition the bass signal to optimize it for octave division.

CHOOSE YOUR DIVIDER

Most amp sims include octave dividers. I’ve successfully used octave dividers on bass with IK Multimedia AmpliTube, Waves GTR Solo (Fig. 1) and GTR3, Native Instruments Guitar Rig (Fig. 2), and Peavey ReValver Mk III.

Fig. 1: Waves’ GTR Solo is set up for octave division on bass. The Pitcher module provides the octave division; its Mix control is set to divided sound only.

Fig. 2: Guitar Rig’s Pro-Filter module is an excellent EQ for conditioning a bass signal before it hits the Oktaver; this screen shot is from Guitar Rig 3.

There’s not a lot of difference among these effects for this particular task; they all do the job. You can also use other available modules to condition the bass signal.

PRE-OCTAVE PROCESSING

Two main problems can interfere with proper triggering: an inconsistent input signal level, and triggering on a harmonic rather than the fundamental (which causes an “octave-hopping” effect, where the signal jumps back and forth between the fundamental and octave).

A compressor can solve the consistency problem. Set it for a moderate amount of compression (e.g., 4:1 ratio, with a fairly high threshold). Make sure the compressed sound doesn’t have a “pop” at the beginning, and that the sustain is smooth.
Then, if needed, patch in an EQ to take off some of the highs; the object is to emphasize the fundamental. This may require compromise: too much filtering will reduce the level from the higher strings to where they might not be able to trigger the octave divider (as well as change the tone), whereas not filtering enough may cause octave-hopping on the lower strings. What works best for me is cutting highs and boosting the low bass a bit. If the EQ curve isn’t sharp enough, you may get better results by patching two EQs in series. I’ve also found that with Guitar Rig, using the Pro-Filter module with mode set to LPF (lowpass) and slope set to 100% (four-pole) provides outstanding conditioning, especially when preceded by the Tube Compressor.

THE FINAL TOUCH

Playing technique also matters. Popping and snapping might confuse the octave divider, as can the transients that occur from playing with a pick. Playing with your fingers or thumb gives the best results, but don’t be afraid to experiment; for example, if you do “snap” the string, the sound might mask the divided sound anyway, so it won’t matter. Also, remember that octave dividers are monophonic, so make sure only one string vibrates at a time.

Once you have your signal chain tweaked, adjust the parallel, octave-divided signal for the right balance with the main bass signal. You’ll probably find yourself playing an octave higher than normal, because the octave divider will supply the low fundamental. But octave division is also a great way to make those low strings produce seismic lows that throb like nothing else.
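The conditioning-then-dividing chain described above (low-pass to isolate the fundamental, then divide) can be sketched as toy DSP. This is not how any of the plug-ins mentioned are actually implemented; it just demonstrates the classic divider trick of toggling a flip-flop on each positive-going zero crossing, which yields a square wave at half the input frequency, and shows why a clean fundamental matters: extra harmonic crossings would toggle the flip-flop too often and cause the octave-hopping effect.

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate):
    """Crude one-pole lowpass to emphasize the fundamental before dividing."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for s in signal:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def octave_down(signal):
    """Flip-flop divider: toggle on each positive-going zero crossing,
    producing a square wave at half the input frequency."""
    out, state, prev = [], 1.0, 0.0
    for s in signal:
        if prev <= 0.0 < s:
            state = -state
        out.append(state)
        prev = s
    return out

sr = 48000
freq = 110.0  # open A string on bass
sine = [math.sin(2.0 * math.pi * freq * n / sr) for n in range(sr)]  # 1 second
divided = octave_down(one_pole_lowpass(sine, 200.0, sr))
# 'divided' is a square wave at roughly half the input frequency
```

In a real rig the divided square wave would then be filtered and mixed in parallel with the dry bass, which is exactly the parallel-track arrangement the article recommends.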
  25. Use envelopes to add extra expressiveness to your clips

By Craig Anderton

Why settle for static audio and MIDI tracks when you can use clip envelopes to add animation and expressiveness? Envelopes aren’t just for fade-ins and fade-outs; they can also alter parameters over time in a way that adds interest. Ableton Live makes it easy to create modulation envelopes, as well as edit them “on the fly,” using virtually the same procedure for both MIDI and audio tracks (this example shows how to use clip envelopes with MIDI tracks).

All of this envelope action takes place in the Clip Overview pane for the sequence you want to edit. After opening the Clip Overview pane, click on the E button (in the Clip View box).

Now you need to choose where you’ll apply the automation. Click on the Device Chooser, and a pop-up menu will appear. Choose the target device for the automation (the screen shot shows Redux being selected).

Right below the Device Chooser, you’ll see the Control Chooser field. Click on it to show a drop-down menu, then select the parameter you want to modulate with the envelope. Here, Redux’s sample rate is being selected.

Choose the Pencil tool, as this is what you’ll use for drawing the envelope. Right-click within the sequence using the Pencil tool to call up a context-sensitive menu. Here you can change the grid to which envelopes will snap, choose a triplet grid, or turn off the grid altogether. Note that there are several keyboard shortcuts for changing the grid width, as well as turning snap on and off; check the manual for a listing.

Draw the desired envelope shape.

To keep track of which parameters are automated, click on the Device Chooser or Control Chooser. Devices and controls with automation will have small red squares to the left of the device or control’s name.
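Under the hood, a clip envelope is just a list of breakpoints that the host interpolates between as the clip plays, snapped to whatever grid you chose. The sketch below is a generic illustration of that idea, not Live’s actual implementation; the function names and the Redux-style sample-rate example are invented for demonstration.

```python
from bisect import bisect_right

def snap_to_grid(t, grid):
    """Snap a breakpoint time to the nearest grid line (e.g., 0.25 = 1/16 note in 4/4)."""
    return round(t / grid) * grid

def envelope_value(breakpoints, t):
    """Linearly interpolate a sorted list of (time, value) breakpoints at time t."""
    times = [bp[0] for bp in breakpoints]
    i = bisect_right(times, t)
    if i == 0:
        return breakpoints[0][1]       # before the first breakpoint
    if i == len(breakpoints):
        return breakpoints[-1][1]      # past the last breakpoint
    (t0, v0), (t1, v1) = breakpoints[i - 1], breakpoints[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# A ramp over 4 beats, like automating a sample-rate control from full
# quality down to heavy reduction:
env = [(0.0, 1.0), (4.0, 0.25)]
print(envelope_value(env, 2.0))   # halfway through the ramp -> 0.625
print(snap_to_grid(1.9, 0.25))    # snaps to the nearest 1/16 grid line
```

Turning the grid off, as the context menu allows, simply means skipping the snap step and storing breakpoint times exactly where you drew them.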