Everything posted by Anderton

  1. CME Xkey Air Bluetooth Keyboard Controllers Now MIDI data can float through the air from your keyboard by Craig Anderton This isn’t my first dance with Xkey keyboards; I’ve been using the standard 37-note model in my studio, and the 25-key version for travel when I have enough space to bring something bigger than a Korg nanoKONTROL2. Of course, I have “real” keyboards, but I often need to test out presets that I’m developing, and having the 37-key model set up in front of my QWERTY keyboard makes for a much more efficient preset creation process. On the road, the 25-key version gives the velocity and aftertouch response I want, is light, and can survive portability (I'm sure the brushed aluminum foundation helps with that). They’re both USB devices, but recently I visited a friend who got tired of cables and converted as much as he could to Wi-Fi and Bluetooth. I could definitely see the merits of his approach, and was considering adding one of the Zivix PUC Wi-Fi or Bluetooth adapters ($79 and $99 respectively) so I could convert the Xkey to wireless operation. But then the Xkey Air Bluetooth keyboards (25- and 37-key versions for $199 and $299 respectively) appeared, and they seemed like the right solution at the right time. Also note that they can still work as wired USB devices. The 25-key Xkey Air - note the Bluetooth sticker on the C key A Better Bluetooth. Bluetooth Low Energy (BLE, which is also what the PUC+ uses) is faster and more efficient than standard Bluetooth. However, it is available only on the most modern hardware; fortunately, for everything else—whether Windows, Mac, iOS, Android, or Linux—CME recommends the WIDI BUD dongle, a small, low-latency, BLE-to-USB MIDI bridge. I tested Xkey Air with several devices (Mac, iOS, and Windows) and all of them needed the WIDI BUD. Cost is around $49, so you may need to factor that into the sticker price (and don't lose it - it really is tiny). 
Just Like the Xkey, But… There are no significant differences in the Air versions’ outward appearance or capabilities, and the feel of the low-travel keyboard is the same. For details on the original, see my review of the CME Xkey 25 vs. the Korg nanoKONTROL2 on Harmony Central. However, dig deeper and you’ll see the on-off switch for Bluetooth, some LEDs to indicate status, and an internal battery that gets recharged via USB—but as far as I can tell, there’s no way to replace it. I always consider this a negative because at some point, the battery will lose its ability to hold a charge. Fortunately, you can always use the keyboard via USB. Also note that there’s no port for the breakout cable included with the standard Xkey 37, which allows for MIDI out, a sustain switch, and pedal. This isn’t surprising, given the emphasis on portability. Incidentally, all Xkeys ship with a USB cable that terminates in a micro-B USB connector for plugging into the Xkey’s USB port. Although the connector type is standard, the plug housing is unusually thin, and most commercial cables won’t fit—don’t lose the one that comes with the Xkey. It’s 41 inches/104 centimeters long, so you may need a USB cable extender for when you’re not going wireless. The 37-key version. Note the buttons on the left for modulation, transpose, sustain, and bend. Trial by Installation. The Xkey Air is ready for prime time…but then there’s the rest of the world. I tried pairing with a circa 2013 Windows laptop running Windows 10, and a pre-Lightning iPad running the latest iOS; no luck. So I plugged the WIDI BUD into my laptop, and still couldn’t get any Bluetooth pairing—yet WIDI BUD showed up as a MIDI input in Cakewalk SONAR, and I could play virtual instruments perfectly from the Xkey Air. How could that be? Apparently, as long as WIDI BUD shows up in Windows’ Control Panel > Settings > Connected Devices, you’re good to go and don’t need Bluetooth pairing because (I assume) the Xkey “pairs” itself with WIDI BUD. 
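The practical test, then, is simply whether a WIDI BUD port appears in your software’s MIDI input list. As a sketch (not anything CME provides), here is how you might check that list programmatically; the helper function and the example port names are hypothetical, and on a live system the names would come from a MIDI library such as mido’s get_input_names().

```python
# Minimal sketch: filter a list of MIDI input port names for the WIDI BUD.
# The helper and the example names below are hypothetical; on a real
# system, get the live names from a MIDI library (for example,
# mido.get_input_names() if the mido package is installed).

def find_widi_inputs(port_names):
    """Return the port names that look like a WIDI BUD bridge."""
    return [name for name in port_names if "WIDI" in name.upper()]

# Hypothetical port list as Windows might report it:
ports = ["WIDI BUD", "Microsoft GS Wavetable Synth"]
print(find_widi_inputs(ports))  # a non-empty list means you're good to go
```

If the filter comes back empty, that matches the symptom described above: no amount of Bluetooth pairing will help until the dongle itself is recognized.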
In the process of figuring this out, I went to the Docs & Downloads page under CME’s support, and found an app called Widi Plus that could update the WIDI BUD firmware, so I did. The iPad solution was the same: use WIDI BUD, which required the Camera Connection Kit adapter for my particular iPad. The WIDI BUD is “the great leveler” that makes operating Xkey Air possible on what appears to be just about anything that normally handles Bluetooth. But even if your device’s Bluetooth is compatible, WIDI BUD supposedly provides lower latency. CME quotes around 7 ms, so that fits my “under 10 ms keeps me happy” requirement. I wish CME were a little more diligent about documentation; for example, if you want to use Xkey Air with Apple devices, I highly recommend this forum post. It would be great if CME consolidated everything about using Xkey Air with Apple, Windows, and Linux into individual documents. I suspect some people who don’t have my level of perseverance will just assume it doesn’t work when they can’t pair it the way they would other Bluetooth devices. Yet everything worked flawlessly once I figured out the ground rules. The bottom line is that unless you’re using the latest and greatest computers, you should factor in the cost of the WIDI BUD. For $49, it lets the Xkey communicate happily with dinosaurs, allows for very low latency, and even has a helpful little red light that blinks when it’s receiving MIDI data. The Xkey Plus Application. This is also described in the reviews linked above, and it’s exceptionally useful. You can do so much more with the Xkey keyboards than just play notes—for example, assign different program changes to each key—as well as customize velocity and lots more. Best of all, since writing the previous review, Xkey Plus (which is free) now lets you save and load presets. This is huge, because it means you can easily switch between using the Xkey as a standard note player and something that starts to resemble more of a control surface. Other Accessories. 
If you want to strut around the stage, the $49 Xclip (left) clips to the 37-note model to allow attaching a guitar strap, and there are two carrying cases: the $25 Supernova (middle) for a single Xkey, and the Solar carrying case (right; it’s just a name, it’s not solar-powered), which can hold both the 25- and 37-key versions, or one Xkey and various other accessories; it costs around $40. Conclusions I’m a fan of the Xkey series. The keyboards are sturdy, light, functional, and very handy. Polyphonic aftertouch and the Xkey Plus software are the icing on a very sweet cake. Furthermore, I’ve had my two Xkeys for long enough that I can vouch that they hold up over time. Although you might assume the minimal key travel would limit velocity response or make it difficult to adapt, I didn’t find that to be the case at all. In fact, I’m confident enough with its "feel" that I have no problem using the Xkey for preset development, and switching over to a standard keyboard only as a final “reality check.” The wireless aspect is very cool, although you pay a premium for that coolness, particularly if you need the WIDI BUD adapter (you probably will). And of course, there’s the non-replaceable battery issue mentioned earlier. Still, the latency is low, the system is reliable, you can get about 30 feet away from your computer, and you’ll never trip over a cable or yank it out at an inopportune time. For many users, the Xkey Air series will be exactly what they want—and if they can’t stretch for the price, the standard Xkey controllers remain as good as ever. Resources Available from: Sweetwater, B&H, Amazon, Reverb and eBay Video: Xkey Air Overview Video: What About Latency? Video: Jordan Rudess playing the Xkey on a mountain in Norway ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. Craig’s List 5 Reasons God Must Like Drummers by Craig Anderton 1. They’re the safest member of the band. When people throw bottles, rotten tomatoes, used condoms, and other tokens of appreciation at the guitarist and lead singer, the drummer sits safely on a throne (that’s really what they call it), behind impressive fortifications. Bottles have to make it through a bewildering forest of toms, cymbals, and stands before they can hit their target. Safety first! 2. They have nothing to fear from United Airlines baggage handlers. Drummers beat the living crap out of their instruments every day, so having a baggage handler do the same…been there, done that. No big deal. 3. They get so many groupies, the calculator was invented specifically so drummers could keep count. There’s something about all that physical activity and sweat and stuff…the rhythmic pulsing…moving in and out of the beat…veiled in mystery behind that drum kit...hey, just wondering—does anyone have contact information for Sheila E.? 4. Drummers can get away with anything. Let’s face it: in comparison to John Bonham and Keith Moon, anything you do is going to seem pretty tame. Yes, even that little stunt you did last week with the pickup truck, Trixie’s mom, Gatorade, four gallons of Crisco, and a complete set of the Encyclopaedia Britannica. 5. Drummers are the poster children for mental health. Because they hit things all the time, drummers get to work out their aggressions on inanimate objects. Not only is this great news for lead guitarists and singers, but after searching through 100 years of public records, not one serial killer has ever been a drummer. Just sayin’.
  3. Chris Jenkins: Sound for the Beatles Movie After all, the Beatles deserve the best... by Craig Anderton The remaining Beatles and Apple Corps Ltd., as well as Giles Martin (son of the late, legendary producer George Martin), have been rightfully protective of the Beatles’ legacy. Yes, the Beatles were a band…but they were also not just a part of history; they made history. So it’s not only the subject matter that makes The Beatles: Eight Days A Week — The Touring Years important, but the fact that it even exists. This White Horse Pictures release, produced by Imagine Entertainment and directed by Ron Howard, hits theaters on September 16, with a run on Hulu.com shortly thereafter. However, if you’re going to do a Beatles movie, the sound can’t just be “okay”—which isn’t easy, due to the technology back in the Beatles’ early days. Howard tapped Chris Jenkins to lend his expertise to the project. Jenkins was fresh off winning (another) Academy Award for the sound in Mad Max: Fury Road (I was so impressed by the sound that I had to stay for the credits to see who was responsible), but it wasn’t just his skills—his love of the project, his background as a studio musician, and his understanding of the industry made him the perfect choice. The first step was gathering the materials. Fortunately, Jenkins didn’t have to start entirely from scratch. “We were able to go to the original sessions and mixes. Giles [Martin] is the guardian of the tapes and masters, many of which are just mono or stereo. Giles also had a huge treasure trove of material that’s been undiscovered or never made public, including studio chatter and outtakes. These helped create the transitions from songs to live performances, and from [the Beatles] quitting touring to falling in love with the studio again.” For many viewers, the restored Shea Stadium concert will be the standout attraction. 
Those who never saw the Beatles will find the energy and crowd reaction surreal, and those who were there at the time will remember what the excitement was about. Yet it almost didn’t happen. “The Shea Stadium part is a standalone piece; we didn’t get clearance to use it from Apple Corps until late in the process. There was so much work to do on the feature itself we didn’t know if we’d be able to get the concert done—especially because in many ways it is the definitive Beatles concert, so [Sir Paul] McCartney wanted it to be the Beatles concert for everyone who didn’t know what a Beatles concert was about. It was shot with twelve 35 mm cameras and recorded pretty well, and it’s an amazing document to see now. Most of the songs were two minutes or so, and intensity-wise, everything started at a 9 out of 10—with 50,000 screaming fans adding to that intensity. “One particular section was astonishing to me; it’s one of the best sections in the movie. They started playing ‘Baby’s in Black’ and the crowd quieted, so you could hear John and Paul singing in perfect harmony. You could really feel, not just hear, what the Beatles were all about.” But then again, what about those screaming fans? In any of the sections I’d heard in the past from the Shea Stadium concert, you could barely make out the Beatles underneath the screams. As Jenkins notes, “We did quite a bit to reduce the crowd noise, because it was hard to sit and listen to that size crowd for that length of time. Giles has some proprietary techniques he used, and I did a long pass on it with EQ, while the team at Abbey Road working on the 5.1 version were dealing with individual sounds. You really had to do an EQ pass for the crowd as well as the music. Giles was able to extract the crowd and create 5.1 crowd stems with no music at all, but the funny thing is that in a stadium setting, the screaming girls were just as much a part of the concert as the music…it’s in the DNA of the music. 
While we were successful in extracting the crowd, we needed to put all the pieces back together to strike a balance between preserving the legacy of what was happening, but not driving people out of the room because they were put off by the intense screaming.” There was a lot more to the process for the 5.1 version than simply dealing with the crowd, and Jenkins reached back for an old school technique that was remarkably effective and stems (no pun intended!) from his film experience. “Before the days of infinite plug-ins and such, it was always a problem to have looped dialog and sound effects blend into a movie. We ‘worldized’ them by playing them back in acoustic spaces so they didn’t sound like an actor in a studio, or a sound effects library, but something real and occurring in a natural environment. “To create the 5.1 mix, Giles and the Abbey Road team created acoustical spaces optimized with pre-EQ for the various instruments—drums, bass, guitars, voices. Then, instead of upmixing the music, it played back through great speaker systems in these acoustic spaces and was re-recorded from the source material—no reverb, no overdubs, but done as a 5.1 acoustic process. It’s a beautiful way to record, without digital processing, that yielded a very natural sound. This kind of technique is very old school and not for everybody, but it was ideal in this case. Giles did an incredible job of restoring the concert, while staying totally true to the loyalists. The point of the restoration was to bring out the music. “As to the movie itself, the mix was supposed to take 6-10 days, but Giles worked for months on the material, while Cameron Frankley and his crew at Warner Brothers did dialog editing, crowd effects backgrounds, and so on. It took about a month to do all the cutting and the near-field two-tracks and 5.1 mixes.” The mixdown studio—Neve DFC with S6 and S3 for FX (pictured: Ryan Murphy, engineer and Mark Purcell, mix tech). 
Of course, there’s always a potential concern that when putting this much effort into something, the act of “sanding and polishing” will take off the edge that made the music so interesting in the first place. But Jenkins is candid about accepting the limitations of the source material. “It’s important to understand that 8 Days a Week - the Touring Years isn’t necessarily a ‘good-sounding’ soundtrack; it’s not a Beatles recording project but a documentary, so there are hundreds of performances and interviews in all kinds of locations with all kinds of flaws. The Cavern Club recordings are very lo-fi, the technology for the recordings from 1962-1964 wasn’t very good, and later on the crowds were overwhelming the Beatles. “But then they started not to like touring, and what saved their trajectory was the convergence of studios and recording techniques. The movie follows that trajectory from the Cavern Club to the big stadiums, and then after an hour lands in the studio with all these beautiful recordings. I’ve seen people watch the movie and when it transitions to John playing 'Lucy in the Sky with Diamonds' into 'A Day in the Life,' a common reaction is 'wow…that moment is the reason why I’m doing what I’m doing in my career and my life.'" This picture has nothing to do with the Beatles, but we couldn't pass up the chance to show Chris at the Warner Brothers Batman vs. Superman museum. In a way, then, would you say the movie chronicles the changes that occurred in music technology? “Yes, as you progress further into the movie the sound gets better and better. As it goes from unrefined audio to these beautiful recordings, you get an idea of what’s to come. The mics improved, the amps improved, the technology improved pretty rapidly and the music started to sound much, much better. This movie captures the arc of the recording process as well as the band.” Jenkins is what would happen if you took a cynical, jaded engineer—then flipped his phase switch 180 degrees. 
He loves what he does, and he talks about this project with unabashed, and very genuine, enthusiasm. So how do you get a gig like this? “It all happened thanks to Ron Howard. We’d been working on a project called ‘Inferno.’ He said ‘Hey, we’re doing this documentary on the Beatles, want to be the mixer for it?’ Well, I was totally a Beatles fan when growing up. He sent an email to Nigel Sinclair at White Horse Pictures, and I was in. I was blown away…I’ve done some really fun projects, but this was a dream gig. It was so lucky. Getting to do the Beatles project is the high point of my career.” I didn’t need to ask “Even more than winning three Academy Awards and being nominated for two, not to mention all the seminal music documentaries you worked on decades ago?” It was obvious he had a deep emotional connection, not just a professional one, with the project. “This was all done with the most care and love possible not only from the crew at Abbey Road, but everyone here who was involved.” Jenkins paused... “You have to honor that love every second.”
  4. Craig’s List - 5 Reasons to Stop Making Music Come on...you don't need to make music anyway ... by Craig Anderton You’ve been a musician all your life. You love making music, and being part of the industry of human happiness. But have you ever considered why you should stop making music? There are plenty of good reasons! You can still use your instruments for other purposes. For the vertically-challenged, an acoustic guitar makes a fantastic houseboat. Or take off a drum head, and voilà—the perfect kitty litter box for Fluffy! You can’t do much better than a ukulele for swatting mosquitos, and of course, pianos can be used for…um…hmmm…well, maybe not pianos. Kanye West. How can you possibly hope to match the awe-inspiring artistry, talent, and ground-breaking innovation of a man whose brilliance, subtlety, and breathtaking command of his instrument transcends all that has come before? You’ll get so frustrated trying, you might as well just give up now. No more groupies. Admit it: Aren’t you really burned out on all those people who offer you sexual favors, throw undergarments on stage that your poor roadies have to clean up, and insist on sticking around after providing stimulating companionship during those late-night, post-Waffle House hours? Get out of music, and you’ll never have to worry about those problems again! You can turn over a new leaf, and start a more socially acceptable career—like accounting. Accountants lead exciting, cutting-edge lives as they navigate the treacherous waters of IRS regulations, bizarre and incoherent bank fees, the almost poetic qualities of accelerated depreciation, and the terrifying spectre of jammed paper in desktop calculators. The only downside: Now you’ll be surrounded by those super-hot accountant groupies. Oh well…life’s about tradeoffs. Since music makes you smarter, you’ll become part of a minority. 
As society spirals down to where the movie “Idiocracy” is now categorized under “Documentary” instead of “Comedy,” using words with several syllables, understanding different points of view, reading, and other signs that betray intelligence will brand you as an “innaleckshal.” You’ll be red-lined from neighborhoods, your former friends will shun you, you’ll be denied credit, and you’ll find it impossible to communicate with non-musicians. Do you really want to end up like that?
  5. Guitar Feedback "Modeling" with Samplers Add an extra dimension to keyboard samplers by emulating "guitar feedback" effects by Craig Anderton One of the great things about guitar is feedback, but how can you possibly translate that to a keyboard sampler? Here are some techniques that come pretty close, and add another dimension of expressiveness to keyboards. Note that it will really help to have an E-Bow, but depending on how facile you are with creating feedback, this may or may not be necessary. SEPARATING THE ELEMENTS There are four guitar feedback elements:

1. Attack
2. Sustain/decay prior to the onset of feedback
3. Initial body resonance feedback
4. Body feedback + harmonic feedback (that “whine” that appears at the end of a sustained chord)

The problem with sampling these is that different notes go into feedback at different times, and the character of the feedback is different. I feel a keyboard player would want a bit more note-to-note consistency, so I sample each element individually, then mix them together into a single note using a DAW. THE SETUP Of course, for feedback you need to mic an amp. Start recording, and hit a power chord. After getting a good attack, let the note decay without feeding back. Then, bring the guitar in toward the amp and touch the headstock to the amp. Doing this creates noise and thunks until the guitar is firmly pressed against the amp, but using a DAW makes it easy to cut out the sample’s bad parts. Once the guitar goes into body feedback, let it sustain for a while. Then, to get the harmonic feedback, I drive one of the chord’s strings with an E-Bow to make the process easy. There will be some discontinuity while switching on the E-Bow and waiting for it to feed back, but that’s not an issue. Capture about 10 seconds of sustained E-Bow harmonics, then stop recording. CUTTING UP I like to get about a 10-12 second sample for each note, and loop the final harmonic feedback. 
I typically end up with a raw sample that’s about 2 or 3 minutes long, so it’s chopping time (Fig. 1). Fig. 1: The three elements for a guitar feedback sample, as described next. These eventually get mixed down to a single audio clip, with the end looped using either a digital audio editor or within a sampler. Isolate the best attack along with its natural decay (about 4 seconds); there’s your first track. Then chop out the best 6 or so seconds of the body feedback for the second track. Then cut about 4 seconds of harmonic feedback—there’s another track. Next, add crossfades among the various sections to create a single, unified note. Because a guitar’s sound is so rich, the crossfaded sections sound just like part of the sound’s natural evolution. Now all that’s left is to loop the end. Your DAW probably won’t be able to do that, but you can do the looping in a sampler like Kontakt or MachFive, or a digital audio editor like Steinberg Wavelab or Sony Sound Forge. THE COUP DE GRÂCE To provide some control over the feedback sound, I cheat: within the sampler, I layer a sine wave tuned a couple of octaves, or an octave or two and a fifth, above the fundamental (the optimum choice depends on the note, so I choose different notes for different chords, just so that the sounds don’t have too much “sameness”). The sine wave is modulated by three sources:

1. An amplitude envelope with a really looooong attack, so that even if the player gets into the looped section, there will still be something evolving and changing.
2. Low-amplitude vibrato at “finger vibrato” speed.
3. A modulation wheel controlling amplitude, so the player can bring in the sine wave “feedback” at any time.

Granted, no keyboard sampler will replace a guitar...but you can still have a lot of fun trying, and even come up with effects that you can’t get with a “real” guitar. Need proof? 
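Incidentally, the sine-wave layer described above is normally programmed inside the sampler, but the idea is easy to sketch in code. This is a minimal illustration, not the article’s actual sampler setup: the sample rate, attack time, and vibrato depth are assumed values, and the modulation-wheel control is omitted.

```python
# Sketch of the layered "fake feedback" sine: a tone two octaves above
# the fundamental, shaped by a very slow attack envelope plus light
# finger-speed vibrato. All numeric values are illustrative assumptions.
import math

SR = 44100  # sample rate in Hz (assumed)

def feedback_layer(fundamental_hz, seconds, attack_seconds=8.0,
                   vibrato_hz=5.5, vibrato_depth_hz=0.4):
    """Render the sine 'feedback' layer as a list of float samples."""
    freq = fundamental_hz * 4.0          # two octaves above the fundamental
    out, phase = [], 0.0
    for i in range(int(seconds * SR)):
        t = i / SR
        env = min(t / attack_seconds, 1.0)   # long linear attack
        # Finger-speed vibrato: gently wobble the instantaneous frequency
        inst_freq = freq + vibrato_depth_hz * math.sin(2 * math.pi * vibrato_hz * t)
        phase += 2 * math.pi * inst_freq / SR
        out.append(env * math.sin(phase))
    return out

layer = feedback_layer(110.0, 1.0)  # one second of the layer over an A2 chord
```

Because the attack is far longer than the rendered second, the layer is still swelling when a sampler’s loop point would kick in, which is exactly the point of the slow-envelope trick.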
Check out the resulting sound in the music video.
  6. G7th Performance 2 Capo for Guitar It's not cheap, but does the performance justify the premium price? by Craig Anderton I was never too interested in capos because I can play in different keys, but lately I’ve realized capos are an easy way to create novel timbres by using familiar voicings in different keys. Also, Gibson’s G FORCE tuning system introduced a capo mode, so it’s a lot quicker to tweak the tuning than doing so manually. But there’s a bewildering variety of capos out there, from a few dollars up to at least $60, as well as some creative variations on a theme—like the Spider Capo (with individual lever pads for each string), Dunlop’s combination capo and slide converter, and more. However, the G7th Performance 2 capos stand out from the crowd, not just because of the price point (about $35-$50) but also the functionality. Furthermore, they look like what a capo would look like if Apple’s Chief Design Officer, Jony Ive, had decided to make a capo instead of things like iPhones and iMacs. The industrial design is top-notch. Aside from being lightweight, compact, and attractive, attaching the capo is simple (and you can attach as well as move it with one hand): slide it over the strings, and squeeze the top and bottom sections. The capo holds firmly in place until released by pushing on a small tab. This also means that when not in use, you can simply clamp the capo to your headstock. So is it really that simple? Yes. Just make sure you apply enough pressure to hold the strings down without buzzing but not enough to pull them out of tune; and for best results, place the Performance 2 not too far behind the frets. Also, note that there are several variations on a theme. Both nylon-string and steel-string versions are available in silver, satin black, and (for a nominal surcharge) gold-plated. There’s also a silver version for 7.25” radius vintage necks. 
With all of them, the company claims the materials that come into contact with your guitar have no short- or long-term effect on the finish. There’s really not much else to say, because the Performance 2 worked perfectly on every guitar I tried, which ran the gamut from a Gibson SG’s traditionally thin neck to a J-45 acoustic. Granted, there are less expensive alternatives (including G7th’s Newport and Nashville lines), but I’ve yet to find a capo that matches the Performance 2’s performance, ease of use, and design. - HC - Resources G7th's video on the Performance 2 The Performance 2 is available from: Sweetwater B&H Musician's Friend Direct from G7th ...as well as local dealers like Sam Ash and Guitar Center
  7. How To Make Keyboards Fit in a Mix Have your keyboard play nice with others by Craig Anderton A guitar covers about 3.5 octaves, a bass about 3 octaves, and most voices a few octaves—but keyboards can cover 7 octaves and beyond. What’s more, synthetic sounds often cover a huge part of the frequency spectrum (second only to drums), from thundering bass to trebly highs. Your mission, should you decide to accept it, is to get that monster sound to play well with other instruments, and sit in a mix instead of dominating it (unless, of course, the keyboard is supposed to dominate the mix!). THE ELECTRIC/ACOUSTIC DICHOTOMY If you’re recording primarily acoustic instruments, or electric instruments through amps, mixing in a synthesizer that was recorded direct will often sound just plain “wrong”—it will lack the “air” created by recording acoustic instruments through a mic, and have an extended high-frequency response compared to acoustic instruments. There are four main solutions, which can be used individually or together:

1. Roll off some of the highs. A little high-frequency shelving, down maybe 1.5 dB starting at 10 kHz, will bring the high-frequency spectrum more into line with acoustic instruments. Be careful, though; don’t dull the sound too much, as it may still have to balance sonically with the high-frequency transients caused by, for example, picking an acoustic guitar string.
2. Feed the keyboard through an amp, mic it, and record it to a track, then blend that with the direct track. If it’s well recorded, you might even want to use the amp sound by itself. A PA, or portable PA/instrument amp like the Cerwin-Vega P1000X or P1500X, can give a neutral sound, while a guitar amp offers more “character.”
3. Play back the direct recorded sound through your monitors, and mic them. This is a variation on going through an amp, but if you don’t really have any other way to add ambience, this will work in a pinch.
4. Add multiple short delays (around 15-30 ms), and mix them in at low volume with the direct sound. This helps simulate the sound of early reflections in a room. A tapped delay with 8 or more taps is ideal for this; too few taps probably won’t give a realistic enough sound.

THE POTENTIAL OF PROPER PANNING Most current synthesizers have stereo outs to take advantage of any onboard stereo effects, as well as provide panning options. For example, some patches might tie notes to panning so that the left notes come out of the left speaker and the right notes come out of the right speaker, or splits might be placed in stereo. However, few instruments other than drums are stereo. Guitar, bass, woodwinds, voice, and the like are basically mono sources, with stereo created through the use of ambience (real or artificial). If the keyboard covers the entire stereo field, that doesn’t leave much room for other instruments. Fig. 1 shows a typical rock band panning scenario. Fig. 1: The synth pans more to the left and the guitar more to the right, thus opening up the center for bass, kick, vocals, and other instruments. The synth pans from left to somewhat left of center, and the rhythm guitar pans from right to somewhat right of center. The center is left open for bass, kick, vocals, leads, and other “center-oriented” parts, while the drums can be panned across the stereo field, along with “extras” like percussion or delays. To spread the synth as desired, simply pan the left track full left, and the right track to left of center (if the DAW’s track contains a stereo signal, you may need to split the stereo track into two mono tracks so each can be panned individually, or there may be some kind of balance control that does the job). Sonar users can take advantage of the Channel Tools plug-in (Fig. 2), which allows changing not just the angle of each channel in a stereo track, but also the width. Fig. 
2: Sonar’s Channel Tools plug-in includes sliders that allow adjusting the angle and width of a stereo signal’s left and right channels independently. For example, the keyboard could spread in “stereo” from left to left of center, or be centered somewhere along that path—in other words, most of the keyboard’s audio energy could be concentrated at the midpoint between the left and left-of-center points. Remember, the whole point of most mixes is to create a great balance among all the instruments, where they sound like a cohesive ensemble but you can also differentiate among the various sounds. The above tips can definitely help your keyboard synth snuggle comfortably into the mix with all the other instruments, yet retain its identity. Join the discussion on Harmony Central's Keyboard Forum ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
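For the tinkerers: the multiple-short-delays tip above (15-30ms taps, mixed in at low volume) is easy to prototype in code. Here’s a minimal Python sketch of an eight-tap early-reflection generator; the even tap spacing and the 0.15 tap gain are illustrative assumptions, not a preset.

```python
# A minimal sketch of the multi-tap early-reflection trick: eight short
# delay taps spread between 15 and 30 ms, mixed at low level with the dry
# signal to fake the sound of a room. Tap spacing and gain are
# illustrative assumptions, not magic numbers.

def add_early_reflections(dry, sample_rate=44100, num_taps=8, tap_gain=0.15):
    """Return the dry signal plus several quiet short delays."""
    out = list(dry)
    for i in range(num_taps):
        # taps spread evenly from 15 ms to 30 ms
        ms = 15.0 + i * (30.0 - 15.0) / (num_taps - 1)
        offset = int(sample_rate * ms / 1000.0)
        for n in range(offset, len(dry)):
            out[n] += tap_gain * dry[n - offset]
    return out
```

In a real DAW you’d use a tapped-delay plug-in instead, but the principle is the same: each tap arrives a hair later than the dry sound, at a much lower level.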
  8. I loved this iPad app when I first tried it, and several months later…I love it more. But that’s because even though it’s an iPad app, it’s a musical instrument and like all musical instruments, I’m getting better at it. To find out more about GeoShred, please check out my review. So it’s time for a Pro Review, but this time, we’ll be taking a different tack: a crowdsourced Pro Review. Of course, all Pro Reviews are crowdsourced to a great degree because everyone is invited to comment and ask questions, but we’re going one step further: We’ll provide 15 of you with a free, unlimited code to download the app so you can review it yourself! However, there’s a catch: You have to commit to trying out GeoShred, and commenting on your experiences in this thread. And of course, you also need an iPad 2 or higher. How easy/difficult is it to learn? Which options do you like best? What do you think of the modeling effects? Is this something you’d use in the studio or live? Don’t agonize over writing a huge summary, but do a post every now and then as you find out more about the app, starting with your first impressions. I’d rather see 20 quick, interesting posts than one huge post, but if you want to do huge posts…be my guest! And feel free to create audio or video examples we can post on the Harmony Central YouTube channel. So here’s what you need to do: PM me (Anderton) and explain why you want to check out GeoShred (determines who gets a code in case of a tie), and that you understand in return for getting a free download code, you’ll be posting in this thread. That’s all there is to it! Allow a day or two after your request before you get the code. If you don’t receive a reply, it simply means we’ve run out of codes…but you can certainly feel free to spring the bucks to get your own, and comment anyway. 
I really think you’re gonna love this app, but thanks to this Pro Review, you can be the judge of that and let us know what you think. Have fun, and let the games begin!
  9. How To Create "Preverb" (Man) Here's how to create a popular sound from the psychedelic era by Craig Anderton “Preverb” was a popular effect in 60s music, where reverb built up to a note instead of decaying after it. It was a fairly time-consuming effect to set up with tape-based recording; you needed to record the track to be preverbed, flip the tape reels over to reverse the tape direction, play back the track through reverb, record only the reverb, then flip the tape reels back again so that the music played normally—but the reverb played in reverse. Ironically, now that today’s DAWs make it easy to replicate that effect, people don’t seem to be interested in using it. Maybe that’s because it could make a track sound “dated” (sort of like how gated reverb on drums screams “80s music”), but it’s still a cool effect that’s worth a try when you want to add a sort of otherworldly quality to vocals, guitar, drums, and other signal sources. ADDING PREVERB Pro Tools makes it particularly easy to add the preverb effect; with their AudioSuite Delay or Reverb effects, just click on the Reverse button. However, this is not quite as flexible as the more “universal” method presented next (which also works with Pro Tools). This requires that the clip have some silence before the first sound, but later we’ll cover what to do if there’s no silence. Start by copying the clip or track to which you want to add preverb, then process the copy with the DAW’s reverse function (Fig. 1). Here’s how you reverse clips in various programs. Fig. 1: The top (red) waveform is the original guitar part. The orange waveform below is the reversed version; the next one down (dark blue) adds reverb, and applies the effect to the clip. The bottom waveform (violet) re-reverses the reverberated track to create “preverb.” During playback, you need only the top and bottom clips. 
Ableton Live: in the clip overview sample box, click Rev
Acoustica Mixcraft: right-click the clip > Reverse
Apple Logic: double-click the clip, then choose Functions > Reverse
Avid Pro Tools: select clip > AudioSuite > Other > Reverse
Cakewalk Sonar: select clip > Process > Apply Effect > Reverse
Magix Samplitude: right-click the clip > Effects (Offline) > Sample Manipulation > Reverse
MOTU Digital Performer: select clip > Audio > Apply Plug-In > Reverse > Select > Apply
Presonus Studio One Pro: right-click the clip > Audio > Reverse Audio
Propellerheads Reason: right-click the clip > Reverse Clips
Sony Acid Pro: select clip > type U
Steinberg Cubase: select clip > Audio > Process > Reverse
Insert a reverb or delay plug-in into the copied/reversed track or clip, then adjust the reverb’s effect settings. Choose an all-wet effect mix, with no dry signal. After obtaining the desired reverb sound, select the reversed track and apply (render) the effect so the effect becomes part of the waveform (for example, with Studio One Pro invoke Track Transform; in Sonar, use Apply Audio Effect). Now, reverse the backward, reverberated track to “un-reverse” it. Mix this with the original dry track, and now you have preverb. AUDIO THAT STARTS IMMEDIATELY WITH SIGNAL If an audio clip or track has no silence at the beginning, trying to add preverb will be ineffective because there won’t be any place for the reverb to decay when reversed. So, to add preverb at the very beginning of a clip or track, you’ll need a blank section before the first sound. This section needs to be equal to or longer than the reverb’s decay. Either insert silence, slip-edit the track to extend the beginning then bounce it to itself, or whatever lets you pre-pend silence. If you want to add preverb to a track before the entire song starts, then select all tracks and shift them to the right to open up a few measures at the song’s beginning. 
Now you can extend the original track you want preverbed to the project start so it includes silence. Continue by copying the original track, reversing, and following the steps detailed previously to add preverb. Join The Discussion in the Harmony Central Recording Forum
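If you’d rather see the recipe as code, the reverse-then-reverb-then-reverse process boils down to three lines. Here’s a minimal Python sketch; the tiny convolution “reverb” (a decaying impulse response) stands in for a real reverb plug-in, and all the numbers are illustrative.

```python
# A minimal sketch of the preverb recipe: reverse the audio, apply an
# all-wet reverb, then reverse again so the reverb builds INTO each note.
# The convolution below stands in for a reverb plug-in (an assumption
# for illustration - any all-wet reverb render works the same way).

def apply_reverb(signal, impulse):
    """All-wet 'reverb': convolve the signal with an impulse response."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for n, s in enumerate(signal):
        if s:
            for k, h in enumerate(impulse):
                out[n + k] += s * h
    return out

def preverb(dry, impulse):
    """Reverse, reverb, reverse again - the tape-flipping trick in code."""
    reversed_dry = dry[::-1]
    wet = apply_reverb(reversed_dry, impulse)
    return wet[::-1]
```

Mix the result with the original dry track, just as described above, and the reverb tail leads up to each note instead of trailing it.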
  10. How to "Proof" MIDI Sequences Fix those little “gotchas” before they make it into the final mix by Craig Anderton MIDI sequencing is wonderful, but it’s not perfect—and sometimes, you’ll be sandbagged by problems like false triggers (e.g., what happens when you brush against a key accidentally), having two different notes land on the same beat when quantized, voice-stealing that cuts off notes abruptly, and the like. These glitches may not be obvious when other instruments are playing, but they nonetheless can muddy up a piece or even mess up the rhythm. Just as you’d “proof” your writing, it’s a good idea to “proof” sequenced tracks. Begin by listening to each track in isolation; this reveals flaws more readily than listening to several tracks simultaneously. Headphones can also help, as they may reveal details you’d miss over speakers. As you listen, also check for voice-stealing problems caused by multi-timbral soft synths running out of voices. Sometimes if notes are cut off, merely changing note durations to prevent overlap, or deleting one note from a chord, will solve the problem. But you may also need to dig deeper into some other issues, such as . . . NOTES WITH ABNORMALLY LOW VELOCITIES OR DURATIONS Even if you can’t hear these notes, they still use up voices. They’re easy to find in an event list editor, but if you’re in a hurry, do a global “remove every note with a velocity of less than X” (or for duration, “with a note length less than X ticks”) using a function like Cakewalk SONAR’s DeGlitch option (Fig. 1). Fig. 1: Cakewalk SONAR's DeGlitch function is deleting all notes with velocities under 10 and durations under 10 milliseconds. Note that most MIDI guitar parts benefit greatly from a quick cleanup of notes with low velocities or durations. 
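The DeGlitch-style cleanup above is simple enough to express as a filter. Here’s a Python sketch; the notes are plain dicts with assumed field names (`velocity`, `duration_ms`), not any particular sequencer’s data format.

```python
# A sketch of a DeGlitch-style cleanup: drop any note whose velocity or
# duration falls below a threshold, since even inaudible notes use up
# voices. Notes are plain dicts here - an illustrative format, not a
# real MIDI library's API.

def deglitch(notes, min_velocity=10, min_duration_ms=10):
    """Remove near-silent or ultra-short notes that still eat voices."""
    return [n for n in notes
            if n["velocity"] >= min_velocity
            and n["duration_ms"] >= min_duration_ms]
```

The thresholds (velocity 10, duration 10 ms) mirror the settings shown in Fig. 1; for MIDI guitar parts you might raise them.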
UNWANTED AFTERTOUCH (CHANNEL PRESSURE) DATA If your master controller generates aftertouch (pressure) but a patch isn’t programmed to use it, you’ll be recording lots of data that serves no useful purpose. When driving hardware synths, this can create timing issues and there may even be negative effects with soft synths if you switch from a sound that doesn’t recognize aftertouch to one that does. Note that there are two types of aftertouch—channel aftertouch, which generates one message that correlates to all notes being pressed, and polyphonic aftertouch, which generates individual messages for each note being pressed. The latter sends a lot of data down the MIDI stream, but as there are few keyboard controllers with polyphonic aftertouch, you may not encounter this issue. However, polyphonic aftertouch can be extremely expressive, so if your keyboard has it, be sure to take advantage of it as described in this article. Fig. 2 shows Steinberg Cubase’s Logical Editor, which is ideal for removing specific types of data. Fig. 2: In this basic application of Steinberg Cubase's Logical Editor, all aftertouch data is being removed. Note that many recording programs disable aftertouch recording as the default, but if you enable it at some point, it may stay enabled until you disable it again. OVERLY WIDE DYNAMIC VARIATIONS This can be a particular problem with drum parts played from a keyboard—for example, some all-important kick drum hits may be much lower than others. There are two fixes: Edit individual notes (accurate, but time-consuming), or use a MIDI edit command that sets a minimum or maximum velocity level, like the one from Sony Acid Pro (Fig. 3). With pop music drum parts, I often limit the minimum velocity to around 60 or 70. Fig. 3: Sony's Acid Pro makes it easy to restrict MIDI dynamics to a particular range of velocity values. 
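Both of the cleanups above (stripping aftertouch, and restricting velocities to a range) amount to one-line transforms over the event list. A Python sketch, with events as simple (type, value) tuples — an assumed format for illustration, not any particular library’s:

```python
# Sketches of two cleanups: stripping channel/poly aftertouch events
# (the Logical Editor job), and clamping velocities to a range (the
# dynamics-restriction job). Events are simple (type, value) tuples -
# an illustrative format, not a real MIDI library's API.

def strip_aftertouch(events):
    """Drop channel- and poly-aftertouch events from an event list."""
    return [e for e in events
            if e[0] not in ("channel_aftertouch", "poly_aftertouch")]

def clamp_velocity(velocity, lo=60, hi=127):
    """Restrict a velocity to [lo, hi] to tame wide dynamic swings."""
    return max(lo, min(hi, velocity))
```

The default floor of 60 matches the pop-drum-part suggestion above; raise or lower it to taste.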
DOUBLED NOTES If you “bounce” a key (or drum pad, for that matter) when playing a note, two triggers for the same note can end up close to each other. This is also very common with MIDI guitar. Quantization forces these notes to hit on the same beat, using up an extra voice and producing a flanged/delayed sound. Listening to a track in isolation usually reveals these flanged notes; erase one (if two notes hit on the same beat, I generally erase the one with the lower velocity value). Some programs offer an edit function that deletes duplicates automatically, such as Pro Tools’ Delete Duplicate Notes function (Fig. 4). Fig. 4: Pro Tools has a menu item dedicated specifically to eliminating duplicate MIDI notes. OVERLAPPING NOTES IN SINGLE-NOTE LINES This applies mostly to bass and wind instruments. In theory, with single-note lines you want one note to end before another begins. Even slight overlaps make the part sound mushier (bass in particular loses “crispness”), but what’s worse, two voices will briefly play where only one is needed, causing voice-stealing problems. Some programs let you fix overlaps as a Note Duration editing option. However, note that with legato mode, you do want notes to overlap. With this mode, a note transitions smoothly into the next note, without re-triggering an envelope when the next note occurs. Thus in a series of legato notes, the envelope attack occurs only for the first note of the series. If the notes overlap without legato mode selected, then you’ll hear separate articulations for each note. With an instrument like bass, legato mode can simulate sliding from one fret to another to change pitch without re-picking the note. Join the discussion on Craig Anderton's Sound, Studio, and Stage
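The last two fixes — deleting doubled notes and trimming overlaps — can also be sketched in a few lines of Python. Notes are (start, pitch, velocity, duration) tuples here, an assumed format for illustration; keeping the higher-velocity duplicate follows the advice above.

```python
# Sketches of the doubled-note and overlap cleanups. Notes are
# (start, pitch, velocity, duration) tuples in ticks - an illustrative
# format, not any sequencer's actual data model.

def delete_duplicates(notes):
    """Keep only the loudest note per (start, pitch) pair."""
    best = {}
    for start, pitch, vel, dur in notes:
        key = (start, pitch)
        if key not in best or vel > best[key][2]:
            best[key] = (start, pitch, vel, dur)
    return sorted(best.values())

def trim_overlaps(notes):
    """Shorten each note so it ends no later than the next note starts
    (for single-note lines - don't run this on legato passages)."""
    notes = sorted(notes)
    trimmed = []
    for i, (start, pitch, vel, dur) in enumerate(notes):
        if i + 1 < len(notes):
            next_start = notes[i + 1][0]
            dur = min(dur, next_start - start)
        trimmed.append((start, pitch, vel, dur))
    return trimmed
```

As noted above, overlap-trimming is only for non-legato single-note lines; in legato mode the overlaps are the whole point.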
  11. Windows 10 Report Card for the Studio So should you upgrade from Windows 7 or not? Here's one person's report card by Craig Anderton A while ago I decided to take a leap of faith and go "all in" on Windows 10 for my studio computer—yes, even with an always-on net connection, and checking the "sure, I don't mind sending you user feedback" box. I even did an in-place install over Windows 7 rather than wipe my hard drive clean, so this was some serious living on the edge. And now, here's the report card on 10 subjects for Windows 10. Look: B+ Making it tablet-friendly has also made the interface generally more obvious and simple. I'd give it an A if there were an option to turn on Aero, although to be fair, even though I thought I'd miss that gorgeous glassy look, I haven't. MIDI improvements: B+ (and maybe even A) MIDI being multi-client is huge—it means a single MIDI application doesn't hog the computer. But if they fixed the MIDI port limitation (and so far, I haven't encountered that limitation, so it may have been fixed) then it's an A. Groove Music: B I was going to give this Microsoft version of iTunes an F because the last time I tried to use it, album song orders were alphabetical. Seriously? But I tried it again and—let’s hear it for the “rolling updates,” because now the song orders are correct. And it’s showing album art, which I don’t remember seeing before. The only reason it didn’t get an A is because it’s very oriented toward organizing your music collection on a computer, not easy auditioning of individual audio files, like us audio types need to do. Handling of in-app purchases: A This was a big surprise. I expected an onslaught of "Like your 1 GB cloud storage? Get 50 GB for only $1.99 a month!!" notifications, but Microsoft has been surprisingly restrained. Maybe they're just laying low until everyone has converted to Windows 10, but so far, so good. 
Default music file format choice: A Microsoft has jettisoned Windows Media Audio (which was actually quite good, but couldn’t compete with the iPod’s format of choice) and decided that FLAC will be its main squeeze for an audio format. Full fidelity, less space, none of the artifacts of MP3 or AAC…I’m in. Native audio improvements: C+ Yes, Windows' native audio can have lower latency than before...but it’s not Core Audio or ASIO. However, it gets a + because my Windows DAWs run a little bit more smoothly with ASIO. Making sound adjustments in the control panel: D You want to change your default system audio device, so you go to Settings. Logical, right? But then you go to Personalization…uh, okay, and then…Themes? Yes, Themes, and then under Related Settings you’ll find Advanced Sound Settings. However, what kept the grade from going to D- is the ease of messing with the volume control, which is nice and obvious on the taskbar…and it doesn’t disappear mysteriously. Update reliability: B+ I've experienced nothing nasty from updates except after one update, something strange seemed to happen with the USB ports. Or maybe it was poltergeists. Whatever, it went away after the next update that happened a couple days later, so I’ll blame Microsoft just for the heck of it because I’ll assume it was cause and effect. Update notifications: F It's actually worse than Windows 7, which at least had a parade of inscrutable characters telling you things were being updated. But then you'd turn on the computer and it would take forever to boot. Was the update successful? Was there a problem? Was it updating? Who knows...and that aspect remains. Now when a boot seems to take forever, I just go away for a while...have a snack or something so I don’t sit around nervously. 
There hasn't been a fail, but in Microsoft's desire to make updating transparent and in the background, they've instead managed to make me nervous by not putting up a message that says something like "Your computer is installing updates, please be patient." Doing what Windows 8 was supposed to do: A Those wretched metro apps have been replaced by a smart handling of tiles in the Start menu. We don’t have the musical equivalent of cool iPad apps to put in there yet, but at least there’s an environment for them that won’t make you want to throw your computer out the window. Edge Browser: B+ Internet Exploder was an easy act to follow, but Edge is a major improvement and not just an incremental one. Although it still has a few rough edges—web pages that look fine in other browsers may have some anomalies—it’s a big step up, and feels likely to become even sleeker in the future. And there you have it. Admittedly, Windows 7 was a seriously great operating system, which is why it stays rooted to many a C: drive. However, I’m glad I updated to Windows 10, which was without a doubt the most pain-free Windows update ever. After the Windows 8 fiasco (an OS I tried and immediately discarded), I was skeptical—but Windows 10 got it right.
  12. Better Sound from Acoustic Guitar Piezo Pickups Don't settle for a less-than-the-best acoustic guitar sound by Craig Anderton I just finished a live recording where the player was using an acoustic guitar with a piezo pickup—and every time I hear a piezo pickup, the first thing I want to do is grab a parametric EQ and make it sound like a real guitar! The piezo output doesn’t sound like what you’d hear when listening to a guitar in a room, but it also doesn’t sound like miked guitar. In some ways, a piezo is too accurate because it doesn’t discriminate in what it picks up. Fortunately, properly-applied EQ can tame the piezo sound and make it more realistic. COMPARING FREQUENCY RESPONSES The upper plot in Fig. 1 shows a miked acoustic guitar’s spectrum, while the middle plot shows the unprocessed piezo’s spectrum. Fig. 1: Three spectra from a Gibson J-45 acoustic guitar. The top is the miked sound, the middle the piezo sound, and the bottom is the piezo sound after being processed by EQ as described later. In the miked output, note the major boost around 165 Hz. This corresponds to the body’s “acoustic filtering.” Virtually any acoustic guitar exhibits a characteristic low-frequency bump, and capturing that bump is part of the sound. There’s also a slightly higher-frequency dip above this bump. The piezo not only misses the bump’s peak, but the frequency response extends much lower, giving a “boomy” sound. Also, the piezo’s high frequencies are more pronounced because piezos tend to have a natural brightness. Finally, in the miked spectrum, there’s a bit more energy in the upper mids. These differences are why a miked guitar often “sits” better in a track than one recorded with a piezo, as the miked version occupies a narrower part of the frequency spectrum. You can’t make a piezo sound exactly like a miked guitar, because the physics of the transducers are so different. However, EQ can tailor a piezo’s sound (Fig. 2). Fig. 
2: Cakewalk SONAR’s ProChannel EQ is using five EQ bands to tame the raw piezo output. Here’s what each filter stage is doing:
Highpass filter: A steep, 30dB/octave slope rolls off lows starting at around 116 Hz.
Lowpass filter: This reduces highs starting at around 9.3 kHz with a gentler, 18 dB/octave slope.
Low parametric stage: Boosts at 161 Hz.
Lo Mid parametric stage: Cuts around 460 Hz.
High parametric stage: Lifts the upper mids a bit around 3.1kHz.
Now refer back to Fig. 1, and note how the EQ’ed piezo plot at the bottom is much closer to the miked sound. VARIATIONS ON A THEME If you don’t have a miked sound as a reference for comparison, the EQ settings above are fairly consistent “ballpark” settings. But of course you don’t have to imitate the miked sound, and can use EQ to enhance or reduce particular frequencies for specific applications. As just one example, a guitar might have additional resonances you want to reduce (Fig. 3). Fig. 3: Note how the Low, Lo Mid, and High Mid settings have been tweaked to affect three specific midrange resonances. Taken together, these three response dips still reduce the midrange, but do so with more precision. Another option is wanting a “big” sound to accompany a solo singer, but not overwhelm the vocals (Fig. 4). Fig. 4: The highs and lows are accented, and the midrange scooped to make space for vocals. In this curve, the EQ still raises the highs and lows, but doesn’t roll off the highest frequencies to give a bright sound, and gives a significant low end boost to give a big, beefy sound. Also note that the high frequency boost extends down into the upper midrange, which makes the highs less brittle by comparison. Meanwhile, the midrange is taken down to carve out additional room for the vocals. Finally, suppose you want the EQ to support fingerstyle guitar picking and provide a highly articulated sound (Fig. 5). Fig. 5: This curve provides increased definition. 
The major boost in the 2-3 kHz range makes the note articulations really stand out, although there’s still some lower midrange drop to make room for vocals. TWEAK THAT PIEZO! Hopefully this will inspire you not to accept what comes out of the piezo pickup, but to tweak it for a more natural sound that’s much more like what we hear from a guitar in a room, or when miked. The results will be much more aesthetically pleasing than the midrangey, “honking” vibe of an unequalized piezo pickup.
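For readers who like to experiment outside a DAW, the parametric stages above can be modeled with standard peaking-EQ biquads. Here’s a Python sketch using the well-known RBJ “Audio EQ Cookbook” formulas; the article gives the band frequencies (161 Hz boost, 460 Hz cut, 3.1 kHz lift) but not exact gain amounts, so the dB values in PIEZO_BANDS are assumptions, and the highpass/lowpass stages are left out for brevity.

```python
# A sketch of the parametric stages using RBJ Audio EQ Cookbook peaking
# biquads. Band gains in PIEZO_BANDS are illustrative assumptions (the
# article specifies frequencies, not dB amounts); the highpass and
# lowpass corner filters are omitted for brevity.
import math

def peaking_biquad(freq_hz, gain_db, q=1.0, sample_rate=44100):
    """RBJ-cookbook peaking-EQ biquad coefficients (b, a), a normalized."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [1.0, a[1] / a[0], a[2] / a[0]]

def biquad_filter(signal, b, a):
    """Direct-form I biquad filter."""
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        out.append(y)
        x2, x1 = x1, x
        y2, y1 = y1, y
    return out

# The article's three parametric bands as (frequency Hz, gain dB) pairs;
# the gain amounts are assumptions for illustration.
PIEZO_BANDS = [(161, 4.0), (460, -4.0), (3100, 3.0)]
```

Chaining `biquad_filter` once per band approximates the five-band correction’s midrange shaping; a peaking EQ leaves DC (and frequencies far from the band) at unity gain, which the test below checks.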
  13. Sir George Martin: Thank You You helped so many make better music By Craig Anderton I read the news today, oh boy…such a loss. At Harmony Central, our goal is to help people “make better music.” We can’t think of anyone who lived that ideal more than Sir George Martin, who not only helped the Beatles and many other artists make better music, but influenced and inspired an entire generation to push the envelope of what was possible with recording and artistic expression. However, his dedication to advancing the art of recording wasn’t limited to the elite of the pop world. In 1975, I was establishing a career as an author and had written the book “Home Recording for Musicians.” My publisher asked who would be a good choice to write the foreword. I flippantly said “George Martin,” although of course I was sure there was a better chance of my sprouting wings and flying to the moon than him agreeing to write a foreword. Quite the contrary. He asked to see a sample of the book, and thankfully, loved what he saw so he wrote an eloquent foreword that set a tone of inspiration. It couldn’t have been a better opener…and he did it all for someone he had never even met. But in retrospect, I think I know the reason why: he simply couldn’t pass up any opportunity to encourage others to participate in the joy of making and recording music. Many years later, I attended an event where he was present. I summoned up the courage to approach him, and said “I’m sure you don’t remember, but you did me a tremendous favor years ago. You wrote a foreword to my book and I just wanted to thank you in person.” He smiled and said “Ah yes, Craig Anderton.” I was totally blown away that he took writing a foreword to a book from some nobody so seriously that he remembered it. Apparently he brought the same degree of careful attention to detail to everything he did—not just outstanding record production. Sir George Martin did a lot for the world. 
He helped re-define the role of the studio, what a producer could contribute, and helped artists make better music…but more importantly, he helped bring joy to millions of people. He changed the world, and he’s leaving behind a better world than the one he was born into. Thank you, Sir George. Leave your thoughts here
  14. So That's Why They Call It "Playing" Music by Craig Anderton This story involves a politician, but it doesn’t involve politics (you’re welcome). I was on a plane, and sitting a few rows ahead was Representative Paul Ryan, who’s now Speaker of the House. He had earbuds, and was listening intently to…what? Senate proceedings? An audio book, perhaps? While we stood in the jetway waiting for our gate-checked baggage, I asked what he was listening to so intently. Probably the Carpenters’ Greatest Hits, right? Maybe Kenny G? It was Led Zeppelin. Yes, the purveyors of debauchery and on-tour madness had worked their way into the ear canal of the man who, had Mitt Romney been elected president in 2012, would have become vice president of the United States (although I’m sure it would have been a different kind of vice than Led Zeppelin’s). So I asked if he played guitar. “Yes…air guitar,” and he laughed. But I don’t think it was my imagination that a brief flash of regret seemed to cross his face. It’s one thing to listen to Jimmy Page; it’s another to be strutting across a stage, pounding out riffs on a Les Paul while thousands of fans are screaming their heads off. Yet he didn’t take up the guitar, because he said he just wasn’t good at it. Well, news flash: I could never hop a mogul like Jean-Claude Killy, but I liked to ski. And I’ll never make Celebrity Chef, but frankly, I cook a reasonably good salmon and besides, there are no documented cases of anyone dying from my cooking. Listening to music is about enjoyment, but so is playing music. If you’re reading this, you probably already know that making music is fun. But it’s time to let others know. I have a friend who keeps various percussion toys around, and when he puts on music, encourages guests to pick up an instrument and play along. Although they’re usually embarrassed at first, it doesn’t take long before they’re smiling. 
Maybe that smile will turn into nothing, or maybe it will turn into checking out a Casio keyboard or inexpensive acoustic guitar. As Lao Tzu said, “The journey of a thousand miles begins with one step.” I’ll probably never see Paul Ryan again, but if I do, I’m going to ask for his shipping address and send him a guitar. He’ll probably never become a great guitarist…but I bet he’ll have fun trying. - Craig Anderton ps: if you have friends who are musicians, forward them Harmony Central's Make Better Music. They'll thank you and we thank you.
  15. Can Music Really Change the World? by Craig Anderton It may sound hopelessly idealistic, but all of us at HC truly believe that music can change the world. During a time when society seems to be filled with complicated environmental, social, religious, and political problems, we believe music can provide the healing mojo that helps bring joy, and reduces the stress of everyday life. But can music really change the world, or is this all a naive pipe dream? The answer is both more complex, and more simple, than you might think. Plenty of studies show that music affects individuals. A paper in the UK-based Journal of Advanced Nursing describes how listening to music is useful for pain relief and treating depression. Music also decreases post-operative pain. Playing certain types of music can help decrease blood pressure, and reduce heart and breathing rates. Taiwanese researchers have found that listening to Mozart K 448 had an antiepileptic effect in children. And according to a paper published by the National Institutes of Health US National Library of Medicine, music can help in stroke recovery. But the “money quote” from that paper addresses music in general: “Music is a highly complex and versatile stimulus for the brain…Regular musical activities have been shown to effectively enhance the structure and function of many brain areas, making music a potential tool also in neurological rehabilitation.” Or translated into English: Music creates physical changes, too. According to a paper in Neuropsychologia, the corpus callosum—the nervous system highway between the two brain hemispheres—is significantly larger in musicians. Plenty of studies show music is good for your brain. That’s fine, but can music change the world? In some ways, it already has: Music was the soundtrack of the 60s, and musicians like Bob Dylan and the Beatles affected society. 
And I can’t help but wonder if the “tribal” nature of EDM has something to do with everyone synching to the same beat. If music changes the individual for the better, then hopefully those individuals will also help change the world for the better. But if music is food for the brain, do we want to feed it the junk food of data-compressed files, or quality audio that delivers a more pristine experience? I’d vote for the latter. So while we think about improving the world, maybe we should think about improving music’s delivery medium as well. —Craig Anderton
  16. If you're gigging or want to gig, ignore this book at your own peril By Craig Anderton 150 pages, electronic edition If you’ve been enjoying David Himes’ articles as “the Gig Kahuna” on Harmony Central, then you need this book. It includes everything he’s written for HC and more, all in a convenient Kindle or PDF format. However, if you play in a local band, you really need this book. Its brutal honesty helps compensate for the state of denial that afflicts many musicians about “making it.” The irony is that knowing why the odds of “making it” are infinitesimal also tells you what you need to do to increase the odds in your favor, because if nothing else, you’ll learn about what you shouldn’t do as well as what you should do. Himes is acutely aware that “music business” is two words—and if you don’t conduct the business part properly, you can forget about being successful with the music part. One of the elements I really like is the specificity of what Himes communicates. For example, he doesn’t just say “be professional”—he describes the tell-tale signs of unprofessionalism in band members. Himes pulls no punches; his conversational and occasionally sketchy writing style (which could have benefited from a second set of eyes to catch some of the repetitions, but that doesn’t dilute the message) is a blast of reality. He covers topics like cover bands vs. original bands, test marketing, myths about gigging that need to be debunked, being honest about your level of commitment, problems you’ll encounter (and believe me, you’ll encounter all of them at some point), the kind of support team you’ll need, how clubs see you (reality check: you’re a vehicle to sell drinks, not an artiste), the importance of communication, and a whole lot of information on gigs—the different types of gigs, what your objectives should be, how to prepare for gigs, even dealing with the sound crew. 
Himes then segues into an extensive chapter on promotion and marketing (yes, you need to know marketing as well as chord progressions) with an emphasis on using social media to boost your career, and ends with a chapter about what happens beyond local gigging. Himes clearly has a ton of experience informed by over a decade of running a local music paper, and when he writes, it’s like the stern teacher you had in high school—who you didn’t really appreciate until years later, when you realized it was the only class where you actually learned something of true importance. If I had to use two words to describe this book, they would be “tough love.” Himes is unfailingly tough, but the motivation is that he truly cares about his fellow musicians, and really wants to help you avoid the issues that can cut your career short. The bottom line is if you can handle the truth, you can handle this book—and regardless of how much you think you know, your outlook will change and your career will benefit. Kindle edition: Amazon.com Price: $6.95 PDF edition: Direct from publisher; email destechdh@gmail.com. Price $9.95, with PayPal invoice.
  17. Looking to go "into the groove"? This book is a fine place to start By Craig Anderton Hal Leonard Books Softcover, 263 pages, $17.48 If you want to get into EDM, you’re going to need some cool beats. And if you’re already into it, additional sources of inspiration never hurt. Enter this book from Josh Bess, an Ableton Certified Trainer and percussionist. But don’t think this book is relevant only to Ableton Live users—it’s applicable to just about anything with a MIDI piano roll view, and arguably even more so to programs like SONAR that include a step sequencer. The first chapter describes Live basics, which complements the downloadable demo version of Live. The second chapter has useful concepts for beginners that translate to a variety of DAWs, but the real meat of the book—170 pages—starts with Chapter 3, which shows screen shots for grooves. Musical styles include house, techno, breakbeat (e.g., hip-hop, drum ‘n’ bass), and world (e.g., dance hall), but these are also broken down into various sub-genres. Although beats are shown using Live’s Piano Roll View, it’s easy to translate to other piano roll views or step sequencers. Each pattern also includes info about some of the techniques used in the pattern, as well as occasional tips. I consider these a strong addition to the book, as they suggest avenues for additional exploration, and give some interesting insights into how particular beats are constructed. Chapter 4 is 14 pages about drum fills and transitions, again using screen shots for examples, while Chapter 5 covers Groove, Swing, and Feel. This takes only slightly more effort to translate into equivalents for other programs. Chapter 6 has 22 pages on how to build drum kits in Live from one-shots, and Chapter 7 is a one-page summary. For even more universal appeal, the book also includes a download code for a variety of media that are suitable for all DAWs. 
There are 292 drum samples (mostly WAV, some AIF), along with 642 MIDI files for the grooves described in the book, and 19 MIDI files for the fills. For under $20, some might consider the samples and MIDI files alone worth the price, and the files let you take advantage of what’s presented in the book without even having to read most of it. However, the explanations of the rationale behind programming the beats provide a helpful background for those who want to go beyond just importing something and using it “as is.” Bess’s background as a percussionist certainly helps, as it gives a perspective beyond just “put these notes on these beats.” Overall, for those getting into dance music, this book lets you hit the ground running with actual files you can use in a wide cross-section of styles. I could also see this information as being useful for those doing soundtracks if they’re not as familiar with certain styles, yet need to create music using, for example, Dance Hall or Dubstep beats. For less than the cost of a 12-pack of Red Bull at Walmart, you’ll have something with much greater staying power.
  18. The wheel, electricity, the microprocessor...the internet is right up there in that elite group of inventions by Craig Anderton In the 1960s, Marshall McLuhan wrote that he believed the print culture would soon be eclipsed by electronic culture. He coined the term “global village,” where the world becomes a computer-like giant electronic brain. Although the internet wasn’t invented until well after his death, in 1962 he wrote something which has to be considered truly prophetic: “The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual's encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.” So not only did he foresee the internet, he even foresaw YouTube, mass databases, and targeted advertising. The guy was a genius…and like most geniuses, was dismissed as just another crackpot at the time. What got me thinking about the global village was the “World’s Biggest Audition” project in which we’re participating. The concept of watching a video on your telephone from any country in the world, then using it to audition for a superstar rhythm section, would have been science fiction not that long ago. But what also intrigues me about Stewart Copeland and Brian Hardgroove’s project is they’re not just looking for vocalists; they’re looking all over the world, and what makes that possible is our wired global village. Someone in India, Qatar, or Brazil can participate just as easily as I can. Ultimately, will the wired global village unite us or divide us? It’s clear that the World’s Biggest Audition is about bringing people together, but not everyone has those motives. 
From spam to recruiting terrorists to cyber-bullying to snooping, the wired village isn’t always benevolent. Interestingly, McLuhan nailed that, too—as he said, technology does not have morality. Never before in history have we been presented with a gift that allows everyone, everywhere, to communicate. What are we going to do with that gift? I really like what Stewart, Brian, and WholeWorldBand are doing with it…let’s hope that kind of thinking becomes the norm, and not the exception. If you'd like to audition — Go Here To Audition: http://www.harmonycentral.com/forum/forum/Forums_General/hardgroove-and-nothing-less/31616136-official-world-s-biggest-audition-—-don-t-miss-out
  19. Smartphone guitar app adds hardware and goes Android by Craig Anderton Wait! Don’t stop reading just because the sub-head says “Android,” and you assume the audio performance will give you an unwanted (albeit free) echo unit due to latency. IK Multimedia has created a very successful business by supporting iOS devices, so they must have been salivating at the thought of being able to tap the huge Android market. But while Android OS 5.0 has reduced latency, it’s still not really acceptable for real-time playing—so IK does an end run around the problem by building audio processing DSP into the accompanying hardware interface. I tested iRig UA with a Samsung Galaxy Note4. Note that the interface itself is not limited to Android, but will also work with Mac and Windows computers (although it won't do ASIO). However, the DSP within the interface that provides the amp sim processing works only with the Android application software. What You Need to Know The onboard DSP means a higher cost compared to a simple, I/O-only interface. $99.99 buys you a hardware interface with 1/4” input for guitar, 1/8” stereo headphone jack with associated volume control, 1/8” stereo input jack for jamming along with an external audio source, and micro-B USB connector (with appropriate included cable) to hook it up to your phone. iRig UA hooks into your phone digitally, so it bypasses the internal audio preamp for higher quality. With 5.0, you can also stream audio digitally from the phone into iRig UA and bypass the external input. When listening to music, you’ll get more clean volume out of iRig UA’s headphone amp than what’s in your phone. I didn’t have a way to test latency, but it seems like the only possible latency would be from A/D and D/A conversion. This would result in latency under 2 ms. In any event, the “feel” is zero latency. 
For the best experience, download AmpliTube UA for free from the Google Play Store with four guitar amps, one bass amp, nine stompboxes, two mics, and five cabs, with the option for in-app purchases of additional stompboxes and amps in the $5-$10 range. Or, you can buy all available amps and stompboxes for $69.99. iRig UA works with Android OS 4.2 and up, providing there’s support for host mode USB/OTG; to find out whether your device supports host mode, download the USB Host Diagnostics app from the Google store. The hardware also works as a 24-bit, 44.1/48kHz audio interface with OS 5.0 (also called “Lollipop”; apparently there’s a law that companies must have cute names for operating systems—although when Apple was doing cat names, they did forego “OS Hello Kitty”). The hardware is plastic, which seems like it might belong more under “Limitations.” But it seems quite rugged, and contributes to lower weight for portability. There are four “slots” in the FX chain—two for pre-amp effects, one for an amp, and one for a post-amp effect. Amp sim tone is subjective, so whether you like the amp tones or not is your call. I’ve always liked AmpliTube and IK’s take on modeling, so it’s probably not surprising that I also like the sounds in iRig UA. I can’t really tell whether they’re on the same level as the desktop version of AmpliTube 3, but even without extra in-app purchases, you get a wide range of useful and satisfying sounds. You can navigate the UI even if you’re semi-conscious. Limitations As with similar smartphone devices, the interface connects via the USB port used for charging the phone, so the “battery charge countdown clock” starts when you plug in and start playing. The battery drain is definitely acceptable (even taking the DSP into account), but of course, you’re putting the battery through more charge/discharge cycles with long sessions. I didn’t find any way to demo in-app purchases prior to purchasing. 
There’s no landscape mode support, so accessing the amp knobs means swiping left and right a whole lot. There’s no tablet version yet, although of course the phone UI “upscales.” You can’t put an amp in an FX slot if you want to put amps in series. For $99.99, I do think IK could have included a compressor/sustainer you can place in front of the amp. In-app purchases culminate in a higher price tag than most Android users expect. However, given what’s in the free software, I really didn’t feel the need to get a bunch of extra stuff. Conclusions This is a sweet little package that finally brings essentially zero-latency guitar practicing and playing to Android phones. Some will balk at the price, but given the realities of Android’s audio world, there’s really no way to get around latency issues without the hardware DSP. Android users who want satisfying tones out of a simple and portable Android setup, along with considerable sonic versatility, now have a solution. While the amp sim options currently available on Android won't make Mac fanbois green with envy, iRig UA stakes an important—and very well-executed—claim in the quest for parity between the two main smartphone platforms. Buy at B&H
  20. Here's a clever way for guitarists to tame the "crispiness" of audio interface direct inputs By Craig Anderton Most guitarists are aware that with passive pickups, cable capacitance affects tone when feeding a high-impedance input, like the DI inputs on audio interfaces. Activating your guitar’s tone control will tend to “swamp out” any differences caused by cable capacitance, but if the tone control isn’t in play, then cable capacitance will nonetheless affect your sound. Quantifying this difference is more difficult. Different cables have different amounts of capacitance per foot, and the longer the cable, the greater the capacitance. So often when guitar players find a cable that sounds “right,” they’ll just stick with it until it dies (or they do). Part of what inspired me to write this is a comment in another Forum that Shall Go Nameless that dissed the Timbre Plug (of course, without ever actually trying it) because of the assumption that it just duplicates what a tone control does. But a tone control is more complex than most people realize; it doesn’t just roll off highs, but also interacts with passive pickups to create a resonant peak. This boosts the signal somewhat, and is one reason why rolling back on the tone control sounds “creamier.” It’s also why guitarists like to experiment with different tone control capacitors. Within reason, the higher the capacitor value, the lower the resonant frequency. So yes, cables do make a difference. Yet these days, a lot of guitar players will record by going through a relatively short cable into an audio interface, so cable capacitance doesn’t enter into the picture. Which at long last brings us to the Neutrik NP2RX-TIMBRE, which typically costs under $20. Let’s take a closer look. The knob opposite the plug shaft itself has a four-position rotary switch. 
It chooses among no capacitance, and three possible capacitor values strapped between the hot and ground connections (Neutrik preferred I not mention the exact values, but they're in the single-digit nanoFarad range). Note that these capacitors are potted in with a switch assembly, so don’t expect to change them if you’d prefer to try different values. Each of these has a distinct effect on the sound, as you can hear in this demo video. ASSEMBLY It’s actually quite easy to assemble; you’ll need a Phillips head screwdriver, pencil tip soldering iron, wirecutters, and two-conductor shielded cable with an outside diameter of 0.16” to 0.27”. The assembly instructions are downloadable from the Neutrik web site, and also are printed on the back of the packaging. I make my cables using the Planet Waves Cable Station, which uses ¼” cable. It was a tight fit, but by following the assembly instructions and cutting the wire exactly as specified, it all went together as expected. I certainly would advise against using anything thicker. IN USE Some people may think the right-angle jack is an issue, but it fits fine with a Strat, and of course it’s ideal for front-facing jacks as found on SG and 335-type guitars. However, ultimately it doesn’t really matter because the cable isn’t “polarized”—you can plug the Timbre end into your amp or interface instead. All you give up is the ability to have the controls at your fingertips while you play, but I tend to think this would be a more “set and forget” type of device anyway. The Timbre Plug inserted into a TASCAM US-2x2 interface’s direct input. CONCLUSIONS The concept of emulating cable capacitance isn’t new, although sometimes it’s just a high-frequency rolloff—which is not the same as a capacitor interacting with a pickup. Neutrik’s solution is compact, built solidly, truly emulates physical cable capacitance, is accessible to anyone with moderate DIY skills, and isn’t expensive. 
In a way, it's like a hardware "plug-in" for your computer - and you may very well find it’s just the ticket to taking the “edge” off the crispiness that’s inherent in feeding a passive pickup into a high-impedance input. Buy at B&H
  21. Yes, Eddie Kramer is a part of history — but what he’s doing today will be tomorrow’s history By Craig Anderton About 20 seconds into the interview, Eddie says the mic sound from my hands-free phone adapter thingie is “…shall we say, not of the best quality.” So I adjusted the mic placement until Eddie was happier with the sound. Nit-picky prima donna? Absolutely not. He’s just a very nice guy who’s unfailingly polite and helpful. Articulate, too. And that’s about 80% of what you need to know about Eddie: He really, really cares about sound, even if it’s just an interviewer’s mic. Which is probably what helps account for the other 20% you really need to know: He’s been behind the boards for some of the most significant musicians of our time, including Jimi Hendrix, Led Zeppelin, Buddy Guy, Kiss, Peter Frampton, the Beatles, AC/DC, the Rolling Stones, Carly Simon, Traffic, Joe Cocker, David Bowie, Johnny Winter, Bad Company, Sammy Davis Jr., the Kinks, Petula Clark, the Small Faces, Vanilla Fudge, NRBQ, the Woodstock festival, John Mayall, Derek & the Dominoes, Santana, Curtis Mayfield, Anthrax, Twisted Sister, Ace Frehley, Alcatraz, Triumph, Robin Trower, and Whitesnake. And let’s talk versatility: country act the Kentucky Headhunters, and classical guitarist John Williams. (For more information, there’s a short-form bio on Wikipedia.) And that’s just the tip of the iceberg, as he’s documented much of what he’s done with an incredible body of work as a photographer. You can find out a lot more about Eddie, including his latest F-Pedals project, at his web site. Given his history, if you think he lives in the past, you’d be one-third right. Another third lives in the present, and the remaining third in the future. During the course of an interview, you can find yourself in 1968 one minute, and 2016 the next. Cool. Eddie has enough savvy to know when it’s important to just go with the flow. 
Like that famous moment in “Whole Lotta Love” where you hear Robert Plant’s voice way in the background during the break. Print-through? An effect they slaved over for days? “The time of that particular mix was 1969, and this all took place over a weekend at A&R studios in New York. Imagine [mixing] the entire Led Zeppelin II on 8 tracks in two days! As we got into “Whole Lotta Love,” I actually only ended up using seven tracks because tracks 7 and 8 were two vocal tracks. I think I used the vocal from track 7. We’d gotten the mix going, I believe it was a 12-channel console with two panpots. “During the mixdown, I couldn’t get rid of the extra vocal in the break that was bleeding through. Either the fader was bad, or the level was fairly high — as we were wont to do in those days, we hit the tape with pretty high levels. Jimmy [Page] and I looked at each other and said “reverb,” and we cranked up the reverb and left it in. That was a great example of how accidents could become part of the fabric of your mix, or in this case, a part of history. And I always encourage people today not to be so bloody picky.” Eddie has this wacky idea that the music is actually the important part of the recording process, not editing the living daylights out of it. To wit: “We’re living in the age of [computer-based programs like] Pro Tools, where we can spend hours, days, even weeks on end fixing all the little ‘mistakes.’ And by that time, you’ve taken all of the life out of the music. I don’t want to come off as trashing [Digidesign], but I feel that Pro Tools — which is a wonderful device — has its limitations in certain aspects.” And what might that main limitation be? “The people using it! And it becomes a sort of psychological battle . . . yes I can stretch that drum fill or performance [with bad timing], or I can effectively make a bad vocal sound reasonably decent, but what the hell is the point? Why didn’t the drummer play it right in the first place? 
Why didn’t the singer sing it right in the first place? “And that begs the question, do we have too many choices . . . and when we do, we sit there thinking ‘we can make it better.’ But for God’s sake, make it better in the performance! I want musicians who will look each other in the face, eyeball to eyeball, and I want interaction. I want these guys to be able to play their instruments properly, and I want them to be able to make corrections on the fly. If I say ‘In the second chorus, could you double up that part?’ I don’t want the guitarist giving me a blank look. “Learn your bloody craft, mates! The way we’re recording today does in fact give a tremendous amount of freedom to create in an atmosphere of relaxed inspiration. The individual can record in very primitive circumstances — bathrooms, garages, hall closets. Unfortunately for a lot of people, this means doing it one track at a time, which I think makes the final product sound very computerized and not organic. The other side of the coin is that many bands can think in terms of ‘let’s find a fairly decent acoustic space, set up mics, look each other in the eyes, and hit record.’” MIXED OUT, MIXED DOWN . . . OR MIXED UP? Ah, the lost art of mixing. If all you do is tweak envelopes with a mouse, that’s not mixing — that’s editing. If you think Eddie Kramer is a fix-it-in-the-mix kinda guy, you haven’t been paying attention. But there’s more. “One of the most exciting things as an engineer is to create that sound as it’s happening; having a great-sounding board, set of mics, and acoustic environment can lead one to a higher plane . . . when you hear the sound of the mixed instruments — not each individual mic — and get the sound now, while it’s happening. I don’t want to have to bugger around with the sound after the fact, other than mixing. There’s a thrill in getting a sound that’s unique to that particular situation. “The idea of mixing ‘in the box’ is anathema. 
It defeats the purpose of using one’s hand and fingers in an instinctive mode of communication. I am primarily a musician at heart; the technology is an ancillary part of what I do, a means to an end. I want to feel like I’m creating something with my hands, my ears, my eyes, my whole being. I can’t do that solely within the box. It’s counter-intuitive and alien. However, I do use some of the items within the box as addenda to the creative process. It lets me mix with some sounds I would normally not be able to get.” So do you use control surfaces when you’re working with computers, or go the console route? “Only consoles. I love to record with vintage Neve, 24-track Dolby SR [noise reduction] at 15 IPS, then I dump it into Pro Tools or whatever system is available, and continue from that point. If the budget permits, I’ll lock the multitrack with the computer. I’d rather mix on an SSL; they’re flexible and easy to work. I like the combination of the vintage Neve sound with the SSL’s crispness. And then I mix down to an ATR reel-to-reel, running at 15 IPS with Dolby SR. “With the SSL, I’m always updating, always in contact with the faders. I always hear little things that I can tweak. To me, mixing is a living process. If you’re mixing in the moment, you get inspired. I just wish I could do more mixes in 4-5 hours instead of 12, but some bands want to throw you 100 tracks. Sometimes I wish we could put a moratorium on the recording industry — you have three hours and eight tracks! [laughs] I’m joking of course, but . . . “On ‘Electric Ladyland,’ ‘1983’ was a ‘performance’ mix: four hands, Jimi and myself. We did that in maybe one take. And the reason why was because we rehearsed the mix, as if it was a performance. We didn’t actually record the mix until we had our [act] together. We were laughing when we got through the 14 minutes or so. Of course, sometimes I would chop up two-track mixes and put pieces together. 
But those pieces had to be good.” So do you mix with your eyes closed or open? “The only time I close my eyes when mixing is when I’m panning something. I know which way the note has to flip from one side to the other; panning is an art, and you have to be able to sense where the music is going to do the panning properly.” (to be continued)
  22. Meet the ghost in your machine By Craig Anderton Musicians are used to an instant response: Hit a string, hit a key, strike a drum, or blow into a wind instrument, and you hear a sound. This is true even if you’re going through a string of analog processors. But if you play through a digital signal processor, like a digital multieffects, there will be a very slight delay called latency—so small that you probably won’t notice it, but it’s there. Converting an analog signal to digital takes about 600 microseconds at 44.1kHz; converting back into analog takes approximately the same amount, for a “round trip” latency of about 1.2 milliseconds. There may also be a slight delay due to processing time within the processor. Because sound travels at about 1 foot (30 cm) per millisecond, the delay of doing analog/digital/analog conversion is about the same as if you moved a little over a foot away from a speaker, which isn’t a problem. However, with computers, there’s much more going on. In addition to converting your “analog world” signal to digital data, pieces of software called drivers have the job of taking the data generated by an analog-to-digital converter and inserting it into the computer’s data stream. Furthermore, the computer introduces delays as well. Even the most powerful processor can do only so many millions of calculations per second; when it’s busy scanning its keyboard and mouse, checking its ports, moving data in and out of RAM, sending out video data, and more, you can understand why it sometimes has a hard time keeping up. As a result, the computer places some of the incoming audio from your guitar, voice, keyboard, or other signal source in a buffer, which is like a “savings account” for your input signal. When the computer is so busy elsewhere that it can’t deal with audio, it makes a “withdrawal” from the buffer instead so it can go deal with other things. 
The larger the buffer, the less likely the computer will run out of audio data when it needs it. But a larger buffer also means that your instrument’s signal is being diverted for a longer period of time before being processed by the computer, which increases latency. When the computer goes to retrieve some audio and there’s nothing in the buffer, audio performance suffers in a variety of ways: You may hear stuttering, crackling, “dropouts” where there is no audio, or worst case, the program might crash. The practical result of latency is that if you listen to what you’re playing after it goes through the computer, you’ll feel like you’re playing through a delay line, set for processed sound only. If the delay is under 5 ms, you probably won’t care too much. But some systems can exhibit latencies of tens or even hundreds of milliseconds, which can be extremely annoying. Because you want the best possible “feel” when playing your instrument through a computer, let’s investigate how to obtain the lowest possible latency, and what tradeoffs will allow for this. MINIMIZING LATENCY The first step in minimizing delay is the most expensive one: Upgrading your processor. When software synthesizers were first introduced, latencies in the hundreds of milliseconds were common. With today’s multi-core processors and a quality audio interface, it’s possible to obtain latencies well under 10 ms (and often less) at a 44.1kHz sampling rate. The second step toward lower latency involves using the best possible drivers, as more efficient drivers reduce latency. Steinberg devised the first low-latency driver protocol specifically for audio, called ASIO (Audio Stream Input/Output). This tied in closely with the CPU, bypassing various layers of both Mac and Windows operating systems. At that time the Mac used Sound Manager, and Windows used a variety of protocols, all of which were equally unsuited to musical needs. 
Audio interfaces that supported ASIO were essential for serious musical applications. Eventually Apple and Microsoft realized the importance of low latency response and introduced new protocols. Microsoft’s WDM and WASAPI in exclusive mode were far better than their previous efforts; starting with OS X Apple gave us Core Audio, which was tied in even more closely with low-level operating system elements. Either of these protocols can perform as well as ASIO. However for Windows, ASIO is so common and so much effort is put into developing ASIO drivers that most musicians select ASIO drivers for their interfaces. So we should just use the lowest latency possible, yes? Well, that’s not always obtainable, because lower latencies stress out your computer more. This is why most audio interfaces give you a choice of latency settings (Fig. 1), so you can trade off between lowest latency and computer performance. Note that latency is given either in milliseconds or samples; while milliseconds is more intuitive, the reality is that you set latency based on what works best (which we’ll describe later, as well as the meaning behind the numbers). The numbers themselves aren’t that significant other than indicating “more” or “less.” Fig. 1: Roland’s VS-700 hardware is being set to 64 samples of latency in Cakewalk Sonar. If all your computer has to do is run something like a guitar amp simulator in stand-alone mode, then you can select really low latency. But if you’re running a complex digital audio recording program and playing back lots of tracks or using virtual software synthesizers, you may need to set the latency higher. So, taking all this into account, here are some tips on how to get the best combination of low latency and high performance. If you have a multi-core-based computer, check whether your host recording program supports multi-core processor operation. 
If available, you’ll find this under preferences (newer programs are often “multiprocessor aware” so this option isn’t needed). This will increase performance and reduce latency. With Windows, download your audio interface’s latest drivers. Check the manufacturer’s web site periodically to see if new drivers are available, but set a System Restore point before installing them—just in case the new driver has some bug or incompatibility with your system. Macs typically don’t need drivers as the audio interfaces hook directly into the CoreAudio services (Fig. 2), but there may be updated “control panel” software for your interface that provides greater functionality, such as letting you choose from a wider number of sample rates. Fig. 2: MOTU’s Digital Performer is being set up to work with a Core Audio device from Avid. Make sure you choose the right audio driver protocol for your audio interface. For example, with Windows computers, a sound card might offer several possible driver protocols like ASIO, DirectX, MME, emulated ASIO, etc. Most audio interfaces include an ASIO driver written specifically for the audio interface, and that’s the one you want to use. Typically, it will include the manufacturer’s name. There’s a “sweet spot” for latency. Too high, and the system will seem unresponsive; too low, and you’ll experience performance issues. I usually err on the side of being conservative rather than pushing the computer too hard. Avoid placing too much stress on your computer’s CPU. For example, the “track freeze” function in various recording programs lets you premix the sound of a software synthesizer to a hard disk track, which requires less power from your CPU than running the software synthesizer itself. MEASURING LATENCY So far, we’ve mostly talked about latency in terms of milliseconds. However, some manufacturers specify it in samples. This isn’t quite as easy to understand, but it’s not hard to translate samples to milliseconds. 
This involves getting into some math, so if the following makes your brain explode, just remember the #1 rule of latency: Use the lowest setting that gives reliable audio operation. In other words, if the latency is expressed in milliseconds, use the lowest setting that works. If it’s specified in samples, you still use the lowest setting that works. Okay, on to the math. With a 44.1kHz sampling rate for digital audio (the rate used by CDs and many recording projects), there are 44,100 samples taken per second. Therefore, each sample is 1/44,100th of a second long, or about 0.023 ms. (If any math wizards happen to be reading this, the exact value is 0.022675736961451247165532879818594 ms. Now you know!) So, if an audio interface has a latency of 256 samples, at 44.1 kHz that means a delay of 256 × 0.023 ms, which is about 5.8 ms. 128 samples of delay would be about 2.9 ms. At a sample rate of 88.2 kHz, each sample lasts half as long as a sample at 44.1 kHz, so each sample would be about 0.0113 ms. Thus, a delay of 256 samples at 88.2 kHz would be around 2.9 ms. From this, it might seem that you’d want to record at higher sample rates to minimize latency, and that’s sort of true. But again, there’s a tradeoff because high sample rates stress out your computer more. So you might indeed have lower latency, but only be able to run, for example, half the number of plug-ins you normally can. SNEAKY LATENCY ISSUES Audio interfaces are supposed to report their latency back to the host program, so it can get a readout of the latency and compensate for this during the recording process. Think about it: If you’re playing along with drums and hear a sound 6 ms late, and then it takes 6 ms for what you play to get recorded into your computer, then what you play will be delayed by 12 ms compared to what you’re listening to. If the program knows this, it can compensate during the playback process so that overdubbed parts “line up” with the original track. 
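Rather than memorizing those figures, the samples-to-milliseconds conversion can be captured in a two-line helper (a sketch using Python purely as a calculator; the function name is my own):

```python
def samples_to_ms(samples, sample_rate_hz=44100):
    """Convert a latency figure in samples to milliseconds."""
    return samples / sample_rate_hz * 1000.0

print(round(samples_to_ms(256), 1))         # 5.8 ms at 44.1 kHz
print(round(samples_to_ms(128), 1))         # 2.9 ms
print(round(samples_to_ms(256, 88200), 1))  # 2.9 ms at 88.2 kHz
```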
However, different interfaces have different ways to report latency. You might assume that a sound card with a latency of 5.8 milliseconds is outperforming one with a listed latency of 11.6 ms. But that’s not necessarily true, because one might list the latency a signal experiences going into the computer (“one-way latency”), while another might give the “round-trip” latency—the input and output latency. Or, it might give both readings. Furthermore, these readings are not always accurate. Some audio interfaces do not report latency accurately, and might be off by even hundreds of samples. So, understand that if an audio interface claims that its latency is lower than another model, but you sense more of a delay with the “lower latency” audio interface, it very well might not be lower. WHAT ABOUT “DIRECT MONITORING”? You may have heard about an audio interface feature called “direct monitoring,” which supposedly reduces latency to nothing, so what you hear as you monitor is essentially in real time. However, it does this by monitoring the signal going into the computer and letting you listen to that, essentially bypassing the computer (Fig. 3). Fig. 3: TASCAM’s UH-7000 interface has a mixer applet with a monitor mix slider (upper right). This lets you choose whether to listen to the input, the computer output, or a combination of the two. While that works well for many instruments, suppose you’re playing guitar through an amp simulation plug-in running on your computer. If you don’t listen to what’s coming out of your computer, you won’t hear what the amp simulator is doing. As a result, if you use an audio interface with the option to enable direct monitoring, you’ll need to decide when it’s appropriate to use it. THE VIRTUES OF USING HEADPHONES One tip about minimizing latency is that if you’re listening to monitor speakers and your ears are about 3 feet (1 meter) away, you’ve just added another 3 ms of latency. 
Monitoring through headphones will remove that latency, leaving only the latency caused by using the audio interface and computer. MAC VS. WINDOWS Note that there is a significant difference between current Mac and Windows machines. Core Audio is a complete audio sub-system that already includes drivers most audio interfaces can access. Therefore, as mentioned earlier, it is usually not necessary to load drivers when hooking an audio interface up to the Mac. With Windows, audio interfaces generally include custom drivers you need to install, that are often on a CD-ROM included with the interface. However, it’s always a good idea to check the manufacturer’s web site for updates—even if you bought a product the day it hit the stores. With driver software playing such a crucial part in performance, you want the most recent version. With Windows, it’s also very important to follow any driver installation instructions exactly. For example, some audio interfaces require that you install the driver software first, then connect the interface to your system. Others require that you hook up the hardware first, then install the software. Pay attention to the instructions! THE FUTURE AND THE PRESENT Over the last 10 years or so, latency has become less and less of a problem. Today’s systems can obtain very low latency figures, and this will continue to improve. But if you experience significant latencies with a modern computer, then there’s something wrong. Check audio options, drivers, and settings for your host program until you find out what’s causing the problem. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  23. Those two screws on the side of your pickup aren’t just there for decoration by Craig Anderton Spoiler alert: The correct answer is “it depends.” Pickup height trades off level, sustain, and attack transient, so you need to decide which characteristics you want to prioritize. I think we all have a sense that changing pickup height changes the sound, but I’d never taken the time to actually quantify these changes. So, I tested the neck and bridge humbucker pickups in a Gibson Les Paul Traditional Pro II 50s guitar, and tried two different pickup height settings. For the “close” position, the strings were 2mm away from the top of the pole pieces. In the “far” position, the distance was 4mm. I then recorded similar strums into Steinberg’s WaveLab digital audio editor; although it’s impossible to get every strum exactly the same, I did enough of them to see a pattern. The illustrations show the neck pickup results, because the bridge pickup results were similar. Fig. 1: This shows the raw signal output from three strums with the rhythm pickup close to the strings, then three strums with the pickup further away. It’s clear from Fig. 1 that the “close” position peak level is considerably higher than the “far” position—about 8 dB. So if what matters most is level and being able to hit an amp hard, then you want the pickups close to the strings. Fig. 2: The last three strums, with the pickups further from the strings, have a higher average level compared to the initial transient. Fig. 2 tells a different story. This screen shot shows what happens when you raise the peaks of the “far” strums (again, the second set of three) to the same peak level as the close strums, which is what would happen if you used a preamp to raise the signal level. The “far” strum initial transients aren’t as pronounced, so the waveform reaches the sustained part of the sound sooner. 
The waveform in the last three is “fatter” in the sense that there’s a higher average level; with the “close” waveforms, the average level drops off rapidly after the transient. Based on how the pickups react, if you want a higher average level that’s less percussive while keeping transients as much out of the picture as possible (for example, to avoid overloading the input of a digital effect), this would be your preferred option. Fig. 3 shows two chords ringing out, with the waveforms normalized to the same peak value and amplified equally in WaveLab so you can see the sustain more clearly. Fig. 3: The second waveform (pickups further from strings) maintains a higher average level during its sustain. With the “tail” of the second, “far” waveform, the sustain stays louder for longer. So, you do indeed get more sustain—not just a higher average level and less pronounced transients—if the pickup is further away from the strings. However, remember that the overall level is lower, so to benefit from the increased sustain, you’ll need to turn up your amp’s input control to compensate, or use a preamp. ADDITIONAL CONCLUSIONS The reduced transient response caused by the pickups being further away from the strings is helpful when feeding compressors, as large transients tend to “grab” the gain control mechanism to turn the signal down, which can create a “pop” as the compression kicks in. With the pickups further away, the compressor action is smoother although again, you’ll need to increase the input level to compensate for the lower pickup output. Furthermore, amp sims generally don’t like transients as they consist more of “noise” than “tone,” so they don’t distort very elegantly. Reducing transients can give a less “harsh” sound at the beginning of a note or strum. So the end result is that if you’ve set your pickups close to the strings, try increasing the distance. You might find this gives you an overall more consistent sound, as well as better sustain. 
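The peak-versus-average distinction in the figures can be put into numbers with a short sketch. The two sample lists below are invented toy data shaped to mimic the measurements above (a spiky "close" strum and a fatter "far" strum), not actual recordings:

```python
import math

def peak_db(samples):
    """Peak level of a waveform snippet, in dB relative to full scale."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """Average (RMS) level of a waveform snippet, in dB."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)))

close_strum = [1.0, 0.3, 0.2, 0.15, 0.1, 0.08]    # big transient, fast decay
far_strum   = [0.4, 0.35, 0.3, 0.28, 0.25, 0.22]  # lower peak, fatter sustain

print(round(peak_db(close_strum) - peak_db(far_strum), 1))  # 8.0 dB hotter peak
# Crest factor (peak minus RMS): a smaller value means a higher average level
print(round(peak_db(close_strum) - rms_db(close_strum), 1))
print(round(peak_db(far_strum) - rms_db(far_strum), 1))
```

The "far" strum's crest factor comes out several dB smaller, which is exactly the "higher average level, less pronounced transient" behavior described above.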
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  24. Make quantization work for you, not against you by Craig Anderton Quantization was controversial enough when it was limited to MIDI, but now that you can quantize audio, it’s even more of an issue. Although some genres of music work well with quantization, excessive quantization can suck the human feel out of music. Some people take a “holier than thou” approach to quantization by saying it’s for musical morons who lack the chops to get something right in the first place. These people, of course, never use quantization...well, at least while no one’s looking. I feel quantization has its place; it’s the ticket to ultra-tight grooves, and a way to let you keep a first and inspired take, instead of having to play a part over and over again to get it right—and lose the human feel by beating a part to death. But like any tool, if misused, quantization can cause more harm than good by giving an overly rigid, non-musical quality to your work. TRUST YOUR FEELINGS, LUKE The first thing to remember is that computers make terrible music critics. Forcing music to fit the rhythmic criteria established by a machine is silly—it’s real people, with real emotions, who make and listen to music. To a computer, having every note hit exactly on the beat may be desirable, but that’s not the way humans work. There’s a fine line between “making a mistake” and “bending the rhythm to your will.” Quantization removes that fine line. Yes, it gets rid of the mistakes, but it also gets rid of the nuances. When sequencers first appeared, musicians would often compare the quantized and non-quantized versions of their playing. Invariably, after hearing the quantized version, the reaction would be a crestfallen “gee, I didn’t realize my timing was that bad.” But in many cases, the human was right, not the machine. I’ve played some solo lines where notes were off as much as 50 milliseconds from the beat, yet they sounded right. Rule #1: You dance; a computer doesn’t. 
You are therefore much more qualified than a computer to determine what rhythm sounds right. WHY QUANTIZATION SHOULD BE THE LAST THING YOU DO Some people quantize a track as soon as they’ve finished playing it. Don’t! In analyzing unquantized music, you’ll often find that every instrument of every track will tend to rush or lag the beat together. In other words, suppose you either consciously or unconsciously rush the tempo by playing the snare a bit ahead of the beat. As you record subsequent overdubs, these will be referenced to the offset snare, creating a unified feeling of rushing the tempo. If you quantize the snare part immediately after playing, then you will play to the quantized part, which will change the feel. Another possible trap occurs if you play a number of unquantized parts and find that some sound “off.” The expected solution would be to quantize the parts to the beat, yet the “wrong” parts may not be off compared to the absolute beat, but to a part that was purposely rushed or lagged. In the example given above of a slightly rushed snare part, you’d want to quantize your parts in relation to the snare, not a fixed beat. If you quantize to the beat the rhythm will sound even more off, because some parts will be off with respect to absolute timing, while other parts will be off with respect to the relative timing of the snare hit. At this point, most musicians mistakenly quantize everything to the beat, destroying the feel of the piece. Rule #2: Don’t quantize until lots of parts are down and the relative—not absolute—rhythm of the piece has been established. SELECTIVE QUANTIZATION Often only a few parts of a track will need quantization, yet for convenience musicians tend to quantize an entire track, reasoning that it will fix the parts that sound wrong and not affect the parts that sound right. However, the parts that sound right may be consistent to a relative rhythm, not an absolute one. 
The best approach is to go through a piece, a few measures at a time, and quantize only those parts that are clearly in need of quantization. Very often, what’s needed is not quantization per se but merely shifting an offending note’s start time. Look at the other tracks and see if notes in that particular part of the tune tend to lead or lag the beat, and shift the start time accordingly. Rule #3: If it ain’t broke, don’t fix it. Quantize only the notes that are off enough to sound wrong. BELLS AND WHISTLES Modern-day quantization tools, whether for MIDI or audio, offer many options that make quantization more effective. One of the most useful is quantization strength, which moves a note closer to the absolute beat by a particular percentage. For example, if a note falls 10 milliseconds ahead of the beat, quantizing to 50% strength would place it 5 milliseconds ahead of the beat. This smooths out gross timing errors while retaining some of the original part’s feel (Fig. 1). Fig. 1: The upper window (from Cakewalk Sonar) shows standard Quantization options; note that Strength is set to 80%, and there's a bit of Swing. The lower window handles Groove Quantization, which can apply different feels by choosing a "groove" from a menu. Some programs offer “groove templates” (where you can set up a relative rhythm to which parts are quantized), or the option to quantize notes in one track to the notes in another track (which is great for locking bass and drum parts together). Rule #4: Study your recording software’s manual and learn how to use the more esoteric quantization options. EXPERIMENTS IN QUANTIZATION STRENGTH Here’s an experiment I like to conduct during sequencing seminars to get the point across about quantization strength. First, record an unquantized and somewhat sloppy drum part on one track. It should be obvious that the timing is off. 
Then copy it to another track, quantize it, and play just that track back; it should be obvious that the timing has been corrected. Then copy the original track again but quantize it to a certain strength—say, 50%. It will probably still sound unquantized. Now try increasing the strength percentage; at some point (typically in the 70% to 90% range), you’ll perceive it as quantized because it sounds right. Finally, play back that track along with the one quantized to 100% strength and check out the timing differences, as evidenced by lots of slapback echoes. If you now play the 100% strength track by itself, it will sound dull and artificial compared to the one quantized at a lesser strength. Rule #5: Correct rhythm is in the ear of the beholder, and a totally quantized track never seems to win out over a track quantized to a percentage of total quantization. REMEMBER, MIDI IS NOT AUDIO Quantizing a MIDI part will not affect fidelity, but quantizing audio will usually need to shift audio around and stretch it. Although digital audio stretching has made tremendous progress over the years in terms of not butchering digital audio, the process is not flawless. If significant amounts of quantization are involved, you’ll likely notice some degree of audio degradation but you’ll be able to get away with lesser amounts. Rule #6: Like any type of correction, rhythmic correction is most transparent with signals that don’t need a lot of correction. Yes, quantization is a useful tool. But don’t use it indiscriminately, or your music may end up sounding mechanical—which is not a good thing unless, of course, you want it to sound mechanical! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). 
He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
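The strength-percentage quantization described above boils down to moving each event part of the way toward the nearest grid line. Here's a minimal sketch; the function name and the 250 ms grid (sixteenth notes at 60 BPM) are my own assumptions for illustration:

```python
def quantize(time_ms, grid_ms=250.0, strength=1.0):
    """Move an event toward the nearest grid line by `strength` (0.0 to 1.0)."""
    nearest = round(time_ms / grid_ms) * grid_ms
    return time_ms + strength * (nearest - time_ms)

# A note played 10 ms ahead of the beat at 500 ms:
print(quantize(490.0, strength=0.5))  # 495.0 -- now only 5 ms ahead
print(quantize(490.0, strength=1.0))  # 500.0 -- hard-quantized to the grid
print(quantize(490.0, strength=0.0))  # 490.0 -- untouched
```

Groove templates work the same way, except `nearest` would come from a list of reference times (for example, another track's snare hits) rather than a fixed grid.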
  25. Yes, this sounds insane...but try it By Craig Anderton Do you want better mixes? Of course you do—the mix, along with mastering, is what makes or breaks your music. Even the best tracks won’t come across if they’re not mixed correctly. Different people approach mixing differently, but I don’t think anyone has described something as whacked-out as what we’re going to cover in this article. Some people will read this and just shake their heads, but others will actually try the suggested technique, and craft tighter, punchier mixes without any kind of compression or other processing. THE MIXING PROBLEM What makes mixing so difficult is, unfortunately, a limitation of the human ear/brain combination. Our hearing can discern very small changes in pitch, but not level. You’ll easily hear a 3% pitch change as being distinctly out of tune, but a 3% level change is nowhere near as dramatic. Also, our ears have an incredibly wide dynamic range—much more than a CD, for example. So when we mix and use only the top 20-40 dB of average available dynamic range, even extreme musical dynamics don’t represent that much of a change for the ear’s total dynamic range. Another problem with mixing is that the ear’s frequency response changes at different levels. This is why small changes in volume are often perceived as tonal differences, and why it is so important to balance levels exactly when doing A-B comparisons. Because our ears hear low and high end signals better at higher levels, just a slight volume boost might produce a subjective feeling of greater “warmth” (from the additional low end) and “sparkle” (from the increased perception of treble). The reason why top mixing engineers are in such demand is because through years of practice, they’ve trained their ears to discriminate among tiny level and frequency response differences (and hopefully, taken care of their ears so they don’t suffer from their own frequency response problems). 
They are basically “juggling” the levels of multiple tracks, making sure that each one occupies its proper level with respect to the other tracks. Remember, a mix doesn’t compare levels to an absolute standard; all the tracks are interrelated. As an obvious example, the lead instruments usually have higher levels than the rhythm instruments. But there are much smaller hierarchies. Suppose you have a string pad part, and the same part delayed a bit to produce chorusing. To avoid having excessive peaking when the signals reach maximum amplitude at the same time, as well as better preserve any rhythmic “groove,” you’ll probably mix the delayed track around 6 dB behind the non-delayed track. The more tracks, the more intricate this juggling act becomes. However, there are certain essential elements of any mix—some instruments that just have to be there, and mixed fairly closely in level to one another because of their importance. Ensuring that these elements are clearly audible and perfectly balanced is, I believe, one of the most important qualities in creating a “transportable” mix (i.e., one that sounds good over a variety of systems). Perhaps the lovely high end of some bell won’t translate on a $29.95 boombox, but if the average listener can make out the vocals, leads, beat, and bass, you have the high points covered. Ironically, though, our ears are less sensitive to changes in relatively loud levels than to relatively soft ones. This is why some veteran mixers start work on a mix at low levels, not just to protect their hearing but because it makes it easier to tell if the important instruments are out of balance with respect to each other. At higher levels, differences in balance are harder to detect. ANOTHER ONE OF THOSE ACCIDENTS The following mixing technique is a way to check whether a song’s crucial elements are mixed with equal emphasis. Like many other techniques that ultimately turn out to be useful, this one was discovered by accident. 
At one point I had a home studio in Florida that didn’t have central air conditioning, and the in-wall air conditioner made a fair amount of background noise. One day, I noticed that the mixes I did when the air conditioner was on often sounded better than the ones I did when it was off. This seemed odd at first, until I made the connection with how many musicians use the “play the music in the car” test as the final arbiter of whether a mix is going to work or not. In both cases the background noise masks low-level signals, making it easier to tell which signals make it above the noise. Curious whether this phenomenon could be quantified further, I started injecting pink noise (Fig. 1) into the console while mixing. Fig. 1: Sound Forge can generate a variety of noise types, including pink noise. This just about forces you to listen at relatively low levels, because the noise is really obnoxious! But more importantly, the noise adds a sort of “cloud cover” over the music, and as mountain peaks poke out of a cloud cover, so do sonic peaks poke out of the noise. APPLYING THE TECHNIQUE You’ll want to add in the pink noise very sporadically during a mix, because the noise covers up high frequency sounds like hi-hat. You cannot get an accurate idea of the complete mix while you’re mixing with noise injected into the bus, but what you can do is make sure that all the important instruments are being heard properly. (Similarly, when listening in a car system, road noise will often mask lower frequencies.) Typically, I’ll take the mix to the point where I’m fairly satisfied with the sound. Then I’ll add in lots of noise—no less than 10 dB below 0 with dance mixes, for example, which typically have restricted dynamics anyway—and start analyzing. While listening through the song, I pay special attention to vocals, snare, kick, bass, and leads (with this much noise, you’re not going to hear much else in the song anyway). 
It’s very easy to adjust their relative levels, because there’s a limited range between overload on the high end, and dropping below the noise on the low end. If all the crucial sounds make it into that window and can be heard clearly above the noise without distorting, you have a head start toward an equal balance. Also note that the “noise test” can uncover problems. If you can hear a hi-hat or other minor part fairly high above the noise, it’s probably too loud. I’ll generally run through the song a few more times, carefully tweaking each track for the right relative balance. Then it’s time to take out the noise. First, it’s an incredible relief not to hear that annoying hiss! Second, you can now get to work balancing the supporting instruments so that they work well with the lead sounds you’ve tweaked. Although so far I’ve only mentioned instruments being above the noise floor, there are actually three distinct zones created by the noise: totally masked by the noise (inaudible), above the noise (clearly audible), and “melded,” where an instrument isn’t loud enough to stand out or soft enough to be masked, so it blends in with the noise. I find that mixing rhythm parts so that they sound melded can work if the noise is adjusted to a level suitable for the rhythm parts. FADING OUT Overall, I estimate spending only about 3% of my mixing time using the injected noise, and I don't use it at all for some mixes. But sometimes, especially with dense mixes, it’s the factor responsible for making the mix sound good over multiple systems. Mixing with noise may sound crazy, but give it a try. With a little practice, there are ways to make noise work for you. Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.