Everything posted by Anderton

  1. If you're gigging or want to gig, ignore this book at your own peril By Craig Anderton 150 pages, electronic edition If you’ve been enjoying David Himes’ articles as “the Gig Kahuna” on Harmony Central, then you need this book. It includes everything he’s written for HC and more, all in a convenient Kindle or PDF format. However, if you play in a local band, you really need this book. Its brutal honesty helps compensate for the state of denial that afflicts many musicians about “making it.” The irony is that knowing why the odds of “making it” are infinitesimal also tells you what you need to do to increase the odds in your favor, because if nothing else, you’ll learn what you shouldn’t do as well as what you should do. Himes is acutely aware that “music business” is two words—and if you don’t conduct the business part properly, you can forget about being successful with the music part. One of the elements I really like is the specificity of what Himes communicates. For example, he doesn’t just say “be professional”—he describes the tell-tale signs of unprofessionalism in band members. Himes pulls no punches; his conversational and occasionally sketchy writing style (which could have benefited from a second set of eyes to catch some of the repetitions, but that doesn’t dilute the message) is a blast of reality. He covers topics like cover bands vs. original bands, test marketing, myths about gigging that need to be debunked, being honest about your level of commitment, problems you’ll encounter (and believe me, you’ll encounter all of them at some point), the kind of support team you’ll need, how clubs see you (reality check: you’re a vehicle to sell drinks, not an artiste), the importance of communication, and a whole lot of information on gigs—the different types of gigs, what your objectives should be, how to prepare for gigs, even dealing with the sound crew.
Himes then segues into an extensive chapter on promotion and marketing (yes, you need to know marketing as well as chord progressions) with an emphasis on using social media to boost your career, and ends with a chapter about what happens beyond local gigging. Himes clearly has a ton of experience informed by over a decade of running a local music paper, and when he writes, it’s like the stern teacher you had in high school—who you didn’t really appreciate until years later, when you realized it was the only class where you actually learned something of true importance. If I had to use two words to describe this book, they would be “tough love.” Himes is unfailingly tough, but the motivation is that he truly cares about his fellow musicians, and really wants to help you avoid the issues that can cut your career short. The bottom line is if you can handle the truth, you can handle this book—and regardless of how much you think you know, your outlook will change and your career will benefit. Kindle edition: Amazon.com Price: $6.95 PDF edition: Direct from publisher; email destechdh@gmail.com. Price $9.95, with PayPal invoice. ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. Looking to go "into the groove"? This book is a fine place to start By Craig Anderton Hal Leonard Books Softcover, 263 pages, $17.48 If you want to get into EDM, you’re going to need some cool beats. And if you’re already into it, additional sources of inspiration never hurt. Enter this book from Josh Bess, an Ableton Certified Trainer and percussionist. But don’t think this book is relevant only to Ableton Live users—it’s applicable to just about anything with a MIDI piano roll view, and arguably even more so to programs like SONAR that include a step sequencer. The first chapter describes Live basics, which complements the downloadable demo version of Live. The second chapter has useful concepts for beginners that translate to a variety of DAWs, but the real meat of the book—170 pages—starts with Chapter 3, which shows screen shots for grooves. Musical styles include house, techno, breakbeat (e.g., hip-hop, drum ‘n’ bass), and world (e.g., dance hall), but these are also broken down into various sub-genres. Although beats are shown using Live’s Piano Roll View, it’s easy to translate to other piano roll views or step sequencers. Each pattern also includes info about some of the techniques used in the pattern, as well as occasional tips. I consider these a strong addition to the book, as they suggest avenues for additional exploration, and give some interesting insights into how particular beats are constructed. Chapter 4 is 14 pages about drum fills and transitions, again using screen shots for examples, while Chapter 5 covers Groove, Swing, and Feel. This takes only slightly more effort to translate into equivalents for other programs. Chapter 6 has 22 pages of how to build drum kits in Live from one-shots, and Chapter 7 is a one-page summary. For even more universal appeal, the book also includes a download code for a variety of media that are suitable for all DAWs.
There are 292 drum samples (mostly WAV, some AIF), along with 642 MIDI files for the grooves described in the book and 19 MIDI files for the fills. For under $20, some might consider the samples and MIDI files alone worth the price, and the files let you take advantage of what’s presented in the book without even having to read most of it. However, the explanations for the rationale behind programming the beats provide a helpful background for those who want to go beyond just importing something and using it “as is.” Bess’s background as a percussionist certainly helps, as it gives a perspective beyond just “put these notes on these beats.” Overall, for those getting into dance music, this book lets you hit the ground running with actual files you can use in a wide cross-section of styles. I could also see this information as being useful for those doing soundtracks if they’re not as familiar with certain styles, yet need to create music using, for example, Dance Hall or Dubstep beats. For less than the cost of a 12-pack of Red Bull at Walmart, you’ll have something with much greater staying power.
  3. The wheel, electricity, the microprocessor...the internet is right up there in that elite group of inventions by Craig Anderton In the 1960s, Marshall McLuhan wrote that he believed the print culture would soon be eclipsed by electronic culture. He coined the term “global village,” where the world becomes a computer-like giant electronic brain. Although the internet wasn’t invented until well after his death, in 1962 he wrote something which has to be considered truly prophetic: “The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual's encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.” So not only did he foresee the internet, he even foresaw YouTube, mass databases, and targeted advertising. The guy was a genius…and like most geniuses, was dismissed as just another crackpot at the time. What got me thinking about the global village was the “World’s Biggest Audition” project in which we’re participating. The concept of watching a video on your telephone from any country in the world, then using it to audition for a superstar rhythm section, would have been science fiction not that long ago. But what also intrigues me about Stewart Copeland and Brian Hardgroove’s project is they’re not just looking for vocalists; they’re looking all over the world, and what makes that possible is our wired global village. Someone in India, Qatar, or Brazil can participate just as easily as I can. Ultimately, will the wired global village unite us or divide us? It’s clear that the World’s Biggest Audition is about bringing people together, but not everyone has those motives. 
From spam to recruiting terrorists to cyber-bullying to snooping, the wired village isn’t always benevolent. Interestingly, McLuhan nailed that, too—as he said, technology does not have morality. Never before in history have we been presented with a gift that allows everyone, everywhere, to communicate. What are we going to do with that gift? I really like what Stewart, Brian, and WholeWorldBand are doing with it…let’s hope that kind of thinking becomes the norm, and not the exception. If you'd like to audition, go here: http://www.harmonycentral.com/forum/forum/Forums_General/hardgroove-and-nothing-less/31616136-official-world-s-biggest-audition-—-don-t-miss-out
  4. Smartphone guitar app adds hardware and goes Android by Craig Anderton Wait! Don’t stop reading just because the sub-head says “Android,” and you assume the audio performance will give you an unwanted (albeit free) echo unit due to latency. IK Multimedia has created a very successful business by supporting iOS devices, so they must have been salivating at the thought of being able to tap the huge Android market. But while Android OS 5.0 has reduced latency, it’s still not really acceptable for real-time playing—so IK does an end run around the problem by building audio processing DSP into the accompanying hardware interface. I tested iRig UA with a Samsung Galaxy Note4. Note that the interface itself is not limited to Android, but will also work with Mac and Windows computers (although it won't do ASIO). However, the DSP within the interface that provides the amp sim processing works only with the Android application software. What You Need to Know The onboard DSP means a higher cost compared to a simple, I/O-only interface. $99.99 buys you a hardware interface with 1/4” input for guitar, 1/8” stereo headphone jack with associated volume control, 1/8” stereo input jack for jamming along with an external audio source, and micro-B USB connector (with appropriate included cable) to hook it up to your phone. iRig UA hooks into your phone digitally, so it bypasses the internal audio preamp for higher quality. With 5.0, you can also stream audio digitally from the phone into iRig UA and bypass the external input. When listening to music, you’ll get more clean volume out of iRig UA’s headphone amp than what’s in your phone. I didn’t have a way to test latency, but it seems like the only possible latency would be from A/D and D/A conversion. This would result in latency under 2 ms. In any event, the “feel” is zero latency. 
For the best experience, download AmpliTube UA for free from the Google Play Store with four guitar amps, one bass amp, nine stompboxes, two mics, and five cabs, with the option for in-app purchases of additional stompboxes and amps in the $5-$10 range. Or, you can buy all available amps and stompboxes for $69.99. iRig UA works with Android OS 4.2 and up, providing there’s support for host mode USB/OTG; to find out whether your device supports host mode, download the USB Host Diagnostics app from the Google store. The hardware also works as a 24-bit, 44.1/48kHz audio interface with OS 5.0 (also called “Lollipop”; apparently there’s a law that companies must have cute names for operating systems—although when Apple was doing cat names, they did forego “OS Hello Kitty”). The hardware is plastic, which seems like it might belong more under “Limitations.” But it seems quite rugged, and contributes to lower weight for portability. There are four “slots” in the FX chain—two for pre-amp effects, one for an amp, and one for a post-amp effect. Amp sim tone is subjective, so whether you like the amp tones or not is your call. I’ve always liked AmpliTube and IK’s take on modeling, so it’s probably not surprising that I also like the sounds in iRig UA. I can’t really tell whether they’re on the same level as the desktop version of AmpliTube 3, but even without extra in-app purchases, you get a wide range of useful and satisfying sounds. You can navigate the UI even if you’re semi-conscious. Limitations As with similar smartphone devices, the interface connects via the USB port used for charging the phone, so the “battery charge countdown clock” starts when you plug in and start playing. The battery drain is definitely acceptable (even taking the DSP into account), but of course, you’re putting the battery through more charge/discharge cycles with long sessions. I didn’t find any way to demo in-app purchases prior to purchasing. 
There’s no landscape mode support, so accessing the amp knobs means swiping left and right a whole lot. There’s no tablet version yet, although of course the phone UI “upscales.” You can’t put an amp in an FX slot if you want to put amps in series. For $99.99, I do think IK could have included a compressor/sustainer you can place in front of the amp. In-app purchases culminate in a higher price tag than most Android users expect. However, given what’s in the free software, I really didn’t feel the need to get a bunch of extra stuff. Conclusions This is a sweet little package that finally brings essentially zero-latency guitar practicing and playing to Android phones. Some will balk at the price, but given the realities of Android’s audio world, there’s really no way to get around latency issues without the hardware DSP. Android users who want satisfying tones out of a simple and portable Android setup, along with considerable sonic versatility, now have a solution. While the amp sim options currently available on Android won't make Mac fanbois green with envy, iRig UA stakes an important—and very well-executed—claim in the quest for parity between the two main smartphone platforms. Buy at B&H
  5. Here's a clever way for guitarists to tame the "crispiness" of audio interface direct inputs By Craig Anderton Most guitarists are aware that with passive pickups, cable capacitance affects tone when feeding a high-impedance input, like the DI inputs on audio interfaces. Activating your guitar’s tone control will tend to “swamp out” any differences caused by cable capacitance, but if the tone control isn’t in play, then cable capacitance will nonetheless affect your sound. Quantifying this difference is more difficult. Different cables have different amounts of capacitance per foot, and the longer the cable, the greater the capacitance. So often when guitar players find a cable that sounds “right,” they’ll just stick with that until it dies (or they do). Part of what inspired me to write this is a comment in another Forum that Shall Go Nameless that dissed the Timbre Plug (of course, without ever actually trying it) because of the assumption that it just duplicates what a tone control does. But a tone control is more complex than most people realize; it doesn’t just roll off highs, but also interacts with passive pickups to create a resonant peak. This boosts the signal somewhat, and is one reason why rolling back on the tone control sounds “creamier.” It’s also why guitarists like to experiment with different tone control capacitors. Within reason, the higher the capacitor value, the lower the resonant frequency. So yes, cables do make a difference. Yet these days, a lot of guitar players will record by going through a relatively short cable into an audio interface, so cable capacitance doesn’t enter into the picture. Which at long last brings us to the Neutrik NP2RX-TIMBRE, which typically costs under $20. Let’s take a closer look. The knob opposite the plug shaft itself has a four-position rotary switch.
It chooses among no capacitance, and three possible capacitor values strapped between the hot and ground connections (Neutrik preferred I not mention the exact values, but they're in the single-digit nanoFarad range). Note that these capacitors are potted in with a switch assembly, so don’t expect to change them if you’d prefer to try different values. Each of these has a distinct effect on the sound, as you can hear in this demo video. ASSEMBLY It’s actually quite easy to assemble; you’ll need a Phillips head screwdriver, pencil tip soldering iron, wirecutters, and two-conductor shielded cable with an outside diameter of 0.16” to 0.27”. The assembly instructions are downloadable from the Neutrik web site, and also are printed on the back of the packaging. I make my cables using the Planet Waves Cable Station, which uses ¼” cable. It was a tight fit, but by following the assembly instructions and cutting the wire exactly as specified, it all went together as expected. I certainly would advise against using anything thicker. IN USE Some people may think the right-angle jack is an issue, but it fits fine with a Strat and of course, it’s ideal for front-facing jacks as found on SG and 335-type guitars. However, ultimately it doesn’t really matter because the cable isn’t “polarized”—you can plug the Timbre plug into your amp or interface. All you give up is the ability to have the controls at your fingertips while you play, but I tend to think this would be a more “set and forget” type of device anyway. The Timbre Plug inserted into a TASCAM US-2x2 interface’s direct input. CONCLUSIONS The concept of emulating cable capacitance isn’t new, although sometimes it’s just a high-frequency rolloff—which is not the same as a capacitor interacting with a pickup. Neutrik’s solution is compact, built solidly, truly emulates physical cable capacitance, is accessible to anyone with moderate DIY skills, and isn’t expensive.
In a way, it's like a hardware "plug-in" for your computer - and you may very well find it’s just the ticket to taking the “edge” off the crispiness that’s inherent in feeding a passive pickup into a high-impedance input. Buy at B&H
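To make the pickup/capacitance interaction described above more concrete, here's a minimal Python sketch of the underlying LC resonance. The pickup inductance and capacitor values below are purely illustrative assumptions (a typical passive pickup is a few henries, and the article only says the plug's capacitors are in the single-digit nanofarad range); they are not Neutrik's actual values.

```python
import math

def resonant_freq_hz(pickup_inductance_h, total_capacitance_f):
    """Resonant peak formed by pickup inductance against cable/load capacitance:
    f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(pickup_inductance_h * total_capacitance_f))

PICKUP_L = 2.5     # henries -- hypothetical passive pickup
CABLE_C = 0.5e-9   # ~500 pF -- hypothetical short cable into a DI input

# Hypothetical added capacitances, stand-ins for the switch positions
for added_c in (0.0, 1e-9, 2.2e-9, 4.7e-9):
    f = resonant_freq_hz(PICKUP_L, CABLE_C + added_c)
    print(f"added {added_c * 1e9:.1f} nF -> resonance near {f / 1000:.1f} kHz")
```

Running this shows the point made in the text: each step of added capacitance pulls the resonant peak lower in frequency, which is why the plug darkens and "creams up" the tone rather than simply rolling off treble.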
  6. Yes, Eddie Kramer is a part of history — but what he’s doing today will be tomorrow’s history By Craig Anderton About 20 seconds into the interview, Eddie says the mic sound from my hands-free phone adapter thingie is “…shall we say, not of the best quality.” So I adjusted the mic placement until Eddie was happier with the sound. Nit-picky prima donna? Absolutely not. He’s just a very nice guy who’s unfailingly polite and helpful. Articulate, too. And that’s about 80% of what you need to know about Eddie: He really, really cares about sound, even if it’s just an interviewer’s mic. Which is probably what helps account for the other 20% you really need to know: He’s been behind the boards for some of the most significant musicians of our time, including Jimi Hendrix, Led Zeppelin, Buddy Guy, Kiss, Peter Frampton, the Beatles, AC/DC, the Rolling Stones, Carly Simon, Traffic, Joe Cocker, David Bowie, Johnny Winter, Bad Company, Sammy Davis Jr., the Kinks, Petula Clark, the Small Faces, Vanilla Fudge, NRBQ, the Woodstock festival, John Mayall, Derek & the Dominoes, Santana, Curtis Mayfield, Anthrax, Twisted Sister, Ace Frehley, Alcatraz, Triumph, Robin Trower, and Whitesnake. And let’s talk versatility: country act the Kentucky Headhunters, and classical guitarist John Williams. (For more information, there’s a short-form bio on Wikipedia.) And that’s just the tip of the iceberg, as he’s documented much of what he’s done with an incredible body of work as a photographer. You can find out a lot more about Eddie, including his latest F-Pedals project, at his web site. Given his history, if you think he lives in the past, you’d be one-third right. Another third lives in the present, and the remaining third in the future. During the course of an interview, you can find yourself in 1968 one minute, and 2016 the next. Cool. Eddie has enough savvy to know when it’s important to just go with the flow.
Like that famous moment in “Whole Lotta Love” where you hear Robert Plant’s voice way in the background during the break. Print-through? An effect they slaved over for days? “The time of that particular mix was 1969, and this all took place over a weekend at A&R studios in New York. Imagine [mixing] the entire Led Zeppelin II on 8 tracks in two days! As we got into “Whole Lotta Love,” I actually only ended up using seven tracks because tracks 7 and 8 were two vocal tracks. I think I used the vocal from track 7. We’d gotten the mix going, I believe it was a 12-channel console with two panpots. “During the mixdown, I couldn’t get rid of the extra vocal in the break that was bleeding through. Either the fader was bad, or the level was fairly high — as we were wont to do in those days, we hit the tape with pretty high levels. Jimmy [Page] and I looked at each other and said “reverb,” and we cranked up the reverb and left it in. That was a great example of how accidents could become part of the fabric of your mix, or in this case, a part of history. And I always encourage people today not to be so bloody picky.” Eddie has this wacky idea that the music is actually the important part of the recording process, not editing the living daylights out of it. To wit: “We’re living in the age of [computer-based programs like] Pro Tools, where we can spend hours, days, even weeks on end fixing all the little ‘mistakes.’ And by that time, you’ve taken all of the life out of the music. I don’t want to come off as trashing [Digidesign], but I feel that Pro Tools — which is a wonderful device — has its limitations in certain aspects.” And what might that main limitation be? “The people using it! And it becomes a sort of psychological battle . . . yes I can stretch that drum fill or performance [with bad timing], or I can effectively make a bad vocal sound reasonably decent, but what the hell is the point? Why didn’t the drummer play it right in the first place?
Why didn’t the singer sing it right in the first place? “And that begs the question, do we have too many choices . . . and when we do, we sit there thinking ‘we can make it better.’ But for God’s sake, make it better in the performance! I want musicians who will look each other in the face, eyeball to eyeball, and I want interaction. I want these guys to be able to play their instruments properly, and I want them to be able to make corrections on the fly. If I say ‘In the second chorus, could you double up that part?’ I don’t want the guitarist giving me a blank look. “Learn your bloody craft, mates! The way we’re recording today does in fact give a tremendous amount of freedom to create in an atmosphere of relaxed inspiration. The individual can record in very primitive circumstances — bathrooms, garages, hall closets. Unfortunately for a lot of people, this means doing it one track at a time, which I think makes the final product sound very computerized and not organic. The other side of the coin is that many bands can think in terms of ‘let’s find a fairly decent acoustic space, set up mics, look each other in the eyes, and hit record.’” MIXED OUT, MIXED DOWN . . . OR MIXED UP? Ah, the lost art of mixing. If all you do is tweak envelopes with a mouse, that’s not mixing — that’s editing. If you think Eddie Kramer is a fix-it-in-the-mix kinda guy, you haven’t been paying attention. But there’s more. “One of the most exciting things as an engineer is to create that sound as it’s happening; having a great-sounding board, set of mics, and acoustic environment can lead one to a higher plane . . . when you hear the sound of the mixed instruments — not each individual mic — and get the sound now, while it’s happening. I don’t want to have to bugger around with the sound after the fact, other than mixing. There’s a thrill in getting a sound that’s unique to that particular situation. “The idea of mixing ‘in the box’ is anathema. 
It defeats the purpose of using one’s hand and fingers in an instinctive mode of communication. I am primarily a musician at heart; the technology is an ancillary part of what I do, a means to an end. I want to feel like I’m creating something with my hands, my ears, my eyes, my whole being. I can’t do that solely within the box. It’s counter-intuitive and alien. However, I do use some of the items within the box as addenda to the creative process. It lets me mix with some sounds I would normally not be able to get.” So do you use control surfaces when you’re working with computers, or go the console route? “Only consoles. I love to record with vintage Neve, 24-track Dolby SR [noise reduction] at 15 IPS, then I dump it into Pro Tools or whatever system is available, and continue from that point. If the budget permits, I’ll lock the multitrack with the computer. I’d rather mix on an SSL; they’re flexible and easy to work. I like the combination of the vintage Neve sound with the SSL’s crispness. And then I mix down to an ATR reel-to-reel, running at 15 IPS with Dolby SR. “With the SSL, I’m always updating, always in contact with the faders. I always hear little things that I can tweak. To me, mixing is a living process. If you’re mixing in the moment, you get inspired. I just wish I could do more mixes in 4-5 hours instead of 12, but some bands want to throw you 100 tracks. Sometimes I wish we could put a moratorium on the recording industry — you have three hours and eight tracks! [laughs] I’m joking of course, but . . . “On ‘Electric Ladyland,’ ‘1983’ was a ‘performance’ mix: four hands, Jimi and myself. We did that in maybe one take. And the reason why was because we rehearsed the mix, as if it was a performance. We didn’t actually record the mix until we had our [act] together. We were laughing when we got through the 14 minutes or so. Of course, sometimes I would chop up two-track mixes and put pieces together. 
But those pieces had to be good.” So do you mix with your eyes closed or open? “The only time I close my eyes when mixing is when I’m panning something. I know which way the note has to flip from one side to the other; panning is an art, and you have to be able to sense where the music is going to do the panning properly.” (to be continued)
  7. Meet the ghost in your machine By Craig Anderton Musicians are used to an instant response: Hit a string, hit a key, strike a drum, or blow into a wind instrument, and you hear a sound. This is true even if you’re going through a string of analog processors. But if you play through a digital signal processor, like a digital multieffects, there will be a very slight delay called latency—so small that you probably won’t notice it, but it’s there. Converting an analog signal to digital takes about 600 microseconds at 44.1kHz; converting back into analog takes approximately the same amount, for a “round trip” latency of about 1.2 milliseconds. There may also be a slight delay due to processing time within the processor. Because sound travels at about 1 foot (30 cm) per millisecond, the delay of doing analog/digital/analog conversion is about the same as if you moved a little over a foot away from a speaker, which isn’t a problem. However, with computers, there’s much more going on. In addition to converting your “analog world” signal to digital data, pieces of software called drivers have the job of taking the data generated by an analog-to-digital converter and inserting it into the computer’s data stream. Furthermore, the computer introduces delays as well. Even the most powerful processor can do only so many millions of calculations per second; when it’s busy scanning its keyboard and mouse, checking its ports, moving data in and out of RAM, sending out video data, and more, you can understand why it sometimes has a hard time keeping up. As a result, the computer places some of the incoming audio from your guitar, voice, keyboard, or other signal source in a buffer, which is like a “savings account” for your input signal. When the computer is so busy elsewhere that it can’t deal with audio, it makes a “withdrawal” from the buffer instead so it can go deal with other things. 
The larger the buffer, the less likely the computer will run out of audio data when it needs it. But a larger buffer also means that your instrument’s signal is being diverted for a longer period of time before being processed by the computer, which increases latency. When the computer goes to retrieve some audio and there’s nothing in the buffer, audio performance suffers in a variety of ways: You may hear stuttering, crackling, “dropouts” where there is no audio, or, worst case, the program might crash. The practical result of latency is that if you listen to what you’re playing after it goes through the computer, you’ll feel like you’re playing through a delay line, set for processed sound only. If the delay is under 5 ms, you probably won’t care too much. But some systems can exhibit latencies of tens or even hundreds of milliseconds, which can be extremely annoying. Because you want the best possible “feel” when playing your instrument through a computer, let’s investigate how to obtain the lowest possible latency, and what tradeoffs will allow for this. MINIMIZING LATENCY The first step in minimizing delay is the most expensive one: Upgrading your processor. When software synthesizers were first introduced, latencies in the hundreds of milliseconds were common. With today’s multi-core processors and a quality audio interface, it’s possible to obtain latencies well under 10 ms at a 44.1kHz sampling rate. The second step toward lower latency involves using the best possible drivers, as more efficient drivers reduce latency. Steinberg devised the first low-latency driver protocol specifically for audio, called ASIO (Audio Stream Input/Output). This tied in closely with the CPU, bypassing various layers of both Mac and Windows operating systems. At that time the Mac used Sound Manager, and Windows used a variety of protocols, all of which were equally unsuited to musical needs.
Audio interfaces that supported ASIO were essential for serious musical applications. Eventually Apple and Microsoft realized the importance of low-latency response and introduced new protocols. Microsoft’s WDM and WASAPI in exclusive mode were far better than their previous efforts; starting with OS X, Apple gave us Core Audio, which was tied in even more closely with low-level operating system elements. Either of these protocols can perform as well as ASIO. However, for Windows, ASIO is so common and so much effort is put into developing ASIO drivers that most musicians select ASIO drivers for their interfaces. So we should just use the lowest latency possible, yes? Well, that’s not always obtainable, because lower latencies stress out your computer more. This is why most audio interfaces give you a choice of latency settings (Fig. 1), so you can trade off between lowest latency and computer performance. Note that latency is given either in milliseconds or samples; while milliseconds is more intuitive, the reality is that you set latency based on what works best (which we’ll describe later, along with the meaning behind the numbers). The numbers themselves aren’t that significant other than indicating “more” or “less.” Fig. 1: Roland’s VS-700 hardware is being set to 64 samples of latency in Cakewalk Sonar. If all your computer has to do is run something like a guitar amp simulator in stand-alone mode, then you can select really low latency. But if you’re running a complex digital audio recording program and playing back lots of tracks or using virtual software synthesizers, you may need to set the latency higher. So, taking all this into account, here are some tips on how to get the best combination of low latency and high performance. If you have a multi-core computer, check whether your host recording program supports multi-core processor operation. 
If available, you’ll find this under preferences (newer programs are often “multiprocessor aware” so this option isn’t needed). This will increase performance and reduce latency. With Windows, download your audio interface’s latest drivers. Check the manufacturer’s web site periodically to see if new drivers are available, but set a System Restore point before installing them—just in case the new driver has some bug or incompatibility with your system. Macs typically don’t need drivers as the audio interfaces hook directly into the Core Audio services (Fig. 2), but there may be updated “control panel” software for your interface that provides greater functionality, such as letting you choose from a wider range of sample rates. Fig. 2: MOTU’s Digital Performer is being set up to work with a Core Audio device from Avid. Make sure you choose the right audio driver protocol for your audio interface. For example, with Windows computers, a sound card might offer several possible driver protocols like ASIO, DirectX, MME, emulated ASIO, etc. Most audio interfaces include an ASIO driver written specifically for the audio interface, and that’s the one you want to use. Typically, it will include the manufacturer’s name. There’s a “sweet spot” for latency. Too high, and the system will seem unresponsive; too low, and you’ll experience performance issues. I usually err on the side of being conservative rather than pushing the computer too hard. Avoid placing too much stress on your computer’s CPU. For example, the “track freeze” function in various recording programs lets you premix the sound of a software synthesizer to a hard disk track, which requires less power from your CPU than running the software synthesizer itself. MEASURING LATENCY So far, we’ve mostly talked about latency in terms of milliseconds. However, some manufacturers specify it in samples. This isn’t quite as easy to understand, but it’s not hard to translate samples to milliseconds. 
This involves getting into some math, so if the following makes your brain explode, just remember the #1 rule of latency: Use the lowest setting that gives reliable audio operation. In other words, if the latency is expressed in milliseconds, use the lowest setting that works. If it’s specified in samples, you still use the lowest setting that works. Okay, on to the math. With a 44.1kHz sampling rate for digital audio (the rate used by CDs and many recording projects), there are 44,100 samples taken per second. Therefore, each sample is 1/44,100th of a second long, or about 0.023 ms. (If any math wizards happen to be reading this, the exact value is 0.022675736961451247165532879818594 ms. Now you know!) So, if an audio interface has a latency of 256 samples, at 44.1 kHz that means a delay of 256 × 0.023 ms, which is about 5.8 ms. 128 samples of delay would be about 2.9 ms. At a sample rate of 88.2 kHz, each sample lasts half as long as a sample at 44.1 kHz, so each sample would be about 0.0113 ms. Thus, a delay of 256 samples at 88.2 kHz would be around 2.9 ms. From this, it might seem that you’d want to record at higher sample rates to minimize latency, and that’s sort of true. But again, there’s a tradeoff because high sample rates stress out your computer more. So you might indeed have lower latency, but only be able to run, for example, half the number of plug-ins you normally can. SNEAKY LATENCY ISSUES Audio interfaces are supposed to report their latency back to the host program, so it can get a readout of the latency and compensate for this during the recording process. Think about it: If you’re playing along with drums and hear a sound 6 ms late, and then it takes 6 ms for what you play to get recorded into your computer, then what you play will be delayed by 12 ms compared to what you’re listening to. If the program knows this, it can compensate during the playback process so that overdubbed parts “line up” with the original track. 
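The conversions above are easy to script. A quick sketch (the function name is mine):

```python
def samples_to_ms(samples, sample_rate_hz):
    """Latency of a given buffer size, in milliseconds."""
    return 1000.0 * samples / sample_rate_hz

# The figures worked out above:
print(round(samples_to_ms(256, 44100), 1))   # 5.8
print(round(samples_to_ms(128, 44100), 1))   # 2.9
print(round(samples_to_ms(256, 88200), 1))   # 2.9

# Round trip: the input and output buffers both add delay, so 256 samples
# each way at 44.1 kHz is roughly 11.6 ms total.
print(round(2 * samples_to_ms(256, 44100), 1))  # 11.6
```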
However, different interfaces have different ways to report latency. You might assume that a sound card with a latency of 5.8 milliseconds is outperforming one with a listed latency of 11.6 ms. But that’s not necessarily true, because one might list the latency a signal experiences going into the computer (“one-way latency”), while another might give the “round-trip” latency—the input and output latency combined. Or, it might give both readings. Furthermore, these readings are not always accurate. Some audio interfaces do not report latency accurately, and might be off by even hundreds of samples. So, understand that if an audio interface claims that its latency is lower than another model, but you sense more of a delay with the “lower latency” audio interface, its real-world latency very well might not be lower. WHAT ABOUT “DIRECT MONITORING”? You may have heard about an audio interface feature called “direct monitoring,” which supposedly reduces latency to nothing, so what you hear as you monitor is essentially in real time. However, it does this by tapping the signal going into the computer and letting you listen to that, essentially bypassing the computer (Fig. 3). Fig. 3: TASCAM’s UH-7000 interface has a mixer applet with a monitor mix slider (upper right). This lets you choose whether to listen to the input, the computer output, or a combination of the two. While that works well for many instruments, suppose you’re playing guitar through an amp simulation plug-in running on your computer. If you don’t listen to what’s coming out of your computer, you won’t hear what the amp simulator is doing. As a result, if you use an audio interface with the option to enable direct monitoring, you’ll need to decide when it’s appropriate to use it. THE VIRTUES OF USING HEADPHONES One tip about minimizing latency is that if you’re listening to monitor speakers and your ears are about 3 feet (1 meter) away, you’ve just added another 3 ms of latency. 
Monitoring through headphones will remove that latency, leaving only the latency caused by the audio interface and computer. MAC VS. WINDOWS Note that there is a significant difference between current Mac and Windows machines. Core Audio is a complete audio sub-system that already includes drivers most audio interfaces can access. Therefore, as mentioned earlier, it is usually not necessary to load drivers when hooking an audio interface up to the Mac. With Windows, audio interfaces generally include custom drivers you need to install, which are often on a CD-ROM included with the interface. However, it’s always a good idea to check the manufacturer’s web site for updates—even if you bought a product the day it hit the stores. With driver software playing such a crucial part in performance, you want the most recent version. With Windows, it’s also very important to follow any driver installation instructions exactly. For example, some audio interfaces require that you install the driver software first, then connect the interface to your system. Others require that you hook up the hardware first, then install the software. Pay attention to the instructions! THE FUTURE AND THE PRESENT Over the last 10 years or so, latency has become less and less of a problem. Today’s systems can obtain very low latency figures, and this will continue to improve. But if you experience significant latencies with a modern computer, then there’s something wrong. Check audio options, drivers, and settings for your host program until you find out what’s causing the problem. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  8. Those two screws on the side of your pickup aren’t just there for decoration by Craig Anderton Spoiler alert: The correct answer is “it depends.” Pickup height trades off level, sustain, and attack transient, so you need to decide which characteristics you want to prioritize. I think we all have a sense that changing pickup height changes the sound, but I’d never taken the time to actually quantify these changes. So, I tested the neck and bridge humbucker pickups in a Gibson Les Paul Traditional Pro II 50s guitar, and tried two different pickup height settings. For the “close” position, the strings were 2mm away from the top of the pole pieces. In the “far” position, the distance was 4mm. I then recorded similar strums into Steinberg’s WaveLab digital audio editor; although it’s impossible to get every strum exactly the same, I did enough of them to see a pattern. The illustrations show the neck pickup results, because the bridge pickup results were similar. Fig. 1: This shows the raw signal output from three strums with the rhythm pickup close to the strings, then three strums with the pickup further away. It’s clear from Fig. 1 that the “close” position peak level is considerably higher than the “far” position—about 8 dB. So if what matters most is level and being able to hit an amp hard, then you want the pickups close to the strings. Fig. 2: The last three strums, with the pickups further from the strings, have a higher average level compared to the initial transient. Fig. 2 tells a different story. This screen shot shows what happens when you raise the peaks of the “far” strums (again, the second set of three) to the same peak level as the close strums, which is what would happen if you used a preamp to raise the signal level. The “far” strum initial transients aren’t as pronounced, so the waveform reaches the sustained part of the sound sooner. 
The waveform in the last three is “fatter” in the sense that there’s a higher average level; with the “close” waveforms, the average level drops off rapidly after the transient. Based on how the pickups react, if you want a higher average level that’s less percussive while keeping transients as much out of the picture as possible (for example, to avoid overloading the input of a digital effect), this would be your preferred option. Fig. 3 shows two chords ringing out, with the waveforms normalized to the same peak value and amplified equally in WaveLab so you can see the sustain more clearly. Fig. 3: The second waveform (pickups further from strings) maintains a higher average level during its sustain. With the “tail” of the second, “far” waveform, the sustain stays louder for longer. So, you do indeed get more sustain—not just a higher average level and less pronounced transients—if the pickup is further away from the strings. However, remember that the overall level is lower, so to benefit from the increased sustain, you’ll need to turn up your amp’s input control to compensate, or use a preamp. ADDITIONAL CONCLUSIONS The reduced transient response caused by the pickups being further away from the strings is helpful when feeding compressors, as large transients tend to “grab” the gain control mechanism to turn the signal down, which can create a “pop” as the compression kicks in. With the pickups further away, the compressor action is smoother although again, you’ll need to increase the input level to compensate for the lower pickup output. Furthermore, amp sims generally don’t like transients as they consist more of “noise” than “tone,” so they don’t distort very elegantly. Reducing transients can give a less “harsh” sound at the beginning of a note or strum. So the end result is that if you’ve set your pickups close to the strings, try increasing the distance. You might find this gives you an overall more consistent sound, as well as better sustain. 
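For perspective on the 8 dB figure measured above: level differences in dB convert to linear amplitude ratios as 10^(dB/20). A quick sketch (the helper name is mine):

```python
def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to a linear amplitude ratio."""
    return 10 ** (db / 20.0)

# The ~8 dB peak difference between the "close" and "far" positions means
# the close position's peaks are roughly 2.5 times the amplitude:
print(round(db_to_amplitude_ratio(8), 2))   # 2.51
# For comparison, a 6 dB difference is almost exactly a 2:1 amplitude ratio:
print(round(db_to_amplitude_ratio(6), 2))   # 2.0
```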
  9. Make quantization work for you, not against you by Craig Anderton Quantization was controversial enough when it was limited to MIDI, but now that you can quantize audio, it’s even more of an issue. Although some genres of music work well with quantization, excessive quantization can suck the human feel out of music. Some people take a “holier than thou” approach to quantization by saying it’s for musical morons who lack the chops to get something right in the first place. These people, of course, never use quantization...well, at least while no one’s looking. I feel quantization has its place; it’s the ticket to ultra-tight grooves, and a way to let you keep a first and inspired take, instead of having to play a part over and over again to get it right—and lose the human feel by beating a part to death. But like any tool, if misused, quantization can cause more harm than good by giving an overly rigid, non-musical quality to your work. TRUST YOUR FEELINGS, LUKE The first thing to remember is that computers make terrible music critics. Forcing music to fit the rhythmic criteria established by a machine is silly—it’s real people, with real emotions, who make and listen to music. To a computer, having every note hit exactly on the beat may be desirable, but that’s not the way humans work. There’s a fine line between “making a mistake” and “bending the rhythm to your will.” Quantization removes that fine line. Yes, it gets rid of the mistakes, but it also gets rid of the nuances. When sequencers first appeared, musicians would often compare the quantized and non-quantized versions of their playing. Invariably, after hearing the quantized version, the reaction would be a crestfallen “gee, I didn’t realize my timing was that bad.” But in many cases, the human was right, not the machine. I’ve played some solo lines where notes were off as much as 50 milliseconds from the beat, yet they sounded right. Rule #1: You dance; a computer doesn’t. 
You are therefore much more qualified than a computer to determine what rhythm sounds right. WHY QUANTIZATION SHOULD BE THE LAST THING YOU DO Some people quantize a track as soon as they’ve finished playing it. Don’t! In analyzing unquantized music, you’ll often find that every instrument of every track will tend to rush or lag the beat together. In other words, suppose you either consciously or unconsciously rush the tempo by playing the snare a bit ahead of the beat. As you record subsequent overdubs, these will be referenced to the offset snare, creating a unified feeling of rushing the tempo. If you quantize the snare part immediately after playing, then you will play to the quantized part, which will change the feel. Another possible trap occurs if you play a number of unquantized parts and find that some sound “off.” The expected solution would be to quantize the parts to the beat, yet the “wrong” parts may not be off compared to the absolute beat, but to a part that was purposely rushed or lagged. In the example given above of a slightly rushed snare part, you’d want to quantize your parts in relation to the snare, not a fixed beat. If you quantize to the beat, the rhythm will sound even more off, because some parts will be off with respect to absolute timing, while other parts will be off with respect to the relative timing of the snare hit. At this point, most musicians mistakenly quantize everything to the beat, destroying the feel of the piece. Rule #2: Don’t quantize until lots of parts are down and the relative—not absolute—rhythm of the piece has been established. SELECTIVE QUANTIZATION Often only a few parts of a track will need quantization, yet for convenience musicians tend to quantize an entire track, reasoning that it will fix the parts that sound wrong and not affect the parts that sound right. However, the parts that sound right may be consistent with a relative rhythm, not an absolute one. 
The best approach is to go through a piece, a few measures at a time, and quantize only those parts that are clearly in need of quantization. Very often, what’s needed is not quantization per se but merely shifting an offending note’s start time. Look at the other tracks and see if notes in that particular part of the tune tend to lead or lag the beat, and shift the start time accordingly. Rule #3: If it ain’t broke, don’t fix it. Quantize only the notes that are off enough to sound wrong. BELLS AND WHISTLES Modern-day quantization tools, whether for MIDI or audio, offer many options that make quantization more effective. One of the most useful is quantization strength, which moves a note closer to the absolute beat by a particular percentage. For example, if a note falls 10 milliseconds ahead of the beat, quantizing to 50% strength would place it 5 milliseconds ahead of the beat. This smooths out gross timing errors while retaining some of the original part’s feel (Fig. 1). Fig. 1: The upper window (from Cakewalk Sonar) shows standard Quantize options; note that Strength is set to 80%, and there's a bit of Swing. The lower window handles Groove Quantization, which can apply different feels by choosing a "groove" from a menu. Some programs offer “groove templates” (where you can set up a relative rhythm to which parts are quantized), or the option to quantize notes in one track to the notes in another track (which is great for locking bass and drum parts together). Rule #4: Study your recording software’s manual and learn how to use the more esoteric quantization options. EXPERIMENTS IN QUANTIZATION STRENGTH Here’s an experiment I like to conduct during sequencing seminars to get the point across about quantization strength. First, record an unquantized and somewhat sloppy drum part on one track. It should be obvious that the timing is off. 
Then copy it to another track, quantize it, and play just that track back; it should be obvious that the timing has been corrected. Then copy the original track again but quantize it to a certain strength—say, 50%. It will probably still sound unquantized. Now try increasing the strength percentage; at some point (typically in the 70% to 90% range), you’ll perceive it as quantized because it sounds right. Finally, play back that track along with the one quantized to 100% strength and check out the timing differences, as evidenced by lots of slapback echoes. If you now play the 100% strength track by itself, it will sound dull and artificial compared to the one quantized at a lesser strength. Rule #5: Correct rhythm is in the ear of the beholder, and a totally quantized track never seems to win out over a track quantized to a percentage of total quantization. REMEMBER, MIDI IS NOT AUDIO Quantizing a MIDI part will not affect fidelity, but quantizing audio will usually need to shift audio around and stretch it. Although digital audio stretching has made tremendous progress over the years in terms of not butchering digital audio, the process is not flawless. If significant amounts of quantization are involved, you’ll likely notice some degree of audio degradation but you’ll be able to get away with lesser amounts. Rule #6: Like any type of correction, rhythmic correction is most transparent with signals that don’t need a lot of correction. Yes, quantization is a useful tool. But don’t use it indiscriminately, or your music may end up sounding mechanical—which is not a good thing unless, of course, you want it to sound mechanical!
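The strength percentage described in the “Bells and Whistles” section above boils down to simple interpolation toward the grid. A minimal sketch (the names and the 120 BPM grid are mine, not from any particular DAW):

```python
def apply_strength(note_ms, grid_ms, strength=1.0):
    """Move a note toward the nearest grid line by a fractional strength.
    strength=1.0 quantizes fully; 0.5 halves the timing error; 0.0 is a no-op."""
    nearest = round(note_ms / grid_ms) * grid_ms
    return note_ms + strength * (nearest - note_ms)

beat_ms = 500.0  # one quarter note at 120 BPM
# A note 10 ms ahead of the beat, quantized at 50% strength, lands 5 ms ahead:
print(apply_strength(beat_ms - 10.0, beat_ms, strength=0.5))  # 495.0
print(apply_strength(beat_ms - 10.0, beat_ms, strength=1.0))  # 500.0
```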
  10. Yes, this sounds insane...but try it By Craig Anderton Do you want better mixes? Of course you do—the mix, along with mastering, is what makes or breaks your music. Even the best tracks won’t come across if they’re not mixed correctly. Different people approach mixing differently, but I don’t think anyone has described something as whacked-out as what we’re going to cover in this article. Some people will read this and just shake their heads, but others will actually try the suggested technique, and craft tighter, punchier mixes without any kind of compression or other processing. THE MIXING PROBLEM What makes mixing so difficult is, unfortunately, a limitation of the human ear/brain combination. Our hearing can discern very small changes in pitch, but not level. You’ll easily hear a 3% pitch change as being distinctly out of tune, but a 3% level change is nowhere near as dramatic. Also, our ears have an incredibly wide dynamic range—much more than a CD, for example. So when we mix and use only the top 20-40 dB of average available dynamic range, even extreme musical dynamics don’t represent that much of a change for the ear’s total dynamic range. Another problem with mixing is that the ear’s frequency response changes at different levels. This is why small changes in volume are often perceived as tonal differences, and why it is so important to balance levels exactly when doing A-B comparisons. Because our ears hear low and high end signals better at higher levels, just a slight volume boost might produce a subjective feeling of greater “warmth” (from the additional low end) and “sparkle” (from the increased perception of treble). The reason why top mixing engineers are in such demand is because through years of practice, they’ve trained their ears to discriminate among tiny level and frequency response differences (and hopefully, taken care of their ears so they don’t suffer from their own frequency response problems). 
They are basically “juggling” the levels of multiple tracks, making sure that each one occupies its proper level with respect to the other tracks. Remember, a mix doesn’t compare levels to an absolute standard; all the tracks are interrelated. As an obvious example, the lead instruments usually have higher levels than the rhythm instruments. But there are much smaller hierarchies. Suppose you have a string pad part, and the same part delayed a bit to produce chorusing. To avoid having excessive peaking when the signals reach maximum amplitude at the same time, as well as better preserve any rhythmic “groove,” you’ll probably mix the delayed track around 6 dB behind the non-delayed track. The more tracks, the more intricate this juggling act becomes. However, there are certain essential elements of any mix—some instruments that just have to be there, and mixed fairly closely in level to one another because of their importance. Ensuring that these elements are clearly audible and perfectly balanced is, I believe, one of the most important qualities in creating a “transportable” mix (i.e., one that sounds good over a variety of systems). Perhaps the lovely high end of some bell won’t translate on a $29.95 boombox, but if the average listener can make out the vocals, leads, beat, and bass, you have the high points covered. Ironically, though, our ears are less sensitive to changes in relatively loud levels than to relatively soft ones. This is why some veteran mixers start work on a mix at low levels, not just to protect their hearing but because it makes it easier to tell if the important instruments are out of balance with respect to each other. At higher levels, differences in balance are harder to detect. ANOTHER ONE OF THOSE ACCIDENTS The following mixing technique is a way to check whether a song’s crucial elements are mixed with equal emphasis. Like many other techniques that ultimately turn out to be useful, this one was discovered by accident. 
At one point I had a home studio in Florida that didn’t have central air conditioning, and the in-wall air conditioner made a fair amount of background noise. One day, I noticed that the mixes I did when the air conditioner was on often sounded better than the ones I did when it was off. This seemed odd at first, until I made the connection with how many musicians use the “play the music in the car” test as the final arbiter of whether a mix is going to work or not. In both cases the background noise masks low-level signals, making it easier to tell which signals make it above the noise. Curious whether this phenomenon could be quantified further, I started injecting pink noise (Fig. 1) into the console while mixing. Fig. 1: Sound Forge can generate a variety of noise types, including pink noise. This just about forces you to listen at relatively low levels, because the noise is really obnoxious! But more importantly, the noise adds a sort of “cloud cover” over the music, and as mountain peaks poke out of a cloud cover, so do sonic peaks poke out of the noise. APPLYING THE TECHNIQUE You’ll want to add in the pink noise very sporadically during a mix, because the noise covers up high frequency sounds like hi-hat. You cannot get an accurate idea of the complete mix while you’re mixing with noise injected into the bus, but what you can do is make sure that all the important instruments are being heard properly. (Similarly, when listening in a car system, road noise will often mask lower frequencies.) Typically, I’ll take the mix to the point where I’m fairly satisfied with the sound. Then I’ll add in lots of noise—no less than 10 dB below 0 with dance mixes, for example, which typically have restricted dynamics anyway—and start analyzing. While listening through the song, I pay special attention to vocals, snare, kick, bass, and leads (with this much noise, you’re not going to hear much else in the song anyway). 
It’s very easy to adjust their relative levels, because there’s a limited range between overload on the high end, and dropping below the noise on the low end. If all the crucial sounds make it into that window and can be heard clearly above the noise without distorting, you have a head start toward an equal balance. Also note that the “noise test” can uncover problems. If you can hear a hi-hat or other minor part fairly high above the noise, it’s probably too loud. I’ll generally run through the song a few more times, carefully tweaking each track for the right relative balance. Then it’s time to take out the noise. First, it’s an incredible relief not to hear that annoying hiss! Second, you can now get to work balancing the supporting instruments so that they work well with the lead sounds you’ve tweaked. Although so far I’ve only mentioned instruments being above the noise floor, there are actually three distinct zones created by the noise: totally masked by the noise (inaudible), above the noise (clearly audible), and “melded,” where an instrument isn’t loud enough to stand out or soft enough to be masked, so it blends in with the noise. I find that mixing rhythm parts so that they sound melded can work if the noise is adjusted to a level suitable for the rhythm parts. FADING OUT Overall, I estimate spending only about 3% of my mixing time using the injected noise, and I don't use it at all for some mixes. But sometimes, especially with dense mixes, it’s the factor responsible for making the mix sound good over multiple systems. Mixing with noise may sound crazy, but give it a try. With a little practice, there are ways to make noise work for you. 
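An editor like Sound Forge will generate pink noise for you, but if your tools don’t, here’s a rough pure-Python sketch using the Voss-McCartney approximation. All names are mine, and for real mixing you’d render this to a file or use a proper generator:

```python
import random

def pink_noise(n_samples, n_rows=16, seed=1):
    """Approximate pink (1/f) noise via the Voss-McCartney trick: sum
    several white-noise rows, where each row is refreshed half as often
    as the one before it, piling more energy into the low frequencies."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1.0, 1.0) for _ in range(n_rows)]
    out = []
    for i in range(1, n_samples + 1):
        k = (i ^ (i - 1)).bit_length() - 1   # count of trailing zeros of i
        rows[min(k, n_rows - 1)] = rng.uniform(-1.0, 1.0)
        out.append(sum(rows) / n_rows)       # averaging keeps output in [-1, 1]
    return out

noise = pink_noise(44100)  # one second's worth at 44.1 kHz
```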
  11. It's not just a signal processor, but an audio interface you can aggregate with Mac and Windows By Craig Anderton DigiTech’s iPB-10 is best known as a live-performance multieffects pedal that you program with an iPad, but it’s also an excellent 44.1kHz/24-bit, USB 2.0 stereo audio interface for guitar. USING IT WITH THE MAC Core Audio is plug-and-play. Patch the iPB-10 USB output into an available Mac USB port. Now, select “DigiTech iPB-10 In/Out” as the input and output under Audio MIDI Setup. With my quad core Mac, the system played reliably with a buffer size of 64 samples in Digital Performer, and even at 45 samples with simple Ableton Live projects (Fig. 1)—that’s excellent performance. Fig. 1: Setting up the iPB-10 for Ableton Live with a 45-sample buffer size. USING IT WITH WINDOWS The driver isn’t ASIO, so in your host select WDM or one of its variants as the preferred driver mode (MME or DirectX drivers work, too, but latency is objectionable). With Sonar using WDM, the lowest obtainable latency was 441 samples. With WASAPI, it was 220 samples. Mixcraft 6 listed the lowest latency as 5ms (see Fig. 2; Mixcraft doesn’t indicate sample buffers). Fig. 2: Working as a Windows WaveRT (WASAPI) interface with Acoustica’s Mixcraft 6. I was surprised the iPB-10 drivers were compatible with multiple protocols, but in any event, the performance equaled that of many dedicated audio interfaces. ZERO-LATENCY MONITORING A really cool feature is that under the iPB-10’s Settings, you can adjust the ratio of what you’re hearing from the DAW’s output via USB, and what’s coming from the iPB-10. If you monitor from the iPB-10, you essentially get zero-latency monitoring with effects, because you’re listening to the iPB-10 output—not monitoring through the computer. Typically, for this mode, you’d turn off the DAW track’s input echo (also called input monitor), and set the iPB-10 XLR Mix slider for 50% USB and 50% iPB-10. 
(If you’re monitoring from the 1/4” outs, choose the 1/4” Mix slider). Then, you’ll hear your DAW tracks from the USB side, and your guitar—with zero latency and any iPB-10 processing—from the iPB-10 side. If your computer is fast enough that latency isn’t an issue, then you can monitor solely via USB, and turn on your DAW’s input monitoring/input echo to monitor your guitar through the computer. This lets you hear the guitar through any plug-ins inserted into your guitar’s DAW track. THERE’S MORE! As the audio interfacing is class-compliant and doesn’t require installing drivers, with Core Audio or WDM/WASAPI/WaveRT drivers you can use more than one audio interface (called “aggregation”). So keep your go-to standard audio interface connected, but also use the iPB-10 for recording guitar. As long as your host supports one of the faster Windows audio protocols—or you’re using a recent Mac—I think you’ll be pleasantly surprised by the performance.
  12. It’s easier to carry a laptop than an arsenal of keyboards—but you’ll need to optimize your computer for the task By Craig Anderton There are two important truths when using virtual instruments live: Your entire system could die at any moment, but your system will probably give you years of reliable operation. So feel free to file this under “hope for the best, but plan for the worst”—and in this article, we'll plan for the worst.
MAC VS. PC
For desktop computing, I use both; with laptops, for almost a decade I used only Macs, but now I use only Windows. Computers aren't a religion to me—and for live performance, they're simply appliances. I'd switch back to Mac tomorrow if I thought it would serve my needs better, but here's why I use Windows live.
Less expensive. If the laptop dies, I'll cope better.
Often easier to fix. With my current Windows laptop, replacing the system drive takes about 90 seconds. Laptop drives are smaller and more fragile than desktop drives, so this matters.
Easier to replace. Although it's getting much easier to find Macs, if it's two hours before the gig in East Blurfle and an errant bear ate your laptop, you'll have an easier time finding a Windows machine.
Optimization options. This is a double-edged sword, because if you buy a laptop from your local big box office supply store, it will likely be anti-optimized for live performance with virtual instruments. We'll cover tweaks that address this, but you’ll have to enter geek mode.
If you just want a Windows machine that works... There are companies that integrate laptops for music. I've used laptops from PC Audio Labs and ADK, and they've all outperformed stock laptops. I’ve even used a PC Audio Labs x64 machine that was optimized for video editing, but the same qualities that make it rock for video make it rock and roll for music.
Of course, if you're into using a Mac laptop (e.g., MainStage is your act's centerpiece, or you use Logic to host virtual instruments), be my guest—I have a Mac laptop that runs Mavericks as well as a Windows machine that’s currently on Windows 7, and they’re both excellent machines. Apple makes great computers, and even a MacBook Air has enough power to do the job. But if you're starting with a blank slate, or want to dedicate a computer to live performance, Windows is currently a pretty compelling choice.
PREPARING FOR DISASTER
There are two main ways disaster can strike.
The computer can fail entirely. One solution—although pricey—is a redundant, duplicate system. Consider this an insurance policy, because it will seem inexpensive if your main machine dies an hour before the gig. Another solution is to use a master keyboard controller with internal sounds. If your computer blows up, at least you'll have enough sounds to limp through the gig. If you must use a controller-only keyboard, then carry an external tone module you can use in emergencies. If you have enough warning, you can buy a new computer before the gig. In that case, though, you'll need to carry everything needed to re-install the software you use. One reason I use Ableton Live for live performance and hosting virtual instruments is that the demo version is fully functional except for the ability to save—it won't time out in the middle of a set, or emit white noise periodically. I carry a DVD-ROM and USB memory stick (redundancy!) with everything needed to load into Live to do my performance; if all else fails I can buy a new computer, install Live, and be ready to go after making the tweaks we'll cover shortly.
Software can become corrupted. If you use a Mac, bring along a Time Machine hard drive. With Windows, enable System Restore—the performance hit is very minor. Returning to a previous configuration that’s known to be good may be all you need to fix a system problem.
For extra security, carry a portable hard drive with a disk image of your system drive. Macs make it easy to boot from an external drive, as do Windows machines if you're not afraid to go into the BIOS and change the boot order.
WINDOWS 7 TWEAKS
Neither Windows nor the Mac OS is a real-time operating system. Music is a real-time activity. Do you sense trouble ahead? A computer juggles multiple tasks simultaneously, so it gets around to musical tasks when it can. Although computers are pretty good at juggling, occasional heavy CPU loading (“spikes”) can cause audio dropouts. Although one option is increasing latency, this produces a much less satisfying feel. A better option is to seek out and destroy the source of the spikes. Your ally in this quest is DPC Latency Checker, a free program available at www.thesycon.de/eng/latency_check.shtml. LatencyMon (www.resplendence.com/latencymon) is another useful program, but a little more advanced. DPC Latency Checker monitors your system and shows when spikes occur (Fig. 1); you can then turn various processes on and off to see what's causing the problems. Fig. 1: The left screen shows a Windows laptop with its wireless card enabled, and its power plan set to Balanced. The one on the right shows what happens when you disable wireless and change the power plan to High Performance. From the Start menu, choose Control Panel, then open Device Manager. Disable (don't uninstall) any hardware devices you're not using, starting with any internal wireless card—it’s a major spike culprit. Even if your laptop has a physical switch to turn wireless on and off, that's not the same as actually disabling it (Fig. 2). Also disable any other hardware you're not using: internal USB camera, ethernet port, internal audio (which you should do anyway), fingerprint sensor, and the like. Fig. 2: In Device Manager, disable any hardware you’re not using. Onboard wireless is particularly problematic. By now you should see a lot less spiking.
Next, right-click on the Taskbar and open Task Manager. You'll see a variety of running tasks, many of which may be unnecessary. Click on a process, then click on End Process to see if it makes a difference. If you stop something that interferes with the computer's operation, no worries—you can always restart the computer, and the service will restart as well. Finally, click on Start, type msconfig into the Search box, then click on the Startup tab. Uncheck any unneeded programs that load automatically on startup. If all of this seems too daunting, don't worry; simply disabling the onboard wireless in Device Manager will often solve most spiking issues.
BUT WAIT—THERE'S MORE!
Laptops try hard to maximize battery life. For example, if you're just composing an email, the CPU can loaf along at a reduced speed, thus saving power. But for real-time performance situations, you want as much CPU power as possible. Always use an AC adapter; relying on the battery alone will almost invariably shift the computer into a lower-power mode. With Windows machines, the most important adjustment is to create a power plan with maximum CPU power. With Windows 7, choose Control Panel > Power Options and create a new power plan. Choose the highest-performance power plan as a starting point. After creating the plan, click on Change Plan Settings, then click on Change Advanced Power Settings. Open up Processor Power Management, and set the Maximum and Minimum processor states to 100% (Fig. 3). If there's a system cooling policy, set it to Active to discourage overheating. Fig. 3: Create a power plan that runs the processor at 100% for both minimum and maximum power states. Laptops will have an option to specify different CPU power states for battery operation; set those to 100% as well. If overheating becomes an issue (it shouldn't), you can throttle back a bit on the CPU power, say to 80%.
Just make sure the minimum and maximum states are the same; I've experienced audio clicks when the CPU switched states. (And in the immortal words of Herman Cain, “I don't have the facts to back me up” but it seems this is more problematic with FireWire interfaces than USB.)
A HAPPIER LAPTOP
A laptop's connectors are not built to rock and roll specs. If damaged, the result may be an expensive motherboard replacement. Ideally, every computer connection should be a break-away connection; Macs with MagSafe power connectors are outstanding in this respect. With standard power connectors, use an extension cable that plugs between the power supply plug and your computer's jack. Secure this extension cable (duct tape, tie it around a stand leg, or whatever) so that if there's a tug on the power supply, it will pull the power supply plug out of the extension cable jack—not the extension cable plug out of the computer. Similarly, with USB memory sticks or dongles, use a USB extender (Fig. 4) between the USB port and external device. Fig. 4: A USB extension cable can help keep a USB stick from breaking off at its base (and possibly damaging your motherboard) if pressure is applied to it. It’s also important to invest in a serious laptop travel bag. I prefer hardshell cases, which usually means getting one from a photo store and customizing it for a computer instead of cameras. Finally, remember when going through airport scanners to put your laptop last on the conveyor belt, after your other personal effects. People on the incoming side of security can’t run off with your laptop, but those who’ve gone through the scanner can if they get to your laptop before you do. Craig Anderton is Editor Emeritus of Harmony Central.
  13. Let there be light—if you have a USB port by Craig Anderton When I saw my first light powered by a USB port, I was smitten. Whether trying to get work done on a plane without disturbing the grumpy person sitting next to me or running a live laptop set in a dark club, I had found the answer. Or more realistically, almost the answer...it used an incandescent bulb, drew a lot of current, weighed a lot, and burned out at an early age. I guess it was sort of the Elvis Presley of laptop accessories. Mighty Bright introduced a single-LED USB light a few years ago that fulfilled the same functions, but much more elegantly. And now, unlike Scotty in Star Trek, they actually can give you more power—with their new 2-LED USB Light.
What You Need to Know
The two white LEDs are controlled by a push switch, so you can light one or both LEDs. Compared to the single-LED version, having the extra LED available makes a big difference in terms of throwing more light on a subject. The gooseneck is very flexible but holds its position, and the weight is reasonable. The size is about the same as the single-LED version, and it fits in the average laptop bag without problems.
Limitations
My only concern is the weight—not because it weighs a lot, but because USB ports aren’t exactly industrial-strength. However, if you plug into a laptop’s side USB port and bend the light in a U so the top is over where the USB connector plugs into the port (Fig. 1), it becomes balanced and places little weight on the port itself. Fig. 1: Optimum laptop positioning for the 2-LED USB Light.
Conclusions
Once you have one of these things sitting around, you’ll find other uses. Given how many computers have USB ports on the back, plug this in and you’ll be able to see where all your mystery wires are routed.
I take the 2-LED USB Light when I’m on the road, and combined with a general-purpose charger for USB devices, the combo makes a dandy night light—helpful in strange hotel rooms when the fire alarm goes off in the middle of the night, and you don’t want to trip on your way out the door. Also, lots of keyboards have USB ports, and assuming a port isn’t occupied with a thumb drive or similar, the 2-LED USB Light can help illuminate the keyboard’s top panel. Considering the low cost and long LED life (100,000 hours, which equals three hours a day for over 90 years), I’d definitely recommend having one of these babies around. You never know when you’re going to need a quick light source, and these days, it’s not too hard to find a suitable USB connector to provide the power.
Resources
Musician’s Friend Mighty Bright 2-LED USB Light online catalog page ($14.00 MSRP, $11.99 “street”) Mighty Bright’s 2-LED USB Light product web page Craig Anderton is Editor Emeritus of Harmony Central.
  14. It's time to play "stompbox reloaded" by Craig Anderton The studio world is not experiencing a compressor shortage. Between hardware compressors, software compressors, rack compressors, and whatever other compressors I’ve forgotten, you’re pretty much covered. But there may be a useful compressor that you haven’t used recently: one of the stompbox persuasion. With most DAWs including inserts so you can integrate external effects easily, interfacing stompboxes isn’t very difficult. Yes, you’ll need to match levels (likely attenuating on the way into the compressor and amplifying on the way out), but that’s not really a big deal. But why bother? Unlike studio compressors, which are a variation on limiters and whose main purpose is to control peaks, guitar compressors were generally designed to increase sustain by raising the level as a string decayed (Fig. 1). Fig. 1: The upper waveform is an uncompressed guitar signal, while the lower one adds compression to increase the sustain. Both waveforms have the same peak level, but the compressed guitar’s decay has a much higher level. In fact some compressors were called “sustainers,” and used designs based on the Automatic Level Control (ALC) circuitry used to keep mic signals at a constant level for CB and ham radio. The gain control elements were typically field-effect transistors (FET) or photoresistors, and had minimal controls—usually sustain, which was either a threshold control or input level that “slammed” the compressor input harder—and output level. Some guitar players felt that compressors made the sound “duller,” so a few designs tuned the compressor feedback to compress lower-frequency signals more than higher-frequency signals—the opposite of a de-esser. Many guitarists patched a preamp between the guitar and compressor to give even more sustain because higher input levels increased the amount of compression. 
Putting compressors before octave dividers often made the dividers track more reliably, and adding a little compression before an envelope-controlled filter (like the Mutron III) gave less variation between the low and high filter frequencies. Some legendary compressors include the Dan Armstrong Orange Squeezer (Fig. 2), MXR Dyna-Comp, and BOSS CS-1. But many companies produced compressors, and continue to do so. Fig. 2: Several years ago the classic Dan Armstrong Orange Squeezer was re-issued. Although it has since been discontinued, schematics for Dan’s original design exist on the web.
APPLICATIONS RE-LOADED
Bass. Not all compressors designed for guitar could handle bass frequencies, especially not a synthesizer set for sub-bass. So, it’s usually best to patch the compressor in parallel with your bass signal. With a hardware synthesizer or bass, split the output and feed two interface (or amp) inputs, one with the compressor inserted. With a virtual synthesizer or recorded track, send a bus output to a spare audio interface output, patch that to the compressor input, then patch the compressor output to a spare audio interface input. Use the bass channel’s send control to send signal into the bus that feeds the compressor. Synthesizers are particularly good with vintage compressors because you can edit the amplitude envelope for a fast attack and quick decay before the sustain. Turn the bass output way up to hit the compressor hard, and you’ll get the aggressive kind of attack you hear with guitar.
Drums. Guitar compressors can give a punchy, “trashy” sound that’s good for punk and some metal. As with synth bass, parallel compression is usually best to keep the kick drum sound intact (Fig. 3). Adding midrange filtering before or after the compression can give an even funkier sound. Fig. 3: This setup provides parallel compression. The channel on the left is the drum track; the one on the right is a bus with an “external insert” plug-in.
This plug-in routes the insert effect to your audio interface, which allows patching in a hardware compressor as if it were a plug-in. The drum channel has a send control to feed some drum signal to the compressor bus, whose output goes to the master bus.
Bus compression. You wouldn’t want to compress a master bus with a stompbox compressor (well, maybe you would!), but try sending bass and drums to an additional bus, then compressing that bus and patching it in parallel with the unprocessed bass and drums sound. This makes for a fatter sound, and “glues” the two instruments together. What’s more, many older compressors had some degree of distortion, which adds even more character to any processing. Vintage compressors with relatively short decay times (most stompbox compressors had fixed attack or decay times) give a “pumping” sound to rhythm sections.
EMULATING STOMPBOX COMPRESSION WITH MODERN GEAR
Don’t have an old compressor around? There are ways to come close with modern gear. If your compressor has a lookahead option, turn it off. Set the attack to the absolute minimum time possible. Release time varied depending on the designer; a shorter release (around 100ms) gives a “rougher” sound with chords, but some compressors had quite long release times—over 250ms—to smooth out the decaying string sound. Set a high compression ratio, like 20:1, and a low threshold, as older compressors had low thresholds to pick up weak string vibrations. Finally, try overloading the compressor input to create distortion, which also gives a harder attack. Craig Anderton is Editor Emeritus of Harmony Central.
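The emulation recipe from the article above (near-instant attack, roughly 100 ms release, a high 20:1 ratio, and a low threshold) can be sketched as a simple per-sample gain computer. This is an illustrative model with parameter values of my choosing, not a circuit model of any particular pedal:

```python
import math

def compress(samples, sample_rate=44100, threshold_db=-40.0, ratio=20.0,
             release_ms=100.0):
    """Stompbox-style compressor sketch: instant attack, high ratio,
    low threshold, single release time constant (illustrative only)."""
    release_coeff = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Instant attack; exponential release on the envelope follower
        env = level if level > env else release_coeff * env + (1 - release_coeff) * level
        level_db = 20 * math.log10(max(env, 1e-9))
        # Apply gain reduction only above the threshold, per the 20:1 ratio
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1 - 1 / ratio)
        out.append(x * 10 ** (gain_db / 20))
    return out
```

With these settings a full-scale signal is pulled down to about -38 dB while quiet string decays below the threshold pass through untouched, which is exactly the sustain-boosting behavior (after makeup gain) described earlier.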
  15. Whether you're quantizing sequences, programming drum machines, creating beats, or synching to tempo, it helps to know rhythmic notation by Craig Anderton As we all know, lots of great musicians have been able to create an impressive body of work without knowing how to read music. But regardless of whether you expect to read lead sheets on the fly—or will ever need to do so—there are some real advantages to “knowing the language.” In particular, it’s hard not to run into references to rhythmic notation. Today’s DAWs quantize to particular rhythmic values, and effects often sync to particular rhythms as well. And if you want to program your own beats, it also helps to know how rhythm works. So let’s forget the tough stuff and take some baby steps into the world of rhythmic notation. This brief overview provides the basics, but if you’re new to all this, you’ll probably need to read it over several times and fool around a bit with something like a drum machine before it all falls into place.
Measures. A piece of music is divided into smaller units called measures (also called bars), and each measure is divided into beats. The number of beats per measure, and the rhythmic value of the beats, depends on both the composition and the time signature.
Time Signatures. A time signature (also called metric signature) defines a piece of music’s rhythmic nature by describing a measure’s rhythmic framework. The time signature is notated at the beginning of the music (and whenever there’s a change) with two numbers, one on top of the other. The top number indicates the number of beats in each measure, while the bottom number indicates the rhythmic value of the beat (e.g., 4 is a quarter note, 8 is an eighth note, etc.). If that doesn’t make sense yet, let’s move on to some examples.
Rhythmic Values for Notes. With a measure written in 4/4, there are four beats per measure, and each beat represents a quarter note.
Thus, there are four quarter notes per measure of 4/4 music. Quarter note symbol With a 3/4 time signature, the numerator (upper number) indicates that there are three beats per measure, while the denominator indicates that each of these beats is a quarter note. There are two eighth notes per quarter note, so there are eight eighth notes per measure of 4/4 music. Eighth note symbol There are four 16th notes per quarter note, which means there are 16 16th notes per measure of 4/4 music. 16th note symbol There are eight 32nd notes per quarter note. If you’ve been following along, you’ve probably already guessed there are 32 32nd notes per measure of 4/4 music. 32nd note symbol There are also notes that span a greater number of beats than quarter notes. A half note equals two quarter notes. Therefore, there are two half notes per measure of 4/4 music. Half note symbol A whole note equals four quarter notes, so there is one whole note per measure of 4/4 music. (We keep relating these notes to 4/4 music because that’s the most commonly used time signature in contemporary western music.) Whole note symbol
Triplets. The notes we’ve covered so far divide measures by factors of two. However, there are some cases where you want to divide a beat into thirds, giving three notes per beat. Dividing a quarter note by three results in eighth-note triplets. The reason we use the term “eighth-note triplets” is that the eighth note is closest to the actual rhythmic value. Dividing an eighth note by three results in 16th-note triplets. Dividing a 16th note by three results in 32nd-note triplets. Eighth-note triplet symbol Note the numeral 3 above the notes, which indicates triplets.
Rests. You can also specify where notes should not be played; this is indicated by a rest, which can be the same length as any of the rhythmic values used for notes. Rest symbols (from left to right): whole note, half note, quarter note, eighth note, and 16th note
Dotted Notes and Rests.
Adding a dot next to a note or rest means that it should play one and a half times as long as the indicated value. For example, a dotted eighth note lasts as long as three 16th notes (since an eighth note is the same length as two 16th notes). A dotted eighth note lasts as long as three 16th notes
Uncommon Time Signatures. 4/4 (and to a lesser extent 3/4) are the most common time signatures in our culture, but they are by no means the only ones. In jazz, both 5/4 (where each measure consists of five quarter notes) and 7/4 (where each measure consists of seven quarter notes) are somewhat common. In practice, complex time signatures are often played like a combination of simpler time signatures; for example, some 7/4 compositions would have you count each measure not as 1, 2, 3, 4, 5, 6, 7 but as 1, 2, 3, 4, 1, 2, 3. It’s often easier to think of 7/4 as a bar of 4/4 followed by a bar of 3/4 (or a bar of 3/4 followed by a bar of 4/4, depending upon the phrasing), since, as we mentioned, 4/4 and 3/4 are extremely common time signatures.
Other Symbols. There are many, many other symbols used in music notation. > indicates an accent; beams connect multiple consecutive notes to simplify sight reading; and so on. Any good book on music notation can fill you in on the details. Two 16th notes beamed together Drawing beams on notes makes them easier to sight-read compared to seeing each note drawn individually.
FOR MORE INFORMATION
These books can help acquaint you with the basics of music theory and music notation. Alfred’s Pocket Dictionary of Music is a concise but thorough explanation of music theory and terms for music students and teachers alike. Practical Theory Complete by Sandy Feldstein is a self-instruction music theory course that begins with the basics—explanations of the staff and musical notes—and ends with lesson 84: “Composing a Melody in Minor.” Craig Anderton is Editor Emeritus of Harmony Central.
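The note-value arithmetic from the rhythmic notation article above translates directly into code. With a quarter-note beat, a quarter note lasts 60/BPM seconds, a dot multiplies the duration by 1.5, and a triplet fits three notes into the space of two. A small sketch (the function name is mine):

```python
def note_duration_sec(note_value, bpm, dotted=False, triplet=False):
    """Duration of a note in seconds, where note_value is 4 for a quarter
    note, 8 for an eighth, etc., and the beat is a quarter note."""
    dur = (60.0 / bpm) * (4.0 / note_value)   # a whole note spans 4 beats
    if dotted:
        dur *= 1.5        # a dot adds half the note's own value
    if triplet:
        dur *= 2.0 / 3.0  # three triplets fit in the space of two notes
    return dur

# At 120 BPM: a quarter note is 0.5 s, and a dotted eighth
# equals three 16th notes (0.375 s), as described above.
```

This kind of calculation is exactly what a DAW does internally when it converts a tempo-synced delay time or quantize grid into milliseconds.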
  16. Get more emotion out of your ’boards by putting on the pressure By Craig Anderton Synthesizer keyboards are basically a series of on-off switches, so wresting expressiveness from them is hard. There’s velocity, which produces dynamics based on how fast the key goes from key up to key down; you also have mod wheel, footpedal, pitch bend, and usually one or two sustain switches, all of which can help with expressiveness. But some keyboards have an additional, and powerful, way to increase expressiveness: Aftertouch, also called Pressure.
THE TWO KINDS OF AFTERTOUCH
Aftertouch is a type of MIDI control signal. Like pitch bend, it’s not grouped in with MIDI Continuous Controller signals but is deemed important enough to be its own dedicated control signal. It produces an output based on how hard you press on the keys after they’re down. There are two types of aftertouch:
Channel aftertouch (or pressure). This is the most common form of aftertouch, where the average pressure being applied to the keys produces a MIDI control signal. More pressure increases the value of the control signal. From a technical standpoint, the usual implementation places a force-sensing resistor under the keyboard keys. Pressing on this changes the resistance, which produces a voltage. Converting this voltage to a digital value produces MIDI aftertouch data.
Key (or polyphonic) aftertouch (or pressure). Each key generates its own control signal, and the output value for each key corresponds to the pressure being applied to that key.
AFTERTOUCH ISSUES
Key aftertouch is extremely expressive, but with a few exceptions—notably the Keith McMillen Instruments QuNexus (Fig. 1) and CME Xkey USB Mobile MIDI Keyboard—it’s not common in today’s keyboards. Fig. 1: Keith McMillen Instruments QuNexus is a compact keyboard with polyphonic aftertouch. The late, great synthesizer manufacturer Ensoniq made several keyboards with key aftertouch, but the company is no more.
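In raw MIDI terms (per the MIDI 1.0 specification), channel pressure is a two-byte message with status 0xDn, while polyphonic key pressure is a three-byte message with status 0xAn that also carries the note number. A minimal sketch of building each message (the function names are mine):

```python
def channel_aftertouch(channel, pressure):
    """Channel pressure: one message for the whole keyboard (status 0xDn)."""
    assert 0 <= channel <= 15 and 0 <= pressure <= 127
    return bytes([0xD0 | channel, pressure])

def poly_aftertouch(channel, note, pressure):
    """Polyphonic key pressure: one message per key (status 0xAn)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= pressure <= 127
    return bytes([0xA0 | channel, note, pressure])

# Channel aftertouch on channel 1 (0), half pressure
print(channel_aftertouch(0, 64).hex())   # d040
# Poly aftertouch for middle C (note 60) on channel 1
print(poly_aftertouch(0, 60, 64).hex())  # a03c40
```

The extra note-number byte in every polyphonic message, multiplied across all held keys, is why key aftertouch generates so much more data than channel aftertouch.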
Another concern is that key aftertouch is data-intensive, because every key produces data. In the early days of MIDI, this much data often “choked” MIDI sequencers running on old computers that couldn’t keep up. Although many virtual synthesizers (and even hardware ones) can accept key aftertouch data, most likely you’ll be using a keyboard with channel aftertouch. Back then, even channel aftertouch could produce too much data, so most MIDI sequencers included MIDI data filters that let you filter out aftertouch and prevent it from being recorded. Most DAWs that support MIDI still include filtering, and for aftertouch, this usually defaults to off. If you want to use aftertouch, make sure it’s not being filtered out (Fig. 2). Fig. 2: Apple Logic (left) and Cakewalk Sonar (right) are two examples of programs that let you filter out particular types of data, including aftertouch, from an incoming MIDI data stream. Depending on the keyboard, the smoothness of how the aftertouch data responds to your pressure can vary considerably. Some people refer to a keyboard as having “afterswitch” if it’s difficult to apply levels of pressure between full off and full on. However, most recent keyboards implement aftertouch reasonably well, and some allow for a very smooth response. A final issue is that many patches don’t incorporate aftertouch as an integral element, because the sound designers have no idea whether the controller someone will be using has aftertouch. So, most sounds are designed to respond to mod wheel, velocity, and pitch bend, because those are standard. If you want a patch to respond to aftertouch, you’ll need to decide which parameter(s) you want to control, do your own programming to assign aftertouch to these parameters, and then save the edited patch.
AFTERTOUCH APPLICATIONS
Now that you know what aftertouch is and how it works, let’s consider some useful applications.
Add “swells” to brass patches.
Assign aftertouch to a lowpass filter cutoff, then press harder on the keys to make the sound brighter. You may need to lower the initial filter cutoff frequency slightly so the swell can be sufficiently dramatic. You could even assign aftertouch to both filter cutoff and, to a lesser extent, level, so that the level increases along with the brightness.
Guitar string bends. Assign aftertouch to pitch so that pressing on the key raises pitch—just like bending a string on a guitar. However, there are two cautions: Don’t make the response too sensitive, or the pitch may vary when you don’t want it to; and this works best when applied to single-note melodies, unless you want more of a pedal steel-type effect.
Introduce vibrato. This is a very popular aftertouch application. Assign aftertouch to pitch LFO depth, and you can bring vibrato in and out convincingly on string, guitar, and wind patches. The same concept applies to introducing tremolo to a signal.
“Bend” percussion. Some percussion instruments become slightly sharp when first struck. Assign aftertouch to pitch; if you play the keys percussively and hit them hard, you’re bound to apply at least some pressure after the key is down, and bend the pitch up for a fraction of a second. This can add a degree of realism, even if the effect is mostly subliminal.
Morph between waveforms. This may take more effort to program if you need to control multiple parameters to do morphing. For example, I use this technique with overdriven guitar sounds to create “feedback.” I’ll program a sine wave an octave or octave and a fifth above the guitar note, and tie its level and the guitar note’s level to aftertouch so that pressing on a key fades out the guitar while fading in the “feedback.” This can create surprisingly effective lead guitar sounds.
Control signal processors.
Although not all synths expose signal processing parameters to MIDI control, if they do, pressure can be very useful—mix in echoed sounds, increase delay feedback, change the rate of chorusing for a more randomized effect, increase feedback in a flanger patch, and the like. I’d venture a guess that few synthesists use aftertouch to its fullest—so do a little parameter tweaking, and find out what it can do for you. Craig Anderton is Editor Emeritus of Harmony Central.
  17. Time for a quick trip down the disinformation superhighway by Craig Anderton Maybe it’s just the contentious nature of the human race, but as soon as digital audio appeared, the battle lines were drawn between proponents of analog and those who embraced digital. A lot of claims about the pros and cons of both technologies have been thrown back and forth; let’s look at what’s true and what isn’t.
A device that uses 16-bit linear encoding with a 44.1 kHz sampling rate gives “CD quality” sound. Not all 16-bit/44.1 kHz systems exhibit the same audio quality. The problem is not with the digital audio per se, but with interfacing to the analog world. The main variables are the A/D converter and output smoothing filter, and to a lesser extent, the D/A converter. Simply replacing a device’s internal A/D converter with an audiophile-quality outboard model that feeds an available AES/EBU or S/PDIF input can produce a noticeable (and sometimes dramatic) change. What’s more, one of digital audio’s dirty little secrets is that when the CD was introduced, some less expensive players used 12-bit D/A converters—so even though the CD provided 16 bits of resolution, it never made it past the output. I can’t help but think that some of the early negative reaction to the CD’s fidelity was about limitations in the playback systems rather than an inherent problem with CDs.
16 bits gives 96 dB of dynamic range, and 24 bits gives 144 dB of dynamic range. There are two things wrong with this statement. First, it’s not really true that each bit gives 6 dB of dynamic range; for reasons way too complex to go into here, the actual number is (6.02 × N) + 1.76 dB, where N is the number of bits. Based on this equation, an ideal 16-bit system has a dynamic range of 98.08 dB. As a rule of thumb, though, 6 dB per bit is a close enough approximation for real-world applications. Going from theory to practice, though, many factors prevent a 16-bit system from reaching its full potential.
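As a quick sanity check of the formula above, here it is in code (the function name is mine):

```python
def ideal_dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal, undithered N-bit converter,
    using the (6.02 x N) + 1.76 dB formula quoted above."""
    return 6.02 * bits + 1.76

print(f"16-bit: {ideal_dynamic_range_db(16):.2f} dB")  # 98.08 dB
print(f"24-bit: {ideal_dynamic_range_db(24):.2f} dB")  # 146.24 dB
```

So the rule-of-thumb figures of 96 and 144 dB understate the theoretical numbers slightly, while real converters, as noted below, fall well short of them.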
Noise, calibration errors within the A/D converter, improper grounding techniques, and other factors can raise the noise floor and lower the available dynamic range. Many real-world 16-bit devices offer (at best) the performance of an ideal 14-bit device, and if you find a 24-bit converter that really delivers 24 bits of resolution...I want to buy one! Also note that for digital devices, dynamic range is not the same as signal-to-noise ratio. The AES has a recommended procedure for testing the noise performance of a digital converter; real-world devices spec out in the 87 to 92 dB range, not the 96 dB that’s usually assumed. (By the way, purists should note that all the above refers to undithered converters.) Digital has better dynamic range than analog. With quality components and engineering, analog circuits can give a dynamic range in excess of 120 dB — roughly equivalent to theoretically perfect 20-bit operation. Recording and playing back audio with that kind of dynamic range is problematic for either digital or analog technology, but when 16-bit linear digital recording was introduced and claimed to provide “perfect sound forever,” the reality was that quality analog tape running Dolby SR had better specs. With digital data compression like MP3 encoding, even though the sound quality is degraded, you can re-save it at a higher bit rate to improve quality. Data compression programs for computers (as applied to graphics, text, samples, etc.) use an encoding/decoding process that restores a file to its original state upon decompression. However, the data compression used with MP3, Windows Media, AAC, etc. is very different; as engineer Laurie Spiegel says, it should be called “data omission” instead of “data compression.” This is because parts of the audio are judged as not important (usually because stronger sounds are masking weaker sounds), so the masked parts are simply omitted and are not available for playback. 
Once discarded, that data cannot be retrieved, so a copy of a compressed file can never exhibit higher quality than the source. Don’t ever go over 0 VU when recording digitally. The reason for this rule is that digital distortion is extremely ugly, and when you go over 0 VU, you’ve run out of headroom. And frankly, I do everything I can to avoid going over 0. However, as any guitarist can tell you, a little clipping can do wonders for increasing a signal’s “punch.” Sometimes when mixing, engineers will let a sound clip just a tiny bit—not enough to be audible, but enough to cut some extremely sharp, short transients down to size. It seems that as long as clipping doesn’t occur for more than about 10 ms or so, there is no subjective perception of distortion, but there can be a perception of punch (especially with drum sounds). Now, please note I am by no means advocating the use of digital distortion! But if a mix is perfect except for a couple of clipped transients, you needn’t lose sleep over it unless you can hear that there’s distortion. And here’s one final hint: If something contains unintentional distortion that’s judged as not being a deal-breaker, it’s a good idea to include a note to let “downstream” engineers (e.g., those doing mastering) know it’s there and is supposed to stay there. You might also consider normalizing a track with distortion to -0.1 dB, as some CD manufacturers will reject anything that hits 0 because they will assume it was unintentional. Digital recording sounds worse than vinyl or tape because it’s unnatural to convert sound waves into numbers. The answer to this depends a lot on what you consider “natural,” but consider tape. Magnetic particles are strewn about in plastic, and there’s inherent (and severe) distortion unless you add a bias in the form of an ultrasonic AC frequency to push the audio into the tape’s linear range. 
What’s more, there’s no truly ideal bias setting: you can raise the bias level to reduce distortion, or lower it to improve frequency response, but you can’t have both, so any setting is by definition a compromise. There are also issues with the physics of the head that can produce response anomalies. Overall, the concept of using an ultrasonic signal to make magnetic particles line up in a way that represents the incoming audio doesn’t seem all that natural. Fig. 1: This is the equalization curve your vinyl record goes through before it reaches your ears. Vinyl doesn’t get along with low frequencies, so there’s a huge amount of pre-emphasis added during the cutting process, and equally huge de-emphasis on playback—the RIAA curve (Fig. 1) boosts the response by up to 20 dB at low frequencies and cuts by up to 20 dB at high frequencies, which hardly seems natural. We’re also talking about a playback medium that depends on dragging a rock through yards and yards of plastic. Which of these options is “most natural” is a matter of debate, but it doesn’t seem that any of them can make too strong a claim about being “natural”!
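For the curious, the RIAA playback curve discussed above can be approximated from its three standard time constants (3180 µs, 318 µs, and 75 µs). This sketch computes the de-emphasis gain relative to 1 kHz:

```python
import math

# RIAA playback de-emphasis built from its three standard time constants:
# poles near 50 Hz and 2122 Hz, a zero near 500 Hz.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_gain(f):
    s = 2j * math.pi * f  # complex frequency, s = j*2*pi*f
    return abs((1 + s * T2) / ((1 + s * T1) * (1 + s * T3)))

def db_re_1khz(f):
    """Playback gain in dB, normalized to 0 dB at 1 kHz."""
    return 20 * math.log10(riaa_playback_gain(f) / riaa_playback_gain(1000.0))

print(f"20 Hz:  {db_re_1khz(20):+.1f} dB")     # bass boosted on playback
print(f"20 kHz: {db_re_1khz(20000):+.1f} dB")  # treble cut on playback
```

The output confirms the article's figures: roughly +19 dB of bass boost at 20 Hz and a similar treble cut at 20 kHz, relative to 1 kHz.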
  18. Re-Thinking Reverb
    Create ethereal, unusual reverb effects with phase reversal by Craig Anderton Reverb hasn’t changed a lot over the years: you emulate a gazillion sound waves bouncing off surfaces. But if you’re a gigging musician, one thing that may have changed is how you hear reverb. Back when you were in the audience, you heard reverb coming at you from all sides of a room. Then when you graduated to the stage, reverb started sounding different: you initially heard the sound of your amp or monitors, and then you heard the reverb as it reflected off the walls, ceiling, and other surfaces. The effect is a little like pre-delay at first, but as the reverb waves continue to bounce, you hear a sort of “bloom” where the reverb level increases before decaying. Controlling reverb to give this kind of effect can produce some lovely, ethereal results. It also has the added bonus of not “stepping on” the original signal being reverberated, because the reverb doesn’t reach its full level until after the original signal has occurred. It’s not hard to set up this effect; here’s how (Fig. 1). CREATING ETHEREAL REVERB You’ll need two sends from the track to which you want to add reverb that go to two reverb effects buses. These should have the same settings for send level, pan, and pre-post. Insert your reverb of choice into one of the effects bus returns, and set the reverb parameters for the desired reverb sound. For starters, set a decay time of around 2 seconds. Next, insert the same reverb into the other effects bus return, with the same settings. If you can’t do something like drag/copy the existing reverb into another track, save the first reverb’s settings as a preset so you can call it up in the other reverb. The returns should have identical settings as well. Assuming the sends are pre-fader, turn down the original signal’s track fader so you hear only the reverb returns (Fig. 1). Fig. 
1: The yellow lines represent sends from a guitar track to two send returns; each has a reverb inserted in this example. One return also has a plug-in that reverses the phase. Now it’s time for the “secret sauce”: reverse the phase (also called polarity) of one of the reverb returns. Different DAWs handle this in different ways. Some may have a phase button, while others might have a phase button only for tracks but not for send returns. For situations like this, you can usually insert some kind of phase-switching plug-in like Cakewalk Sonar’s Channel Tools, PreSonus Studio One Pro’s Mixtool, or Ableton Live’s Phase. Reversing the phase should cause the reverb to disappear. If not, then there’s a mismatch somewhere with your settings—check the send control levels, reverb parameters, reverb return controls, etc. Another possibility is that the reverb has some kind of randomizing option to give more “motion.” For example, with Overloud’s Breverb 2, you’ll need to go into the Mod page and turn down the Depth control. In any event, find the cause of the problem and fix it before proceeding. Finally, decrease the reverb decay time on one of the reverbs (e.g., to around 1 second), and start playback. When a signal first hits the reverbs, they’ll be identical or at least very similar and cancel; as the reverb decays, the two reverbs will diverge more, so there will be less cancellation and the reverb tail will “bloom.” Because the cancellation reduces the overall level of the reverbs, you’ll likely need to compensate for this by increasing the reverb return levels. However, note that the two reverb returns need to remain identical with respect to each other. I find the easiest way to deal with this is to group the two faders so that adjusting one fader automatically adjusts the other one. If you’re using long reverb times and there’s not much difference between the two decay times, the volume will be considerably softer. 
In that case, you may need to send the bus outputs to another bus so you can raise the overall level of the combined reverb sound. APPLICATIONS Because it takes a while for the reverb to develop, this technique probably isn’t something you’ll want to use on uptempo songs. It’s particularly evocative with vocals, especially ones where the phrasing has some “space,” as well as with languid, David Gilmour-type solo guitar lines. But I’ve also tried this ethereal reverb effect on individual snare hits and a variety of other signals, so feel free to experiment—maybe you’ll discover additional applications. Happy ambience!
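The cancellation-and-bloom behavior is easy to see numerically. This idealized sketch models the two returns as simple exponential tails (real reverb tails are far more complex) with assumed 2-second and 1-second decay constants, one polarity-reversed:

```python
import math

# Two reverb tails that start identical but decay at different rates.
# One is polarity-reversed, so the output is their difference:
# silence at the onset, a swell as the tails diverge, then decay.
# Exponential tails and the decay constants are simplifying assumptions.

def bloom(t, slow=2.0, fast=1.0):
    return math.exp(-t / slow) - math.exp(-t / fast)

levels = [bloom(t / 10) for t in range(0, 60)]          # 0 to 6 s, 0.1 s steps
peak_at = max(range(len(levels)), key=lambda i: levels[i]) / 10
print(f"output at t=0: {bloom(0.0):.3f}")               # full cancellation
print(f"peak of the bloom near t = {peak_at} s")
```

The difference is zero at the onset, swells to a peak as the tails diverge (at t = 2·ln 2, about 1.39 s, for these constants), then decays, which is exactly the "bloom" described above.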
  19. Replace your pickup selector switch with a panpot by Craig Anderton I've tried several designs over the years to allow a continuous pan between the bridge and neck pickups, much as a mixer panpot sweeps between the left and right channels. This isn’t as easy as it sounds, but if you’re in an experimental mood, this mod gives you a wider range of colors from your axe without needing outboard boxes like equalizers. However, there are some tradeoffs. A pickup selector switch has no ambiguous positions: it’s either neck, bridge, or both—end of story. A panpot control has two unambiguous positions at the extremes of rotation, but there's a whole range of possible sounds in between. These variations are subtle; while it's more difficult to dial in an exact setting than with a standard pickup selector switch, in return there are more possibilities. ABOUT THE SCHEMATIC This circuit uses a standard potentiometer for volume, a dual-ganged potentiometer to do the panning, and an SPDT (single-pole, double-throw) switch with a third, center-off position. Although you won’t need to drill any extra holes if your guitar has a selector switch/volume/tone control combination, the dual-gang pot is thicker than standard pots; this could be a problem with thinner-body guitars. Due to all the variables in this circuit, I recommend running a pair of wires (hot and ground) from each pickup to a test jig so you can experiment with different parts values. To avoid hum problems, make sure the metal cases of any pots or switches are grounded. If you end up deciding this mod’s for you, build the circuitry inside the guitar. The dual-ganged panpot (R3) provides the panning. Ideally, this would have a log taper for one element and an antilog taper for the other element, but these kinds of pots are very difficult to find. A suitable workaround is to use a standard dual-ganged linear taper pot and add "tapering" resistors R1 and R2. 
If these are 20% of the pot's total resistance, they’ll change the pot taper to a log/antilog curve. The panpot value can range between 100k and 1 Meg, which would require 22k and 220k tapering resistors, respectively. Higher resistance values will provide a crisper, more accurate high end, while lower values will reduce the highs and output somewhat. A 100k panpot with 22k tapering resistors will cause noticeable dulling and a loss of volume unless you use active pickups, in which case lower values are preferred to higher values; however, some people might prefer the reduced high end when playing through distortion, because this can warm up the sound. The volume control (R4) can be a 250k, 500k, or 1 Meg log (audio) taper control. The three-position switch provides a tone control designed specifically for this circuit, and connects a capacitor (C1) across one pickup, the other pickup, or neither pickup (the tone switch's center position). I was surprised at how switching in the capacitor can change the timbre at the panpot's mid position, and this definitely multiplies the number of tonal options. The optimum capacitor value will depend on the pickups and amp you use, but will probably range from 10 nF (0.01 uF; less bassy) to 50 nF (0.05 uF; more bassy). For even more versatility, you could connect the switch center terminal to ground, and wire different capacitor values from each switch terminal to its corresponding pickup. Two final notes: adjust the two pickups for the same relative output by adjusting their distance from the strings. If one pickup predominates, it will shift the panpot's apparent center off to one side. Finally, switching one pickup out of phase provides yet another bunch of sounds; also note that removing the tapering resistors may produce a feel that you prefer, particularly if one of the pickups is out of phase. 
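The effect of the tapering resistors is straightforward to model as a voltage divider. This sketch (idealized; it ignores pickup source impedance and downstream loading) shows how a 22k resistor across the lower leg of a 100k linear element bends its response toward a log curve:

```python
# How a "tapering" resistor bends a linear pot toward a log (audio) taper.
# The wiper-to-ground leg of the pot is paralleled by the tapering resistor,
# so at mid-rotation the output drops well below 50%, approximating a log
# curve. Idealized model: source and load impedances are ignored.

POT = 100e3    # one element of the dual-ganged panpot
TAPER = 22e3   # tapering resistor, roughly 20% of the pot value

def output_fraction(rotation):
    """Output voltage fraction for wiper rotation 0.0 (off) to 1.0 (full)."""
    lower = rotation * POT        # wiper to ground
    upper = (1 - rotation) * POT  # wiper to hot end
    if lower == 0:
        return 0.0
    loaded = lower * TAPER / (lower + TAPER)  # pot leg paralleled by taper R
    return loaded / (upper + loaded)

# At mid-rotation the output is about 23%, not 50% -- close to a log taper.
print(round(output_fraction(0.5), 3))
print(round(output_fraction(1.0), 3))
```

This is why a linear dual-gang pot plus two fixed resistors is an acceptable substitute for a hard-to-find log/antilog dual pot.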
  20. Signal processing and cool effects aren't just for electric guitars by Craig Anderton Although the goal with acoustic guitar is often to create the most realistic, organic sound possible, a little electric-type processing can enhance an acoustic’s sound in many ways that open up new creative avenues. We’ll assume your acoustic has been electrified (presumably with a piezo pickup) and can produce a signal of sufficient level, and of the proper impedance, to drive contemporary effects units. If you're not sure about this, contact the manufacturer of the pickup assembly, or whoever did the installation. There are quite a few processors dedicated to acoustic guitar, like Zoom’s A3 (Fig. 1). Fig. 1: Zoom’s A3 packages acoustic guitar emulations and effects in a floor pedal format. While such units are convenient and cost-effective, this article takes more of an à la carte approach with conventional, individual effects. IMPROVING TONE Most electrified acoustics have frequency response anomalies—peaky midrange, boomy bass, and so on—caused primarily by the interaction among the guitar body, pickup, and strings. While some of these anomalies are desirable (classical guitars wouldn't sound as full without the bass resonance most instruments exhibit), some are unwanted. Smoothing out the response is a task for equalization. There are two main types of equalizers (EQ for short) used with acoustic guitar: graphic and parametric. A graphic EQ splits the audio spectrum into numerous frequency bands (Fig. 2). Fig. 2: Source Audio’s Programmable EQ is a graphic EQ that can save and recall custom settings. Depending on the model, the range of frequencies (bandwidth) covered by each band can range from as wide as an octave to as narrow as 1/3 octave. The latter types are more expensive because of the extra resolution. The response of each band can be boosted to accent the frequency range covered by that band, or attenuated to make a frequency range less prominent. 
Graphic equalizers are excellent for general tone-shaping applications such as making the sound "brighter" (more treble), "warmer" (more lower midrange), "fuller" (more bass), etc. A parametric equalizer has fewer bands—typically two to four—but offers more precision since you can dial in a specific frequency and bandwidth for each band, as well as boost or cut the response. So, if your guitar is boomy at a particular frequency, you can reduce the response at that specific frequency only and set a narrow bandwidth to avoid altering the rest of the sound. Or, you can set a wider bandwidth if you want to affect more of the sound. Either type of equalization can help balance your guitar with the rest of the instruments in a band. For example, both the guitar and the male voice tend to fall into the midrange area, which means that they compete to a certain extent. Reducing the guitar's midrange response will leave more "space" for your voice. Another example: if your band has a bass player, you might want to trim back on the bass to avoid a cluttered low end. However, if your band is bassless, then try boosting the low end to help fill out the bottom a bit. Note that piezo pickups have response anomalies, and equalization is very helpful for evening out the response. For more information, check out the article “Make Acoustic Guitar Piezo Pickups Sound Great” at Gibson.com. BRIGHTNESS OR FULLNESS WITHOUT EQUALIZATION Many multieffects offer pitch transposition. I've found that transposing an acoustic guitar sound up an octave (for a brighter sound) or down an octave (for a fuller sound) can sound pretty good, providing that you mix the transposed signal way in the background of the straight sound—you don't want to overwhelm the straight sound, particularly since the processed sound will generally sound artificial anyway. BIGGER SOUNDS A delay line can simulate having another guitarist mimicking your part to create a bigger-than-life, ensemble sound. 
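For the mathematically inclined, the parametric cut described earlier can be sketched as a standard peaking biquad using the widely used Audio EQ Cookbook formulas; the 200 Hz center, -6 dB cut, and Q of 2 are arbitrary example values, not a recommendation:

```python
import cmath, math

# A narrow parametric cut modeled as an RBJ-cookbook peaking biquad.
# Example values (200 Hz "boomy" center, -6 dB cut, Q = 2) are arbitrary.
FS = 44100.0  # assumed sample rate

def peaking_coeffs(f0, gain_db, q):
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / FS
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    a_coefs = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    return b, a_coefs

def magnitude(b, a, f):
    """Evaluate |H(z)| on the unit circle at frequency f."""
    z = cmath.exp(-2j * math.pi * f / FS)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

b, a = peaking_coeffs(200.0, -6.0, 2.0)
print(f"gain at 200 Hz: {20 * math.log10(magnitude(b, a, 200.0)):.1f} dB")
print(f"gain at 2 kHz:  {20 * math.log10(magnitude(b, a, 2000.0)):.2f} dB")
```

The narrow bandwidth (Q = 2) is the point: the 200 Hz boom drops 6 dB while frequencies a decade away are essentially untouched.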
Run your guitar through a delay set for a short delay (30 to 50 milliseconds). Turn the feedback (or regeneration) and modulation controls to minimum; this produces a slapback echo, giving a tight doubling effect. Another option is chorusing, which creates more of a swirling, animated sound as opposed to a straight doubling. The settings are similar to slapback, except use a shorter delay (around 10 to 30 milliseconds) and add a little modulation to vary the delay time and produce the "swirling" effect. Note: with most delay effects, it's best to set the balance (mix) control so that the delayed sound is less prominent than the dry sound. INCREASED SUSTAIN Guitars are percussive instruments that produce a huge burst of energy when you first pluck a string, which then rapidly decays to a much lower level. Often this is what you want, but in some cases the decay occurs too quickly and you might prefer more sustain. A limiter is just the ticket. This device decreases the guitar's dynamic range by holding the peaks to a preset level called a threshold, then optionally amplifying the limited signal to bring the peaks back up to their original level (Fig. 3). Fig. 3: The signal with 4 dB limiting (blue) has a higher average level than the original recording. Don't set the threshold too low, or the guitar will sound "squeezed" and unnatural. Also, although many people confuse limiters and compressors, these are not identical devices. A compressor tries to maintain a constant output in the face of varying input signals, which means that not only are high-level signals attenuated, but low-level signals may be subject to a lot of amplification. The above explanation of limiting is fairly basic, and there are several variations on this particular theme. Early model limiters would simply clamp the signal to the threshold; newer models can do that, but may also allow for a gentler limiting action to provide a more natural sound. 
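The limiting action reduces to a simple idea: clamp peaks to a threshold, then apply makeup gain. This toy sketch ignores attack/release smoothing (which real limiters need to avoid distortion), just to show why the decay ends up louder relative to the attack:

```python
# Idealized sketch of limiting: clamp peaks to a threshold, then apply
# makeup gain so peaks return to their original level. Real limiters use
# attack/release smoothing; this instantaneous clamp just shows the idea.

def limit(samples, threshold=0.5):
    original_peak = max(abs(s) for s in samples)
    clamped = [max(-threshold, min(threshold, s)) for s in samples]
    makeup = original_peak / threshold  # restore the original peak level
    return [s * makeup for s in clamped]

# A percussive pluck: big transient, quick decay to a low sustain level.
pluck = [1.0, 0.6, 0.3, 0.2, 0.15, 0.1]
limited = limit(pluck)
print(limited)  # the sustain is now louder relative to the transient
```

Note that the quiet tail samples come out twice as loud while the peak stays at its original level, which is exactly the added sustain described above.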
PEDALLING YOUR WAY TO BIGGER SOUNDS If you have a two-channel amp or mixer, one trick that's applicable to all of the above options is to split your guitar signal into two paths: one carries the straight guitar sound, while the other goes through a volume pedal before feeding the desired signal processor. Use the volume pedal to go from a normal to a processed acoustic guitar sound, and bring in as much of the processed sound as you want. The possibilities for processing acoustic guitar are just as exciting as for processing electric guitars. The best way to learn, though, is not just by reading this article—my intention is to get you inspired enough to experiment. You never know what sounds you'll discover as you plug your guitar output into various device inputs.
  21. Get the most out of today’s digital wonderboxes by Craig Anderton Everyone’s always looking for a better guitar sound, and while the current infatuation with vintage boutique effects has stolen a bit of the spotlight from digital multieffects, don’t sell these processors short. When properly programmed, they can emulate a great many “vintage” timbres, as well as create sounds that are extremely difficult to achieve with analog technology. As with many other aspects of audio, there is no one “secret” that gives the ultimate sound; great sounds are often assembled, piece by piece. Following are ten tips to help you put together a better guitar sound using multieffects. Line 6's POD HD500 is one of today's most popular digital multieffects for guitar. 1. DON’T BELIEVE THE INPUT LEVEL METERS Unintentional digital distortion can be nasty, so minimize any distortion other than what’s created intentionally within the multieffects. The input level meters help you avoid input overload, but they may not tell you about the output. For example, a highly resonant filter sound (e.g., wah) can increase the signal level internally so that even if the original signal doesn’t exceed the unit’s input headroom, it can nonetheless exceed the available headroom elsewhere. Some multieffects meters can monitor the post-processed signal, but this isn’t a given. If the distortion starts to “splatter” yet the meters don’t indicate overload, try reducing the input level. 2. USE PROPER GAIN-STAGING If a patch uses many effects, then there are several level-altering parameters, and these should interact properly—just like gain-staging with a mixer. Suppose an equalizer follows distortion. The distortion will probably include input and output levels, and the filter will have level boost/cut controls for the selected frequency. As one illustration of gain-staging, suppose the output filter boosts the signal at a certain frequency by 6 dB. 
If the signal coming into the filter already uses up the available headroom, asking it to increase by 6 dB means crunch time. Reducing the distortion output level so that the signal hitting the filter is at least 6 dB below the maximum available headroom lets the filter do its work without distortion. 3. ADD EQ PEAKS AND DIPS FOR REALISM Speakers, pickups, and guitar bodies have anything but a flat response. Much of the characteristic difference between different devices is due to frequency response variations—peaks and dips that form a particular “sonic signature.” For example, I analyzed some patches David Torn programmed for a multieffects and found that he likes to add 1 kHz boosts. On the other hand, I often add a slight boost around 3.5 kHz so guitars can cut through a mix even at lower volume levels. With 12-strings, I usually cut the low end to get more of a Rickenbacker sound. Parametric EQ is ideal for this type of processing. 4. CUT DELAY FEEDBACK LOOP HIGH FREQUENCIES Each successive repeat with tape echo and analog delay units has progressively fewer high frequencies, due to analog tape’s limited bandwidth. If your multieffects can reduce high frequencies in the delay line’s feedback path, the sound will resemble tape echo rather than straight digital delay. 5. A SOLUTION FOR THE TREMOLO-IMPAIRED If your pre-retro-craze multieffects doesn’t have a tremolo, check for a stereo autopanner function. This shuttles the signal between the left and right channels at a variable rate (and sometimes with a choice of waveforms, such as square to switch the sound back and forth, or triangle for a smoother sweeping effect). To use the autopanner for tremolo, simply monitor one channel and turn down the other one. The signal in the remaining channel will fade in and out cyclically, just like a tremolo. 6. 
CABINET SIMULATORS ARE COOL, BUT… Many multieffects have speaker simulators, which supposedly recreate the frequency response of a typical guitar speaker in a cabinet. If you’re feeding the multieffects output directly into a mixer or PA instead of a guitar amp and this effect is not active, the timbre will often be objectionably buzzy. Inserting the speaker emulator in the signal chain should give a more realistic sound. However, if you go through a guitar amp and the emulator is on, the sound will probably be much duller, and possibly have a thin low end as well—so bypass it. You might be surprised how many people have thought a processor sounded bad because they plugged an emulated cabinet output designed for direct feeds to mixers into a guitar amp. 7. USE A MIDI PEDAL FOR MORE EXPRESSION A multieffects will generally let you assign at least one parameter per patch to a MIDI continuous controller number. For example, if you set echo feedback to receive continuous controller message 04, and set a MIDI pedal to transmit message 04, then moving the pedal will vary the amount of echo feedback. You can usually scale the response as well, so that moving the pedal from full off to full on creates a change that’s less than the maximum amount. This allows greater precision because the pedal covers a narrower range. Scaling can sometimes invert the “sense” of the pedal, so that pressing down creates less of an effect rather than more. 8. MAKE SURE STEREO OUTPUTS DON’T CANCEL Some cheapo effects, and a large number of “vintage” effects, create stereo with time delay effects by sending the processed signal to one channel, and an out-of-phase version of the processed signal to the other channel. While this can sound pretty dramatic with near-field monitoring, should the two outputs ever collapse to mono, the effect will cancel and leave only the dry sound. To test for this, plug the stereo outs into a two-channel mono amp or mixer (set the channel pans to center). 
Start with one channel at normal listening volume, and the second channel down full. Gradually turn up the second channel; if the effect level decreases, then the processed outputs are out of phase. If the effect level increases, all is well. 9. PARALLELING MULTIEFFECTS WITH GUITAR AMPS One way to enrich a sound is to double a multieffects with an amp, and mix the sounds together. Although you could simply split the guitar through a Y-cord and feed both, here’s a way that can work better. To supplement the multieffects sound with an amp sound, send the multieffects “loop send” (if available) to the amp input. This preserves the way the multieffects input stage alters your guitar. If you’d rather supplement the basic amp sound with a multieffects, feed the amp’s loop send to the multieffects signal input to preserve the amp’s preamp characteristics. 10. BE AWARE OF THE PROBLEMS WITH PRESETS Many musicians evaluate a multieffects by stepping through the presets, but you need to be aware of two very important issues. First, whoever designed the presets wasn’t you—it’s very doubtful they were using the same guitar, pickups, string gauge, pick, touch, etc. If a preset works with your playing style, it’s due to luck more than anything else. Second, presets are usually designed to sound impressive during demos, and will be loaded up with effects. Sometimes creating your own cool presets simply involves taking a factory preset, removing selected effects, and adjusting an emulated amp’s drive control to match your playing style. Well, that covers the 10 tips. Have fun strumming those wires—and remember that the magic word for all guitar multieffects is “equalization.”
  22. Prevent "tone suckage" with this simple test procedure by Craig Anderton Is your guitar sounding run down? Tired? Dull and anemic? It may not have the flu; it may just be feeding the wrong kind of input. A guitar pickup puts out relatively weak signals, and the input it feeds can either coddle those signals or stomp on them. It’s all a question of the input’s impedance, so let's look at a simple test for determining whether that amp or signal processor you’re feeding is a signal coddler or a signal stomper. You might think that testing for input impedance is pretty esoteric, and that you need an expensive impedance tester, or at least have to find one of those matchbooks that says “Learn Electronics at Home in Your Spare Time.” But in this case, testing for impedance is pretty simple. You’ll need a standard-issue analog or digital volt-ohmmeter (VOM), as sold by Radio Shack and other electronics stores (a good digital model should cost less than $40). This is one piece of test equipment no guitarist should be without anyway, as you can test anything from whether your stage outlets are really putting out 117V to whether your cable is shorted. You’ll also need a steady test tone generator, which can be anything from an FM tuner emitting a stream of white noise to a synthesizer set for a constant tone (or even a genuine test oscillator). WHAT IS IMPEDANCE? If theory scares you, skip ahead to the next subhead. If you can, though, stay tuned since impedance crops up a lot if you work with electronic devices. Impedance is a pretty complex subject, but we can just hit the highlights for the purposes of this article. An amp or effect’s input impedance essentially drapes a resistance from the input to ground, thus shunting some of your signal to ground. The lower the resistance to ground, the greater the amount of signal that gets shunted. 
The guitar’s output impedance, which is equivalent to putting a resistance in series with your guitar and the amp input, works in conjunction with the input impedance to impede the signal. If you draw an equivalent circuit for these two resistances, it looks suspiciously like the schematic for a volume control (Fig. 1). Fig. 1: The rough equivalent of impedance, expressed as resistance. If the guitar’s output impedance is low and the amp input impedance is high, there’s very little loss. Conversely, a high guitar output impedance and low amp input impedance creates a lot of loss. The reason a low input impedance "dulls" the sound is that a pickup’s output impedance changes with frequency—at higher frequencies, the guitar pickup exhibits a higher output impedance. Thus, low-frequency signals may not be attenuated that much, but high frequencies could get clobbered. Buffer boards and on-board preamps can turn the guitar output into a low impedance output for all frequencies, but many devices are already designed to handle guitars, so adding anything else would be redundant. The trick is finding out which devices are guitar-friendly and which aren’t; you have to be particularly careful with processors designed for the studio, as there may be enough gain to kick the meters into the red but not a high enough input impedance to preserve your tone. Hence, the following test. IMPEDANCE TESTING This test takes advantage of the fact that impedance and resistance are, at least for this application, roughly equivalent. So, if we can determine the effect’s input resistance to ground, we’re covered. (Just clipping an ohmmeter across a dummy plug inserted in the input jack isn’t good enough; the input will usually be capacitor-coupled, making it impossible to measure resistance without taking the device’s cover off.) Wire up the test jig in Fig. 2, which consists of a 1 Meg linear taper pot and two 1/4" phone jacks. 
Plug in the signal generator and amplifier (or other device being tested), then perform the following steps. Fig. 2: The test jig for measuring impedance. Test points are marked in blue. 1. Set the VOM to the 10V AC range so it can measure audio signals. You may later need to switch to a more sensitive range (e.g., 2.5V or so) if the test oscillator signal isn’t strong enough for the meter to give a reliable reading. 2. Set R1 to zero ohms (no resistance). 3. Measure the signal generator level by clipping the VOM leads to test points 1 and 2. The polarity doesn’t matter since we’re measuring AC signals. Try for a signal generator level between 1 and 2 volts AC but be careful not to overload the effect and cause clipping. 4. Rotate R1 until the meter reads exactly 50% of what it did in step 3. 5. Be very careful not to disturb R1’s setting as you unplug the signal generator and amplifier input from the test jig. 6. Set the VOM to measure ohms, then clip the leads to test points 1 and 3. 7. Measure R1’s resistance. This will essentially equal the input impedance of the device being tested. INTERPRETING THE RESULTS If the impedance is under 100k, I’d highly recommend adding a preamp or buffer board between your guitar and amp or effect to eliminate dulling and signal loss. The range of 100k to 200k is acceptable although you may hear some dulling. An input impedance over 200k means the designer either knows what guitarists want, or got lucky. Note, however, that more is not always better. Input impedances above approximately 1 megohm are often more prone to picking up radio frequency interference and noise, without offering much of a sonic advantage. So there you have it: amaze your friends, impress your main squeeze (well, on second thought maybe not), and strike fear into the forces of evil with your new-found knowledge. A guitar that feeds the right input impedance comes alive, with a crispness and fidelity that’s a joy to hear. Happy picking—and testing. 
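For the curious, the voltage-divider math behind step 4 is easy to verify: when the series pot R1 drops exactly half the signal, its resistance must equal the input impedance it feeds. Here's a quick sketch of that math in Python (the impedance values are hypothetical examples, not measurements from the article):

```python
def divider_out(v_in, z_source, z_input):
    """Voltage divider: source (guitar or pot) impedance in series,
    amp/effect input impedance to ground, as in Fig. 1."""
    return v_in * z_input / (z_source + z_input)

# Step 4 of the test: when the series pot R1 equals the unknown
# input impedance, the meter reads exactly half the source level.
z_in = 250e3                          # hypothetical input impedance, ohms
v = divider_out(2.0, 250e3, z_in)     # R1 dialed to 250k, 2V source
print(v)                              # 1.0 -- half of the 2V source

# Why a low input impedance "stomps" a passive pickup: against a
# high-frequency pickup output impedance of ~100k, a 50k input
# shunts away most of the signal, while 1M barely touches it.
print(round(divider_out(1.0, 100e3, 50e3), 3))   # ~0.333
print(round(divider_out(1.0, 100e3, 1e6), 3))    # ~0.909
```

This also shows why the article draws the "add a buffer" line around 100k: the lower the input impedance relative to the pickup's output impedance, the larger the fraction of signal (especially highs) that gets shunted to ground.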
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
23. Not quite sure how digital audio works? Here's your refresher course by Craig Anderton Digital technology—which brought us home computers, $5 calculators, cars you can't repair yourself, Netflix, and other modern miracles—has fundamentally re-shaped the way we record and listen to music. Yet there's still controversy over whether digital audio represents an improvement over analog audio. Is there some inherent aspect of digital audio that justifies this skepticism? Let's take a look at the basics of digital audio: why it’s different from analog sound, its benefits, and its potential drawbacks. Although digital audio continues to improve, the more you know about it, the more you can optimize your gear to take full advantage of what digital audio can offer. BASICS OF SOUND What we call “sound” is actually variations in air pressure (at least that’s the accepted explanation) that interact with our hearing mechanism. The information received by our ears is passed along to the brain, which processes this information. However, while acoustic instruments automatically generate changes in air pressure which we hear as sound, electronic instruments create their sound in the form of voltage variations. Hearing these voltage variations requires converting them into moving air. A transducer is a device that converts one form of energy into another; for example, a loudspeaker can convert voltage variations into changes in air pressure, while a microphone can convert air pressure changes into voltage variations. Other transducers include guitar pickups (which convert mechanical energy to electrical energy), and tape recorder heads (which convert magnetic energy into electrical energy). If you look at audio on a piece of test equipment, it looks like a squiggly line, which graphically represents sound (Fig. 1). Fig. 1: An audio waveform. This could stand for air pressure changes, voltage changes, string motion, or whatever. 
A straight horizontal line represents a condition of no change (i.e. zero air pressure, zero voltage, etc.), and the squiggly line is referenced to this base line. For example, if the line is showing a speaker cone’s motion, excursions above the base line might indicate that the speaker cone is moving outward, while excursions below the base line might indicate that the speaker cone is moving inward. These excursions could just as easily represent a fluctuating voltage (such as what comes out of a synthesizer) that alternates between positive and negative, or even the air pressure changes that occur if you strike a piano key. The squiggly line is called a “waveform.” Let’s assume that striking a single piano note produces the waveform shown in Fig. 1. If we take that waveform and press an exact analogy of the waveform into a vinyl record, that record will contain the sound of a piano note. Now, suppose we play that record. As the stylus traces this waveform, the phono cartridge will send out voltage variations which are analogous to the original air pressure changes caused by the piano note. This low-level signal then passes through an amplifier, which augments the voltage enough to drive a speaker cone back and forth. The final result is that the speaker cone follows the waveform motion, thus producing the same air variations originally pressed into the vinyl record. Notice that each stage transfers a signal in its own medium (vinyl, wire, air, etc.) that is analogous to the input signal; hence the term, analog recording. Unfortunately, analog recording is not without its faults. First of all, if the record has pops, clicks, or other problems, these will be added on to the original sound and show up as undesirable “artifacts” in the output. Second, the cartridge will add its own coloration; if it can’t follow rapid changes due to mechanical inertia, distortion will result. 
Phono cartridge preamps also require massive equalization (changes in frequency response) to accommodate cartridge limitations. Amplifiers add noise and hum, and speakers are subject to all kinds of distortion and other problems. So, while the signal appearing at the speaker output may be very similar to what was originally recorded, it will not duplicate the original sound due to these types of errors. When you duplicate a master tape or press it into vinyl, other problems will occur due to the flawed nature of the transfer process. In fact, every time you dub an analog sound, or pass it through a transducer, the sound quality deteriorates. THE CONSISTENCY OF DIGITAL Digital audio removes some of the variables from the recording and playback process by converting audio into a string of numbers, and then passing these numbers through the audio chain (in a bit, we’ll see exactly why this improves the sound). Fig. 2 illustrates the conversion process from an analog signal into a number. Fig. 2: The digital conversion process. Fig. 2a represents a typical waveform which we want to record. A computer takes a “snapshot” of the signal every few microseconds (a microsecond is 1/1,000,000th of a second) and notes the analog signal's level, then translates this “snapshot” into a number representing the signal's level. Taking additional samples creates the “digitized” signal shown in Fig. 2b. Note that the original signal has been converted into a series of samples, each of which has its own unique value. Let’s relate what we’ve discussed so far to a typical audio system. A traditional microphone picks up the audio signal, and sends it to an Analog-to-Digital Converter, or ADC for short. The computer takes this numerical information and optionally processes it—for example, delays it in the case of a digital delay or, with a sampling keyboard, stores the information in memory. So far so good, but listening to a bunch of numbers does not exactly make for a wonderful audio experience. 
After all, this is an analog world, and our ears hear analog sound, so we need to convert this string of numbers back into an analog signal that can do something useful such as drive a loudspeaker. This is where the Digital-to-Analog Converter (DAC) comes into the picture; it takes each of the numerical samples and re-converts it to a voltage level, as shown in Fig. 2c. A lowpass filter works in conjunction with the DAC to filter the stair-step signal, thus “smoothing” the series of discrete voltages into a continuous waveform (Fig. 2d). We may then take this newly converted analog signal and do all of our familiar analog tricks like putting it through an amplifier/speaker combination. But what’s the point of going through all these elaborate transformations? And doesn’t it all affect the sound? Let’s examine each question individually. The main advantage of this approach is that a digitally-encoded signal is not subject to the deterioration an analog signal experiences. Consider the compact disc, the first example of mass-market digital audio; it stores digital information on a disc which is then read by a laser and converted back into analog. By taking this approach, if a scratch appears on the disc it doesn’t really matter—the laser recognizes only numbers, and will tend to ignore extraneous information. Even more importantly, using digital audio preserves quality as this audio goes through the signal chain. For example, a conventional analog multi-track tape gets mixed down to an analog two-track tape, which introduces some sound degradation due to limits of the two-track machine. It then gets mastered (another chance for error), converted into a metal stamper (where even more errors can occur), and finally gets pressed into a record (and we all know what kinds of problems that can cause, from pops to warpage). At each audio transfer stage, signal quality goes down. 
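The snapshot-and-number process from Fig. 2 can be sketched in a few lines of pure Python. The bit depths, waveform, and sample count below are arbitrary illustration values, not anything specified in the article:

```python
import math

def adc(signal, bits):
    """Quantize samples (floats in -1..1) to 2**bits levels, the way
    an ADC assigns each 'snapshot' a number."""
    levels = 2 ** (bits - 1)      # signed: half the levels per polarity
    return [round(s * (levels - 1)) for s in signal]

def dac(codes, bits):
    """Convert the numbers back to voltage levels (the stair-step of
    Fig. 2c; a real DAC is followed by a smoothing lowpass filter)."""
    levels = 2 ** (bits - 1)
    return [c / (levels - 1) for c in codes]

# Digitize one cycle of a sine wave at 64 samples, then reconstruct it
sine = [math.sin(2 * math.pi * n / 64) for n in range(64)]
restored = dac(adc(sine, 16), 16)

# Quantization error is tiny at 16 bits...
err16 = max(abs(a - b) for a, b in zip(sine, restored))
# ...and much larger at 4 bits, as the resolution discussion later
# in this article predicts
err4 = max(abs(a - b) for a, b in zip(sine, dac(adc(sine, 4), 4)))
print(err16 < err4)   # True
```

Note that nothing about the numbers themselves degrades when they're copied or passed along the chain, which is the whole point of the digital approach.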
With digital recording, suppose you record a piece of music into a computer-based recording system that stores sounds as numbers. When it’s time to mix down, the numbers—not the actual signal—get mixed down to the final stereo or surround master (of course, the numbers are monitored in analog so you can tell what’s going on). Now, we can transfer that digitally-mixed signal directly to the compact disc; this is an exact duplicate (not just an analogy) of the mix, so there's no deterioration in the transfer process. Essentially, the Analog-to-Digital Converter at the beginning of the signal chain “freeze dries” the sound, which is not reconstituted until it hits the Digital-to-Analog Converter in the listener’s audio system. This is why digital audio can sound so clean; it hasn’t been subjected to the petty humiliations endured by an analog signal as it works its way from studio to home stereo speaker. LIMITATIONS OF DIGITAL AUDIO So is digital audio perfect? Unfortunately, digital audio introduces its own problems, which are very different from those associated with analog sound. Let’s consider these one at a time. Insufficient sampling rate. Consider Fig. 3, which shows two different waveforms being sampled at the same sampling rate. Fig. 3: Sampling rate applied to two different waveforms. The original waveforms are the light lines, each sample is taken at the time indicated by the vertical dashed line, and the heavy black line indicates what the waveform looks like after sampling. Fig. 3a is a reasonably good approximation of the waveform, but Fig. 3b just happens to have each sample land on a peak of the waveform, so there is no amplitude difference between samples, and the resulting waveform looks nothing at all like the original. Thus, what comes out of the DAC can, in extreme cases, be transformed into an entirely different waveform from what went into the ADC. 
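The Fig. 3b failure mode is easy to reproduce in code: sample a sine wave at exactly its own frequency, and every sample lands on the same point of the cycle, so the samples show no variation at all. The frequencies and sample rates below are arbitrary illustration values:

```python
import math

def sample(freq_hz, rate_hz, n):
    """Take n samples of a sine wave at the given sample rate."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n)]

# Sampling a 1 kHz tone at only 1 kHz: each sample hits the same
# spot on the waveform, so there's no amplitude difference between
# samples -- the Fig. 3b situation.
aliased = sample(1000, 1000, 8)
print(all(abs(s - aliased[0]) < 1e-9 for s in aliased))  # True

# At 44.1 kHz -- well above twice the signal frequency -- the
# samples trace out the waveform properly.
ok = sample(1000, 44100, 8)
print(len(set(round(s, 6) for s in ok)) > 1)  # True: samples vary
```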
The solution to the above problems is to make sure that enough samples are taken to adequately represent the signal being sampled. According to the Nyquist theorem, the sampling frequency should be at least twice as high as the highest frequency being sampled. There is some controversy as to whether this really is enough, but that’s a controversy we won’t get into here. Filter coloration. As mentioned earlier, we need a filter after the DAC to convert the stair-step samples into something smooth and continuous. The only problem is that filters can add their own coloration, although over the years digital filtering has become much more sophisticated and transparent. Quantization. Another sampling problem relates to resolution. Suppose a digital audio system can resolve levels to 10 mV (1/100th of a volt). Therefore, a level of 10 mV would be assigned one number, a level of 20 mV another number, a level of 30 mV yet another number, and so on. Now suppose the computer is trying to sample a 15 mV signal—does it consider this a 10 mV or 20 mV signal? In either case, the sample does not correspond exactly to the original input level, thus producing a quantization error. Interestingly, note that digital audio has a harder time resolving lower levels (where each quantized level represents a large portion of the overall signal level) than higher levels (where each quantized level represents a small portion of the overall signal level). Thus, unlike analog gear where distortion increases at high amplitudes, digital systems tend to exhibit the greatest amount of distortion at lower levels. Dynamic range errors. A computer cannot resolve an infinite number of quantized levels; therefore, the number of levels it can resolve represents the system's dynamic range. Computers express numbers in terms of binary digits (also called “bits”), and the greater the number of bits, the greater the number of voltage levels it can quantize. 
For example, a four-bit system can quantize 16 different levels, an eight-bit system 256 different levels, and a 16-bit system can resolve 65,536 different levels. Clearly, a 16-bit system offers far greater dynamic range and less quantization error than four or eight-bit systems, and 20 or 24 bits is even better. Incidentally, there’s a simple formula to determine the approximate dynamic range in dB based on the bits used in a digital audio system, where dynamic range = 6 X number of bits. Thus, a 16-bit system offers 96 dB of dynamic range—excellent by any standards. However, this is a theoretical spec. In reality, factors like noise, circuit board layouts, and component limitations reduce the maximum potential dynamic range. THE DIGITAL AUDIO DIFFERENCE When the CD was introduced, most consumers voted with their dollars and seemed to feel that despite any limitations, the CD's audio quality sure beat ticks, pops, and noise. Unfortunately, the first generation of CD players didn't always realize the full potential of the medium; the less expensive ones sometimes used 12-bit converters, which didn't do the sound quality any favors. Also, engineers re-mastering audio for the CD had to learn a new skill set, as what worked with tape and vinyl didn't always translate to digital media. While digital audio may not be perfect, it’s pretty close, and besides, the whole field is still relatively young compared to the decades over which analog audio matured. An alternate digital technology, Direct Stream Digital, was introduced several years ago to a less-than-enthusiastic response from consumers, yet many believe it sounds better than standard digital audio based on PCM technology; furthermore, as of this writing the industry is considering transitioning to 24-bit systems with a 96kHz sampling rate. 
While controversial (many feel any advantage is theoretical, not practical), this does indicate that efforts are being made to further digital audio's evolution. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
24. Lock your bass to the kick (or other drums) for a super-tight groove By Craig Anderton One of life’s better moments is when the bass/drum combination plays like a single person with four hands—and really tight hands at that. When the rhythm section is totally locked, everything else seems just that much tighter. When you’re in the studio, several tools can help lock the rhythm section together. While they’re no replacement for “the real thing” (i.e., humans that play well together!), they can provide some pretty cool effects. Following is one of my favorites: a technique that locks bass to the kick drum so that they hit at exactly the same time. BASS GATE This technique uses a noise gate, a signal processor originally designed to remove hiss from a signal. It typically has two inputs and one output. One of the inputs is for the audio signal you want to clean up, while the output is where the processed signal exits. In between, there’s a “gate” that either is open and lets the signal through, or is closed and blocks the signal. The second input is a “control” input that senses an incoming signal level and converts it into a control signal. If the signal level is above a user-settable threshold, the gate opens and lets the signal through. If the signal level is below the threshold, the gate closes, and there’s no output. Noise gates were very popular in the days of analog tape, which had a consistent level of background hiss. You’d set the gate threshold just above the hiss, so that (at least in theory) any “real” signal, which presumably was higher in level than the hiss, would open the gate. If the signal consisted of just noise, the gate would close, blocking the hiss. Most noise gates can do more than simply turn the signal on and off. Other controls include: Decay: Determines how long it takes the gate to fade out after the control signal goes below the threshold. 
Attack: Sets how long it takes for the gate to fade in after the control signal goes above the threshold (good for attack delay effects). Gating amount: This determines whether the "gate closed" condition blocks the signal completely, or only to a certain extent (e.g., 10 or 20dB below normal). Typically, the control input senses the signal present at the main audio input. However, some hardware noise gates bring this input to its own jack, called a “key” input. This allows some signal other than the main audio input (like a kick drum) to turn the gate on and off. In today’s computer-based recording systems, noise gates typically have a “sidechain” input which acts like a key input. A send from a different audio track can feed the sidechain input as a destination, and thus control that gate independently of the signal going through it. CONNECTIONS Fig. 1 shows the basic setup. The kick track has a send bus, with one of the available destinations being the bass track’s gate sidechain input. Whenever the kick hits, the bass passes through the gate; if there’s no kick signal, the bass track’s gate closes and the bass signal becomes inaudible or is reduced in level. Fig. 1: The kick track’s Bus 1 feeds the PC4K Expander/Gate module’s sidechain input, which is shown as part of the Sonar ProChannel that's “flown out” from the Gated Bass track (track 3). Track 2 carries the unprocessed bass sound. However, note that there are two copies of the bass track, although you don’t necessarily need this. You may want to vary the blend between the gated and “continuous” tracks, or process the gated track—for example, send the bass through some distortion, then gate it and mix this track in behind the main bass track. Every time the kick hits, it lets through the distorted bass burst (which can be kind of cool). Another example involves adding a significant treble or upper midrange boost to the gated track. 
Whenever the kick and bass hit simultaneously, the bass will sound a little brighter, thus better differentiating the two sounds. Also note that the kick track send's post-fader button is turned off, so the send signal is pre-fader. This means the send level is constant regardless of the channel’s fader setting. Having the bass gated on/off can be very dramatic, but don’t forget about using gating to bring in variations on the core sound. Also remember this technique isn’t exclusive to the studio—you can gate live as well. Sure, gating is a “trick”—but it can add some really rhythmic, usable effects. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
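As a footnote to the technique in this article: sidechain gating reduces to a simple idea, in which the kick's level, not the bass's, decides whether each bass sample passes. Here's a toy Python sketch that omits attack/decay ramps; the signals and threshold are made up for illustration:

```python
def sidechain_gate(audio, key, threshold, floor=0.0):
    """Pass each audio sample only while the key (sidechain) signal
    is above the threshold; otherwise attenuate it to 'floor'
    (0.0 = fully closed; a nonzero floor acts like the 'gating
    amount' control, closing the gate only partially)."""
    return [a if abs(k) >= threshold else a * floor
            for a, k in zip(audio, key)]

bass = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]   # steady bass note
kick = [0.9, 0.8, 0.0, 0.0, 0.7, 0.0]   # kick hits on samples 0, 1, 4

print(sidechain_gate(bass, kick, threshold=0.5))
# [0.5, 0.5, 0.0, 0.0, 0.5, 0.0] -- bass audible only under the kick
```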
25. It's not as simple as just placing a mic up against a speaker by Craig Anderton Miking guitar cabinets may seem like a simple process, because all you really need to do is pick up moving air with a mic. But there are many variables: the mic, its placement, the room environment, the cabinet itself, and the amp settings. So, let’s go over some of the most important considerations when miking amp cabinets. MIC SELECTION Many guitarists record with the amp cranked to get “that” sound, so under these circumstances it’s important to choose a mic that can handle high sound pressure levels (SPL). Dynamic mics are ideal for these situations, and the inexpensive Shure SM57 is the classic guitar cabinet mic—many engineers choose it even when cost is no object (Fig. 1). Although dynamic mics sometimes seem deficient in terms of brightness, this doesn’t matter much with amp cabinets, which typically start losing response around 5kHz or so. A couple dB less at 17kHz isn’t going to make a lot of difference. That said, there are also more upscale dynamic mics, like the Electro-Voice RE20 and Sennheiser MD421, which give excellent results. Fig. 1: Shure’s SM57 is the go-to cab mic in many pro and project studios. Condenser mics are often too sensitive for close miking of loud amps, but they can give a more “open” response. They also make good “auxiliary” mics—placing one further back from the amp adds definition to the dynamic primary mic, and picks up room ambience that can add character to the basic amp sound. For condenser mics, AKG’s C414B-ULS is a great, but pricey, choice; their C214 gives similar performance but at a much lower cost. Neumann’s U87 is beyond most budgets, but the more affordable Audio-Technica AT 4051 has a similar character and it’s also great for vocals. Then there’s the ribbon mic. Although ribbon mics used to be fragile, newer models use more modern construction techniques and are much more rugged. 
Ribbon mics have an inherently “warm” personality, and a polar pattern that picks up sounds from the front and back—but not the sides. This characteristic is very useful with multi-cab guitar setups; by choosing which sounds to accept or reject based on mic placement, ribbon mics let you do some pretty cool tricks. Royer’s R-121 and R-101 are popular for miking cabs, and Beyer’s M160 is a classic ribbon mic that’s been used quite a bit with cabs. Regardless of what mic you use, check to see whether the mic has a switchable attenuator (called a “pad”) to reduce the mic’s sensitivity. For example, a -10dB pad will make the mic 10dB less sensitive. With loud amps, engage this to avoid distortion. MIC PLACEMENT First, remember that while every speaker in a cab should sound the same, that’s not always true in practice. Try miking each speaker in exactly the same place, and listen for any significant differences. Start off with the mic an inch or two back from the cone, perpendicular to the speaker, and about half to two-thirds of the way toward the speaker’s edge. To capture more of the cabinet’s influence on the sound (as well as some room sound), try moving the mic a few inches further back from the speaker. Moving the mic closer to the speaker’s center tends to give a brighter sound, while angling the mic toward the speaker or moving it further away provides a tighter, warmer sound. Also, the amp interacts with the room: Placing the amp in a corner or against a wall increases bass. Raising it off the floor also changes the sound. The room’s ambience makes a difference as well. If the room is small and has hard surfaces, the odds are there will be quite a bit of ambient sound making its way into the mic, even if it’s close to the speaker. This isn’t necessarily a bad thing; I’m a fan of ambience, because I find it often adds a more lively feel to the overall sound. DIRECT VS. 
MIKED Some amps offer direct feeds (sometimes with cabinet simulation); combining this with the miked sound can give a “big” sound. However, the miked sound will be delayed compared to the direct sound—about 1ms per foot away from the speaker. This can result in comb filtering, which you can think of as a kind of sonic kryptonite because it weakens the sound. To counteract this, nudge the miked sound earlier in your recording program until the miked and direct sounds line up, and are in phase (Fig. 2). Fig. 2: In the top pair of waveforms, the top waveform is the direct sound and the next one down is the miked signal. Note how it’s delayed compared to the direct sound. In the bottom pair, the miked signal (bottom waveform) has been “nudged” forward so it lines up with the direct sound. THE MIC PLACEMENT “FLIGHT SIMULATOR” IK Multimedia’s AmpliTube 3 (Fig. 3) lets you move four “virtual mics” around in relation to the virtual amp. The results parallel what you’d hear in the “real world,” and you can learn a lot about how mic placement affects the overall sound by moving these virtual mics. While this doesn’t substitute for going into the studio, moving mics around various amps, and monitoring the results, it’s a great introduction. Nor is AmpliTube alone; Softube’s Metal Room offers two cabs and mics (Fig. 4), Overloud’s TH2 has two moveable mics for their cabinets (Fig. 5), and MOTU’s Live Room G plug-in for Digital Performer 8 (Fig. 6) also allows various mic positions for three different mics. Fig. 3: IK Multimedia’s AmpliTube offers four mics you can place in various positions. Fig. 4: Softube’s Metal Room has two cabs, each with two mics you can position as desired. Fig. 5: Overloud’s TH2 has two mics for covering their cabs. Fig. 6: MOTU’s Digital Performer 8 has two “live room” plug-ins, one for guitar and one for bass, that provide for various miking options. Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
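As a footnote to the Direct vs. Miked section above: the "about 1ms per foot" rule of thumb converts to a sample nudge as follows, assuming sound travels roughly 1130 feet per second (the mic distance and sample rates below are arbitrary examples):

```python
def mic_nudge_samples(distance_ft, sample_rate=44100,
                      speed_of_sound_fps=1130.0):
    """Number of samples to shift the miked track earlier so it
    lines up with the direct (DI) track."""
    delay_sec = distance_ft / speed_of_sound_fps
    return round(delay_sec * sample_rate)

# A mic 2 feet from the cab arrives ~1.77ms late; at 44.1kHz
# that's a nudge of 78 samples.
print(mic_nudge_samples(2))   # 78
```

In practice, visually lining up the waveforms as in Fig. 2 is usually easier than calculating, but the math is handy for sanity-checking how big the nudge should be.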