Everything posted by Anderton

  1. It's not just a signal processor, but an audio interface you can aggregate with Mac and Windows By Craig Anderton DigiTech’s iPB-10 is best known as a live-performance multieffects pedal that you program with an iPad, but it’s also an excellent 44.1kHz/24-bit, USB 2.0 stereo audio interface for guitar. USING IT WITH THE MAC Core Audio is plug-and-play. Patch the iPB-10 USB output into an available Mac USB port. Now, select “DigiTech iPB-10 In/Out” as the input and output under Audio MIDI Setup. With my quad-core Mac, the system played reliably with a buffer size of 64 samples in Digital Performer, and even at 45 samples with simple Ableton Live projects (Fig. 1)—that’s excellent performance. Fig. 1: Setting up the iPB-10 for Ableton Live with a 45-sample buffer size. USING IT WITH WINDOWS The driver isn’t ASIO, so in your host select WDM or one of its variants as the preferred driver mode (MME or DirectX drivers work, too, but latency is objectionable). With Sonar using WDM, the lowest obtainable latency was 441 samples. With WASAPI, it was 220 samples. Mixcraft 6 listed the lowest latency as 5ms (see Fig. 2; Mixcraft doesn’t indicate sample buffers). Fig. 2: Working as a Windows WaveRT (WASAPI) interface with Acoustica’s Mixcraft 6. I was surprised that the iPB-10 drivers were compatible with multiple protocols, but in any event, the performance equalled that of many dedicated audio interfaces. ZERO-LATENCY MONITORING A really cool feature is that under the iPB-10’s Settings, you can adjust the ratio of what you’re hearing from the DAW’s output via USB, and what’s coming from the iPB-10. If you monitor from the iPB-10, you essentially get zero-latency monitoring with effects, because you’re listening to the iPB-10 output—not monitoring through the computer. Typically, for this mode, you’d turn off the DAW track’s input echo (also called input monitor), and set the iPB-10 XLR Mix slider for 50% USB and 50% iPB-10.
(If you’re monitoring from the 1/4” outs, choose the 1/4” Mix slider). Then, you’ll hear your DAW tracks from the USB side, and your guitar—with zero latency and any iPB-10 processing—from the iPB-10 side. If your computer is fast enough that latency isn’t an issue, then you can monitor solely via USB, and turn on your DAW’s input monitoring/input echo to monitor your guitar through the computer. This lets you hear the guitar through any plug-ins inserted into your guitar’s DAW track. THERE’S MORE! As the audio interfacing is class-compliant and doesn’t require installing drivers, with Core Audio or WDM/WASAPI/WaveRT drivers you can use more than one audio interface (called “aggregation”). So keep your go-to standard audio interface connected, but also use the iPB-10 for recording guitar. As long as your host supports one of the faster Windows audio protocols—or you’re using a recent Mac—I think you’ll be pleasantly surprised by the performance. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
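The buffer sizes quoted above convert to milliseconds with simple arithmetic. Here's a quick sketch (one-way buffer latency only; it ignores converter and driver overhead):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """One-way latency contributed by an audio buffer, in milliseconds."""
    return buffer_samples * 1000.0 / sample_rate

# Figures from the article, at 44.1 kHz:
print(buffer_latency_ms(441))  # Sonar via WDM: 10.0 ms
print(buffer_latency_ms(220))  # WASAPI: ~5 ms, matching Mixcraft's readout
print(buffer_latency_ms(64))   # Digital Performer on the Mac: ~1.45 ms
```

This is why a 220-sample WASAPI buffer and Mixcraft's "5ms" figure describe the same performance.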
  2. It’s easier to carry a laptop than an arsenal of keyboards—but you’ll need to optimize your computer for the task By Craig Anderton There are two important truths when using virtual instruments live: Your entire system could die at any moment, but your system will probably give you years of reliable operation. So feel free to go ahead and file this under “hope for the best, but plan for the worst”—but in this article, we'll plan for the worst. MAC VS. PC For desktop computing, I use both; with laptops, for almost a decade I used only Macs, but now I use only Windows. Computers aren't a religion to me—and for live performance, they're simply appliances. I'd switch back to Mac tomorrow if I thought it would serve my needs better, but here's why I use Windows live. Less expensive. If the laptop dies, I'll cope better. Often easier to fix. With my current Windows laptop, replacing the system drive takes about 90 seconds. Laptop drives are smaller and more fragile, so this matters. Easier to replace. Although it's getting much easier to find Macs, if it's two hours before the gig in East Blurfle and an errant bear ate your laptop, you'll have an easier time finding a Windows machine. Optimization options. This is a double-edged sword, because if you buy a laptop from your local big box office supply store, it will likely be anti-optimized for live performance with virtual instruments. We'll cover tweaks that address this, but you’ll have to enter geek mode. If you just want a Windows machine that works . . . There are companies that integrate laptops for music. I've used laptops from PC Audio Labs and ADK, and they've all outperformed stock laptops. I’ve even used a PC Audio Labs x64 machine that was optimized for video editing, but the same qualities that make it rock for video make it rock and roll for music.
Of course, if you're into using a Mac laptop (e.g., MainStage is your act's centerpiece, or you use Logic to host virtual instruments), be my guest—I have a Mac laptop that runs Mavericks as well as a Windows machine that’s currently on Windows 7, and they’re both excellent machines. Apple makes great computers, and even a MacBook Air has enough power to do the job. But if you're starting with a blank slate, or want to dedicate a computer to live performance, Windows is currently a pretty compelling choice. PREPARING FOR DISASTER There are two main ways disaster can strike. The computer can fail entirely. One solution—although pricey—is a redundant, duplicate system. Consider this an insurance policy, because it will seem inexpensive if your main machine dies an hour before the gig. Another solution is to use a master keyboard controller with internal sounds. If your computer blows up, at least you'll have enough sounds to limp through the gig. If you must use a controller-only keyboard, then carry an external tone module you can use in emergencies. If you have enough warning, you can buy a new computer before the gig. In that case, though, you'll need to carry everything needed to re-install the software you use. One reason I use Ableton Live for live performance and hosting virtual instruments is that the demo version is fully functional except for the ability to save—it won't time out in the middle of a set, or emit white noise periodically. I carry a DVD-ROM and USB memory stick (redundancy!) with everything needed to load into Live to do my performance; if all else fails I can buy a new computer, install Live, and be ready to go after making the tweaks we'll cover shortly. Software can become corrupted. If you use a Mac, bring along a Time Machine hard drive. With Windows, enable system restore—the performance hit is very minor. Returning to a previous configuration that’s known to be good may be all you need to fix a system problem. 
For extra security, carry a portable hard drive with a disk image of your system drive. Macs make it easy to boot from an external drive, as do Windows machines if you're not afraid to go into the BIOS and change the boot order. WINDOWS 7 TWEAKS Neither Windows nor the Mac OS is a real-time operating system. Music is a real-time activity. Do you sense trouble ahead? A computer juggles multiple tasks simultaneously, so it gets around to musical tasks when it can. Although computers are pretty good at juggling, occasional heavy CPU loading (“spikes”) can cause audio dropouts. One option is increasing latency, but this produces a much less satisfying feel. A better option is to seek out and destroy the source of the spikes. Your ally in this quest is DPC Latency Checker, a free program available at www.thesycon.de/eng/latency_check.shtml. LatencyMon (www.resplendence.com/latencymon) is another useful program, but a little more advanced. DPC Latency Checker monitors your system and shows when spikes occur (Fig. 1); you can then turn various processes on and off to see what's causing the problems. Fig. 1: The left screen shows a Windows laptop with its wireless card enabled, and system power plan set to balanced. The one on the right shows what happens when you disable wireless and change the system power plan to high performance. From the Start menu, choose Control Panel, then open Device Manager. Disable (don't uninstall) any hardware devices you're not using, starting with any internal wireless card—it’s a major spike culprit. Even if your laptop has a physical switch to turn this on and off, that's not the same as actually disabling it (Fig. 2). Also disable any other hardware you're not using: internal USB camera, Ethernet port, internal audio (which you should do anyway), fingerprint sensor, and the like. Fig. 2: In Device Manager, disable any hardware you’re not using. Onboard wireless is particularly problematic. By now you should see a lot less spiking.
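If you want a rough, cross-platform feel for what DPC Latency Checker measures, you can watch for scheduling spikes with a few lines of code. This is only a conceptual sketch (it measures how late the OS wakes a thread from a short sleep, not true DPC latency):

```python
import time

def worst_wakeup_lateness_us(interval_s=0.001, iterations=500):
    """Return the worst-case wakeup lateness (in microseconds) over a run
    of short sleeps; large spikes here suggest the kind of scheduling
    hiccups that cause audio dropouts."""
    worst = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(interval_s)
        # How much later than requested did we actually wake up?
        lateness = (time.perf_counter() - start) - interval_s
        worst = max(worst, lateness)
    return worst * 1e6
```

Run it with wireless enabled, then again with it disabled, and compare the numbers.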
Next, right-click on the Taskbar, and open Task Manager. You'll see a variety of running tasks, many of which may be unnecessary. Click on a process, then click on End Process to see if it makes a difference. If you stop something that interferes with the computer's operation, no worries—you can always restart, and the service will restart as well. Finally, click on Start. Type msconfig into the Search box, then click on the Startup tab. Uncheck any unneeded programs that load automatically on startup. If all of this seems too daunting, don't worry; simply disabling the onboard wireless in Device Manager will often solve most spiking issues. BUT WAIT—THERE'S MORE! Laptops try hard to maximize battery life. For example, if you're just composing an email, the CPU can loaf along at a reduced speed, thus saving power. But for real-time performance situations, you want as much CPU power as possible. Always use an AC adapter, as relying on the battery alone will almost invariably shift the computer into a lower-power mode. With Windows machines, the most important adjustment is to create a power plan with maximum CPU power. With Windows 7, choose Control Panel > Power Options and create a new power plan. Choose the highest performance power plan as a starting point. After creating the plan, click on Change Plan Settings, then click on Change Advanced Power Settings. Open up Processor Power Management, and set the Maximum and Minimum processor states to 100% (Fig. 3). If there's a system cooling policy, set it to Active to discourage overheating. Fig. 3: Create a power plan that runs the processor at 100% for both minimum and maximum power states. Laptops will have an option to specify different CPU power states for battery operation; set those to 100% as well. If overheating becomes an issue (it shouldn't), you can probably throttle back a bit on the CPU power, say, to 80%.
Just make sure the minimum and maximum states are the same; I've experienced audio clicks when the CPU switched states. (And in the immortal words of Herman Cain, “I don't have the facts to back me up” but it seems this is more problematic with FireWire interfaces than USB.) A HAPPIER LAPTOP A laptop's connectors are not built to rock and roll specs. If damaged, the result may be an expensive motherboard replacement. Ideally, every computer connection should be a break-away connection; Macs with MagSafe power connectors are outstanding in this respect. With standard power connectors, use an extension cable that plugs between the power supply plug and your computer's jack. Secure this extension cable (duct tape, tie it around a stand leg, or whatever) so that if there's a tug on the power supply, it will pull the power supply plug out of the extension cable jack—not the extension cable plug out of the computer. Similarly, with USB memory sticks or dongles, use a USB extender (Fig. 4) between the USB port and external device. Fig. 4: A USB extension cable can help keep a USB stick from breaking off at its base (and possibly damaging your motherboard) if pressure is applied to it. It’s also important to invest in a serious laptop travel bag. I prefer hardshell cases, which usually means getting one from a photo store and customizing it for a computer instead of cameras. Finally, remember when going through airport scanners to put your laptop last on the conveyor belt, after your other personal effects. People on the incoming side of security can’t run off with your laptop, but those who’ve gone through the scanner can if they get to your laptop before you do.
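The power-plan settings described above can also be scripted with Windows' stock powercfg utility. A sketch, with the caveat that it assumes Windows 7 or later and sufficient rights; on other platforms it does nothing and reports a dry run:

```python
import platform
import subprocess

# powercfg aliases: SUB_PROCESSOR = Processor Power Management,
# PROCTHROTTLEMIN/PROCTHROTTLEMAX = minimum/maximum processor state.
POWER_PLAN_COMMANDS = [
    ["powercfg", "-setacvalueindex", "SCHEME_CURRENT",
     "SUB_PROCESSOR", "PROCTHROTTLEMIN", "100"],
    ["powercfg", "-setacvalueindex", "SCHEME_CURRENT",
     "SUB_PROCESSOR", "PROCTHROTTLEMAX", "100"],
    ["powercfg", "-setactive", "SCHEME_CURRENT"],  # re-apply the plan
]

def apply_power_tweaks():
    """Run the tweaks on Windows; return False (dry run) elsewhere."""
    if platform.system() != "Windows":
        return False
    for cmd in POWER_PLAN_COMMANDS:
        subprocess.run(cmd, check=True)
    return True
```

These are the AC-power values; battery operation has a matching -setdcvalueindex form if you really must run unplugged.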
  3. Let there be light—if you have a USB port by Craig Anderton When I saw my first light powered by a USB port, I was smitten. Whether trying to get work done on a plane without disturbing the grumpy person sitting next to me or running a live laptop set in a dark club, I had found the answer. Or more realistically, almost the answer...it used an incandescent bulb, drew a lot of current, weighed a lot, and burned out at an early age. I guess it was sort of the Elvis Presley of laptop accessories. Mighty Bright introduced a single-LED USB light a few years ago that fulfilled the same functions, but much more elegantly. And now, unlike Scotty in Star Trek, they actually can give you more power—with their new 2-LED USB Light. What You Need to Know The two white LEDs are controlled by a push switch so you can light one or both LEDs. Compared to the single-LED version, having the extra LED available makes a big difference in terms of throwing more light on a subject. The gooseneck is very flexible but holds its position, and the weight is reasonable. The size is about the same as the single-LED version, and it fits in the average laptop bag without problems. Limitations My only concern is the weight—not because it weighs a lot, but because USB ports aren’t exactly industrial-strength. However, if you plug into a laptop’s side USB port and bend the light in a U so the top is over where the USB connector plugs into the port (Fig. 1), then it becomes balanced and places little weight on the port itself. Fig. 1: Optimum laptop positioning for the 2-LED USB Light. Conclusions Once you have one of these things sitting around, you’ll find other uses. Given how many computers have USB ports on the back, plug this in and you’ll be able to see where all your mystery wires are routed.
I take the 2-LED USB Light when I’m on the road, and combined with a general-purpose charger for USB devices, the combo makes a dandy night light—helpful in strange hotel rooms when the fire alarm goes off in the middle of the night, and you don’t want to trip on your way out the door. Also, lots of keyboards have USB ports, and assuming the port isn’t occupied with a thumb drive or similar, the 2-LED USB Light can help illuminate the keyboard’s top panel. Considering the low cost and long LED life (100,000 hours, which equals three hours a day for about 90 years), I’d definitely recommend having one of these babies around. You never know when you’re going to need a quick light source, and these days, it’s not too hard to find a suitable USB connector to provide the power. Resources Musician’s Friend Mighty Bright 2-LED USB Light online catalog page ($14.00 MSRP, $11.99 “street”) Mighty Bright’s 2-LED USB Light product web page
  4. It's time to play "stompbox reloaded" by Craig Anderton The studio world is not experiencing a compressor shortage. Between hardware compressors, software compressors, rack compressors, and whatever other compressors I’ve forgotten, you’re pretty much covered. But there may be a useful compressor that you haven’t used recently: one of the stompbox persuasion. With most DAWs including inserts so you can integrate external effects easily, interfacing stompboxes isn’t very difficult. Yes, you’ll need to match levels (likely attenuating on the way into the compressor and amplifying on the way out), but that’s not really a big deal. But why bother? Unlike studio compressors, which are a variation on limiters and whose main purpose is to control peaks, guitar compressors were generally designed to increase sustain by raising the level as a string decayed (Fig. 1). Fig. 1: The upper waveform is an uncompressed guitar signal, while the lower one adds compression to increase the sustain. Both waveforms have the same peak level, but the compressed guitar’s decay has a much higher level. In fact, some compressors were called “sustainers,” and used designs based on the Automatic Level Control (ALC) circuitry used to keep mic signals at a constant level for CB and ham radio. The gain control elements were typically field-effect transistors (FETs) or photoresistors, and the compressors had minimal controls—usually sustain, which was either a threshold control or input level that “slammed” the compressor input harder—and output level. Some guitar players felt that compressors made the sound “duller,” so a few designs tuned the compressor feedback to compress lower-frequency signals more than higher-frequency signals—the opposite of a de-esser. Many guitarists patched a preamp between the guitar and compressor to give even more sustain, because higher input levels increased the amount of compression.
Putting compressors before octave dividers often caused the dividers to work more reliably, and adding a little compression before an envelope-controlled filter (like the Mutron III) gave less variation between the low and high filter frequencies. Some legendary compressors include the Dan Armstrong Orange Squeezer (Fig. 2), MXR Dyna-Comp, and BOSS CS-1. But many companies produced compressors, and continue to do so. Fig. 2: Several years ago the classic Dan Armstrong Orange Squeezer was re-issued. Although it has since been discontinued, schematics for Dan’s original design exist on the web. APPLICATIONS RE-LOADED Bass. Not all compressors designed for guitar could handle bass frequencies, especially not a synthesizer set for sub-bass. So, it’s usually best to patch the compressor in parallel with your bass signal. With a hardware synthesizer or bass, split the output and feed two interface (or amp) inputs, one with the compressor inserted. With a virtual synthesizer or recorded track, send a bus output to a spare audio interface output, patch that to the compressor input, then patch the compressor output to a spare audio interface input. Use the bass channel’s send control to send signal into the bus that feeds the compressor. Synthesizers are particularly good with vintage compressors because you can edit the amplitude envelope for a fast attack and quick decay before the sustain. Turn the bass output way up to hit the compressor hard, and you’ll get the aggressive kind of attack you hear with guitar. Drums. Guitar compressors can give a punchy, “trashy” sound that’s good for punk and some metal. As with synth bass, parallel compression is usually best to keep the kick drum sound intact (Fig. 3). Adding midrange filtering before or after the compression can give an even funkier sound. Fig. 3: This setup provides parallel compression. The channel on the left is the drum track; the one on the right is a bus with an “external insert” plug-in.
This plug-in routes the insert effect to your audio interface, which allows patching in a hardware compressor as if it were a plug-in. The drum channel has a send control to feed some drum signal to the compressor bus, whose output goes to the master bus. Bus compression. You wouldn’t want to compress a master bus with a stompbox compressor (well, maybe you would!), but try sending bass and drums to an additional bus, then compressing that bus and patching it in parallel with the unprocessed bass and drums sound. This makes for a fatter sound, and “glues” the two instruments together. What’s more, many older compressors had some degree of distortion, which adds even more character to any processing. Vintage compressors with relatively short decay times (most stompbox compressors had fixed attack or decay times) give a “pumping” sound to rhythm sections. EMULATING STOMPBOX COMPRESSION WITH MODERN GEAR Don’t have an old compressor around? There are ways to come close with modern gear. If your compressor has a lookahead option, turn it off. Set the attack to the absolute minimum time possible. Decay time varied depending on the designer; a shorter release (around 100ms) gives a “rougher” sound with chords, but some compressors had quite long release times—over 250ms—to smooth out the decaying string sound. Set a high compression ratio, like 20:1, and a low threshold, as older compressors had low thresholds to pick up weak string vibrations. Finally, try overloading the compressor input to create distortion, which also gives a harder attack.
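Those settings reduce to a few lines of DSP. The following is a minimal, hypothetical model of the vintage behavior described above (instant attack, roughly 100 ms release, 20:1 ratio, low threshold); it's a sketch of the technique, not a circuit-accurate emulation of any particular pedal:

```python
import math

def stompbox_comp(samples, sample_rate=44100, threshold_db=-40.0,
                  ratio=20.0, release_ms=100.0):
    """Vintage-style compressor sketch: near-instant attack, high ratio,
    low threshold, and a ~100 ms release, per the settings above."""
    release_coeff = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Instant attack: the envelope jumps up immediately,
        # then decays smoothly at the release rate.
        env = level if level > env else release_coeff * env
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        # Above threshold, reduce gain toward a 20:1 slope.
        gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0 else 0.0
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out
```

Because the threshold is so low, almost everything above a whisper gets squashed hard, which is exactly how these boxes turned a decaying string into sustain.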
  5. Whether you're quantizing sequences, programming drum machines, creating beats, or synching to tempo, it helps to know rhythmic notation by Craig Anderton As we all know, lots of great musicians have been able to create an impressive body of work without knowing how to read music. But regardless of whether you expect to read lead sheets on the fly—or will even need to do so—there are some real advantages to “knowing the language.” In particular, it’s hard not to run into references to rhythmic notation. Today’s DAWs quantize to particular rhythmic values, and effects often sync to particular rhythms as well. And if you want to program your own beats, it also helps to know how rhythm works. So let’s forget the tough stuff and take some baby steps into the world of rhythmic notation. This brief overview provides the basics, but if you’re new to all this, you’ll probably need to read it over several times and fool around a bit with something like a drum machine before it all falls into place. Measures. A piece of music is divided into smaller units called measures (also called bars), and each measure is divided into beats. The number of beats per measure, and the rhythmic value of the beats, depends on both the composition and the time signature. Time Signatures. A time signature (also called metric signature) defines a piece of music’s rhythmic nature by describing a measure’s rhythmic framework. The time signature is notated at the beginning of the music (and whenever there’s a change) with two numbers, one on top of the other. The top number indicates the number of beats in each measure, while the bottom number indicates the rhythmic value of the beat (e.g., 4 is a quarter note, 8 is an eighth note, etc.). If that doesn’t make sense yet, let’s move on to some examples. Rhythmic Values for Notes. With a measure written in 4/4, there are four beats per measure, and each beat represents a quarter note.
Thus, there are four quarter notes per measure of 4/4 music. Quarter note symbol With a 3/4 time signature, the numerator (upper number) indicates that there are three beats per measure, while the denominator indicates that each of these beats is a quarter note. There are two eighth notes per quarter note, so there are eight eighth notes per measure of 4/4 music. Eighth note symbol There are four 16th notes per quarter note, which means there are 16 16th notes per measure of 4/4 music. 16th note symbol There are eight 32nd notes per quarter note. If you’ve been following along, you’ve probably already guessed there are 32 32nd notes per measure of 4/4 music. 32nd note symbol There are also notes that span a greater number of beats than quarter notes. A half note equals two quarter notes. Therefore, there are two half notes per measure of 4/4 music. Half note symbol A whole note equals four quarter notes, so there is one whole note per measure of 4/4 music. (We keep referencing 4/4 music because it’s the most commonly used time signature in contemporary Western music.) Whole note symbol Triplets The notes we’ve covered so far divide measures by factors of two. However, there are some cases where you want to divide a beat into thirds, giving three notes per beat. Dividing a quarter note by three results in eighth-note triplets. The reason we use the term “eighth-note triplets” is that the eighth note is closest to the actual rhythmic value. Dividing an eighth note by three results in 16th-note triplets. Dividing a 16th note by three results in 32nd-note triplets. Eighth-note triplet symbol Note the numeral 3 above the notes, which indicates triplets. Rests. You can also specify where notes should not be played; this is indicated by a rest, which can be the same length as any of the rhythmic values used for notes. Rest symbols (from left to right): whole note, half note, quarter note, eighth note, and 16th note Dotted Notes and Rests.
Adding a dot next to a note or rest means that it should play one and a half times as long as the indicated value. For example, a dotted eighth would last as long as three 16th notes (since an eighth note is the same length as two 16th notes). A dotted eighth note lasts as long as three 16th notes Uncommon Time Signatures. 4/4 (and to a lesser extent 3/4) are the most common time signatures in our culture, but they are by no means the only ones. In jazz, both 5/4 (where each measure consists of five quarter notes) and 7/4 (where each measure consists of seven quarter notes) are somewhat common. In practice, complex time signatures are often played like a combination of simpler time signatures; for example, some 7/4 compositions would have you count each measure not as 1, 2, 3, 4, 5, 6, 7 but as 1, 2, 3, 4, 1, 2, 3. It’s often easier to think of 7/4 as a bar of 4/4 followed by a bar of 3/4 (or a bar of 3/4 followed by a bar of 4/4, depending upon the phrasing), since as we mentioned, 4/4 and 3/4 are extremely common time signatures. Other Symbols. There are many, many other symbols used in music notation. > indicates an accent; beams connect multiple consecutive notes to simplify sight reading; and so on. Any good book on music notation can fill you in on the details. Two 16th notes beamed together Drawing beams on notes makes them easier to sight-read compared to seeing each note drawn individually. FOR MORE INFORMATION These books can help acquaint you with the basics of music theory and music notation. Alfred’s Pocket Dictionary of Music is a concise but thorough explanation of music theory and terms for music students and teachers alike. Practical Theory Complete, by Sandy Feldstein, is a self-instruction music theory course that begins with the basics—explanations of the staff and musical notes—and ends with lesson 84: “Composing a Melody in Minor.”
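Since DAW quantization menus are built on exactly these rhythmic values, it can help to see them as plain numbers. A sketch, measuring everything in quarter-note beats per the rules above:

```python
# Note lengths in quarter-note beats (so a 4/4 measure holds 4.0 beats).
NOTE_BEATS = {
    "whole": 4.0, "half": 2.0, "quarter": 1.0,
    "eighth": 0.5, "16th": 0.25, "32nd": 0.125,
}

def dotted(beats):
    """A dot makes a note last one and a half times its plain value."""
    return beats * 1.5

def triplet(beats):
    """Three triplets occupy the space of two plain notes of that value."""
    return beats * 2.0 / 3.0

def notes_per_measure(note, beats_per_measure=4):
    """How many of a given note fit in one measure (default: 4/4)."""
    return beats_per_measure / NOTE_BEATS[note]
```

For example, notes_per_measure("16th") gives 16, and dotted(NOTE_BEATS["eighth"]) gives 0.75 beats, i.e., the length of three 16th notes, just as described above.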
  6. Get more emotion out of your ’boards by putting on the pressure By Craig Anderton Synthesizer keyboards are basically a series of on-off switches, so wresting expressiveness from them is hard. There’s velocity, which produces dynamics based on how fast the key goes from key up to key down; you also have mod wheel, footpedal, pitch bend, and usually one or two sustain switches, all of which can help with expressiveness. But some keyboards have an additional, and powerful, way to increase expressiveness: Aftertouch, also called Pressure. THE TWO KINDS OF AFTERTOUCH Aftertouch is a type of MIDI control signal. Like pitch bend, it’s not grouped in with MIDI Continuous Controller signals but is deemed important enough to be its own dedicated control signal. It produces an output based on how hard you press on the keys after they’re down. There are two types of aftertouch: Channel aftertouch (or pressure). This is the most common form of aftertouch, where the average pressure being applied to the keys produces a MIDI control signal. More pressure increases the value of the control signal. From a technical standpoint, the usual implementation places a force-sensing resistor under the keyboard keys. Pressing on this changes the resistance, which produces a voltage. Converting this voltage to a digital value produces MIDI aftertouch data. Key (or polyphonic) aftertouch (or pressure). Each key generates its own control signal, and the output value for each key corresponds to the pressure being applied to that key. AFTERTOUCH ISSUES Key aftertouch is extremely expressive, but with a few exceptions—notably Keith McMillen Instruments QuNexus (Fig. 1) and CME Xkey USB Mobile MIDI Keyboard—it’s not common in today’s keyboards. Fig. 1: Keith McMillen Instruments QuNexus is a compact keyboard with polyphonic aftertouch. The late, great synthesizer manufacturer Ensoniq made several keyboards with key aftertouch, but the company is no more. 
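On the wire, both flavors are compact MIDI messages: channel pressure carries a single data byte, while polyphonic pressure carries a key number plus a pressure value. A sketch of the raw bytes:

```python
def channel_pressure(channel, pressure):
    """Channel aftertouch: status byte 0xD0 + channel, one data byte."""
    assert 0 <= channel <= 15 and 0 <= pressure <= 127
    return bytes([0xD0 | channel, pressure])

def poly_key_pressure(channel, key, pressure):
    """Polyphonic (key) aftertouch: status 0xA0 + channel, then key
    number and pressure value."""
    assert 0 <= channel <= 15 and 0 <= key <= 127 and 0 <= pressure <= 127
    return bytes([0xA0 | channel, key, pressure])
```

Note that a polyphonic message is larger and is sent per held key, which is part of why this data stream was so much heavier than channel pressure.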
Another concern is that key aftertouch is data-intensive, because every key produces data. In the early days of MIDI, this much data often “choked” MIDI sequencers running on old computers that couldn’t keep up. Although many virtual synthesizers (and even hardware ones) can accept key aftertouch data, most likely you’ll be using a keyboard with channel aftertouch. Back then even channel aftertouch could produce too much data, so most MIDI sequencers included MIDI data filters that let you filter out aftertouch and prevent it from being recorded. Most DAWs that support MIDI still include filtering, and for aftertouch, this usually defaults to off. If you want to use aftertouch, make sure it’s not being filtered out (Fig. 2). Fig. 2: Apple Logic (left) and Cakewalk Sonar (right) are two examples of programs that let you filter out particular types of data, including aftertouch, from an incoming MIDI data stream. Depending on the keyboard, the smoothness of how the aftertouch data responds to your pressure can vary considerably. Some people refer to a keyboard as having “afterswitch” if it’s difficult to apply levels of pressure between full off and full on. However, most recent keyboards implement aftertouch reasonably well, and some allow for a very smooth response. A final issue is that many patches don’t incorporate aftertouch as an integral element because the sound designers have no idea whether the controller someone will be using has aftertouch. So, most sounds are designed to respond to mod wheel, velocity, and pitch bend because those are standard. If you want a patch to respond to aftertouch you’ll need to decide which parameter(s) you want to control, do your own programming to assign aftertouch to these parameters, and then save the edited patch. AFTERTOUCH APPLICATIONS Now that you know what aftertouch is and how it works, let’s consider some useful applications. Add “swells” to brass patches. 
Assign aftertouch to a lowpass filter cutoff, then press harder on the keys to make the sound brighter. You may need to lower the initial filter cutoff frequency slightly so the swell can be sufficiently dramatic. You could even assign aftertouch to both the filter and, to a lesser extent, to level, so that the level increases as well as the brightness. Guitar string bends. Assign aftertouch to pitch so that pressing on the key raises pitch—just like bending a string on a guitar. However, there are two cautions: Don’t make the response too sensitive, or the pitch may vary when you don’t want it to; and this works best when applied to single-note melodies, unless you want more of a pedal steel-type effect. Introduce vibrato. This is a very popular aftertouch application. Assign aftertouch to pitch LFO depth, and you can bring vibrato in and out convincingly on string, guitar, and wind patches. The same concept applies to introducing tremolo to a signal. “Bend” percussion. Some percussion instruments become slightly sharp when first struck. Assign aftertouch to pitch; if you play the keys percussively and hit them hard, you’re bound to apply at least some pressure after the key is down, and bend the pitch up for a fraction of a second. This can add a degree of realism, even if the effect is mostly subliminal. Morph between waveforms. This may take more effort to program if you need to control multiple parameters to do morphing. For example, I use this technique with overdriven guitar sounds to create “feedback.” I’ll program a sine wave an octave or octave and fifth above the guitar note, and tie its level and the guitar note’s level to aftertouch so that pressing on a key fades out the guitar while fading in the “feedback.” This can create surprisingly effective lead guitar sounds. Control signal processors. 
Although not all synths expose signal processing parameters to MIDI control, if they do, pressure can be very useful—mix in echoed sounds, increase delay feedback, change the rate of chorusing for a more randomized effect, increase feedback in a flanger patch, and the like. I’d venture a guess that few synthesists use aftertouch to its fullest—so do a little parameter tweaking, and find out what it can do for you. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  7. Time for a quick trip down the disinformation superhighway by Craig Anderton Maybe it’s just the contentious nature of the human race, but as soon as digital audio appeared, the battle lines were drawn between proponents of analog and those who embraced digital. A lot of claims about the pros and cons of both technologies have been thrown back and forth; let’s look at what’s true and what isn’t. A device that uses 16-bit linear encoding with a 44.1 kHz sampling rate gives “CD quality” sound. Not all 16-bit/44.1 kHz systems exhibit the same audio quality. The problem is not with the digital audio per se, but interfacing to the analog world. The main variables are the A/D converter and output smoothing filter, and to a lesser extent, the D/A converter. Simply replacing a device’s internal A/D converter with an audiophile-quality outboard model that feeds an available AES/EBU or S/PDIF input can produce a noticeable (and sometimes dramatic) change. What’s more, one of digital audio’s dirty little secrets is that when the CD was introduced, some less expensive players used 12-bit D/A converters—so even though the CD provided 16 bits of resolution, it never made it past the output. I can’t help but think that some of the early negative reaction to the CD’s fidelity was about limitations in the playback systems rather than an inherent problem with CDs. 16 bits gives 96 dB of dynamic range, and 24 bits gives 144 dB of dynamic range. There are two things wrong with this statement. First, it’s not really true that each bit gives 6 dB of dynamic range; for reasons way too complex to go into here, the actual number is (6.02 × N) + 1.76, where “N” is the number of bits. Based on this equation, an ideal 16-bit system has a dynamic range of 98.08 dB. As a rule of thumb, though, 6 dB per bit is a close enough approximation for real-world applications. Going from theory to practice, though, many factors prevent a 16-bit system from reaching its full potential. 
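For the curious, the (6.02 × N) + 1.76 rule of thumb above is easy to check with a couple of lines of Python (the function name is just illustrative):

```python
# Ideal dynamic range of an undithered N-bit converter, per the
# (6.02 * N) + 1.76 dB formula cited above.
def ideal_dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(round(ideal_dynamic_range_db(16), 2))  # 98.08
print(round(ideal_dynamic_range_db(24), 2))  # 146.24
```

This confirms the 98.08 dB figure for an ideal 16-bit system, and gives about 146.24 dB for an ideal 24-bit one—numbers that, as the next paragraph explains, real-world hardware never quite reaches.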
Noise, calibration errors within the A/D converter, improper grounding techniques, and other factors can raise the noise floor and lower the available dynamic range. Many real-world 16-bit devices offer (at best) the performance of an ideal 14-bit device, and if you find a 24-bit converter that really delivers 24 bits of resolution...I want to buy one! Also note that for digital devices, dynamic range is not the same as signal-to-noise ratio. The AES has a recommended test procedure for testing noise performance of a digital converter; real-world devices spec out in the 87 to 92 dB range, not the 96 dB that’s usually assumed. (By the way, purists should note that all the above refers to undithered converters.) Digital has better dynamic range than analog. With quality components and engineering, analog circuits can give a dynamic range in excess of 120 dB — roughly equivalent to theoretically perfect 20-bit operation. Recording and playing back audio with that kind of dynamic range is problematic for either digital or analog technology, but when 16-bit linear digital recording was introduced and claimed to provide “perfect sound forever,” the reality was that quality analog tape running Dolby SR had better specs. With digital data compression like MP3 encoding, even though the sound quality is degraded, you can re-save it at a higher bit rate to improve quality. Data compression programs for computers (as applied to graphics, text, samples, etc.) use an encoding/decoding process that restores a file to its original state upon decompression. However, the data compression used with MP3, Windows Media, AAC, etc. is very different; as engineer Laurie Spiegel says, it should be called “data omission” instead of “data compression.” This is because parts of the audio are judged as not important (usually because stronger sounds are masking weaker sounds), so the masked parts are simply omitted and are not available for playback. 
Once discarded, that data cannot be retrieved, so a copy of a compressed file can never exhibit higher quality than the source. Don’t ever go over 0 VU when recording digitally. The reason for this rule is that digital distortion is extremely ugly, and when you go over 0 VU, you’ve run out of headroom. And frankly, I do everything I can to avoid going over 0. However, as any guitarist can tell you, a little clipping can do wonders for increasing a signal’s “punch.” Sometimes when mixing, engineers will let a sound clip just a tiny bit—not enough to be audible, but enough to cut some extremely sharp, short transients down to size. It seems that as long as clipping doesn’t occur for more than about 10 ms or so, there is no subjective perception of distortion, but there can be a perception of punch (especially with drum sounds). Now, please note I am by no means advocating the use of digital distortion! But if a mix is perfect except for a couple clipped transients, you needn’t lose sleep over it unless you can hear that there’s distortion. And here’s one final hint: If something contains unintentional distortion that’s judged as not being a deal-breaker, it’s a good idea to include a note to let “downstream” engineers (e.g., those doing mastering) know it’s there, and supposed to stay there. You might also consider normalizing a track with distortion to -0.1dB, as some CD manufacturers will reject anything that hits 0 because they will assume it was unintentional. Digital recording sounds worse than vinyl or tape because it’s unnatural to convert sound waves into numbers. The answer to this depends a lot on what you consider “natural,” but consider tape. Magnetic particles are strewn about in plastic, and there’s inherent (and severe) distortion unless you add a bias in the form of an ultrasonic AC frequency to push the audio into the tape’s linear range. 
What’s more, there’s no truly ideal bias setting: you can raise the bias level to reduce distortion, or lower it to improve frequency response, but you can’t have both, so any setting is by definition a compromise. There are also issues with the physics of the head that can produce response anomalies. Overall, the concept of using an ultrasonic signal to make magnetic particles line up in a way that represents the incoming audio doesn’t seem all that natural. Fig. 1: This is the equalization curve your vinyl record goes through before it reaches your ears. Vinyl doesn’t get along with low frequencies, so there’s a huge amount of pre-emphasis added during the cutting process, and equally huge de-emphasis on playback—the RIAA curve (Fig. 1) boosts the response by up to 20 dB at low frequencies and cuts by up to 20 dB at high frequencies, which hardly seems natural. We’re also talking about a playback medium that depends on dragging a rock through yards and yards of plastic. Which of these options is “most natural” is a matter of debate, but it doesn’t seem that any of them can make too strong a claim about being “natural”! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
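As an aside, the roughly ±20 dB figure quoted above for the RIAA curve can be verified from the curve’s published time constants (3180 µs, 318 µs, and 75 µs). This sketch computes the playback de-emphasis response relative to 1 kHz:

```python
import math

# RIAA playback de-emphasis computed from the standard time constants,
# normalized to 0 dB at 1 kHz.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f, ref=1000.0):
    def mag(freq):
        w = 2 * math.pi * freq
        num = math.hypot(1.0, w * T2)
        den = math.hypot(1.0, w * T1) * math.hypot(1.0, w * T3)
        return num / den
    return 20 * math.log10(mag(f) / mag(ref))

print(round(riaa_playback_db(20), 1))     # about +19.3 dB boost at 20 Hz
print(round(riaa_playback_db(20000), 1))  # about -19.6 dB cut at 20 kHz
```

So playback boosts the bass by nearly 20 dB and cuts the treble by about the same amount, mirroring the pre-emphasis applied during cutting, just as described above.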
  8. Re-Thinking Reverb
    Create ethereal, unusual reverb effects with phase reversal by Craig Anderton Reverb hasn’t changed a lot over the years: you emulate a gazillion sound waves bouncing off surfaces. But if you’re a gigging musician, one thing that may have changed is how you hear reverb. Back when you were in the audience, you heard reverb coming at you from all sides of a room. Then when you graduated to the stage, reverb started sounding different: you initially heard the sound of your amp or monitors, and then you heard the reverb as it reflected off the walls, ceiling, and other surfaces. The effect is a little like pre-delay at first, but as the reverb waves continue to bounce, you hear a sort of “bloom” where the reverb level increases before decaying. Controlling reverb to give this kind of effect can produce some lovely, ethereal results. It also has the added bonus of not “stepping on” the original signal being reverberated, because the reverb doesn’t reach its full level until after the original signal has occurred. It’s not hard to set up this effect; here’s how (Fig. 1). CREATING ETHEREAL REVERB You’ll need two sends from the track to which you want to add reverb that go to two reverb effects buses. These should have the same settings for send level, pan, and pre-post. Insert your reverb of choice into one of the effects bus returns, and set the reverb parameters for the desired reverb sound. For starters, set a decay time of around 2 seconds. Next, insert the same reverb into the other effects bus return, with the same settings. If you can’t do something like drag/copy the existing reverb into another track, save the first reverb’s settings as a preset so you can call it up in the other reverb. The returns should have identical settings as well. Assuming the sends are pre-fader, turn down the original signal’s track fader so you hear only the reverb returns (Fig. 1). Fig. 
1: The yellow lines represent sends from a guitar track to two send returns; in this example, each has the same reverb inserted. One return also has a plug-in that reverses the phase. Now it’s time for the “secret sauce”: reverse the phase (also called polarity) of one of the reverb returns. Different DAWs handle this in different ways. Some may have a phase button, while others might have a phase button only for tracks but not for send returns. For situations like this, you can usually insert some kind of phase-switching plug-in like Cakewalk Sonar’s Channel Tools, PreSonus Studio One Pro’s Mixtool, or Ableton Live’s Phase. Reversing the phase should cause the reverb to disappear. If not, then there’s a mismatch somewhere with your settings—check the send control levels, reverb parameters, reverb return controls, etc. Another possibility is that the reverb has some kind of randomizing option to give more “motion.” For example, with Overloud’s Breverb 2, you’ll need to go into the Mod page and turn down the Depth control. In any event, find the cause of the problem and fix it before proceeding. Finally, decrease the reverb decay time on one of the reverbs (e.g., to around 1 second), and start playback. When a signal first hits the reverbs, they’ll be identical or at least very similar and cancel; as the reverb decays, the two reverbs will diverge more, so there will be less cancellation and the reverb tail will “bloom.” Because the cancellation reduces the overall level of the reverbs, you’ll likely need to compensate for this by increasing the reverb return levels. However, note that the two reverb returns need to remain identical with respect to each other. I find the easiest way to deal with this is to group the two faders so that adjusting one fader automatically adjusts the other one. If you’re using long reverb times and there’s not much difference between the two decay times, the volume will be considerably softer. 
In that case, you may need to send the bus outputs to another bus so you can raise the overall level of the combined reverb sound. APPLICATIONS Because it takes a while for the reverb to develop, this technique probably isn’t something you’ll want to use on uptempo songs. It’s particularly evocative with vocals, especially ones where the phrasing has some “space,” as well as with languid, David Gilmour-type solo guitar lines. But I’ve also tried this ethereal reverb effect on individual snare hits and a variety of other signals, so feel free to experiment—maybe you’ll discover additional applications. Happy ambience! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
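If you want to convince yourself that the phase-reversal trick in the article above really produces a delayed “bloom,” here is a toy numerical model. It stands in for the two reverbs with two exponentially decaying copies of the same noise (roughly 2-second and 1-second decays, echoing the example settings) and subtracts one from the other; the numbers are illustrative, not taken from any real reverb:

```python
import math, random

# Toy model of the phase-reversal reverb trick: two tails built from the
# same noise (so they cancel at time zero) with different decay times.
# Subtracting one from the other cancels the onset; the difference
# "blooms" as the tails diverge.
sr = 1000                       # a low sample rate is fine for an envelope demo
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(2 * sr)]

tail_a = [n * math.exp(-t / (2.0 * sr)) for t, n in enumerate(noise)]  # slow decay
tail_b = [n * math.exp(-t / (1.0 * sr)) for t, n in enumerate(noise)]  # fast decay
combined = [a - b for a, b in zip(tail_a, tail_b)]  # polarity-reversed sum

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

# Early in the tail the two reverbs nearly cancel; later the combined
# level overtakes its own opening, i.e. the reverb "blooms."
early = rms(combined[:100])
late = rms(combined[800:900])
print(early < late)  # True
```

At the very first sample the two tails are identical and cancel completely, which is exactly why the effect doesn’t “step on” the dry signal.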
  9. Replace your pickup selector switch with a panpot by Craig Anderton I've tried several designs over the years for doing a continuous pan between the bridge and neck pickups, much as a mixer panpot sweeps between the left and right channels. This isn’t as easy as it sounds, but if you’re in an experimental mood, this mod gives you a wider range of colors from your axe without needing outboard boxes like equalizers. However, there are some tradeoffs. A pickup selector switch has no ambiguous positions: it’s either neck, bridge, or both—end of story. A panpot control has two unambiguous positions at the extremes of rotation, but there's a whole range of possible sounds in between. These variations are subtle, and it's more difficult to dial in an exact setting than with a standard pickup selector switch—but in return, there are more possibilities. ABOUT THE SCHEMATIC This circuit uses a standard potentiometer for volume, a dual-ganged potentiometer to do the panning, and an SPDT (single-pole, double-throw) switch with a third, center-off position. Although you won’t need to drill any extra holes if your guitar has a selector switch/volume/tone control combination, the dual gang pot is thicker than standard pots; this could be a problem with thinner-body guitars. Due to all the variables in this circuit, I recommend running a pair of wires (hot and ground) from each pickup to a test jig so you can experiment with different parts values. To avoid hum problems, make sure the metal cases of any pots or switches are grounded. If you end up deciding this mod’s for you, build the circuitry inside the guitar. The dual-ganged panpot (R3) provides the panning. Ideally, this would have a log taper for one element and an antilog taper for the other element, but these kinds of pots are very difficult to find. A suitable workaround is to use a standard dual-ganged linear taper pot and add "tapering" resistors R1 and R2. 
If these are 20% of the pot's total resistance, they’ll change the pot taper to a log/antilog curve. The panpot value can range between 100k and 1 Meg, which would require 22k and 220k tapering resistors respectively. Higher resistance values will provide a crisper, more accurate high end while lower values will reduce the highs and output somewhat. A 100k panpot with 22k tapering resistors will cause noticeable dulling and a loss of volume unless you use active pickups, in which case lower values are preferred to higher values; however, some people might prefer the reduced high end when playing through distortion, because this can warm up the sound. The volume control (R4) can be a 250K, 500K, or 1 Meg log (audio) taper control. The three-position switch provides a tone control designed specifically for this circuit, and connects a capacitor (C1) across one pickup, the other pickup, or neither pickup (the tone switch's center position). I was surprised at how switching in the capacitor can change the timbre at the panpot's mid position, and this definitely multiplies the number of tonal options. The optimum capacitor value will depend on the pickups and amp you use, but will probably range from 10 nF (0.01 uF; less bassy) to 50 nF (0.05 uF; more bassy). For even more versatility, you could connect the switch center terminal to ground, and wire different capacitor values from each switch terminal to its corresponding pickup. Two final notes: adjust the two pickups for the same relative output by adjusting their distance from the strings. If one pickup predominates, it will shift the panpot's apparent center off to one side. Finally, switching one pickup out of phase provides yet another bunch of sounds; also note that removing the tapering resistors may produce a feel that you prefer, particularly if one of the pickups is out of phase. Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
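The tapering-resistor trick from the panpot article above can be sanity-checked numerically. This sketch models one common form of the idea, a resistor loading the wiper-to-cold-end half of a linear pot; it is a generic model of the technique rather than the article’s exact schematic (which isn’t reproduced here), and the component values are illustrative:

```python
# Rough model of the "tapering resistor" trick: loading the lower half of a
# linear pot with a resistor of about 20% of the pot's value bends its
# response toward a log (audio) taper.

def loaded_taper(x, r_pot=500e3, r_taper=100e3):
    """Output fraction of a linear pot at wiper position x (0..1) with a
    tapering resistor from wiper to the cold end."""
    lower = x * r_pot                 # resistance below the wiper
    upper = (1 - x) * r_pot           # resistance above the wiper
    loaded = lower * r_taper / (lower + r_taper) if lower > 0 else 0.0
    return loaded / (upper + loaded)

# A linear pot would read 0.50 at mid-rotation; the loaded pot reads about
# 0.22, much closer to a log taper's mid-rotation value.
print(round(loaded_taper(0.5), 2))  # 0.22
```

The extremes of rotation still reach 0 and 1, so only the shape of the sweep changes, which is exactly what you want from a taper conversion.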
  10. Signal processing and cool effects aren't just for electric guitars by Craig Anderton Although the goal with acoustic guitar is often to create the most realistic, organic sound possible, a little electric-type processing can enhance an acoustic’s sound in many ways that open up new creative avenues. We’ll assume your acoustic has been electrified (presumably with a piezo pickup) and can produce a signal of sufficient level, and of the proper impedance, to drive contemporary effects units. If you're not sure about this, contact the manufacturer of the pickup assembly, or whoever did the installation. There are quite a few processors dedicated to acoustic guitar, like Zoom’s A3 (Fig. 1). Fig. 1: Zoom’s A3 packages acoustic guitar emulations and effects in a floor pedal format. These are convenient and cost-effective, but this article takes more of an à la carte approach with conventional, individual effects. IMPROVING TONE Most electrified acoustics have frequency response anomalies—peaky midrange, boomy bass, and so on—caused primarily by the interaction among the guitar body, pickup, and strings. While some of these anomalies are desirable (classical guitars wouldn't sound as full without the bass resonance most instruments exhibit), some are unwanted. Smoothing out the response is a task for equalization. There are two main types of equalizers (EQ for short) used with acoustic guitar, graphic and parametric. A graphic EQ splits the audio spectrum into numerous frequency bands (Fig. 2). Fig. 2: Source Audio’s Programmable EQ is a graphic EQ that can save and recall custom settings. Depending on the model, the range of frequencies (bandwidth) covered by each band can be as wide as an octave or as narrow as 1/3 octave. The latter types are more expensive because of the extra resolution. The response of each band can be boosted to accent the frequency range covered by that band, or attenuated to make a frequency range less prominent. 
Graphic equalizers are excellent for general tone-shaping applications such as making the sound "brighter" (more treble), "warmer" (more lower midrange), "fuller" (more bass), etc. A parametric equalizer has fewer bands—typically two to four—but offers more precision since you can dial in a specific frequency and bandwidth for each band, as well as boost or cut the response. So, if your guitar is boomy at a particular frequency, you can reduce the response at that specific frequency only and set a narrow bandwidth to avoid altering the rest of the sound. Or, you can set a wider bandwidth if you want to affect more of the sound. Either type of equalization can help balance your guitar with the rest of the instruments in a band. For example, both the guitar and the male voice tend to fall into the midrange area, which means that they compete to a certain extent. Reducing the guitar's midrange response will leave more "space" for your voice. Another example: if your band has a bass player, you might want to trim back on the bass to avoid a cluttered low end. However, if your band is bassless, then try boosting the low end to help fill out the bottom a bit. Note that piezo pickups have response anomalies, and equalization is very helpful for evening out the response. For more information, check out the article “Make Acoustic Guitar Piezo Pickups Sound Great” at Gibson.com. BRIGHTNESS OR FULLNESS WITHOUT EQUALIZATION Many multieffects offer pitch transposition. I've found that transposing an acoustic guitar sound up an octave (for a brighter sound) or down an octave (for a fuller sound) can sound pretty good, providing that you mix the transposed signal way in the background of the straight sound—you don't want to overwhelm the straight sound, particularly since the processed sound will generally sound artificial anyway. BIGGER SOUNDS A delay line can simulate having another guitarist mimicking your part to create a bigger-than-life, ensemble sound. 
Run your guitar through a delay set for a short delay (30 to 50 milliseconds). Turn the feedback (or regeneration) and modulation controls to minimum; this produces a slapback echo effect, giving a tight doubling effect. Another option is chorusing, which creates more of a swirling, animated sound as opposed to a straight doubling. The settings are similar to slapback, except use a shorter delay (around 10 to 30 milliseconds) and add a little modulation to vary the delay time and produce the "swirling" effect. Note: with most delay effects, it's best to set the balance (mix) control so that the delayed sound is less prominent than the dry sound. INCREASED SUSTAIN Guitars are percussive instruments: plucking a string produces a huge burst of energy that then rapidly decays to a much lower level. Often this is what you want, but in some cases the decay occurs too quickly and you might prefer more sustain. A limiter is just the ticket. This device decreases the guitar's dynamic range by holding the peaks to a preset level called a threshold, then optionally amplifying the limited signal to bring the peaks back up to their original level (Fig. 3). Fig. 3: The signal with 4dB limiting (blue) has a higher average level than the original recording. Don't set the threshold too low, or the guitar will sound "squeezed" and unnatural. Also, although many people confuse limiters and compressors, these are not identical devices. A compressor tries to maintain a constant output in the face of varying input signals, which means that not only are high-level signals attenuated, but low-level signals may be subject to a lot of amplification. The above explanation of limiting is fairly basic, and there are several variations on this particular theme. Early model limiters would simply clamp the signal to the threshold; newer models can do that, but may also allow for a gentler limiting action to provide a more natural sound. 
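The limiting behavior described above (clamp the peaks, then apply makeup gain) can be sketched in a few lines. This is the crude "clamp to threshold" style the article attributes to early limiters, applied to a handful of hypothetical sample values:

```python
# Minimal sketch of limiting: peaks beyond a threshold are clamped, then
# makeup gain brings the peaks back to full scale, raising the average
# level of everything that was below the threshold.

def limit(samples, threshold=0.5, makeup=True):
    out = [max(-threshold, min(threshold, s)) for s in samples]
    if makeup:
        gain = 1.0 / threshold      # restore peaks to full scale
        out = [s * gain for s in out]
    return out

percussive = [1.0, 0.6, 0.3, 0.2, 0.1]          # pluck, then fast decay
print(limit(percussive, threshold=0.5))
# [1.0, 1.0, 0.6, 0.4, 0.2] -- the quiet tail comes up by 6 dB while the
# peak stays at full scale, i.e. more apparent sustain.
```

A real limiter would use an envelope follower rather than hard sample-by-sample clamping, which is the "gentler limiting action" mentioned above.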
PEDALLING YOUR WAY TO BIGGER SOUNDS If you have a two-channel amp or mixer, one trick that's applicable to all of the above options is to split your guitar signal into two paths with one split carrying the straight guitar sound, while the other goes through a volume pedal before feeding the desired signal processor. Use the volume pedal to go from a normal to processed acoustic guitar sound, and bring in as much of the processed sound as you want. The possibilities for processing acoustic guitar are just as exciting as for processing electric guitars. The best way to learn, though, is not just by reading this article—my intention is to get you inspired enough to experiment. You never know what sounds you'll discover as you plug your guitar output into various device inputs. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
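As a footnote to the equalization section above: the kind of narrow parametric cut suggested for taming a boomy frequency can be modeled with the widely used Audio EQ Cookbook (RBJ) peaking filter. The frequency, gain, and Q below are illustrative values, not settings from the article:

```python
import math

# RBJ Audio EQ Cookbook peaking filter: a narrow cut at one frequency,
# leaving the rest of the spectrum essentially untouched.
def peaking_eq_coeffs(f0, gain_db, q, sr):
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return [b / a0 for b in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def gain_at(f, b, a, sr):
    """Magnitude response in dB of the biquad (b, a) at frequency f."""
    w = 2 * math.pi * f / sr
    z = complex(math.cos(w), -math.sin(w))  # e^{-jw}
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# A 6 dB cut at a hypothetical 200 Hz "boom" with a narrow bandwidth (Q = 4).
b, a = peaking_eq_coeffs(f0=200.0, gain_db=-6.0, q=4.0, sr=44100)
print(round(gain_at(200.0, b, a, 44100), 1))  # -6.0
```

The narrow Q means that by 5 kHz the response is back within a fraction of a dB of flat, which is the "reduce that specific frequency only" behavior described above.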
  11. Get the most out of today’s digital wonderboxes by Craig Anderton Everyone’s always looking for a better guitar sound, and while the current infatuation with vintage boutique effects has stolen a bit of the spotlight from digital multieffects, don’t sell these processors short. When properly programmed, they can emulate a great many “vintage” timbres, as well as create sounds that are extremely difficult to achieve with analog technology. As with many other aspects of audio, there is no one “secret” that gives the ultimate sound; great sounds are often assembled, piece by piece. Following are ten tips to help you put together a better guitar sound using multieffects. Line 6's POD HD500 is one of today's most popular digital multieffects for guitar. 1. DON’T BELIEVE THE INPUT LEVEL METERS Unintentional digital distortion can be nasty, so minimize any distortion other than what’s created intentionally within the multieffects. The input level meters help you avoid input overload, but they may not tell you about the output. For example, a highly resonant filter sound (e.g., wah) can increase the signal level internally so that even if the original signal doesn’t exceed the unit’s input headroom, it can nonetheless exceed the available headroom elsewhere. Some multieffects meters can monitor the post-processed signal, but this isn’t a given. If the distortion starts to “splatter” yet the meters don’t indicate overload, try reducing the input level. 2. USE PROPER GAIN-STAGING If a patch uses many effects, then there are several level-altering parameters, and these should interact properly—just like gain-staging with a mixer. Suppose an equalizer follows distortion. The distortion will probably include input and output levels, and the filter will have level boost/cut controls for the selected frequency. As one illustration of gain-staging, suppose the output filter boosts the signal at a certain frequency by 6 dB. 
If the signal coming into the filter already uses up the available headroom, asking it to increase by 6 dB means crunch time. Reducing the distortion output level so that the signal hitting the filter is at least 6 dB below the maximum available headroom lets the filter do its work without distortion. 3. ADD EQ PEAKS AND DIPS FOR REALISM Speakers, pickups, and guitar bodies have anything but a flat response. Much of the characteristic difference between different devices is due to frequency response variations—peaks and dips that form a particular “sonic signature.” For example, I analyzed some patches David Torn programmed for a multieffects and found that he likes to add 1 kHz boosts. On the other hand I often add a slight boost around 3.5 kHz so guitars can cut through a mix even at lower volume levels. With 12-strings, I usually cut the low end to get more of a Rickenbacker sound. Parametric EQ is ideal for this type of processing. 4. CUT DELAY FEEDBACK LOOP HIGH FREQUENCIES Each successive repeat with tape echo and analog delay units has progressively fewer high frequencies, due to analog tape’s limited bandwidth. If your multieffects can reduce high frequencies in the delay line’s feedback path, the sound will resemble tape echo rather than straight digital delay. 5. A SOLUTION FOR THE TREMOLO-IMPAIRED If your pre-retro craze multieffects doesn’t have a tremolo, check for a stereo autopanner function. This shuttles the signal between the left and right channels at a variable rate (and sometimes with a choice of waveforms, such as square to switch the sound back and forth, or triangle for a smoother sweeping effect). To use the autopanner for tremolo, simply monitor one channel and turn down the other one. The signal in the remaining channel will fade in and out cyclically, just like a tremolo. 6. 
CABINET SIMULATORS ARE COOL, BUT… Many multieffects have speaker simulators, which supposedly recreate the frequency response of a typical guitar speaker in a cabinet. If you’re feeding the multieffects output directly into a mixer or PA instead of a guitar amp and this effect is not active, the timbre will often be objectionably buzzy. Inserting the speaker emulator in the signal chain should give a more realistic sound. However, if you go through a guitar amp and the emulator is on, the sound will probably be much duller, and possibly have a thin low end as well—so bypass it. You might be surprised how many people have thought a processor sounded bad because they plugged an emulated cabinet output designed for direct feeds to mixers into a guitar amp. 7. USE A MIDI PEDAL FOR MORE EXPRESSION A multieffects will generally let you assign at least one parameter per patch to a MIDI continuous controller number. For example, if you set echo feedback to receive continuous controller message 04, and set a MIDI pedal to transmit message 04, then moving the pedal will vary the amount of echo feedback. You can usually scale the response as well, so that moving the pedal from full off to full on creates a change that’s less than the maximum amount. This allows greater precision because the pedal covers a narrower range. Scaling can sometimes invert the “sense” of the pedal, so that pressing down creates less of an effect rather than more. 8. MAKE SURE STEREO OUTPUTS DON’T CANCEL Some cheapo effects, and a large number of “vintage” effects, create stereo with time delay effects by sending the processed signal to one channel, and an out-of-phase version of the processed signal to the other channel. While this can sound pretty dramatic with near-field monitoring, should the two outputs ever collapse to mono, the effect will cancel and leave only the dry sound. To test for this, plug the stereo outs into a two-channel mono amp or mixer (set the channel pans to center). 
Start with one channel at normal listening volume, and the second channel down full. Gradually turn up the second channel; if the effect level decreases, then the processed outputs are out of phase. If the effect level increases, all is well.

9. PARALLELING MULTIEFFECTS WITH GUITAR AMPS

One way to enrich a sound is to double a multieffects with an amp, and mix the sounds together. Although you could simply split the guitar through a Y-cord and feed both, here's a way that can work better. To supplement the multieffects sound with an amp sound, send the multieffects "loop send" (if available) to the amp input. This preserves the way the multieffects input stage alters your guitar. If you'd rather supplement the basic amp sound with a multieffects, feed the amp's loop send to the multieffects signal input to preserve the amp's preamp characteristics.

10. BE AWARE OF THE PROBLEMS WITH PRESETS

Many musicians evaluate a multieffects by stepping through the presets, but you need to be aware of two very important issues. First, whoever designed the presets wasn't you—it's very doubtful they were using the same guitar, pickups, string gauge, pick, touch, etc. If a preset works with your playing style, it's due to luck more than anything else. Second, presets are usually designed to sound impressive during demos, and will be loaded up with effects. Sometimes creating your own cool presets simply involves taking a factory preset, removing some selected effects, and adjusting an emulated amp's drive control to match your playing style.

Well, that covers the 10 tips. Have fun strumming those wires—and remember that the magic word for all guitar multieffects is "equalization."

Craig Anderton is Editor Emeritus of Harmony Central.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
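A brief addendum to tip 4: the tape-echo behavior (each repeat losing more high end) can be sketched as a digital delay with a one-pole lowpass filter inside the feedback loop. This Python sketch is purely illustrative; the function name and coefficient values are my own assumptions, not taken from any particular multieffects.

```python
# Illustrative sketch of tip 4: a delay line whose feedback path passes
# through a one-pole lowpass, so each repeat is quieter AND darker.
def tape_style_delay(x, delay_samples, feedback=0.5, damping=0.6):
    """damping: 0.0 = no high-frequency loss; closer to 1.0 = darker repeats."""
    y = [0.0] * (len(x) + delay_samples * 8)   # leave room for several repeats
    lp_state = 0.0                             # one-pole lowpass filter memory
    for n in range(len(y)):
        dry = x[n] if n < len(x) else 0.0
        delayed = y[n - delay_samples] if n >= delay_samples else 0.0
        # Lowpass the delayed signal before feeding it back into the line:
        lp_state += (1.0 - damping) * (delayed - lp_state)
        y[n] = dry + feedback * lp_state
    return y
```

Feeding an impulse through this produces echoes that shrink by the feedback amount each repeat; with damping above zero, each repeat is also progressively duller—the "tape" character the tip describes.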
12. Prevent "tone suckage" with this simple test procedure

by Craig Anderton

Is your guitar sounding run down? Tired? Dull and anemic? It may not have the flu; it may just be feeding the wrong kind of input. A guitar pickup puts out relatively weak signals, and the input it feeds can either coddle those signals or stomp on them. It's all a question of the input's impedance, so let's look at a simple test for determining whether that amp or signal processor you're feeding is a signal coddler or a signal stomper.

You might think that testing for input impedance is pretty esoteric, and that you need an expensive impedance tester, or at least have to find one of those matchbooks that say "Learn Electronics at Home in Your Spare Time." But in this case, testing for impedance is pretty simple. You'll need a standard-issue analog or digital volt-ohmmeter (VOM), as sold by Radio Shack and other electronics stores (a good digital model should cost less than $40). This is one piece of test equipment no guitarist should be without anyway, as you can test anything from whether your stage outlets are really putting out 117V to whether your cable is shorted. You'll also need a steady test tone generator, which can be anything from an FM tuner emitting a stream of white noise to a synthesizer set for a constant tone (or even a genuine test oscillator).

WHAT IS IMPEDANCE?

If theory scares you, skip ahead to the next subhead. If you can, though, stay tuned, since impedance crops up a lot if you work with electronic devices. Impedance is a pretty complex subject, but we can just hit the highlights for the purposes of this article. An amp or effect's input impedance essentially drapes a resistance from the input to ground, thus shunting some of your signal to ground. The lower the resistance to ground, the greater the amount of signal that gets shunted.
The guitar's output impedance, which is equivalent to putting a resistance in series with your guitar and the amp input, works in conjunction with the input impedance to impede the signal. If you draw an equivalent circuit for these two resistances, it looks suspiciously like the schematic for a volume control (Fig. 1).

Fig. 1: The rough equivalent of impedance, expressed as resistance.

If the guitar's output impedance is low and the amp input impedance is high, there's very little loss. Conversely, a high guitar output impedance and low amp input impedance creates a lot of loss. A low input impedance "dulls" the sound because a pickup's output impedance changes with frequency—at higher frequencies, the guitar pickup exhibits a higher output impedance. Thus, low frequency signals may not be attenuated that much, but high frequencies could get clobbered. Buffer boards and on-board preamps can turn the guitar output into a low impedance output for all frequencies, but many devices are already designed to handle guitars, so adding anything else would be redundant. The trick is finding out which devices are guitar-friendly and which aren't; you have to be particularly careful with processors designed for the studio, as there may be enough gain to kick the meters into the red but not a high enough input impedance to preserve your tone. Hence, the following test.

IMPEDANCE TESTING

This test takes advantage of the fact that impedance and resistance are, at least for this application, roughly equivalent. So, if we can determine the effect's input resistance to ground, we're covered. (Just clipping an ohmmeter across a dummy plug inserted in the input jack isn't good enough; the input will usually be capacitor-coupled, making it impossible to measure resistance without taking the device's cover off.) Wire up the test jig in Fig. 2, which consists of a 1 Meg linear taper pot and two 1/4" phone jacks.
Plug in the signal generator and amplifier (or other device being tested), then perform the following steps.

Fig. 2: The test jig for measuring impedance. Test points are marked in blue.

1. Set the VOM to the 10V AC range so it can measure audio signals. You may later need to switch to a more sensitive range (e.g., 2.5V or so) if the test oscillator signal isn't strong enough for the meter to give a reliable reading.
2. Set R1 to zero ohms (no resistance).
3. Measure the signal generator level by clipping the VOM leads to test points 1 and 2. The polarity doesn't matter since we're measuring AC signals. Try for a signal generator level between 1 and 2 volts AC, but be careful not to overload the effect and cause clipping.
4. Rotate R1 until the meter reads exactly 50% of what it did in step 3.
5. Be very careful not to disturb R1's setting as you unplug the signal generator and amplifier input from the test jig.
6. Set the VOM to measure ohms, then clip the leads to test points 1 and 3.
7. Measure R1's resistance. This will essentially equal the input impedance of the device being tested.

INTERPRETING THE RESULTS

If the impedance is under 100k, I'd highly recommend adding a preamp or buffer board between your guitar and amp or effect to eliminate dulling and signal loss. The range of 100k to 200k is acceptable, although you may hear some dulling. An input impedance over 200k means the designer either knows what guitarists want, or got lucky. Note, however, that more is not always better. Input impedances above approximately 1 megohm are often more prone to picking up radio frequency interference and noise, without offering much of a sonic advantage. So there you have it: amaze your friends, impress your main squeeze (well, on second thought maybe not), and strike fear into the forces of evil with your new-found knowledge. A guitar that feeds the right input impedance comes alive, with a crispness and fidelity that's a joy to hear. Happy picking—and testing.
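For the curious, the half-voltage reading in step 4 works because R1 and the device's input impedance form a simple voltage divider: the level at the device input drops to exactly half when the two resistances are equal. Here's that arithmetic as a Python sketch (illustrative only; the function name is mine, not part of the test procedure):

```python
# R1 (the test jig's pot) in series with the device's input impedance z_in
# forms a voltage divider across the device input.
def divider_output(v_gen, r1, z_in):
    """Voltage seen at the device input for generator level v_gen."""
    return v_gen * z_in / (r1 + z_in)

# With R1 at zero ohms (step 2), the full generator level appears at the
# input. When the reading falls to exactly half (step 4), r1 must equal z_in.
```

For example, a 2V generator into a hypothetical 100k input reads the full 2V with R1 at zero, and exactly 1V once R1 is dialed up to 100k—which is why reading R1's resistance in step 7 gives you the input impedance.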
13. Not quite sure how digital audio works? Here's your refresher course

by Craig Anderton

Digital technology—which brought us home computers, $5 calculators, cars you can't repair yourself, Netflix, and other modern miracles—has fundamentally re-shaped the way we record and listen to music. Yet there's still controversy over whether digital audio represents an improvement over analog audio. Is there some inherent aspect of digital audio that justifies this skepticism? Let's take a look at the basics of digital audio: why it's different from analog sound, its benefits, and its potential drawbacks. Although digital audio continues to improve, the more you know about it, the more you can optimize your gear to take full advantage of what digital audio can offer.

BASICS OF SOUND

What we call "sound" is actually variations in air pressure (at least that's the accepted explanation) that interact with our hearing mechanism. The information received by our ears is passed along to the brain, which processes this information. However, while acoustic instruments automatically generate changes in air pressure, which we hear as sound, electronic instruments create their sound in the form of voltage variations. Hearing these voltage variations requires converting them into moving air. A transducer is a device that converts one form of energy into another; for example, a loudspeaker can convert voltage variations into changes in air pressure, while a microphone can change air pressure changes into voltage variations. Other transducers include guitar pickups (which convert mechanical energy to electrical energy) and tape recorder heads (which convert magnetic energy into electrical energy). If you look at audio on a piece of test equipment, it looks like a squiggly line, which graphically represents sound (Fig. 1).

Fig. 1: An audio waveform. This could stand for air pressure changes, voltage changes, string motion, or whatever.
A straight horizontal line represents a condition of no change (i.e. zero air pressure, zero voltage, etc.), and the squiggly line is referenced to this base line. For example, if the line is showing a speaker cone’s motion, excursions above the base line might indicate that the speaker cone is moving outward, while excursions below the base line might indicate that the speaker cone is moving inward. These excursions could just as easily represent a fluctuating voltage (such as what comes out of a synthesizer) that alternates between positive and negative, or even the air pressure changes that occur if you strike a piano key. The squiggly line is called a “waveform.” Let’s assume that striking a single piano note produces the waveform shown in Fig. 1. If we take that waveform and press an exact analogy of the waveform into a vinyl record, that record will contain the sound of a piano note. Now, suppose we play that record. As the stylus traces this waveform, the phono cartridge will send out voltage variations which are analogous to the original air pressure changes caused by the piano note. This low-level signal then passes through an amplifier, which augments the voltage enough to drive a speaker cone back and forth. The final result is that the speaker cone follows the waveform motion, thus producing the same air variations originally pressed into the vinyl record. Notice that each stage transfers a signal in its own medium (vinyl, wire, air, etc.) that is analogous to the input signal; hence the term, analog recording. Unfortunately, analog recording is not without its faults. First of all, if the record has pops, clicks, or other problems, these will be added on to the original sound and show up as undesirable “artifacts” in the output. Second, the cartridge will add its own coloration; if it can’t follow rapid changes due to mechanical inertia, distortion will result. 
Phono cartridge preamps also require massive equalization (changes in frequency response) to accommodate cartridge limitations. Amplifiers add noise and hum, and speakers are subject to all kinds of distortion and other problems. So, while the signal appearing at the speaker output may be very similar to what was originally recorded, it will not duplicate the original sound due to these types of errors. When you duplicate a master tape or press it into vinyl, other problems will occur due to the flawed nature of the transfer process. In fact, every time you dub an analog sound, or pass it through a transducer, the sound quality deteriorates.

THE CONSISTENCY OF DIGITAL

Digital audio removes some of the variables from the recording and playback process by converting audio into a string of numbers, and then passing these numbers through the audio chain (in a bit, we'll see exactly why this improves the sound). Fig. 2 illustrates the conversion process from an analog signal into a number.

Fig. 2: The digital conversion process.

Fig. 2a represents a typical waveform which we want to record. A computer takes a "snapshot" of the signal every few microseconds (a microsecond is 1/1,000,000th of a second) and notes the analog signal's level, then translates this "snapshot" into a number representing the signal's level. Taking additional samples creates the "digitized" signal shown in Fig. 2b. Note that the original signal has been converted into a series of samples, each of which has its own unique value. Let's relate what we've discussed so far to a typical audio system. A traditional microphone picks up the audio signal and sends it to an Analog-to-Digital Converter (ADC for short), which performs the conversion into numbers. The computer takes this numerical information and optionally processes it—for example, delays it in the case of a digital delay or, with a sampling keyboard, stores the information in memory. So far so good, but listening to a bunch of numbers does not exactly make for a wonderful audio experience.
After all, this is an analog world, and our ears hear analog sound, so we need to convert this string of numbers back into an analog signal that can do something useful such as drive a loudspeaker. This is where the Digital-to-Analog Converter (DAC) comes into the picture; it takes each of the numerical samples and re-converts it to a voltage level, as shown in Fig. 2c. A lowpass filter works in conjunction with the DAC to filter the stair-step signal, thus “smoothing” the series of discrete voltages into a continuous waveform (Fig. 2d). We may then take this newly converted analog signal and do all of our familiar analog tricks like putting it through an amplifier/speaker combination. But what’s the point of going through all these elaborate transformations? And doesn’t it all affect the sound? Let’s examine each question individually. The main advantage of this approach is that a digitally-encoded signal is not subject to the deterioration an analog signal experiences. Consider the compact disc, the first example of mass-market digital audio; it stores digital information on a disc which is then read by a laser and converted back into analog. By taking this approach, if a scratch appears on the disc it doesn’t really matter—the laser recognizes only numbers, and will tend to ignore extraneous information. Even more importantly, using digital audio preserves quality as this audio goes through the signal chain. For example, a conventional analog multi-track tape gets mixed down to an analog two-track tape, which introduces some sound degradation due to limits of the two-track machine. It then gets mastered (another chance for error), converted into a metal stamper (where even more errors can occur), and finally gets pressed into a record (and we all know what kinds of problems that can cause, from pops to warpage). At each audio transfer stage, signal quality goes down. 
With digital recording, suppose you record a piece of music into a computer-based recording system that stores sounds as numbers. When it's time to mix down, the numbers—not the actual signal—get mixed down to the final stereo or surround master (of course, the numbers are monitored in analog so you can tell what's going on). Now, we can transfer that digitally-mixed signal directly to the compact disc; this is an exact duplicate (not just an analogy) of the mix, so there's no deterioration in the transfer process. Essentially, the Analog-to-Digital Converter at the beginning of the signal chain "freeze dries" the sound, which is not reconstituted until it hits the Digital-to-Analog Converter in the listener's audio system. This is why digital audio can sound so clean; it hasn't been subjected to the petty humiliations endured by an analog signal as it works its way from studio to home stereo speaker.

LIMITATIONS OF DIGITAL AUDIO

So is digital audio perfect? Unfortunately, digital audio introduces its own problems, which are very different from those associated with analog sound. Let's consider these one at a time.

Insufficient sampling rate. Consider Fig. 3, which shows two different waveforms being sampled at the same sampling rate.

Fig. 3: Sampling rate applied to two different waveforms.

The original waveforms are the light lines, each sample is taken at the time indicated by the vertical dashed line, and the heavy black line indicates what the waveform looks like after sampling. Fig. 3a is a reasonably good approximation of the waveform, but Fig. 3b just happens to have each sample land on a peak of the waveform, so there is no amplitude difference between samples, and the resulting waveform looks nothing at all like the original. Thus, what comes out of the DAC can, in extreme cases, be transformed into an entirely different waveform from what went into the ADC.
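To put a number on the foldover effect shown in Fig. 3b: any frequency above half the sampling rate reappears ("aliases") at a lower frequency after conversion. Here's a small sketch of that arithmetic in Python (my own illustration, not from the article):

```python
# Illustrative sketch: where a sampled sine wave of frequency f actually
# reappears after digital-to-analog conversion.
def aliased_frequency(f, sample_rate):
    nyquist = sample_rate / 2
    f = f % sample_rate              # the folding pattern repeats every sample_rate Hz
    return f if f <= nyquist else sample_rate - f
```

For example, a 30kHz tone sampled at 44.1kHz comes back as 14.1kHz, while a tone at exactly the sampling rate (the Fig. 3b situation, where every sample lands at the same point on each cycle) comes back as 0Hz, i.e., no variation between samples at all.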
The solution to the above problems is to make sure that enough samples are taken to adequately represent the signal being sampled. According to the Nyquist theorem, the sampling frequency should be at least twice as high as the highest frequency being sampled. There is some controversy as to whether this really is enough, but that's a controversy we won't get into here.

Filter coloration. As mentioned earlier, we need a filter after the DAC to convert the stair-step samples into something smooth and continuous. The only problem is that filters can add their own coloration, although over the years digital filtering has become much more sophisticated and transparent.

Quantization. Another sampling problem relates to resolution. Suppose a digital audio system can resolve levels to 10 mV (1/100th of a volt). Therefore, a level of 10 mV would be assigned one number, a level of 20 mV another number, a level of 30 mV yet another number, and so on. Now suppose the computer is trying to sample a 15 mV signal—does it consider this a 10 mV or 20 mV signal? In either case, the sample does not correspond exactly to the original input level, thus producing a quantization error. Interestingly, note that digital audio has a harder time resolving lower levels (where each quantized level represents a large portion of the overall signal level) than higher levels (where each quantized level represents a small portion of the overall signal level). Thus, unlike analog gear where distortion increases at high amplitudes, digital systems tend to exhibit the greatest amount of distortion at lower levels.

Dynamic range errors. A computer cannot resolve an infinite number of quantized levels; therefore, the number of levels it can resolve represents the system's dynamic range. Computers express numbers in terms of binary digits (also called "bits"), and the greater the number of bits, the greater the number of voltage levels it can quantize.
For example, a four-bit system can quantize 16 different levels, an eight-bit system 256 different levels, and a 16-bit system can resolve 65,536 different levels. Clearly, a 16-bit system offers far greater dynamic range and less quantization error than four- or eight-bit systems, and 20 or 24 bits is even better. Incidentally, there's a simple formula to determine the approximate dynamic range in dB based on the bits used in a digital audio system: dynamic range = 6 × number of bits. Thus, a 16-bit system offers 96 dB of dynamic range—excellent by any standards. However, this is a theoretical spec. In reality, factors like noise, circuit board layouts, and component limitations reduce the maximum potential dynamic range.

THE DIGITAL AUDIO DIFFERENCE

When the CD was introduced, most consumers voted with their dollars and seemed to feel that despite any limitations, the CD's audio quality sure beat ticks, pops, and noise. Unfortunately, the first generation of CD players didn't always realize the full potential of the medium; the less expensive ones sometimes used 12-bit converters, which didn't do the sound quality any favors. Also, engineers re-mastering audio for the CD had to learn a new skill set, as what worked with tape and vinyl didn't always translate to digital media. While digital audio may not be perfect, it's pretty close, and besides, the whole field is still relatively young compared to the decades over which analog audio matured. An alternate digital technology, Direct Stream Digital, was introduced several years ago to a less-than-enthusiastic response from consumers, yet many believe it sounds better than standard digital audio based on PCM technology; furthermore, as of this writing the industry is considering transitioning to 24-bit systems with a 96kHz sampling rate.
While controversial (many feel any advantage is theoretical, not practical), this does indicate that efforts are being made to further digital audio's evolution.
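The bit-depth arithmetic from the "Dynamic range errors" section is easy to verify: each added bit doubles the number of quantization levels and adds roughly 6 dB of dynamic range. A two-function Python check (illustrative only; the function names are mine):

```python
# Quick check of the article's bit-depth arithmetic.
def quantization_levels(bits):
    return 2 ** bits          # each added bit doubles the number of levels

def dynamic_range_db(bits):
    return 6 * bits           # the article's approximation: ~6 dB per bit
```

This reproduces the article's figures: 16 levels at four bits, 256 at eight bits, 65,536 at sixteen bits, and 96 dB of theoretical dynamic range for a 16-bit system (144 dB for 24-bit, before real-world losses).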
14. Lock your bass to the kick (or other drums) for a super-tight groove

By Craig Anderton

One of life's better moments is when the bass/drum combination plays like a single person with four hands—and really tight hands at that. When the rhythm section is totally locked, everything else seems just that much tighter. When you're in the studio, several tools can help lock the rhythm section together. While they're no replacement for "the real thing" (i.e., humans who play well together!), they can provide some pretty cool effects. Following is one of my favorites: a technique that locks bass to the kick drum so that they hit at exactly the same time.

BASS GATE

This technique relies on a noise gate, a signal processor designed to remove hiss from a signal. It typically has two inputs and one output. One of the inputs is for the audio signal you want to clean up, while the output is where the processed signal exits. In between, there's a "gate" that either is open and lets the signal through, or is closed and blocks the signal. The second input is a "control" input that senses an incoming signal level and converts it into a control signal. If the signal level is above a user-settable threshold, then the gate opens and lets the signal through. If the signal level is below the threshold, then the gate closes, and there's no output. Noise gates were very popular in the days of analog tape, which had a consistent level of background hiss. You'd set the gate threshold just above the hiss, so that (at least in theory) any "real" signal, which presumably was higher in level than the hiss, would open the gate. If the signal consisted of just noise, then the gate would close, blocking the hiss. Most noise gates can do more than just simply turn the signal on and off. Other controls include:

Decay: Determines how long it takes the gate to fade out after the control signal goes below the threshold.
Attack: Sets how long it takes for the gate to fade in after the control signal goes above the threshold (good for attack delay effects).

Gating amount: This determines whether the "gate closed" condition blocks the signal completely, or only to a certain extent (e.g., 10 or 20dB below normal).

Typically, the control input senses the signal present at the main audio input. However, some hardware noise gates bring this input to its own jack, called a "key" input. This allows some signal other than the main audio input (like a kick drum) to turn the gate on and off. In today's computer-based recording systems, noise gates typically have a "sidechain" input which acts like a key input. A send from a different audio track can feed the sidechain input as a destination, and thus control the gate independently of the signal going through it.

CONNECTIONS

Fig. 1 shows the basic setup. The kick track has a send bus, with one of the available destinations being the bass track's gate sidechain input. Whenever the kick hits, the bass passes through the gate; if there's no kick signal, the bass track's gate closes and the bass signal becomes inaudible or is reduced in level.

Fig. 1: The kick track's Bus 1 feeds the PC4K Expander/Gate module's sidechain input, which is shown as part of the Sonar ProChannel that's "flown out" from the Gated Bass track (track 3). Track 2 carries the unprocessed bass sound.

However, note there are two copies of the bass track, although you don't necessarily need this. You may want to vary the blend between the gated and "continuous" tracks, or process the gated track—for example, send the bass through some distortion, then gate it and mix this track in behind the main bass track. Every time the kick hits, it lets through the distorted bass burst (which can be kind of cool). Another example involves adding a significant treble or upper midrange boost to the gated track.
Whenever the kick and bass hit simultaneously, the bass will sound a little brighter, thus better differentiating the two sounds. Also note that the kick track send's post-fader button is turned off, so the send signal is pre-fader. This means the send level is constant regardless of the channel's fader setting. Having the bass gated on/off can be very dramatic, but don't forget about using gating to bring in variations on the core sound. Also remember this technique isn't exclusive to the studio—you can gate live as well. Sure, gating is a "trick"—but it can add some really rhythmic, usable effects.
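The keyed-gate routing described in this article can be reduced to a few lines of code. This is a bare-bones Python illustration (my own naming, no attack/decay smoothing, and obviously not the actual PC4K module): the kick track acts as the key, and each bass sample passes only while the key signal is above the threshold.

```python
# Bare-bones keyed (sidechain) gate: 'key' opens and closes the gate,
# 'audio' is what passes through it.
def keyed_gate(audio, key, threshold=0.1, floor=0.0):
    """floor = 0.0 closes the gate fully; e.g. 0.1 leaves the signal ~20dB down."""
    return [a if abs(k) >= threshold else a * floor
            for a, k in zip(audio, key)]

bass = [0.5, 0.5, 0.5, 0.5]
kick = [0.9, 0.0, 0.0, 0.8]      # kick hits on the first and last samples
gated = keyed_gate(bass, kick)   # bass passes only where the kick hits
```

Setting floor above zero corresponds to the "gating amount" control described earlier: the bass is attenuated rather than silenced between kick hits.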
15. It's not as simple as just placing a mic up against a speaker

by Craig Anderton

Miking guitar cabinets may seem like a simple process, because all you really need to do is pick up moving air with a mic. But there are many variables: the mic, its placement, the room environment, the cabinet itself, and the amp settings. So, let's look at some of the most important considerations when miking amp cabinets.

MIC SELECTION

Many guitarists record with the amp cranked to get "that" sound, so under these circumstances it's important to choose a mic that can handle high sound pressure levels (SPL). Dynamic mics are ideal for these situations, and the inexpensive Shure SM57 is the classic guitar cabinet mic—many engineers choose it even when cost is no object (Fig. 1). Although dynamic mics sometimes seem deficient in terms of brightness, this doesn't matter much with amp cabinets, which typically start losing response around 5kHz or so. A couple of dB less at 17kHz isn't going to make a lot of difference. That said, there are also more upscale dynamic mics, like the Electro-Voice RE20 and Sennheiser MD421, which give excellent results.

Fig. 1: Shure's SM57 is the go-to cab mic in many pro and project studios.

Condenser mics are often too sensitive for close miking of loud amps, but they can give a more "open" response. They also make good "auxiliary" mics—placing one further back from the amp adds definition to the dynamic primary mic, and picks up room ambience that can add character to the basic amp sound. For condenser mics, AKG's C414B-ULS is a great, but pricey, choice; their C214 gives similar performance at a much lower cost. Neumann's U87 is beyond most budgets, but the more affordable Audio-Technica AT 4051 has a similar character, and it's also great for vocals. Then there's the ribbon mic. Although ribbon mics used to be fragile, newer models use more modern construction techniques and are much more rugged.
Ribbon mics have an inherently "warm" personality, and a polar pattern that picks up sounds from the front and back—but not the sides. This characteristic is very useful with multi-cab guitar setups; by choosing which sounds to accept or reject based on mic placement, ribbon mics let you do some pretty cool tricks. Royer's R-121 and R-101 are popular for miking cabs, and Beyer's M160 is a classic ribbon mic that's been used quite a bit with cabs. Regardless of what mic you use, check to see whether the mic has a switchable attenuator (called a "pad") to reduce the mic's sensitivity. For example, a -10dB pad will make the mic 10dB less sensitive. With loud amps, engage this to avoid distortion.

MIC PLACEMENT

First, remember that while each speaker in a cab should sound the same, in practice that's not always true. Try miking each speaker in exactly the same place, and listen for any significant differences. Start off with the mic an inch or two back from the cone, perpendicular to the speaker, and about half to two-thirds of the way toward the speaker's edge. To capture more of the cabinet's influence on the sound (as well as some room sound), try moving the mic a few inches further back from the speaker. Moving the mic closer to the speaker's center tends to give a brighter sound, while angling the mic toward the speaker or moving it further away provides a tighter, warmer sound. Also, the amp interacts with the room: Placing the amp in a corner or against a wall increases bass. Raising it off the floor also changes the sound. The room's ambience makes a difference as well. If the room is small and has hard surfaces, the odds are there will be quite a bit of ambient sound making its way into the mic, even if it's close to the speaker. This isn't necessarily a bad thing; I'm a fan of ambience, because I find it often adds a more lively feel to the overall sound.

DIRECT VS.
MIKED

Some amps offer direct feeds (sometimes with cabinet simulation); combining this with the miked sound can give a "big" sound. However, the miked sound will be delayed compared to the direct sound—about 1ms per foot away from the speaker. This can result in comb filtering, which you can think of as a kind of sonic kryptonite, because it weakens the sound. To counteract this, nudge the miked sound earlier in your recording program until the miked and direct sounds line up and are in phase (Fig. 2).

Fig. 2: In the top pair of waveforms, the top waveform is the direct sound and the next one down is the miked signal. Note how it's delayed compared to the direct sound. In the bottom pair, the miked signal (bottom waveform) has been "nudged" forward so it lines up with the direct sound.

THE MIC PLACEMENT "FLIGHT SIMULATOR"

IK Multimedia's AmpliTube 3 (Fig. 3) lets you move four "virtual mics" around in relation to the virtual amp. The results parallel what you'd hear in the "real world," and you can learn a lot about how mic placement affects the overall sound by moving these virtual mics. While this doesn't substitute for going into the studio, moving mics around various amps, and monitoring the results, it's a great introduction. Nor is AmpliTube alone: Softube's Metal Room offers two cabs and mics (Fig. 4), Overloud's TH2 has two moveable mics for their cabinets (Fig. 5), and MOTU's Live Room G plug-in for Digital Performer 8 (Fig. 6) also allows various mic positions for three different mics.

Fig. 3: IK Multimedia's AmpliTube offers four mics you can place in various positions.

Fig. 4: Softube's Metal Room has two cabs, each with two mics you can position as desired.

Fig. 5: Overloud's TH2 has two mics for covering their cabs.

Fig. 6: MOTU's Digital Performer 8 has two "live room" plug-ins, one for guitar and one for bass, that provide various miking options.

Craig Anderton is Editor Emeritus of Harmony Central.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
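The “about 1ms per foot” rule from the direct vs. miked section above is easy to turn into a sample-accurate nudge amount for your DAW. A minimal sketch (the function name and the speed-of-sound constant are my own, not from the article):

```python
# Estimate how many samples to nudge a miked track earlier so it lines
# up with the direct (DI) track. Sound travels roughly 1125 feet per
# second at room temperature -- about 1 foot per millisecond, matching
# the article's rule of thumb.
SPEED_OF_SOUND_FT_PER_S = 1125.0

def mic_delay_samples(mic_distance_ft, sample_rate=44100):
    """Samples of delay introduced by the mic's distance from the speaker."""
    delay_s = mic_distance_ft / SPEED_OF_SOUND_FT_PER_S
    return round(delay_s * sample_rate)

# A mic 3 feet from the speaker, recorded at 44.1kHz:
print(mic_delay_samples(3))  # prints 118 (about 2.7 ms)
```

Nudging the miked track earlier by that many samples puts the two tracks back in phase; fine-tune by ear or by zooming in on the waveforms as in Fig. 2.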
16. Amp sims aren't only about distortion... By Craig Anderton I’ve seen several comments online claiming that amp sims are okay for distorted sounds, but not clean ones. However, it’s very easy to get good, clean guitar sounds, sometimes even with that “tube sparkle”...you just have to know these six secrets. 1. Record at an 88.2 or 96kHz sample rate. The lack of “cleanliness” you hear might not be due to excessive levels that cause clipping, but to aliasing or foldover distortion. Recording at a higher sample rate can minimize the odds of this happening (note that several guitar amp sims offer an “oversampling” option that accomplishes the same basic result, even if the project’s base sampling rate is 44.1 or 48kHz). 2. Choose the right amp model. This may seem obvious, but not all clean models are as clean as you’d expect. For example, many “clean” emulations have a little bit of crunch, just like the original. Some sim manufacturers create clean amps that aren’t designed to emulate classic amps (Fig. 1); try these first. Fig. 1: AmpliTube 3’s Custom Solid State Clean model doesn’t have to emulate anything, so it’s designed to be as clean as possible. 3. Turn down the drive, turn up the master. It’s possible to get cleaner sounds with some amp models by dialing back dramatically on the input drive control, and boosting the output level to compensate (Fig. 2). Fig. 2: POD Farm 2’s Blackface Lux model can give clean sounds that ooze character. Here’s how: Turn down the amp Drive and input gain, turn the amp Volume all the way up, and set the output gain high enough to give a suitable output level. 4. Compress or limit on the way into the amp. Building on the previous tip, if you’re pulling down the level, then the guitar might sound wimpoid. Insert some compression or limiting between the guitar and amp model to keep peaks under control; this lets you feed a higher average level to the amp without distortion. 5. Watch your headroom. 
Guitars have a huge dynamic range, so don’t let the peaks go much above -6 to -10dB if you want to stay clean. Yes, we’re used to making those little red overload LEDs wink at us, but that’s not a good strategy with digital audio—especially these days, when 24-bit resolution gives you plenty of dynamic range. 6. Beware of inter-sample clipping. With most DAWs, you can go well into the red on individual channels because their audio engines have virtually unlimited headroom (thanks to 32-bit floating-point math or better, in case your inner geek wondered). However, when those signals hit the output converters to become audio, headroom goes back to the real world of 16 or 24 bits, and any overloads may turn into distortion. So if the meters don’t show clipping you’re okay, right? Not so fast. Most meters measure the actual values of the digital waveform’s samples, prior to reconstruction into analog. But that reconstruction process might create signal peaks that are higher than the samples themselves, and which don’t register on your meters (Fig. 3). Fortunately, the Solid State Logic web site offers a free metering plug-in that shows inter-sample clipping. Fig. 3: Waves’ G|T|R is set to a clean amp. The DAW’s master output meter (left) shows that the signal is just below clipping, but SSL’s X-ISM meter that measures inter-sample distortion shows that clipping has actually occurred.
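Tip 6’s inter-sample peaking can be demonstrated numerically. The sketch below is a deliberately idealized illustration (not how X-ISM works internally): it samples a sine at one quarter of the sample rate so that every sample lands 45 degrees away from the waveform’s peaks, normalizes those samples to full scale, then evaluates the same sine on a 16x finer grid as a stand-in for the DAC’s reconstruction filter.

```python
import math

# A sine at fs/4, phased so samples land between the waveform's peaks.
fs = 48000
f = fs / 4
phase = math.pi / 4

samples = [math.sin(2 * math.pi * f * n / fs + phase) for n in range(16)]
sample_peak = max(abs(s) for s in samples)   # what a sample-value meter sees

# Normalize so the recorded samples just touch 0 dBFS...
gain = 1.0 / sample_peak
# ...then evaluate the underlying sine on a 16x finer grid, standing in
# for the converter's reconstruction of the continuous waveform.
fine = [gain * math.sin(2 * math.pi * f * t / (fs * 16) + phase)
        for t in range(16 * 16)]
true_peak = max(abs(s) for s in fine)

print(round(20 * math.log10(true_peak), 2))  # prints 3.01 -- about +3dB over "full scale"
```

In other words, a signal whose samples never exceed 0 dBFS can still hit the converter about 3dB hot, which is exactly the kind of clipping a sample-value meter misses.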
17. It's a stereo world, so it's time for your guitar to join in by Craig Anderton Since the dawn of time, with very few exceptions, electric guitar outputs have been mono. This made sense when the main purpose of guitar players (aside from picking up members of the opposite sex) was to take an amp to a gig and plug into it. But with more guitar players opting for stereo in the studio, and sometimes even for live use, it’s natural to want to turn that mono output into something with a wider soundstage. So, here are six tips (one for each string, of course) about how to obtain stereo from mono guitars. But first, our most important tip: Don’t automatically assume a guitar part needs to be stereo; sometimes a focused, mono guitar part will contribute more to a mix than stereo. On occasion, I even end up converting the output from a stereo effect back into mono, because doing so makes a major improvement. 1 EFFECTS THAT SYNTHESIZE STEREO Reverb, chorusing, stereo delay, and other effects can often synthesize a stereo field from a mono input. This is particularly effective with reverb, as the dry guitar maintains its mono focus while reverb billows around it in stereo. Some delays offer choices for handling stereo—like ping-pong delay, where each delay bounces between the left and right channels, LCR (left/center/right, with three separate taps for left, center, and right delay times), and the ability to set different delay sounds for the two channels. 2 EQUALIZATION I wrote an article for Harmony Central regarding “virtual miking” for acoustic guitar parts (particularly nylon string guitar), which uses EQ to split a mono guitar part into highs on the right, lows on the left, and the rest in between. As this needs only one mic, there are no phase cancellation issues, yet you still hear a stereo image. Another EQ-based option uses a stereo graphic EQ plug-in. 
In one channel, set every other band to full cut and the remaining bands to full boost; in the other channel, set the same bands oppositely (Fig. 1). For a less drastic effect, don’t cut/boost as much (e.g., try -6dB and +6dB respectively). Fig. 1: A graphic equalizer plug-in can provide pseudo-stereo effects. 3 DOUBLE DOWN ON THE CABS With hardware amps, split the guitar into two separate cabinets and mic them separately to create two channels. Doing so “live” will usually create leakage issues unless you have two isolated spaces, but re-amping takes care of that problem because you can create the other channel during mixdown. Remember to align the two tracks so that they don’t go out of phase with each other. 4 CREATE A VIRTUAL ROOM Many amp sims include “virtual rooms” (Fig. 2) with a choice of virtual mics and mic placements. These can produce a sophisticated stereo field, and are great for experimentation. Fig. 2: MOTU’s Digital Performer includes several guitar-oriented effects, as well as virtual rooms for both guitar and bass with multiple miking options and cabinets. 5 PARALLEL PROGRAM PATHS Amp sims often create stereo paths from a mono input. For example, IK’s AmpliTube has several stereo routing options, while Native Instruments’ Guitar Rig includes a “split mix” module that divides a mono path into stereo. You can then insert amps and effects as desired into each path, and at the splitter’s output, set the balance between them and pan them in the stereo field (Fig. 3). Fig. 3: Although you can use Guitar Rig to create mono effects, its signal path is inherently stereo. This makes it easy to convert mono sounds to stereo. 6 DELAY My favorite plug-in for this is the old standby Sonitus fx: Delay, because it has crossfeed as well as feedback parameters. Crossfeed can help create a more complex sound by sending some of one channel’s signal into the other (Fig. 4). Fig. 
4: The ancient Sonitus fx: Delay is excellent for creating a stereo spread from a mono input. Here it’s used as part of a custom FX chain in Cakewalk Sonar to add width to guitar parts. However, there are plenty of other options. One is to duplicate a mono guitar track, then process the copy through about 15-40 ms of delay, wet sound only (no dry signal). Pan the two tracks oppositely for a wide stereo image. Make sure you check the mix in mono; if the guitar sounds thinner, re-adjust the delay setting until the sound regains its fullness.
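The reason the mono check in tip 6 matters: when the dry track and its delayed copy collapse to mono, they comb-filter, with the first cancellation notch at 1/(2 × delay time) and further notches at its odd multiples. A small sketch showing where the notches land (the function name is mine):

```python
# When dry + delayed copies sum to mono, frequencies where the delay
# equals half a cycle (and odd multiples of that) cancel. The first
# notch sits at 1/(2 * delay); notches repeat at odd multiples of it.
def comb_notches(delay_ms, max_hz=2000):
    """List the cancellation frequencies (Hz) below max_hz."""
    base = 1000.0 / (2.0 * delay_ms)  # first notch, in Hz
    notches = []
    k = 1
    while base * k <= max_hz:
        notches.append(round(base * k))
        k += 2                         # odd multiples only
    return notches

print(comb_notches(20)[:4])  # prints [25, 75, 125, 175]
```

With a 20 ms delay the notches start at 25 Hz and are closely spaced, so the thinning is subtle; shorter delays push the first notch up into the guitar’s meaty midrange, which is why re-adjusting the delay time restores fullness.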
18. If you want analog sounds in a digital age, try these simple techniques by Craig Anderton Fancy signal processors aren’t always necessary to emulate some favorite guitar sounds and effects. In today’s digital world, a variety of programs and effects can be made to do your bidding. Want proof? Check out these five examples. For example... VINTAGE WA-WA EFFECTS Many people try to obtain a vintage wa sound simply by sweeping a highly resonant parametric EQ set to bandpass response. That alone won’t do it, because vintage analog wa pedals also have steep response rolloffs that reduce both high and low frequencies. But there is a way to use modern parametric EQs to re-create this effect (Fig. 1). Copy the guitar track so you have two “cloned” tracks set to the same level. In track 1, insert a parametric EQ set to bandpass (peak/dip) mode with about 6dB gain and Q (resonance) of around 8. Flip track 2 out of phase. Sweep the EQ over a range of about 200Hz – 2.2kHz. Fig. 1: The mixer channel on the left is going through a parametric stage of EQ. The channel on the right doesn’t go through an equalizer, but is flipped out of phase (the phase button is circled in red). Throwing one track out of phase causes the high and low frequencies to cancel, so all you hear is the filtered midrange sound—just like a real wa-wa. ADDING AMBIENT “AIR” Recording guitar direct can be simple and produce a clean sound, but sometimes it’s too clean, because there aren’t any mics to pick up the room reflections that give a sense of realism. To model these reflections, feed your guitar track through a multi-tap delay plug-in, or send it to at least two stereo buses with stereo delays where you can set independent delay times for the two channels. Next, set short, prime-number delay times (e.g., 3, 5, 7, 11, 13, 17, 19, and 23 milliseconds) to avoid resonant build-ups. 
Four delays is often all you need; I generally use 7, 11, 13, and 17ms, or 13, 17, 19, and 23 ms, depending on the desired room size (Fig. 2). Fig. 2: Finding delay lines that can give short, precise delays isn’t that easy, but Native Instruments’ Guitar Rig—shown here using two splits, each with its own stereo delay—can do the job. More delays provide a more complex ambience, but sometimes a simple ambience effect actually works better. If you want more “air,” try adding some feedback within the delay, but make sure it’s not enough to hear individual echoes. Experiment with the delay levels and pans, then mix the delayed sound in at a low level. THE CLOSED-BACK TO OPEN-BACK CABINET TRANSFORMATION With open-back cabinets, low-frequency waveforms exiting through the cabinet back partially cancel the low-frequency waveforms coming out the front. Emulate this effect by reducing bass somewhat; a low-frequency shelving filter works well, as does a high-pass filter. OUT-OF-PHASE PICKUP EMULATION Don’t have an out-of-phase switch? You can come close with a studio-type EQ (Fig. 3). Select both pickups at the guitar itself, and feed its output into a mixer channel. For the EQ, dial in a notch filter around 1,200Hz with a fairly broad Q (0.6 or so) and severe cut—around -15 to -18dB. Use a high shelf to boost about 8dB starting at 2kHz, and a low shelf to cut by -18dB starting at 140Hz. Tweak as needed for your particular guitar and pickups. Boost the level to compensate—like a real out-of-phase switch, this thins out the sound. Fig. 3: The Sonitus EQ set to emulate the sound of out-of-phase guitar pickups. THE BIG BASS ROOM BUILD-UP When a cabinet’s close to a wall, bass waves bouncing off the wall reinforce the waves coming out the cab’s front. This can produce a “rumble” due to walls and objects resonating, which EQ can’t imitate. 
For a killer rumble, split your guitar signal through an octave divider, then follow the octave divider with a lowpass EQ set to cut highs starting at 120Hz; this muddies the bass frequencies further. Then, mix the octave sound about -15dB below the main signal—just enough to give a “feel” of super-low bass.
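The wa-wa trick at the start of this article works because everything the EQ leaves untouched cancels when the polarity-flipped dry track is summed in. Here’s a deliberately idealized numerical check—the “EQ” is simplified to a pure gain on one frequency component, and real EQs add phase shift, so cancellation is less perfect in practice:

```python
import math

fs = 8000  # toy sample rate for the demonstration

def tone(freq, gain=1.0):
    """One second of a sine at the given frequency."""
    return [gain * math.sin(2 * math.pi * freq * n / fs) for n in range(fs)]

low, mid = tone(100), tone(800)
dry = [l + m for l, m in zip(low, mid)]          # track 2: unprocessed
eq_out = [l + 2 * m for l, m in zip(low, mid)]   # track 1: "EQ" boosts 800 Hz by 6dB
summed = [e - d for e, d in zip(eq_out, dry)]    # track 2 polarity-flipped and summed

# Everything outside the boosted band cancels; only the 800 Hz
# component (the "filtered midrange") survives.
residual = max(abs(s - m) for s, m in zip(summed, mid))
print(residual < 1e-9)  # prints True
```

The surviving signal is exactly the boosted band, which is why sweeping the EQ’s center frequency sweeps the apparent bandpass—just like a wa pedal.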
19. Whether for mixing or synth programming, touchscreens are having a major impact by Craig Anderton Mixers used to be so predictable: Sheet metal top, faders, knobs, switches, and often, a pretty hefty price tag. Sure, DAWs started including virtual mixers, but unless you wanted to mix with a mouse (you don’t), then you needed a control surface with . . . a sheet metal top, faders, knobs, switches, and a slightly less hefty price tag. Enter the touchscreen—and the paradigm changed. Costly and noisy moving faders have been replaced by the touch of a finger on a screen, and the controller’s digital soul provides more functionality at lower cost. And if your application can talk to a wireless network, iOS devices can provide wireless control. GENERAL CONTROL SURFACES iPads now replace expensive mechanical control surfaces. For example, Far Out Labs’ ProRemote for the iPad is Mackie Control Universal-compatible, and offers up to 32 channels (16 simultaneous on an iPad) with metering and 100mm “virtual moving faders.” Ableton Live fans can use Liine’s Griid, a control surface for Live’s clip grid, while Neyrinck’s V-Control Pro serves Pro Tools users but is compatible with several other programs as well. The cross-platform DAW Remote HD from EUM Lab supports the Mackie Control and HUI control surface protocols, and handles pretty much any DAW that can respond to those protocols. MIXER-SPECIFIC CONTROL SURFACES PreSonus is big on straddling the hardware/software worlds with their StudioLive mixers. First came Virtual Studio Live software for computer control; then SL Remote (Fig. 1), which links the computer to iOS devices for wireless mixer remote control. Yes, you can play a CD over your sound system, and tweak your mixer to optimize the sound as you walk around the venue—or control your own monitor mix, EQ, compression, and a lot more from onstage. Fig. 
1: PreSonus provides extensive software support for their StudioLive mixers, including an iPad remote and personal monitoring app for all iOS devices. PreSonus also introduced QMix, an iPhone/iPod touch app that basically replaces personal monitoring systems by letting you monitor from the mixer itself through their ingenious “wheel of me”—dial in the proportion of your channel to the rest of the mix (“more me!”). Lots of companies, including high-end ones, like iPad control—Yamaha, Allen & Heath, Behringer, Soundcraft, MIDAS, and others provide remotes for their digital mixers. iPAD ASSISTANCE Some mixers use the iPad as an accessory. Behringer’s XENYX USB Series mixers include an iPad dock; the mixer can send signal both to and from the iPad—use effects processing apps, spectrum analyzers, record into GarageBand, and the like. Alto Professional’s MasterLink Live mixer also has an iPad dock, with the iPad used for mix analysis, recording, and replacing a bunch of rack gear with iPad-controlled signal processing. MIXER MEETS RECORDING Why stop with mixing? The Alesis iO Mix looks like a dock, but it’s a four-channel recorder with an iPad control surface. Take the concept even further with WaveMachine Labs’ brilliant Auria, which packs a full-function 48-track recorder, with a complete mixer interface and plug-ins from PSP Audioware, into an iPad. It works with several tested interfaces; it sounds like science fiction, but it really works. Windows 8 enabled multi-touch for compatible laptops and touch monitors, and Cakewalk’s SONAR adapted the technology to a DAW environment (Fig. 2). Mixing with a touchscreen monitor is an interesting experience—I found it worked best if I laid the monitor on my desk, tilted it up at a slight angle like a regular mixer surface, and combined “swiping” for general mixing moves with a mouse for precise changes. Fig. 2: Starting with Windows 8, Cakewalk SONAR supported touchscreen control. 
In the “huge and not exactly cheap” touchscreen category there’s Slate Pro Audio’s Raven MTX (available exclusively from GC Pro), which has not only the same functionality as the big hardware mixers of old, but pretty much the same size as well. And for DJs, SmithsonMartin’s Emulator ELITE is a tour de force of touch control for programs like Native Instruments’ Traktor and Ableton Live. TOUCHSCREEN “SUPERMIXERS” Mackie’s DL1608 (Fig. 3) builds a rugged, pro-level hardware mixer exoskeleton around an iPad brain—although you can also slip out the iPad for wireless remote control. Fig. 3: Mackie’s DL1608 builds a hardware exoskeleton around an iPad brain, with the hardware handling all I/O and audio mixing/processing. It’s a serious mixer with the Mackie pedigree: 16 Onyx preamps with +48V phantom power, balanced outs (XLR mains, 1/4” TRS for the six auxes), and hardware DSP for the mixing and effects—the iPad is solely about control. Each input has 4-band EQ, gate, and compression; the outputs have a 31-band graphic EQ and compressor/limiter, along with global reverb and delay. If you don’t need as many inputs, the 8-channel DL806 also offers iPad control. Line 6’s StageScape M20d (Fig. 4) uses a custom 7” touchscreen for visual mixing based on a graphic, stage-friendly paradigm with icons representing performers or inputs; touching an icon opens up the channel parameters and DSP (including parametric EQs, multi-band compressors, feedback suppression on every input, and more). Fig. 4: The Line 6 M20d uses a custom touch screen whose icons represent an actual stage setup rather than simply showing conventional channel strips. There are also four master stereo effects engines with reverbs, delays and a vocal doubler. You can even do multi-channel recording to a computer, SD card, or USB drive, and it accepts an iPad for remote control. 
Like the Mackie, it’s serious: 12 mic/line ins (with automatic mic gain setting), four additional mic ins, and balanced XLR connectors for the auto-sensing main and monitor outputs. But the M20d also incorporates the L6 LINK digital networking protocol, so the mixer can communicate with Line 6’s StageSource speakers for additional setup and configuration options. ARE WE THERE YET? Although touch control hasn’t quite taken over the world yet, it’s making rapid strides in numerous areas. Of course, smart phones and iPads started the trend, but we’re now seeing applications from those consumer items creeping into our recording- and live performance-oriented world. Granted, sometimes touch isn’t the perfect solution—there’s something about grabbing and moving a hardware fader that’s tough to beat—so the future will likely be a continuing combination of tactile hardware and virtual software.
20. Don’t give up on that garage sale special yet! by Craig Anderton So you finally tracked down an ultra-rare, ultra-retro Phase Warper stomp box manufactured back in the mid-’70s. Not surprisingly, it doesn’t seem to work very well (if at all); sitting unused in someone’s garage for over a decade has taken its toll. But if you know a few basic procedures, you can often restore that antique and give it a new life. Here are some ways that have worked well for me to restore vintage effects. OXIDATION ISSUES One of your biggest problems will likely be oxidation, where metal surfaces become corroded due to stuff in the air (whether pollution in LA or salt spray in Maine). Oxidation shows up as scratchy sounds in pots, intermittent problems with switches, and occasional circuit failure. Fortunately, chemicals called contact cleaners can solve a lot of these problems. I’ve had good luck with DeoxIT from Caig Laboratories; they also make an Audio Survival Kit with cleaners for plastic faders and contact restoration as well as cleaning. But there are many other types (such as “Blue Shower” contact cleaner). Here are some ways you’d typically use contact cleaners. Scratchy pots. Pots work by having a metal wiper rub across a resistive strip, so the pot can become an “open circuit” if oxidation or film prevents these from making contact. To solve this, spray a small amount of contact cleaner into the pot’s case. With unsealed rotary pots, there’s usually an opening next to the pot’s three terminals (Fig. 1). Fig. 1: The red line points to an opening in the pot where you can squirt contact cleaner (photo by Petteri Aimonen). Slider (fader) pots have an obvious opening. Sealed pots are more difficult to spray; sometimes the pot can be disassembled, sprayed, and reassembled, and sometimes you can dribble contact cleaner down the side of the pot’s shaft, and hope some of it makes it to the innards. 
Once sprayed, you have to rotate the pot several times to “smear” the cleaner, and also flush away the gunk it’s dissolving. After rotating it about 20 times or so, spray in a little more contact cleaner. If the problem returns, spray again and see if that solves things. However, at some point a pot’s resistive element becomes so worn that no contact cleaner can restore it—you then need to replace the pot with one of equivalent value. Incidentally, people often forget that trimpots need attention too—even more so, given that they’re more exposed than regular pots. Spray them the way you would regular pots, but be very careful not to spray any trimpots that adjust internal voltages. If you have any doubts, it’s probably best to leave trimpots alone. IC sockets. IC sockets are also subject to oxidation. A quick fix is to simply take an IC extractor (these cost about $3), clamp its sides around the chip, and pull up very slightly on the chip (Fig. 2; just enough to loosen it—about 1/16”). Fig. 2: An IC extractor can pull an IC out of its socket, but that’s not what you want to do—just pull up very slightly. This picture shows a digital chip so it’s easier to see the pins; older effects boxes will likely have smaller analog chips. Spray some contact cleaner sparingly on the IC’s pins. Now push the IC back into its socket. Repeat this pull-push routine one more time, and the scraping of the chip pins against the socket in conjunction with the cleaner should have cleaned things enough to make good electrical contact. Afterward, it’s important to check that all the IC pins are not bent and go straight into the socket (Fig. 3). Fig. 3: Verify that the pins are not bent or compromised before re-applying power. However, use extreme caution—IC pins are fragile, which is why you don’t want to pull the chip out too far, nor do this procedure too often. If you destroy an ancient IC, you may not be able to find a replacement. Toggle switches. 
Rotary and pushbutton switches respond best to contact cleaners, but toggle switches are often sealed. These are not worth attempting to disassemble, but you may luck out and find a switch that does have some openings where you can squirt some contact cleaner. As with pots, work the switch several times to spread the cleaner. Other connectors. Some effects used nylon “Molex” connectors or similar multipin connectors. Connector pins in general can develop oxidation, and are also candidates for spraying. Sometimes they lift right up from their sockets, but often there are little plastic hooks or tabs to hold the connector in place. If you encounter resistance while trying to remove the connector, don’t force it—look for whatever might be impeding its movement. Battery connectors. Because these connectors carry the most current of anything in the effect, any oxidation here can be a real problem. Spray the connector, and snap/unsnap a battery several times. Two other battery tips: Check the battery connector tabs that mate with the battery’s positive terminal; if they don’t make good contact with the battery, push inward on the connector tabs with pliers or a screwdriver to encourage firmer contact. And if the battery has leaked over the connector, forget about trying to salvage it—solder in a new connector. BLOW IT AWAY Most older effects usually come free with large amounts of dust. Take the effect outside, plug a vacuum cleaner’s hose into the exhaust end, let the vacuum blow for a minute or so to clear out any dust stuck in the hose, then blow air on the effect to get rid of as much dust as possible. If you don’t do this, cleaning your pots and connectors may end up being a short-term solution as dust shakes loose over time and works its way back into various components. LOOSE SCREWS While you still have the unit apart, check whether any internal screws are loose—especially if they’re holding circuit boards in place. 
Enough vibration can loosen screws, and that could mean bad ground connections (many vintage effects use screws to provide an electrical path between circuit board and ground, or panel and ground). Try to turn each screw to determine if there’s any play. If there is, before tightening the screw check to see if there’s a lockwasher between the nut and the panel or other surface. If not, add a lockwasher before tightening the screw—providing the lockwasher teeth don’t contact something they shouldn’t. THOSE #@$$#^ FOOTSWITCHES Many old stomp boxes used push-on, push-off DPDT footswitches that were expensive then, and are even more expensive (and difficult to find) now. One source for replacements is Stewart-MacDonald’s Guitar Shop Supply. ELECTROLYTIC CAPACITORS Electrolytic capacitors (Fig. 4), which tend to have a blue or black “jacket” and are polarized (i.e., they have a + and – end, like a battery), contain a chemical that dries up over time. Fig. 4: The two capacitors on the right are typical electrolytic capacitors. The three on the left are variations on ceramic capacitors. With very old effects, or ones that have been subject to environmental extremes (e.g., being on the road with a rock and roll band), it can make a major sonic difference to replace old electrolytic capacitors with newer ones of the same value and voltage rating. Note that ceramic capacitors (which are usually disc-shaped), tantalum caps (like electrolytics, but generally smaller for a given value and with a lower voltage rating), and polyester caps like Orange Drops or mylar capacitors don’t dry up and last a long time. SAFER POWER Many older AC-powered boxes did not use fuses or three-conductor AC cords. Although I’m loath to modify a vintage box too much, making a concession to safety is a different matter. Fig. 5 shows wiring for a two-wire cord compared to a fused, three-wire type. A qualified technician should be able to modify your effect to use a three-wire power cord. Fig. 
5: The 3-wire cord’s ground typically connects to the effect’s main ground point (usually located near the power supply). Good luck! Your toughest tasks will be finding obsolete parts such as old analog delay chips and custom-made optoisolators, and dealing with effects where the IC identification was sanded off (a primitive form of copy protection). But once you restore an effect, it’s a great feeling…and when it’s closer to like-new condition, it will probably sound better as well.
  21. By Craig Anderton Cakewalk’s cross-platform, step-sequencing-oriented synthesizer has a ton of hidden features and shortcuts. Here are some favorites; the numbers correspond to the numbers in the screen shot. 1 BETTER SOUND QUALITY Each Element has a Quality parameter that defaults to Std. If the patch uses pitch sweeps, change this to Hi to minimize aliasing. To further minimize aliasing, click on the Options button (the Screwdriver icon toward the upper right) and check “Use sinc interpolation when freezing/rendering.” 2 RAPTURE MEETS MIDI GUITAR Click on the Options button. Check “Set Program as Multitimbral” so Rapture elements 1-6 receive MIDI channels 1-6, which can correspond to guitar strings 1-6. For the most realistic feel where playing a new note cuts off an existing note sounding on the same string, set each element’s Polyphony to 0 (monophonic with legato mode), and Porta Time to 0.0. 3 ENABLING PORTAMENTO Portamento is available only if an Element’s Polyphony = 0. If Polyphony = 1, only one voice can sound (monophonic mode), but without legato or the option to add portamento. 4 MULTI OPTION DETAILS An Element’s Multi option can thicken an oscillator without using up polyphony. However, it works only with short wavetables, not longer samples or SFZ files. 5 ACCEPTABLE FILE FORMATS Each Element can consist of a WAV, AIF, or SFZ multisample definition file. SFZ files can use WAV, AIF, or OGG files. Samples can be virtually any bit depth or sample rate, mono or stereo, and looped or one-shot. 6 MELODIC SEQUENCES When step sequencing Pitch, quantize to semitones by snapping to 12 levels or 24 levels (right-click in the sequencer to select). If you simply click within the step sequencer, each time you type “N” it generates a new random pattern. 
7 CHAINING ELEMENTS FOR COMMON FX You can route an oscillator (with its own DSP settings) through the next-higher-numbered Element’s EQ and Effects by right-clicking on the lower-numbered Element number and selecting “Chain to Next Element.” (You can’t do this with Element 6 because there is no higher-numbered element.) 8 KNOB DEFAULT VALUES Double-click on a knob to return it to its default value. 9 THE PROGRAMMER’S FRIEND: THE LIMITER When programming sounds with high resonance or distortion, enable the Limiter to prevent unpleasant sonic surprises. 10 FIT ENVELOPE TO WINDOW If the envelope goes out of range of the window, click on the strip just above the envelope graph, and choose Fit. 11 SET ENVELOPE LOOP START POINT Place the mouse over the desired node and type “L” on your QWERTY keyboard. Similarly, to set the Loop End/Sustain point, place the mouse over a node and type “S.” 12 CHANGE AN ENVELOPE LINE TO A CURVE Click on an envelope line segment, and drag to change the curve. 13 CHANGE LFO PHASE Hold down the Shift key, click on the LFO waveform, and drag left or right. 14 CHOOSING THE LFO WAVEFORM Click to choose the next higher-numbered waveform or right-click to choose the next lower-numbered waveform. But it’s faster to right-click above the LFO waveform display, and choose the desired LFO waveform from a pop-up menu. 15 PARAMETER KEYTRACKING The Keytracking window under the LFO graph affects a selected parameter (Pitch, Cut 1, Res 1, etc.) based on the keyboard note. Adjust keytracking by dragging the starting and ending nodes. Example: If Cut 1 is selected and the keytracking line starts low and goes high, the cutoff will be lower on lower keys and higher with higher keys. If the line starts high and goes low, the cutoff will be higher on lower keys and lower with higher keys. 16 CHANGE KEYTRACKING CURVE Click on the Keytrack line and drag up or down to change the shape. 
17 CHOOSE AN ALTERNATE TUNING Click on the Pitch button for the Element you want to tune. Click in the Keytrack window and select the desired Scala tuning file.

ADDING CUSTOM LFO WAVEFORMS Store WAV files (8 to 32-bit, any sample rate or length) in the LFO Waveforms folder (located in the Rapture program folder). Name each WAV consecutively, starting with LfoWaveform020.wav, then LfoWaveform021.wav, etc.

SMOOTHER HALL REVERB If you select Large Hall as a Master FX, create a smoother sound by loading the Small Hall into Global FX 1 and the Mid Hall into Global FX 2. Trim the reverb filter cutoffs to “soften” the overall reverb timbre.

THE MOUSE WHEEL The wheel can turn a selected knob up or down, change the level of all steps in a step sequence, scroll quickly through LFO waveforms, zoom in and out on envelopes, and more. Hold the Shift key for finer resolution, or the Ctrl key for larger jumps.

FINEST KNOB RESOLUTION Use the left/right arrow keys to edit a knob setting with five times the resolution of just click/dragging with the mouse.

NEW LOOK WITH NEW SKINS In the Rapture folder under Program Files, the Resources folder has bit-mapped files for Rapture graphic elements (e.g., background, knobs, etc.). Modify these to give Rapture a different look.

COLLABORATING ON SOUNDS To exchange files with someone who doesn’t have the same audio files used for an SFZ definition file, send the audio files separately and have your collaborator install them in Rapture’s Sample Pool library. This is where Rapture looks for “missing” SFZ files.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
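As a footnote to the custom LFO waveform tip above: the consecutive LfoWaveform###.wav naming scheme is easy to get wrong by hand. Here's a minimal Python sketch that generates the filenames Rapture expects; only the pattern and the starting index of 20 come from the article, so treat the function name and everything else as illustrative.

```python
def lfo_waveform_names(count, start=20):
    """Generate consecutive filenames in the LfoWaveform###.wav
    pattern, starting at LfoWaveform020.wav per the article."""
    return ["LfoWaveform%03d.wav" % (start + i) for i in range(count)]

# You could then rename a folder of custom WAVs to match, e.g.:
# for name, path in zip(lfo_waveform_names(len(wavs)), wavs):
#     os.rename(path, os.path.join(folder, name))
```

The zero-padded three-digit index keeps the files sorting in the same order Rapture loads them.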
  22. Get hands-on control over your DAW

by Craig Anderton

The Mackie Control became such a common hardware controller that most DAWs included “hooks” to allow them to be controlled by Mackie’s hardware. That also created another trend: other hardware controllers emulating the Mackie protocol so that non-Mackie controllers could work with these same DAWs because, from the DAW’s standpoint, they appeared identical to the Mackie Control. These controllers hook up through MIDI.

So, the basic procedure for having a DAW work with a Mackie-compatible device is: Assign a MIDI input to receive messages from the controller. If the controller is bi-directional (e.g., it has moving faders, so they need to receive position data from the DAW), you’ll need to assign a MIDI output as well; this may also be the case if the DAW expects to see a bi-directional controller. Then choose Mackie Control as a control surface within the DAW itself. If a program says there’s no Mackie Control connected (e.g., Acid Pro), there will often be an option to tell the program it’s an emulated Mackie Control.

Any controller faders usually control channel level, while rotaries control panpots. Buttons typically handle mute or solo, but may handle other functions, like record enable; this depends on how the DAW interprets the Mackie Control data. Also, there are typically Bank shift up/down and Track (also called Channel) shift up/down buttons (labeled Page and Data respectively on the Graphite 49). The Bank buttons change the group of 8 channels being controlled (e.g., from 1-8 to 9-16), while the Track buttons move the group one channel at a time (e.g., from 1-8 to 2-9). Many controllers have transport buttons as well (play, stop, rewind, etc.).

This article tells how to set up a basic Mackie Control-compatible controller that doesn’t use motorized faders. The Mackie Control protocol is actually quite deep, and some programs allow custom assignments for various controller controls.
That requires much more elaboration, so we’ll just concentrate on the basics here. We’ll use Samson’s Graphite 49 controller as our typical Mackie Control, but these same procedures work with pretty much any Mackie Control-compatible device. Note that the Graphite 49 has five virtual MIDI ports, and all remote control data is transmitted over Graphite’s virtual MIDI port #5. This allows the other ports to carry data like keyboard notes and controller positions to instruments and other MIDI-aware software.

We’ll assume you’ve loaded the preset that corresponds to the programs listed below. However, note that you may be able to call up a different preset for slightly different functionality. For example, if a preset’s upper row of buttons controls solo, those buttons can often control record enable instead if you call up a preset where the upper row controls record enable (e.g., the Logic preset).

APPLE LOGIC PRO

Graphite 49 looks like a Logic Control; as that’s the default controller, you usually won’t have to do any setup. However, if this has been changed for some reason, go Logic Pro > Preferences > Control Surfaces > Setup. In the Setup window, click the New pop-up menu button and choose Install. Click on the Mackie Logic Control entry, click on the Add button, click OK, and you’re done. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Record Enable, and the lower switches control Mute.

AVID PRO TOOLS

Go Setup > MIDI > Input Devices. Make sure MIDIIN5 (Samson Graphite 49) is checked, then click OK. Then go Setup > Peripherals. Click the MIDI Controllers tab. For Type, choose HUI. Set Receive From to MIDIIN5 (Samson Graphite 49). Send To must be set to something, so choose MIDIOUT2 (Samson Graphite 49). The faders, rotaries, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute.
However, the Bank and Channel buttons don’t work with the HUI protocol.

ABLETON LIVE

In Options > Preferences, choose MackieControl for Control Surface, and set Input to MIDIIN5 (Samson Graphite 49); Output doesn’t need to be assigned. In the MIDI Ports section, turn Remote On for the input that says MackieControl Input MIDIIN5 (Samson Graphite 49). The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Track Activator buttons.

CAKEWALK SONAR

In Edit > Preferences > MIDI Devices, set the MIDI In port to MIDIIN5 (Samson Graphite 49) and the MIDI Out port to MIDIOUT2 (Samson Graphite 49). Click Apply. Click on Control Surfaces under MIDI, then click the Add New Controller button in the upper right. For Controller/Surface, choose Mackie Control and verify that the Input and Output Ports match your previous MIDI port selections. Click OK, click Apply, click Close. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute.

MOTU DIGITAL PERFORMER

Go Setup > Control Surface Setup. Click the + sign to add a driver, and select Mackie Control. Under Input Port, choose Samson Graphite 49 Controller (channel 1). Click OK. The faders, rotaries, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute.

PRESONUS STUDIO ONE PRO

Under Studio One > Options > External Devices, choose Add. Select Mackie Control. Set Receive From to MIDIIN5 (SAMSON Graphite 49). Send To can be set to None. Click on OK, then click on OK again. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute.
PROPELLERHEAD REASON

Mackie Control works somewhat differently with Reason from a conceptual standpoint, because until Record was integrated with Reason in Version 6, Reason was not a traditional DAW. As a result, Graphite sends out specific control signals that apply to whatever device has the focus.

To set up, go Edit > Preferences and click the Control Surfaces tab. Click the Add button; select Mackie as the manufacturer, and Control for the model. Under input, select MIDIIN5 (Samson Graphite 49). For output, select MIDIOUT2 (Samson Graphite 49). Click OK, and make sure Standard is checked.

It’s easiest if you also use Graphite 49 as the master keyboard controller; go Options > Surface Locking and for Lock to Device, select Follow Master Keyboard. Also, create a track for any device you want to control, including processors or devices like the Mixer 14:2. When you click on that track, Graphite 49 will control the associated device. If you choose an Audio Track, slider S1 controls level, the F1 button controls solo, F9 controls mute, and rotary E8 controls pan. For example, if the 14:2 Mixer has the focus, the faders, rotaries, and buttons work as expected (as does the transport), although Bank and Channel Shift commands aren’t recognized. If SubTractor has the focus, the controls affect various SubTractor parameters. There’s a bit of trial and error involved with the various devices to find which Graphite 49 controls affect which parameters; you can always create custom presets to control specific instruments, but this goes beyond the scope of this article, as it involves delving into Reason’s documentation and assigning specific controls to specific MIDI channels and controller numbers.

Note that you can also lock the Graphite 49 to a specific device so that it will control that device, regardless of which track is selected. Go Options > Surface Locking and choose the device to be locked.

SONY ACID PRO

Under Options, check External Control.
Under Options > Preferences, click the MIDI tab, check the MIDIIN5 (Samson Graphite 49) box under “Make these devices available for MIDI input,” then click Apply. In the External Control and Automation tab, under Available Devices choose Mackie Control and click on Add. Double-click in the Status field and in the dialog box that opens, in the Device Type field choose Emulated Mackie Control Device. Select MIDIIN5 (Samson Graphite 49) for the MIDI input if it is not already selected. Click on OK, then click on OK in the next dialog box. The faders, rotaries, and Transport buttons work as expected, but only the first eight channels can be controlled; Bank and Track shifting isn’t possible. Graphite 49’s upper switches control Solo, and the lower switches control Mute.

SONY VEGAS PRO

The procedure is identical to Acid Pro, except that the Status field in the External Control and Automation page updates correctly after selecting Emulated Mackie Control Device instead of saying “No Mackie Devices Detected.” Note that only audio channels are controlled.

STEINBERG CUBASE

Go Devices > Device Setup. Click the + sign in the upper left corner and select Mackie Control from the pop-up menu. Under MIDI Input, select MIDIIN5 (Samson Graphite 49), then click on Apply. Click OK. The faders, rotaries, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute. However, I couldn’t figure out how to get Cubase to recognize Graphite 49’s Bank and Channel buttons; if anyone knows, please add a comment, and I’ll modify this article. Cubase offers a very cool feature: If you check Enable Auto Select, when you move a Graphite 49 fader it automatically selects that channel.
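As a technical footnote to the Mackie article above: the reason so many controllers can emulate a Mackie Control is that the protocol is ordinary MIDI. The byte values in this Python sketch come from commonly published Mackie/Logic Control implementation charts rather than Samson's documentation, so treat them as assumptions; the sketch just shows how a fader move and a transport button press look on the wire.

```python
def fader_move(strip, position):
    """Mackie Control fader moves are pitch-bend messages, one MIDI
    channel per channel strip (strips 0-7). position is 14-bit, 0-16383,
    split into 7-bit LSB and MSB data bytes."""
    return [0xE0 | strip, position & 0x7F, (position >> 7) & 0x7F]

def button(note, pressed):
    """Buttons (mute, solo, transport, bank/track shift) are note-on
    messages; velocity 127 = press, 0 = release."""
    return [0x90, note, 0x7F if pressed else 0x00]

PLAY = 0x5E  # assumed note number, from published Mackie Control charts

# Push fader 1 all the way up, then press Play:
msgs = [fader_move(0, 16383), button(PLAY, True)]
```

This is why a DAW set to "Mackie Control" can't tell a Graphite 49 from the real thing: both emit the same status and data bytes on the same virtual MIDI port.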
  23. Is your monitoring setup honest with you about your music?

by Craig Anderton

All the effort you put into recording, overdubbing, and mixing is for nothing if your monitoring system isn’t honest about the sounds you hear. The issue isn’t simply the speakers; the process of monitoring is deceptively complex, as it involves your ears, the acoustics of the room in which you monitor, the amp and cables that drive your monitors, and the speakers themselves. All of these elements work together to determine the accuracy of what you hear. If you’ve ever done a mix that sounded great on your system but fell apart when played elsewhere, you’ve experienced what can go wrong with the monitoring process, so let’s find out how to make things right.

HEARING VARIABLES

Ears are the most important components of your monitoring system. Even healthy, young ears aren’t perfect, thanks to a phenomenon quantified by the Fletcher-Munson curve (Fig. 1).

Fig. 1: The Fletcher-Munson curve indicates how the ear responds to different frequencies.

Simply stated, the ear has a midrange peak around 3-4kHz that’s associated with the auditory canal’s resonance, and does not respond as well to low and high frequencies, particularly at lower volumes. The response comes closest to flat at relatively high levels. The “loudness” control on hi-fi amps attempts to compensate for this by boosting the highs and lows at lower levels, then flattening out the response as you turn up the volume.

Another limitation is that a variety of factors can damage your ears — not just loud music, but excessive alcohol intake, deep sea diving, and just plain aging. I’ve noticed that flying temporarily affects high frequency response, so I wait at least 24 hours after getting off a plane before doing anything that involves critical listening. The few times I’ve broken that rule, mixes that seemed perfectly fine at the time played back too bright the next day.
It’s crucial to take care of your hearing so that at least your ears aren’t the biggest detriment to monitoring accuracy. Always carry the kind of cylindrical foam ear plugs you can buy at sporting goods stores so you’re ready for concerts, using tools (the impulse noise of a hammer hitting a nail is major!), or being anywhere your ears are going to get more abuse than someone talking at a conversational level. (Note that you should not wear tight-fitting earplugs on planes. A sudden change in cabin pressure could cause serious damage to your eardrums.) You make your living with your ears; care for them.

ROOM VARIABLES

As sound bounces off walls, the reflections become part of the overall sound, creating cancellations and additions depending on whether the reflections are in-phase or out-of-phase compared to the source signal reaching your ears. These frequency response anomalies affect how you hear the music (Fig. 2).

Fig. 2: If a reflection is out of phase with the original signal, there will be some degree of cancellation.

Also, placing a speaker against a wall seems to increase bass. This is because any sounds emanating from the rear of the speaker, or leaking from the front (bass frequencies are very non-directional), bounce off the wall. Because a bass note’s wavelength is so long, the reflection will tend to reinforce the main wave (Fig. 3).

Fig. 3: Most anomalies with room acoustics happen at low frequencies.

As the walls, floors, and ceilings all interact with speakers, it’s important that speakers be placed symmetrically within a room. Otherwise, if (for example) one speaker is 3 feet from a wall and another 10 feet from a wall, any reflections will be wildly different and affect the response. The subject of acoustically treating a room is beyond the scope of this article, but hiring a professional consultant to “tune” your room with bass traps and similar mechanical devices could be the best investment you ever make in your music.
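The cancellation shown in Fig. 2 can be quantified: a reflection that travels some extra distance arrives delayed, and any frequency whose half-wavelength (or odd multiple of it) equals that extra distance arrives out of phase and cancels. A quick Python sketch, assuming a speed of sound of 343 m/s (the exact value depends on temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s, roughly room temperature

def null_frequencies(extra_path_m, count=3):
    """Frequencies (Hz) where a reflection whose path is extra_path_m
    meters longer than the direct path arrives out of phase and
    cancels: f = (2k + 1) * c / (2 * d) for k = 0, 1, 2, ..."""
    return [(2 * k + 1) * SPEED_OF_SOUND / (2 * extra_path_m)
            for k in range(count)]

# A reflection path 1.715 m longer than the direct path puts
# nulls at 100 Hz, 300 Hz, 500 Hz, and so on up the spectrum.
```

The evenly spaced series of nulls is the "comb filter" effect acousticians talk about, and it's why moving your head or the speakers even slightly changes which frequencies get cancelled.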
WHAT ABOUT TUNING A ROOM WITH GRAPHIC EQUALIZATION?

Some studios use graphic equalizers to “tune” rooms, but this is not necessarily a cure-all. Equalizer-based room tuning involves placing a mic where you would normally mix, feeding pink noise or test tones through the system, and tuning an equalizer (which patches in as the last device before the power amp) for flat response. Several companies make products to expedite this process, such as RTAs (Real Time Analyzers) that include the noise generator, along with a calibrated mic and readout. You then diddle the sliders on a 1/3-octave graphic EQ to compensate for anomalies that show up on the readout. Some devices combine the RTA and EQ for one-stop analysis and equalization.

While this sounds good in theory, there are two main problems. First, if you deviate from the “sweet spot” where the microphone was placed, the frequency response will change. Second, heavily equalizing a poor acoustical space simply gives you a heavily-equalized, poor acoustical space.

However, newer methods of room tuning have been developed that take advantage of computer power, such as JBL’s MSC-1 and IK Multimedia’s ARC (Fig. 4).

Fig. 4: IK Multimedia’s ARC is a more evolved version of standard room tuning; it’s effective over a wider listening area than older methods, and has a more sophisticated feature set.

The most important point to remember about any kind of electronic room tuning is that, like noise reduction (which works best on signals that don’t have a lot of noise), room tuning works best on rooms that don’t have serious response anomalies. It’s best to make corrections acoustically to minimize standing waves, check for phase problems, experiment with speaker placement, and learn your speaker’s frequency response. Once you have your room as close to ideal as possible, a device like ARC can make it even better.

NEAR-FIELD MONITORS

Traditional studios have large monitors mounted at a considerable distance (6 to 10 ft.
or so) from the mixer, with the front flush to the wall, and an acoustically-treated control room to minimize response variations. The “sweet spot” — the place where room acoustics are most favorable — is designed to be where the mixing engineer sits at the console. However, in smaller studios, where space and budget are at a premium, near-field monitors have become the standard way to monitor (Fig. 5).

Fig. 5: There are tons of options for near-field monitors; KRK’s Rokit series monitors have been very popular for project studios.

With this technique, small speakers sit around 3 to 6 feet from the mixer’s ears, with the head and speakers forming a triangle (Fig. 6). The speakers should point toward the ears and be at ear level; if slightly above ear level, they should point downward toward the ears.

Fig. 6: Near-field monitor placement is important to achieve the most accurate monitoring.

Near-field monitors minimize the impact of room acoustics on the overall sound, as the speakers’ direct sound is far louder than the reflections coming off the room surfaces. They also do not have to produce a lot of power because of their proximity to your ears, which also relaxes the requirements for the amps feeding them.

However, placement in the room is still an issue. If placed too close to the walls, there will be a bass build-up. Although you can compensate with EQ (or possibly controls on the speakers themselves), the build-up will be different at different frequencies. High frequencies are not as affected because they are more directional. If the speakers are free-standing and placed away from the wall, back reflections from the speakers bouncing off the wall could affect the sound. You’re pretty safe if the speakers are more than 6 ft. away from the wall in a fairly large listening space (this places the first frequency null point below the normally audible range), but not everyone has that much room.
My crude solution is to mount the speakers a bit away from the wall on the same table holding the mixer, and pad the walls behind the speakers with as much sound-deadening material as possible. Nor are room reflections the only problem; with speakers placed on top of a console, reflections from the console itself can cause inaccuracies. To get around this problem, I use a relatively small main mixer, so the near-fields fit to the side of the mixer, and are slightly elevated. This makes as direct a path as possible from speaker to eardrum.

ANATOMY OF A NEAR-FIELD MONITOR

Near-field monitors are available in a variety of sizes and at numerous price points. Most are two-way designs, with (typically) a 6” or 8” woofer and a smaller tweeter. While a three-way design that adds a separate midrange driver might seem like a good idea, adding another crossover and speaker can complicate matters; a well-designed two-way system is better than a so-so three-way system. Although larger speaker sizes may be harder to fit in a small studio, the increase in low-frequency accuracy can be substantial. If you can afford (and your studio can accommodate) an 8” speaker, it’s worth the stretch.

There are two main monitor types, active and passive. Passive monitors consist of only the speakers and crossovers, and require outboard amplifiers. Active monitors incorporate the amps needed to drive the speakers from a line-level signal. With powered monitors, the power amp and speaker have hopefully been tweaked into a smooth, efficient team. Issues such as speaker cable resistance become moot, and protection can be built into the amp to prevent blowouts. Powered monitors are often bi-amped (i.e., a separate amp for the woofer and tweeter), which minimizes intermodulation distortion and allows for tailoring the crossover points and frequency response for the speakers being used. If you hook up passive monitors to your own amps, make sure they have adequate headroom.
Any clipping generates gobs of high-frequency harmonics, and sustained clipping can burn out tweeters.

SO WHICH MONITOR IS BEST?

You’ll see endless discussions on the net as to which near-fields are best. In truth, the answer may rest more on which near-field works best with your listening space and imperfect hearing response. How many times have you seen a speaker review where the writer notes with amazement that some new speaker “revealed sounds not heard before with other speakers”? This is to be expected. The frequency response of even the best speakers differs sufficiently that some speakers will indeed emphasize different frequencies compared to others, essentially creating a different mix.

Although it’s a cliché that you should audition several speakers and choose the model you like best, you can’t choose the perfect speaker, because such an animal doesn’t exist. Instead, you choose the one that colors the sound in the way you prefer. Choosing a speaker is an art. I’ve been fortunate enough to hear my music over some hugely expensive systems in mastering labs and high-end studios, so my criterion for choosing a speaker is simple: whatever makes my “test” CD sound the most like it did over the big-bucks speakers wins. If you haven’t had the same kind of listening experiences, book 30 minutes or so at a really good studio (you can probably get a price break since you’re not asking to use much of the facilities) and bring along one of your favorite CDs. Listen to the CD and get to know what it should sound like, then compare any speakers you audition to that standard.

One caution: if you’re comparing two sets of speakers and one set is even slightly louder than the other, you’ll likely choose the louder one as sounding better. To make a valid comparison, match the speaker levels as closely as possible.

A final point worth mentioning is that speakers have magnets which, if placed close to CRT screens, can distort the display.
Magnetically shielded speakers solve this problem, although it has become much less of an issue as LCD screens have pretty much taken over from CRTs.

LEARNING YOUR SPEAKER AND ROOM

Ultimately, because your own listening situation is imperfect, you need to “learn” your system’s response. For example, suppose you mix something in your studio that sounds fine, but sounds bass-heavy in a high-end studio with accurate monitoring. That means your monitoring environment is shy on the bass, so you boosted the bass to compensate (this is a common problem in project studios with small rooms). With future mixes, you’ll know to mix the bass lighter than normal. Compare midrange and treble as well. If vocals jump out of your system but lay back in others, then your speakers might be “midrangey.” Again, compensate by mixing midrange-heavy parts back a little bit.

You also need to decide on a standardized listening level to help combat the influence of the Fletcher-Munson curve. Many pros monitor at low levels when mixing, not just to save their ears, but also because if something sounds good at low volume, it will sound great when you really crank it up. However, this also means that the bass and treble might be mixed up a bit more than they should be to compensate for the Fletcher-Munson curve. So, before signing off on a mix, check the sound at a variety of levels. If at loud levels it sounds just a hair too bright and boomy, and at low levels it sounds just a bit bass- and treble-light, that’s probably about right.

WHAT ABOUT HEADPHONES?

Musicians on a budget often wonder about mixing over headphones, as $100 will buy a quality set of headphones, but not much in the way of speakers. Although mixing exclusively on headphones isn’t recommended by most pros, keep a good set of headphones around as a reality check (not the open-air type that sits on your ear, but the circumaural kind that totally surrounds your ear).
Sometimes you can get a more accurate bass reading using headphones than you can with near-fields, and when “proofing” your tracks, phones will show up imperfections you might miss with speakers. Careful, though: it’s easy to blast your ears with headphones and not know it.

SATELLITE SYSTEMS

“Satellite” systems combine tiny monitors that can’t really produce adequate bass with a subwoofer, a fairly large speaker that’s fed from a frequency crossover so that it reproduces only the bass region. The subwoofer usually mounts on the floor, against a wall; placement isn’t overly critical because bass frequencies are relatively non-directional. Although satellite-based systems can make your computer audio sound great or allow a less intrusive hi-fi setup in a tight living space, I wouldn’t mix a major label project over them. Perhaps you could learn these systems over time as well, but I personally have difficulty with the disembodied bass for critical mixes. However, using subwoofers with monitors that have decent bass response is another matter (Fig. 7).

Fig. 7: The PreSonus Temblor T10 active subwoofer has a crossover that’s adjustable from 50 to 300Hz.

The response of near-field monitors often starts to roll off around 50-100 Hz, which diminishes the strength of sub-bass sounds. Sounds in this region are a big part of a lot of dance music, and it’s important to know what’s going on down there. In this case, the subwoofer simply gives a more accurate indication of the bass region sound.

STRENGTH IN NUMBERS

Before signing off on a mix, listen through a variety of systems — car stereo speakers, hi-fi bookshelf speakers, big-bucks studio speakers, boom boxes, headphones, etc. This gives an idea of how well the mix will translate over a variety of systems. If the mix works, great — mission accomplished. But if it sounds overly bright on 5 out of 8 systems, pull back the brightness just a bit.
The mastering process can compensate for some of this, but mastering works best with mixes that are already good. Many “pro” studios will have big, expensive speakers, a pair of near-fields for reality testing, and some “junk” speakers sitting around to check what a mix will sound like over something like a cheap TV. Switching back and forth among the various systems can help “zero in” on the ultimate mix that translates well over any system.

The more you monitor, the more educated your ears will become. Also, the more dependent they will become on the speakers you use (some producers carry their favorite monitor speakers to sessions so they can compare the studio’s speakers to speakers they already know well). But even if you can’t afford the ultimate monitoring setup, with a bit of practice you can learn your system well enough to produce a good-sounding mix that translates well over a variety of systems – which is what the process is all about.
  24. Just because you're faking it doesn't mean you have to sound fake...

by Craig Anderton

I travel, so I stay in a lot of hotels. This means that in the last decade, I’ve seen 9,562 musicians singing/playing to a drum machine, and 3,885 synth duos where a couple of musicians play along with a sequencer or sampler. I’ve even been in that position myself a few times. Audiences have come to accept drum machines, but one person on stage being backed up by strings, horns, pianos, and ethereal choirs rings false, and the crowd knows it. Yet you don’t want to lose the audience due to monotony. Unless you’re a spellbinding performer, hearing the same voice and guitar or keyboard for an entire evening can wear out your welcome. In the process of playing live, I’ve learned a bit about what does — and doesn’t — work when doing a MIDI-based act. Hopefully some of the following ideas will apply to your situation too.

SEQUENCERS: NOT JUST NOTES

One way to avoid resorting to “fake” sounds is to maximize the “real” sounds you already have. As a guitar player, for me that involves processing my guitar sound. Switching among a variety of timbres helps keep interest up without having to introduce new instruments. However, this creates a problem: using footswitches and pedals to change sounds diverts your attention from your playing, since you now have to worry about hitting the right button at the right time.

For me, the solution is using amp sims that can accept MIDI continuous controllers to change several parameters independently. This is where a sequencer really shines — in addition to driving instrument parts, it can generate MIDI messages that change your sound automatically, with no pedal-pushing required. Amp sims running on a laptop are often ideal for this application because they tend to have very complete MIDI implementations, but many processors (Fig. 1) also accept continuous controller commands.
If not, they will likely be able to handle program changes, which can still be useful.

Fig. 1: Line 6’s POD HD500 can accept MIDI continuous controller commands that change selected parameters in real time.

For example, on one of my tunes the sequencer sends continuous controller data to a single program to vary delay feedback, delay mix, distortion drive, distortion output, and upper midrange EQ. As the song progresses, the various settings “morph” from one setting to another — rhythm guitar with no delay, low distortion drive, and flat EQ all the way to lead guitar with delay, lots of distortion, and a slight upper midrange boost. Within the main guitar solo itself, the delay feedback increases until the solo’s last note, at which point it goes to maximum so the echo “spills” over into the following rhythm part. Not only does this sound cool, it adds an interactive element. It’s not human beings, but still, I can play off some changes. What’s more, it doesn’t seem fake to the audience because all the sounds have a direct correlation to what’s being played.

It’s true that using a sequencer ties you to a set arrangement, with very few exceptions. However, although sections of the song are limited to a certain number of measures, you can nonetheless play whatever you want within those measures, so solos can still be different each time you play them.

THE VOCAL ANGLE

I really like the DigiTech and TC-Helicon series of processors for live vocals. Being able to generate harmonies is cool enough, but there’s a lot of MIDI power in some of these boxes (Fig. 2), and you can do the same type of MIDI program or continuous controller tricks as those mentioned above for guitar.

Fig. 2: DigiTech’s Vocalist Live Pro can use MIDI continuous controller and program changes to alter a wide range of parameters.

Once again, even though you’re generating a big sound, it’s all derived from your voice, so the audience can correlate what it hears to what’s seen on stage.
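The parameter "morphing" described above is, at the MIDI level, just a stream of continuous controller values ramping between settings. Here's a hedged Python sketch of how you might pre-compute such a ramp for a sequencer track; the idea of ramping delay feedback comes from the article, but the function name and step count are illustrative, and no specific controller-number assignment from DigiTech or Line 6 is implied.

```python
def cc_ramp(start, end, steps):
    """Linearly interpolated 7-bit controller values from start to end,
    inclusive, clamped to the MIDI 0-127 range."""
    if steps < 2:
        return [max(0, min(127, end))]
    span = end - start
    return [max(0, min(127, round(start + span * i / (steps - 1))))
            for i in range(steps)]

# Ramp delay feedback from dry (0) to maximum (127) across 8 events
# spread over the solo, so the echo "spills" into the next section:
ramp = cc_ramp(0, 127, 8)
```

In practice you'd space these events across beats or measures in the sequencer; more steps give a smoother sweep at the cost of denser MIDI traffic.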
THE SAMPLER CONNECTION

A decent sampler (or workstation with sampling capabilities; see Fig. 3) that includes a built-in MIDI sequencer is ideal as a live backup companion. It can hold any kind of drum sounds, hook up to external storage for fast loading and saving of sounds and songs, and use its sequencer to generate the continuous controller data needed to control signal processors.

Fig. 3: Yamaha’s Motif XF isn’t just a fine synthesizer/workstation, but includes flash memory for storing and playing back custom samples.

Samplers are also great because you can toss in some crowd-pleasing samples when the natives get restless. A few notes from a TV theme song, a politician making a fool of himself, a bit from a ’50s movie — they’re all fun. And to conserve memory, you can usually get away with sampling them at a fairly low sampling frequency.

When sampling bass parts for live use, it’s often best to avoid tones that draw a lot of attention to themselves, like highly resonant synth bass or slap bass. A round, full line humming along in the background fills the space just fine.

PLAYING WITH MYSELF

When I switch from rhythm guitar to playing a lead, it leaves a pretty big hole. To fill the space without resorting to sequencing other instruments, I sample some power chords and rhythm licks from my guitar, and sequence them behind solos. This doesn’t sound too fake, because the audience has already heard these sounds, so they just blend right in. Furthermore, the background parts don’t have to be mixed very high; adding just a bit creates a texture that fills out the sound nicely.

MULTI-INSTRUMENTALISTS

One of my favorite solo acts is a multi-instrumentalist in Vancouver named Tim Brecht, who plays guitar, keyboards, drums, flute, and several percussion instruments during the course of his act (he also does some interesting things with hand puppets, but that’s another story).
So when the sequenced drums play, people accept them because they know he can play drums. Similarly, on some songs I’ll play a keyboard part instead of guitar. This not only provides a welcome break, but when I sequence the same keyboard sound as a background part later on, it’s no big deal because the audience has already been exposed to it and seen me play it.

FOR BETTER DRUMS, USE A DRUMMER

Okay, maybe you can’t convince your favorite drummer friend to come along to the gig. But if you can have a real drummer program your drum sequences, it really does make a difference.

MIDI GUITAR?

I’m seeing more people using MIDI guitar live (Fig. 4), but not in heavy-metal or techno bands: these are typically solo acts in places like restaurants.

Fig. 4: Fishman's TriplePlay retrofits existing guitars for MIDI, and transmits the signals wirelessly to a computer.

They use MIDI guitar because, again, it reduces the fake factor. Even if you’re playing other instrument sounds, people can see that what you’re playing is creating the sound. Some changes can be more subtle, like triggering a sampler loaded with a variety of guitar samples so you can go from acoustic, to electric, to 12-string just by calling up different patches. Being able to layer straight guitar and synthesized sounds is a real bonus, as it reinforces the fact that the synth sounds relate to the guitar.

IT’S THE MUSIC THAT MATTERS

All of these tips have one goal: to make it easier to play live (in spite of the technology!), and to avoid sounding overly fake. People want to see you jumping around and having a good time, not loading sequences and fiddling with buttons. The less equipment you have to lug around, the better — both for reliability and minimal setup hassles.

When MIDI came out, it changed my performance habits forever. If nothing else, I haven’t done a footswitch tap dance while balancing on a volume pedal in years — and I hope never to do one again!
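The hands-free patch switching that replaces the footswitch tap dance ultimately rests on an even smaller MIDI message than a continuous controller: the program change mentioned earlier for processors with limited MIDI implementations. A minimal sketch (the channel and patch numbers are made up for illustration):

```python
def program_change(channel, program):
    """Raw 2-byte MIDI Program Change: status byte 0xC0 + channel
    (0-15), followed by a 7-bit patch number."""
    return bytes([0xC0 | (channel & 0x0F), program & 0x7F])

# Hypothetical cue in a sequence: call up amp-sim patch 12 for the solo,
# on MIDI channel 1 (channel index 0).
solo_patch = program_change(0, 12)
```

Two bytes per sound change is why even very old processors can do this — the sequencer just has to send the message at the right bar.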
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  25. Create your own drum loops, and you don't have to settle for what other people think would work well with your music

by Craig Anderton

Sure, there are some great sample libraries and virtual instruments available with fabulous drum loops. But it always seems that I want to customize them, or do something like add effects to everything but the kick drum. Fortunately, many libraries also include samples of the individual drums used to create the loops, so you can always stick ’em in your sampler and overdub some variations. But frankly, I find that process a little tedious, and decided it would be easier (and more fun) in the long run just to make my own customizable drum loops. There’s more than one way to accomplish that task; I tried several approaches, and here are some that worked for me.

ASSEMBLING FROM SAMPLES

For loops you can edit easily, import samples into a multitrack hard disk recording program, arrange them as desired at whatever tempo you’d like, bounce them together to create loops, then save the bounced tracks as WAV or AIFF files for use in tunes. Although you can create loops at any tempo, if you plan to turn them into “stretchable” loops with Acidization or REX techniques, I recommend a tempo of 100 BPM (see the article “How to Create Your Own Loops from an Audio File”). Let’s go through the process, step by step.

1. Collect the drum samples for your loop (Fig. 1) and create the tracks to hold them. Before bringing in any samples, consider saving this project as a template to make life easier if you want to make more loops in the future. (In addition to sample libraries, there’s an ancient, free Windows program called Stomper that can generate some very cool analog drum sounds.)

Fig. 1: A template set up in Cakewalk Sonar for one-measure loops. Samples (from the Discrete Drums Series 1 library) can be dragged from the browser (right pane) into the track view.

2. In your DAW, set the desired tempo and “snap” value (typically 16th notes, but your mileage may vary). Even if you plan to “humanize” drum hits instead of having them snap to a grid, I find it’s easier to start with them snapped, and then add variations later.

3. Import and place the samples (Fig. 2). I prefer to place each sound on its own track, although sometimes it’s helpful to spread the same sound across different tracks if specific sounds need to be processed together. For example, if you have a techno-type loop with a 16th-note high-hat part and want to accent the hats that fall on each quarter note, place the accented hats on their own track while the other hats go on a separate track. That way it’s easy to lower the level of the non-accented hats without affecting the ones on the quarter notes.

Fig. 2: The samples have all been placed to create the desired loop. Volume and pan settings have also been set appropriately.

4. Bounce and save. This is the final part of the process. One option is to simply bounce all the parts together into a mono or stereo track that you can save as a WAV or AIFF file. But I also make a stereo mix of all sounds except the kick, in case I want to replace the kick in some applications or add reverb to everything but the kick. I’ll often save a separate file for each drum sound as well, and all these variations go into a dedicated folder for that loop.

THE VALUE OF VARIATIONS

The advantage of giving each sound its own file is that it allows lots of flexibility when creating variation loops. Here are a few examples:

Slide a track back and forth a bit in time for “feel factor” applications. For example, move the snare ahead in time for a more “nervous” feel, or behind the beat for a more laid-back effect.

Change pitch in a digital audio editor (this assumes you can maintain the original duration) to create timbral variations.

Copy and paste to create new parts.
For example, a common electronica fill puts a snare drum on every 16th note, increasing linearly in level over one or two measures. If a snare is on beats 2 and 4, you can copy and offset the track until you have a snare on every 16th note. Premix the tracks together, fade the level in over the required number of measures, and there’s your fill.

Drop out individual tracks to create remix variations. Having each sound on its own track makes it easy to drop out and add in parts during the remix process.

Create “virtual aux busses” by bouncing together only the sounds you want to process. Suppose you want to add ring modulation to the toms and snare, but nothing else. Mute all tracks except the toms and snare, premix them together, import the file into a digital audio editing program capable of ring modulation, save the processed file, then import it in place of the existing tom and snare tracks.

TRICKS WITH COMPLETE LOOPS

After you have a collection of loops, it’s time to string them together and create the rhythm track. Here are some suggested variations:

Copy a loop, then transpose it down an octave while preserving duration. This really fattens up the sound if you mix the transposed loop behind the main loop.

When matching loops that aren’t at exactly the same tempo, I generally prefer to shift pitch to change the overall length rather than use time compression/expansion, which usually messes with the sound more (especially with program material). This only works if the tempo difference isn’t too big.

Take a percussion loop (tambourines, shakers, etc.) that’s more accent-oriented than rhythmic, then truncate an eighth note or quarter note from the beginning. Because the loop duration will be shorter than the main loop’s, it repeats a little sooner each time the main loop comes around, thus adding variations.
If you can’t loop individual tracks differently, copy and paste the truncated loop so the beginning of each new copy butts up against the end of the previous one.

Copy, offset, and change the levels of loops to create echo effects. Eighth-note and 16th-note echoes work well, but sometimes triplets are the right tool for the job.

APPLIED LOOPOLOGY

Of course, using drum loops can get a lot more involved than this, such as mixing and matching loops from different sample libraries. However, one problem is that loops from different sources are often equalized differently. This is a good time to use your digital audio editor or DAW’s spectrum analysis option to check the overall spectral content of each loop, so you can quickly compensate with equalization. Sure, you can do it by ear too, but spectrum analysis can sometimes save time by pointing out where the biggest differences lie.

Well, those are enough tips for now. The more creative you get with your loops, the more fun you (and your listeners) will have. Happy looping! looping! looping! looping! looping! looping!
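Several of the numbers behind these loop tips are easy to sanity-check with a little arithmetic: the length of a one-measure 4/4 loop at the recommended 100 BPM, the delay time for a rhythmic echo, and the pitch shift that goes along with matching one loop’s tempo to another by resampling instead of time-stretching. A quick sketch (the example tempos are arbitrary):

```python
import math

def measure_seconds(bpm, beats=4):
    """Duration of one measure: beats * (60 / BPM) seconds."""
    return beats * 60.0 / bpm

def echo_seconds(bpm, note_fraction=8):
    """Delay time for an echo on a given note division of the
    quarter-note beat (8 = eighth notes, 16 = 16th notes)."""
    return (60.0 / bpm) * (4.0 / note_fraction)

def varispeed_semitones(source_bpm, target_bpm):
    """Pitch shift (in semitones) that results from changing a loop's
    tempo by resampling rather than time-stretching:
    12 * log2(target / source)."""
    return 12.0 * math.log2(target_bpm / source_bpm)

# One 4/4 measure at 100 BPM lasts 2.4 seconds
# (at 44.1 kHz that's 2.4 * 44100 = 105,840 samples).
length = measure_seconds(100)

# Speeding a 100 BPM loop up to fit 106 BPM raises its pitch by
# roughly one semitone -- small tempo gaps, small pitch penalty.
shift = varispeed_semitones(100, 106)
```

This also shows why the pitch-shift trick only works when the tempo difference isn’t too big: doubling the tempo by resampling would shift the loop a full octave (12 semitones).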