Everything posted by Anderton

1. Can't get your bass to fit right in the mix? Then follow these tips

By Craig Anderton

If there’s one instrument that messes with people’s minds while mixing, it’s bass. Often the sound is too tubby or too thin, interferes too much with other instruments, or isn’t prominent enough . . . yet getting a bass to sit right in a mix is essential. So, here are ten tips on how to make your bass “play nice with others” during the mixing process.

1 CHECK YOUR ACOUSTICS

Small project studio rooms reveal their biggest weaknesses below a couple hundred Hz, because the length of the bass waves can be longer than your room dimensions—which leads to bass cancellations and additions that don’t tell the truth about the bass sound. Your first acoustic fix should be putting bass traps in the corners, but the better you can treat your room, the closer your speakers will be to telling the truth. If acoustic treatment isn’t possible, then do a reality check with quality headphones.

2 MUCH OF THE SOUND IS IN THE FINGERS

Granted, by the time you start mixing, it’s too late to fix the part—so as you record, listen to the part with mixing in mind. As just one example, fretted notes can give a tighter, more defined sound than open strings (which are often favored for live playing because they give a big bottom—but can overwhelm a recording). Also, the more a player can damp unused strings to keep them from vibrating, the “tighter” the part.

3 COMPRESSION IS YOUR FRIEND

Normally you don’t want to compress the daylights out of everything, but bass is an exception, particularly if you’re miking it. Mics, speakers, and rooms tend to have really uneven responses in the bass range—and all those anomalies add up. Universal Audio’s LA-2A emulation is just one of many compressors that can help smooth out response issues in a bass setup. Compression can help even out the response, giving a smoother, rounder sound.
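To make the idea concrete, here's a minimal sketch in Python (numpy assumed) of what a compressor does to a bass signal. It illustrates the general gain-reduction behavior, not any particular unit's algorithm, and the threshold, ratio, and release values are arbitrary:

```python
import numpy as np

def compress(x, threshold=0.2, ratio=4.0, release=0.999):
    """Simple peak compressor: reduce gain above the threshold,
    with a smoothed envelope follower for the release."""
    out = np.empty_like(x)
    env = 0.0
    for i, s in enumerate(x):
        env = max(abs(s), env * release)  # envelope with slow release
        if env > threshold:
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out[i] = s * gain
    return out

# Loud samples are pulled down; quiet ones pass untouched
bass = np.array([0.05, 0.8, -0.6, 0.02])
squashed = compress(bass)
```

For the parallel approach described next, you'd mix `compress(x)` back in with the untouched `x` itself.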
Also, try using parallel compression—i.e., duplicate the bass track, but compress only one of the tracks. Squash one track with the compressor, then add in the dry signal for dynamics. Some compressors include a dry/wet control to make it easy to adjust the blend of dry and compressed sounds.

4 THE RIGHT EQ IS CRUCIAL

Accenting the pick/pluck sound can make the bass seem louder. Try boosting a bit around 1kHz, then work upward to about 2kHz to find the “magic” boost frequency for your particular bass and bassist. Also consider trimming the low end on either the kick or the bass, depending on which one you want to emphasize, so that they don’t fight. Finally, many mixes have a lot of lower midrange buildup around 200-400Hz because so many instruments have energy in that part of the spectrum. It’s usually safe to cut bass a bit in that range to leave space for the other instruments, thus providing a less muddy overall sound; sometimes cutting just below 1kHz, like around 750-900Hz, can also give more definition.

5 TUNING IS KEY

If the bass foundation is out of tune, the beat frequencies that occur when its harmonics combine with other instruments are like audio kryptonite, weakening the entire mix. Beats within the bass itself are even worse. Tune, baby, tune! This can’t be emphasized enough. If you get to mixdown and find the bass has notes that are out of tune, cheat: Many pitch correction tools intended for vocals will work with single-note bass lines.

6 PUT HIGHPASS FILTERS ON OTHER INSTRUMENTS

To make for a tighter, more defined low end overall, clean up subsonics and low frequencies on instruments that don’t really have any significant low end (e.g., guitars, drums other than kick, etc.). The QuadCurve EQ in Cakewalk Sonar’s ProChannel has a 48dB/octave highpass filter that’s useful for cleaning up low frequencies in non-bass tracks. A low cut filter, as used for mics, is a good place to start.
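A steep version of that filter can be sketched in Python (numpy and scipy assumed; the 80Hz cutoff and the test signal are made-up examples). An 8th-order Butterworth rolls off at 48dB/octave, matching the slope mentioned above:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(x, cutoff_hz, sample_rate=44100, order=8):
    """Steep highpass (order 8 = 48dB/octave) for cleaning
    low end from non-bass tracks."""
    sos = butter(order, cutoff_hz, btype='highpass',
                 fs=sample_rate, output='sos')
    return sosfilt(sos, x)

# Hypothetical guitar track: 50Hz rumble plus a 440Hz note.
t = np.arange(44100) / 44100
track = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
cleaned = highpass(track, 80.0)  # rumble heavily attenuated, note intact
```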
By carving out more room on the low end, there will be more space for the bass to fit comfortably in the mix. The steeper the slope, the better.

7 TWEAK THE BASS IN CONTEXT

Because bass is such an important element of a song, what sounds right when soloed may not mesh properly with the other tracks. Work on bass and drums as a pair—that’s why they’re called the “rhythm section”—so that you figure out the right relationship between kick and bass. But also have the other instruments up at some point to make sure the bass supports the mix as a whole.

8 BEWARE OF PHASE ISSUES

It’s common to take a direct out along with a miked or amp out, then run them to separate tracks. Be careful, though: The signal going to the mic will hit later than the direct out, because the sound has to travel through the air to get to the mic. If you use two bass tracks, bring up one track, monitor in mono (not stereo), then bring up the other track. If the volume dips, or the sound gets thinner, you have a phase issue. If you’re recording into a DAW, simply slide the later track so it lines up with the earlier track. The timing difference will only be a few milliseconds (i.e., one millisecond for every foot of distance from the speaker), so you’ll probably need to zoom way in to align the tracks properly.

9 RESPECT VINYL’S SPECIAL REQUIREMENTS

Vinyl represents a tiny amount of market share, but it’s growing, and you never know when something you mix will be released on vinyl. So, if your project has even a slight chance of ending up on vinyl, pan bass to the precise center. Bass is one frequency range where there should be no stereo imaging.

10 DON’T FORGET ABOUT BASS AMP SIMS

You’ll find some excellent bass amp sims in Native Instruments’ Guitar Rig, Waves GTR, Line 6 POD Farm, and Peavey’s ReValver, as well as the dedicated Ampeg SVX plug-in (from the AmpliTube family) offered by IK Multimedia.
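Tip 8's one-millisecond-per-foot rule of thumb translates directly into a sample offset you can nudge the mic track by. A quick sketch in Python (the speed-of-sound figure is approximate, and varies with temperature):

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approximate, at room temperature

def mic_delay_samples(distance_ft, sample_rate=44100):
    """How many samples later the mic track arrives,
    relative to the direct (DI) track."""
    return round(distance_ft / SPEED_OF_SOUND_FT_PER_S * sample_rate)

# A mic two feet from the speaker cone lags the DI by roughly 1.8ms:
offset = mic_delay_samples(2.0)  # 78 samples at 44.1kHz
```

Slide the mic track earlier by that many samples, and the mono check described in tip 8 should stop dipping.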
IK Multimedia’s Ampeg SVX gives solid bass sounds in stand-alone mode, but when used as a plug-in, can also “re-amp” signals recorded direct. This shows the Cabinet page, where you set up your “virtual mic.” These open up the option of recording direct, but then “re-amping” during the mix to get more of a live sound. You’ll also have more control compared to using a “real” bass amp. Even if you don’t want to use a bass sim as your primary bass sound, don’t overlook the many ways they can enhance a physical bass sound. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered hundreds of tracks), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
2. It's Not Just about Notes, but about Emotion

by Craig Anderton

Vocals are the emotional focus of most popular music, yet many self-produced songs don't pay enough attention to the voice's crucial importance. Part of this is due to the difficulty of being objective enough to produce your own vocals; luckily, I've been fortunate to work with some great producers over the years, and have picked up some points to remember when producing myself. So, let's look at a way to step back and put more EDR (Emotional Dynamic Range) into your vocals.

WHAT IS EDR?

Dynamics isn't just about level variations, but also emotional variations. No matter how well you know the words to a song, begin by printing out or writing a copy of the lyrics. This will become a road map that guides your delivery through the piece. Reviewing a song and showing where to add emphasis can help guide a vocal performance.

Grab two different colored pens, and analyze the lyrics. Underline words or phrases that should be emphasized in one color (e.g., blue), and words that are crucial to the point of the song in the other color (e.g., red). For example, here are notes on the second verse of a song I recorded a couple years ago. In the first line, "hot" is an attention-getting word and rhymes with "got," so it receives emphasis. As the song concerns a relationship that revs up because of dancing and music, "music" is crucial to the point of the song and gets added emphasis. In line 2, "feel" and "heat" get emphasis, especially because "heat" refers back to "hot," and is foreshadowing to "Miami" in the fourth line. Line 3 doesn't get a huge emphasis, as it provides the "breather" before hitting the payoff line, which includes the title of the song ("The Miami Beat"). "Dancing" has major emphasis; "Miami beat" gets less because it re-appears several times in the tune . . . no point in wearing out its welcome.
By going through a song line by line, you'll have a better idea of where/how to make the song tell a story, create a flow from beginning to end, and emphasize the most important elements. Also, going over the lyrics with a fine-tooth comb is good quality control to make sure every word counts.

TYPES OF EMPHASIS

Emphasis is not just about singing louder. Other ways to emphasize a word or phrase are:

Bend pitch. Words with bent pitch will stand out compared to notes sung "straight." For example, in line 4 above, "dancing" slides around the pitch to add more emphasis.

Clipped vs. sustained. Following a clipped series of notes with sustained sounds tends to raise the emotional level. Think of Sam and Dave's song "Soul Man": The verses are pretty clipped, but when they go into "I'm a soul man," they really draw out "soul man." The contrast with the more percussive singing in the verses is dramatic.

Throat vs. lungs. Pushing air from the throat sounds very different compared to drawing air from the lungs. The breathier throat sound is good for setting up a fuller, louder, lung-driven sound. Abba's "Dancing Queen" highlights some of these techniques: the section of the song starting with "Friday night and the lights are low" is breathier and more clipped (although the ends of lines tend to be more sustained). As the song moves toward the "Dancing Queen" and "You can dance" climax, the notes are more sustained and less breathy.

Timbre changes. Changing your voice's timbre draws attention to it (David Bowie uses this technique a lot). Doubling a vocal line can make a voice seem stronger, but I suggest placing the doubled vocal back in the mix compared to the main vocal—enough to support, not compete.

Vibrato. Vibrato is often overused to add emphasis. You don't need to add much; think of Miles Davis, who almost never used vibrato, electing instead to use well-placed pitch-bending. (Okay, so he wasn't a singer...but he used his trumpet in a very vocal manner.)
Generally, vibrato "fades out" just before the note ends, like pulling back the mod wheel on a synthesizer. This adds a sense of closure that completes a phrase.

"Better" is not always better. Paradoxically, really good vocalists can find it difficult to hit a wide emotional dynamic range because they have the chops to sing at full steam all the time. This is particularly true of singers who come from a stage background, where they're used to singing for the back row. Lesser vocalists often make up for a lack of technical skill with craftier performances, and by fully exploiting the tools they have. If you have a great voice, fine—but don't end up like the guitarist who can play a zillion notes a second, but ultimately has nothing to say. Pull back and let your performance "breathe."

As vocals are the primary human-to-human connection in a great deal of popular music, reflect on every word, because every word is important. If some words simply don't work, it's better to rewrite the song than rely on vocal technique or artifice to carry you through.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
3. A Cable Is Not Just a Piece of Wire . . .

By Craig Anderton

If a guitar player hears something that an engineer says is impossible, lay your bets on the guitarist. For example, some guitarists can hear differences between different cords. Although some would ridicule that idea—wire is wire, right?—different cords can affect your sound, and in some cases, the difference can be drastic. What's more, there's a solid, repeatable, technically valid reason why this is so. However, cords that sound very different with one amp may sound identical with a different amp, or when using different pickups. No wonder guitarists verge on the superstitious about using a particular pickup, cord, and amp. But you needn't be subjected to this kind of uncertainty if you learn why these differences occur, and how to compensate for them.

THE CORDAL TRINITY

Even before your axe hits its first effect or amp input, much of its sound is already locked in due to three factors:

Pickup output impedance (we assume you're using standard pickups, not active types)

Cable capacitance

Amplifier input impedance

We'll start with cable capacitance, as that's a fairly easy concept to understand. In fact, cable capacitance is really nothing more than a second tone control applied across your pickup. A standard tone control places a capacitor from your "hot" signal line to ground. A capacitor is a frequency-sensitive component that passes high frequencies more readily than low frequencies. Placing the capacitor across the signal line shunts high frequencies to ground, which reduces the treble. However, the capacitor blocks lower frequencies, so they are not shunted to ground and instead shuffle along to the output. (For the technically-minded, a capacitor consists of two conductors separated by an insulator—a definition which just happens to describe shielded cable as well.) Any cable exhibits some capacitance—not nearly as much as a tone control, but enough to be significant in some situations.
However, whether this has a major effect or not depends on the two other factors (guitar output impedance and amp input impedance) mentioned earlier.

AMP INPUT IMPEDANCE

When sending a signal to an amplifier, some of the signal gets lost along the way—sort of like having a leak in a pipe that's transferring water from one place to another. Whether this leak is a pinhole or gaping chasm depends on the amp's input impedance. With stock guitar pickups, lower input impedances load down the guitar and produce a "duller" sound (interestingly, tubes have an inherently high input impedance, which might account for one aspect of the tube's enduring popularity with guitarists). Impedance affects not only level, but the tone control action as well. The capacitor itself is only one piece of the tone control puzzle, because it's influenced by the amp's input impedance. The higher the impedance, the greater the effect of the tone control. This is why a tone control can seem very effective with some amps and not with others. Although a high amp input impedance keeps the level up and provides smooth tone control action (the downside is that high impedances are more susceptible to picking up noise, RF, and other types of interference), it also accentuates the effects of cable capacitance. A cable that robs highs when used with a high input impedance amp can have no audible effect with a low input impedance amp.

THE FINAL PIECE OF THE PUZZLE

Our final interactive component of this whole mess is the guitar's output impedance. This impedance is equivalent to sticking a resistor in series with the guitar that lowers volume somewhat. Almost all stock pickups have a relatively high output impedance, while active pickups have a low output impedance. As with amp input impedance, this interacts with your cable to alter the sound. Any cable capacitance will be accented if the guitar has a high output impedance, and have less effect if the output impedance is low.
There's one other consideration: the guitar output impedance and amp input impedance interact. Generally, you want a very high amplifier input impedance if you're using stock pickups, as this minimizes loss (in particular, high frequency loss). However, active pickups with low output impedances are relatively immune to an amp's input impedance.

THE BOTTOM LINE

So what does all this mean? Here are a few guidelines.

Low guitar output impedance + low amp input impedance. Cable capacitance won't make much difference, and the capacitor used with a standard tone control may not appear to have much of an effect. Increasing the tone control's capacitor value will give a more pronounced high frequency cut. (Note: if you replace stock pickups with active pickups, keep this in mind if the tone control doesn't seem as effective as it had been.) Bottom line: you can use just about any cord, and it won't make much difference.

Low guitar output impedance + high amp input impedance. With the guitar's volume control up full, the guitar output connects directly to the amp input, so the same basic comments as above (low guitar output Z with low amp input Z) apply. However, turning down the volume control isolates the guitar output from the amp input. At this point, cable capacitance has more of an effect, especially if the control is a high-resistance type (greater than 250k).

High guitar output impedance + low amp input impedance. Just say no. This maims your guitar's level and high frequency response, and is not recommended.

High guitar output impedance + high amp input impedance. This is the common '50s/'60s setup scenario with a passive guitar and tube amp. In this case, cable capacitance can have a major effect. In particular, coil cords have a lot more capacitance than standard cords, and can make a huge sonic difference. However, the amp provides minimum loading on the guitar, which, with a quality cord, helps to preserve high end "sheen" and overall level.
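The loading described above is just a voltage divider, so the level loss from the resistive part of the impedances is easy to estimate. A sketch in Python (the impedance figures are hypothetical examples, not measurements of any particular gear):

```python
import math

def loading_loss_db(source_z_ohms, input_z_ohms):
    """Level drop from the voltage divider formed by the pickup's output
    impedance and the amp's input impedance (resistive approximation)."""
    return 20.0 * math.log10(input_z_ohms / (source_z_ohms + input_z_ohms))

# Hypothetical figures: a ~10k-ohm passive pickup into a 1M tube amp input
# loses almost nothing, while the same pickup into a 50k input drops ~1.6dB:
tube_amp = loading_loss_db(10_000, 1_000_000)  # about -0.09dB
low_z_in = loading_loss_db(10_000, 50_000)     # about -1.6dB
```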
Taking all the above into account, if you want a more consistent guitar setup that sounds pretty much the same regardless of what cable you use (and is also relatively immune to amplifier loading), consider replacing your stock pickups with active types. Alternatively, you can add an impedance converter ("buffer board") right after the guitar output (or for that matter, any effect such as a compressor, distortion box, etc. that has a high input impedance and low output impedance). This will isolate your guitar from any negative effects of high-capacitance cables or low impedance amp inputs.

If you're committed to using a stock guitar and high impedance amp, there are still a few things you can do to preserve your sound:

Keep the guitar cord as short as possible. The longer the cable, the greater the accumulated cable capacitance. Cable specs will include a figure for capacitance (usually specified in "picofarads per foot"). If you make your own cables, choose cable with the lowest pF per foot, consistent with cable strength. (Paradoxically, strong, macho cables often have more capacitance, whereas lightweight cables have less.)

Avoid coil cords, and keep your volume control as high up as possible.

Don't believe the hype about "audiophile cords." They may make a difference; they may not. If you don't hear any difference with your setup, then save your money and go with something less expensive.

Before closing, I should mention that this article does simplify matters somewhat, because there's also the issue of reactance, and that too interacts with the guitar cable capacitance. However, I feel that the issues covered here are primarily what influence the sound, so let's leave how reactance factors into this for a later day. Remember, if your axe doesn't sound quite right, don't immediately reach for the amp: There's a lot going on even before your signal hits the amp's input jack.
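For the technically-minded, the treble loss from cable capacitance can be ballparked as a simple RC lowpass using the pF-per-foot spec mentioned above. A sketch in Python (this ignores pickup inductance and reactance, per the same simplification the article makes, and the numbers are hypothetical):

```python
import math

def cable_cutoff_hz(source_impedance_ohms, capacitance_pf_per_ft, length_ft):
    """-3dB corner of the lowpass formed by the source impedance
    and total cable capacitance: f = 1 / (2*pi*R*C)."""
    c_farads = capacitance_pf_per_ft * length_ft * 1e-12
    return 1.0 / (2.0 * math.pi * source_impedance_ohms * c_farads)

# Hypothetical figures: 20ft of 30pF/ft cable. From a 10k source the corner
# sits well above hearing; from a 100k source it lands in the presence range:
f_hi = cable_cutoff_hz(10_000, 30, 20)   # about 26.5kHz - barely audible
f_lo = cable_cutoff_hz(100_000, 30, 20)  # about 2.65kHz - clearly duller
```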
And if a guitarist swears that one cord sounds different from another, that could very well be the case—however, now you know why that is, and what to do about it. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
4. Vocoders Used to Be Expensive and Super-Complex - But No More

by Craig Anderton

Heard any robot voices lately? Of course you have, because vocoded vocals are all over the place, from commercials to dance tracks. Vocoders have been on hits before, like Styx’s “Mr. Roboto” and Lipps Inc.’s “Funky Town,” but today they’re just as likely to be woven into the fabric of a song (Daft Punk, Air) as applied as a novelty effect. So, let's take a look at vocoder basics, and how to make them work for you.

VOCODER BASICS

Vocoders are best known for giving robot voice sounds, but they have plenty of other uses. A vocoder, whether hardware or virtual, has two inputs: instrument (the “carrier” input), and mic (the “modulator” input). As you talk into the mic, the vocoder analyzes the frequency bands where there’s energy, and opens up corresponding filters that process the carrier input. This impresses your speech characteristics onto the carrier’s signal.

Clockwise from top: Reason BV512, Waves Morphoder, Ableton Live Vocoder, Apple Logic Evoc 20

Some programs, including Cubase, Logic, Sonar, Reason, and Ableton Live, bundle in vocoders. However, until recently, the ability to sidechain a second input to provide the modulator (or carrier) was difficult to implement. Two common workarounds are to include a sound generator within the plug-in and use the input for the mic, which is the approach taken by Waves’ Morphoder; or, insert the plug-in in an existing audio track, and use what’s on the track as the carrier.

VOCODER APPLICATIONS

Talking instruments. To create convincing “talking instrument” effects, use a carrier signal rich in harmonics, with a complex, sustained waveform. Remember, even though a vocoder is loaded with filters, if nothing’s happening in the range of a given filter, then that filter will not affect the sound.
Vocoding an instrument such as flute gives very poor results; a guitar will produce acceptable vocoding, but a distorted guitar or big string pad will work best. Synthesizers generate complex sounds that are excellent candidates for vocoding.

Choir effects. To obtain a convincing choir effect, call up a voice-like program (e.g., pulse waveform with some lowpass filtering and moderate resonance, or sampled choirs) with a polyphonic keyboard, and use this for the carrier. Saying “la-la,” “ooooh,” “ahhh,” and similar sounds into the mic input, while playing fairly complex chords on the synthesizer, imparts these vocal characteristics to the keyboard sound. Adding a chorus unit to the overall output can give an even stronger choir effect.

Backup vocals. Having more than one singer in a song adds variety, but if you don’t have another singer at a session to create “call-and-response” type harmonies, a vocoder might be able to do the job. Use a similar setup to the one described above for choir effects, but instead of playing chords and saying “ooohs” and “ahhhhs” to create choirs, play simpler melody or harmony lines and speak the words for the back-up vocal. Singing the words (instead of speaking them) and mixing in some of the original mic sound creates a richer effect.

Cross-synthesis. No law says you have to use voice with a vocoder. For a really cool effect, use a sustained sound like a pad for the carrier, and drums for the modulator. The drums will impart a rhythmic, pulsing effect to the pad.

Crowd sounds. Create the sound of a chanting crowd (think political rally) by using white noise as the carrier. This multiplies your voice into what sounds like dozens of voices. This technique also works for making nasty horror movie sounds, because the voice adds an organic quality, while the white noise contributes an otherworldly, ghostly component.

Don’t forget to tweak.
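The analyze-bands-and-apply-envelopes process described under "Vocoder Basics" can be sketched in a few lines. This is a deliberately crude FFT-based channel vocoder in Python with numpy (no overlap, no noise injection) just to show the principle:

```python
import numpy as np

def vocode(modulator, carrier, n_bands=16, frame=256):
    """Minimal FFT channel vocoder: impose the modulator's per-band
    energy envelope onto the carrier, frame by frame."""
    n = min(len(modulator), len(carrier)) // frame * frame
    out = np.zeros(n)
    edges = np.linspace(0, frame // 2 + 1, n_bands + 1, dtype=int)
    for start in range(0, n, frame):
        M = np.fft.rfft(modulator[start:start + frame])
        C = np.fft.rfft(carrier[start:start + frame])
        Y = np.zeros_like(C)
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            c_energy = np.sqrt(np.sum(np.abs(C[lo:hi]) ** 2)) + 1e-12
            m_energy = np.sqrt(np.sum(np.abs(M[lo:hi]) ** 2))
            # Scale the carrier band to the modulator band's level, keeping
            # the carrier's spectral detail (its pitch) within the band.
            Y[lo:hi] = C[lo:hi] * (m_energy / c_energy)
        out[start:start + frame] = np.fft.irfft(Y, frame)
    return out
```

Where the modulator is silent, every band closes and the carrier is muted; where it has energy, the matching carrier bands open up.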
Some vocoders let you change the number of filters (bands) used for analysis; more filters (e.g., 16 and above) give higher intelligibility, whereas fewer filters create a more “impressionistic” sound. Also, many speech components that contribute to intelligibility are in the upper midrange and higher frequencies, yet few instruments have significant amounts of energy in these parts of the frequency spectrum. Some vocoders include a provision to inject white noise (a primary component of unpitched speech sounds) into the instrument signal to allow “S” and similar sounds to appear at the output. Different vocoders handle this situation in different ways.

The days when vocoders were noisy, complicated, expensive, and difficult-to-adjust hardware boxes are over. If you haven't experimented with a software vocoder lately, you just might be in for a very pleasant surprise.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
5. Yes, you really can use multiple audio interfaces simultaneously with a single computer

by Craig Anderton

You have a go-to interface that’s great, but then one day you run out of mic inputs. Too bad your computer can’t address more than one interface at a time . . . Or can it? Actually, both Macintosh and Windows computers can let you use more than one interface at a time, if you know the rules.

For Windows, although with rare exceptions you can’t aggregate ASIO devices, you can aggregate interfaces that work with WDM/KS, WASAPI, or WaveRT drivers. Just select one of these drivers in your host software, and all the I/O will appear as available inputs and outputs in your application (Fig. 1).

Fig. 1: Sonar X1 is set to WDM/KS, so all the I/O from a Roland Octa-Capture and DigiTech’s iPB-10 effects processor become available.

With the Mac, you can aggregate Core Audio interfaces. Open Audio MIDI Setup (located in Applications/Utilities), and choose Show Audio Window. Click the little + sign in the lower left corner; an Aggregate Device box appears. Double-click it to change its name ("Apollo+MBobMini" in Fig. 2). You'll see a list of available I/O. Check the interfaces you want to aggregate, then check "Resample" for the secondary interface or interfaces (Fig. 2); this tells the computer to treat your primary, or unchecked, interface as the clock source. Now all input and output options will be available in your host program.

Fig. 2: Universal Audio's Apollo is being supplemented by an Avid Mbox Mini.

If you encounter any problems, just go to the Audio MIDI Setup program’s Help, and search on Aggregation. Choose Combining Audio Devices, and follow the directions.

Craig Anderton is Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
6. Sometimes Little Improvements Add Up To Big Improvements

By Craig Anderton

The whole is equal to the sum of its parts…as anyone who ever used analog tape will attest. Who can forget that feeling of hearing yet another contribution to the noise floor whenever you brought up a fader, as one more track of tape hiss worked its way to the output? With digital recording, tape hiss isn’t an issue any more. But our standards are now more stringent, too. We expect 24-bit resolution, and noise floors that hit theoretical minimums. As a result, every little extra dB of noise, distortion, or coloration adds up, especially if you’re into using lots of tracks. A cheapo mic pre’s hiss might not make a big difference if it’s used only to capture a track of the lead singer in the punk band Snot Puppies of Doom, but if you’re using it to record twelve tracks of acoustic instruments, you will hear a difference. I’ve often stated that all that matters in music is the emotional impact, but still, it’s even better when that emotional impact is married with pristine sound quality. So, let’s get out the "audio magnifying glass" (even though they don’t work for mixing, headphones are great when you need to really pay attention to details on a track), and clean up our tracks … one dB at a time.

PREVENTING THE NOISE PROBLEM

Even in today’s digital world, there’s hiss from converters, guitar amps, preamps, direct boxes, instrument outputs, and more. The individual contribution in one track may not be much, but when low level signals aren’t masked by noise, you’ll hear a much more "open" sound and improved soundstage. (And if you don’t think extremely low levels of noise make that much of a difference, consider dithering—it’s very low level, but has a significant effect on our perception of sound.) The first way to reduce noise is prevention. Maybe it’s worth spending the bucks on a better mic pre if it’s going to shave a few dB off your noise figure. And what about your direct box?
If it’s active, it might be time for an upgrade there as well. If it’s not active but transformer-based instead, then that’s an issue in itself, as the transformer may pick up hum (first line of defense: re-orient it). Here are some additional tips:

Gain-staging (the process of setting levels as a signal travels from one stage to the next, so that one stage neither overloads the next stage, nor feeds it too little signal) is vital to minimizing noise, as you want to send the maximum level short of distortion to the next stage. But be careful. Personally, I’d rather lose a few dB of noise figure than experience distortion caused by an unintentional overload.

Crackles can be even more problematic than hiss. Use contact cleaner on your patch cord plugs, jack contacts, and controls. Tiny crackles can be masked during the recording process by everything else that’s making noises, but may show up under scrutiny during playback. In a worst-case situation, the surfaces of dissimilar metals may have actually started to crystallize. Not only can that generate noise, but these crystals are all potential miniature crystal radios, which can turn RFI into audio that gets pumped into the connection. Not good.

Make sure any unnecessary mixer channels are muted when you record. Every unmuted channel is another potential source of noise.

Unless you have a high-end sound card like the Lynx line, avoid sending any analog signals into your computer. Use digital I/O and a separate, remote converter.

Although most people use LCD monitors these days, if there's a CRT on while you’re recording, don’t forget that it’s pumping out a high frequency signal (around 15kHz). This can get into your mics. Turn it off while recording.

When recording electric guitar, pickups are prone to picking up hum and other interference. Try various guitar positions until you find the one that generates the minimum amount of noise.
If you have a Line 6 Variax, consider yourself fortunate—it won’t pick up hum, due to using a piezo pickup. No matter how hard you try, though, some noise is going to make it into your recorded tracks. That’s when it’s time to bring out the heavy artillery: noise removal, noise gating, and noise reduction.

DEALING WITH NOISE AFTER THE FACT

With a typical hard disk-based DAW, you have three main ways to get rid of constant noise (hiss and some types of hum): noise gating, noise removal, and noise reduction.

Noise gating is the crudest method of removing noise. As a refresher, a noise gate has a particular threshold level. Signals above this level pass through unimpeded to the gate out. Signals below this threshold (e.g., hiss, low level hum, etc.) cause the gate to switch off, so it doesn’t pass any audio and mutes the output. Early noise gates were subject to a variety of problems, like "chattering" (i.e., as a signal decayed, its output level would criss-cross over the threshold, thus switching the gate on and off rapidly). Newer gates (Fig. 1) have controls that can specify attack time so that the gate ramps up instead of slamming on, decay time controls so the gate shuts off more smoothly, and a "look-ahead" function so you can set a bit of attack time yet not cut off initial transients.

Fig. 1: The Gate section of Cubase’s VST Dynamics module (the compressor is toward the right) includes all traditional functions, but also offers gating based on frequency so that only particular frequencies open the gate. This makes it useful as a special effect as well as for reducing noise. In this case, the kick is being isolated and gated.

Noise gates are effective with very low level signals and tracks with defined "blocks" of sound with noise in between, but the noise remains when signal is present—it’s just masked. (For more about noise gates, check out the article "Noise Gates Don't Have to Be Boring.")

Noise removal is the manual version of noise gating (Fig. 2).
It’s a far more tedious process, but can lead to better results with "problem" material. Fig. 2: The upper vocal track (shown in Cakewalk Sonar) has had the noise between phrases removed manually, with fades added; the lower track hasn't been processed yet. With noise removal, you cut out the quiet spaces between the audio you want to keep, adding fades as desired to fade in or out of the silence, thus making any transitions less noticeable. However, doing this for all the tracks in a tune can be pretty time-consuming; in most cases, noise gating will do an equally satisfactory job. Noise reduction subtracts the noise from a track, rather than simply masking it. Because noise reduction is a complex process, you’ll usually need to use a stand-alone application such as Adobe Audition, Steinberg Wavelab, Sony Sound Forge (Fig. 3), or iZotope RX2, or a plug-in such as those from Waves. Fig. 3: Sound Forge's Noise Reduction tools have been around for years, but remain both effective and easy to use. With stand-alone programs, you’ll likely have to export the track in your DAW as a separate audio file, process it in the noise reduction program, then import it back into your project. Also, you'll generally need a sample of the noise you’re trying to remove (called a "noise print," in the same sense as a fingerprint). It need only be a few hundred milliseconds, but should consist solely of the signal you’re trying to remove, and nothing else. Once you have this sample, the program can mathematically subtract it from the waveform, thus leaving a de-noised signal. However, some noise reduction algorithms don’t need a noise print; instead, they use filtering to remove high frequencies when only hiss is present. This is related to how a noise gate works, except that it’s a more evolved way to remove noise, as (hopefully) only the frequencies containing noise are affected. 
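The noise-print idea can be sketched in a few lines of Python. This is a drastic simplification of what commercial tools do (they add overlapping windows, spectral smoothing, and artifact control), and every name here is illustrative; it just shows the "measure the noise spectrum, then subtract it frame by frame" concept:

```python
import numpy as np

def reduce_noise(audio, noise_print, frame=1024, reduction=1.0):
    """Simplified spectral subtraction: average the magnitude spectrum of a
    noise-only sample, then subtract that estimate from each audio frame."""
    # Average magnitude spectrum of the noise print, frame by frame
    n_frames = [np.abs(np.fft.rfft(noise_print[i:i + frame]))
                for i in range(0, len(noise_print) - frame + 1, frame)]
    noise_mag = np.mean(n_frames, axis=0)

    out = np.zeros(len(audio))
    for i in range(0, len(audio) - frame + 1, frame):
        spec = np.fft.rfft(audio[i:i + frame])
        mag = np.abs(spec)
        # Subtract the noise estimate, never letting magnitude go negative
        new_mag = np.maximum(mag - reduction * noise_mag, 0.0)
        # Keep the original phase; rescale only the magnitude
        spec *= new_mag / np.maximum(mag, 1e-12)
        out[i:i + frame] = np.fft.irfft(spec, frame)
    return out
```

Note how the `reduction` parameter mirrors the "use the minimum amount needed" advice: values below 1.0 subtract only part of the noise estimate, trading residual hiss for fewer artifacts.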
"Surgical" removal makes it possible to remove specific artifacts, like a finger squeak on a guitar string, or a cough in the middle of a live performance. The main way to do this is with a spectral view that shows not only amplitude and time, but also frequency. This makes it easy to pick out something like a squeak or cough from the music, then remove it (Fig. 4). Fig. 4: Adobe Audition's spectral view and "Spot Healing Brush Tool" make it easy to remove extraneous sounds. Here, a cough has been isolated and selected for removal. Audition does elaborate background copying and crossfading to "fill in" the space caused by the removal. While this all sounds good in theory—and 90% of the time, it’s good in practice too—there are a few cautions. Noise reduction works best on signals that don’t have a lot of noise. Trying to take out large chunks of noise will inevitably remove some of the audio you want to keep. Use the minimum amount of noise reduction needed to achieve the desired result; 6 to 10dB is usually pretty safe. Larger values may work, but may also add artifacts to the audio. Let your ears be the judge; I find audible artifacts, like distortion, more objectionable than a little bit of noise. You can sometimes save presets of particular noise prints, for example, of a preamp you always use. This lets you apply noise reduction to signals even if you can’t find a section with noise only. In some cases you may obtain better results by running the noise reduction twice with light noise removal rather than once with more extensive removal. So is all this effort worth it? I think you’ll be pretty surprised when you hear what happens to a mix when the noise contributed by each track is gone. Granted, it’s not the biggest difference in the world, and we’re talking about something that happens at a very low level. But minimizing even low-level noise can lead to a major improvement in the final sound … like removing the dust from a fine piece of art. 
Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  7. When it comes to recording, let’s get physical By Craig Anderton Until digital recording appeared, every function in analog gear had an associated control: Whether you were tweaking levels, changing the amount of EQ gain, or switching a channel to a particular bus, a physical device controlled that function. Digital technology changed that, because functions were no longer tied to physical circuits, but virtualized as a string of numbers. This gave several advantages: Controls are more expensive than numbers, so virtualizing multiple parameters and controlling them with fewer controls lowered costs. Virtualization also saved space, because mixers no longer had to have one control per function; they could use a small collection of channel strips—say, eight—that could bank-switch to control eight channels at a time. But you don’t get something for nothing, and virtualization broke the physical connection between gear and the person operating the gear. While people debate the importance of that physical connection, to me there’s no question that having a direct, physical link between a sound you’re trying to create and the method of creating that sound is vital—for several reasons. THE ZEN OF CONTROLLERS If you’re a guitar player, here’s a test: Quick—play an A#7 chord. Okay, now list the notes that make up the chord, lowest pitch to highest. Chances are you grabbed the A#7 instantly, because your fingers—your “muscle memory”—knew exactly where to go. But you probably had to think, even if only for a second, to name all the notes making up the chord. Muscle memory is like the DMA (Direct Memory Access) process in computers, where an operation can pull data directly from memory without having to go through the CPU. This saves time, and lets the CPU concentrate on other tasks where it truly is needed. 
So it is with controllers: When you learn one well enough so that your fingers know where to go and you don’t have to parse a screen, look for a particular control, click it with your mouse, then adjust it, the recording process becomes faster and more efficient. IMPROVING DAW WORKFLOW Would you rather hit a physical button labeled “Record” when it was time to record, or move your mouse around onscreen until you find the transport button and click on it? Yeah, I thought so. The mouse/keyboard combination was never designed for recording music, but for data entry. For starters, the keyboard is switches-only—no faders. The role of changing a value over a range falls to the mouse, but a mouse can do only one thing at a time—and when recording, you often want to do something like fade one instrument down while you fade up another. Sure, there are workarounds: You can group channels and offset them, or set up one channel to increase while the other decreases, and bind them to a single mouse motion. But who wants to do that kind of housekeeping when you’re trying to be creative? Wouldn’t you rather just have a bunch of faders in front of you, and control the parameters directly? Another important consideration is that your ears do not exist in a vacuum; people refer to how we hear as the “ear/brain combination,” and with good reason. Your brain needs to process whatever enters your ears, so the simple act of critical listening requires concentration. Do you really want to squander your brain’s resources trying to figure out workarounds to tasks that would be easy to do if you only had physical control? No, you don’t. But . . . PROBLEM 1: JUST BECAUSE SOMETHING HAS KNOBS DOESN’T GUARANTEE BETTER WORKFLOW Some controllers try to squeeze too much functionality into too few controls, and you might actually be better off assigning lots of functions to keyboard shortcuts, learning those shortcuts, then using a mouse to change values. 
I once used a controller for editing synth parameters (the controller was not intended specifically for synths, which was part of the problem), and it was a nightmare: I’d have to remember that, say, pulse width resided somewhere on page 6, then remember which knob (which of course didn’t have a label) controlled that parameter. It was easier just to grab a parameter with a mouse, and tweak. On the other hand, a system like Native Instruments’ Kore is designed specifically for controlling plug-ins, and arranges parameters in a logical fashion. As a result, it’s always easy to find the most important parameters, like level or filter cutoff. PROBLEM 2: IT GETS WORSE BEFORE IT GETS BETTER So do you just get a controller, plug it in, and attain instant software/hardware nirvana? No. You have to learn hardware controllers, or you’ll get few benefits. If you haven’t been using a controller, you’ve probably developed certain physical moves that work for you. Once you start using a controller, those all go out the window, and you have to start from scratch. If you’re used to, say, hitting a spacebar to begin playback, it takes some mental acclimation to switch over to a dedicated transport control button. Which begs the question: So why use the transport control, anyway? Well, odds are the transport controls will have not just play but stop, record, rewind, etc. Once you become familiar with the layout, you’ll be able to bounce around from one transport function to another far more easily than you would with a QWERTY keyboard set up with keyboard shortcuts. Think of a hardware controller as a musical instrument. Like an instrument, you need to build up some “muscle memory” before you can use it efficiently. I believe that the best way to learn a controller is to go “cold turkey”: Forget you have a mouse and QWERTY keyboard, and use the controller as often as possible. Over time, using it will become second nature, and you’ll wonder how you got along without it. 
But realistically, that process could take days or even months; think of spending this time as an investment that will pay off later. DIFFERENT CONTROLLER TYPES There are not just many different controllers, but different controller product “families.” The following will help you sort out the options, and choose a controller that will aid your workflow rather than hinder it. Custom controllers. These are designed to fit specific programs or software like a glove; examples include Ableton's Push controller, Roland’s V-Studio series (including the 700, 100, and 20 controllers), Steinberg’s Cubase-friendly series of CMC controllers, and the like. The text labels are usually program-specific, the knobs and switches have (hopefully) been laid out ergonomically, and the integration between hardware and software is as tight as Tower of Power’s rhythm section. If a control surface was made for a certain piece of software, it’s likely that will be the optimum hardware/software combination. Ableton's Push controller is an ideal match for Live 9. Softube's Console 1 is a different type of animal—it has software that emulates an analog channel strip and inserts in a DAW, with a hardware controller that provides a traditional, analog-style one-function-per-control paradigm. The control surface itself provides visual feedback, but if you want more detail, you can also see the parameters on-screen. Softube's Console 1 General-purpose DAW controllers. While designed to be as general-purpose as possible, these usually include templates for specific programs. They typically include hardware functions that are assumed to be “givens,” like tape transport-style navigation controls, channel level faders, channel pan pots, solo and mute, etc. A controller with tons of knobs/switches and good templates can give very fluid operation. 
Good examples of this are the Mackie Control Universal Pro (which has become a standard—many programs are designed to work with a Mackie Control and many hardware controllers can emulate the way a Mackie Control works), Avid Euphonix Artist series controllers (shown in the opening of this article), and Behringer BCF2000. Mackie Control Universal Pro There are also “single fader” hardware controllers (e.g., PreSonus FaderPort and Frontier Design Group AlphaTrack) which, while compact and inexpensive, take care of many of the most important control functions you’ll use. Digital mixers. For recording, a digital mixer can make a great hands-on controller if both it and your audio interface have a multi-channel digital audio port (e.g., ADAT optical “light pipe”). You route signals out digitally from the DAW, into the mixer, then back into two DAW tracks for recording the stereo mix. Rather than using the digital mixer to control functions within the program, it actually replaces some of those functions (particularly panning, fader-riding, EQ, and channel dynamics). As a bonus, some digital mixers include a layer that converts the faders into MIDI controllers suitable for controlling virtual synths, effects boxes, etc. Synthesizers/master keyboards. Many keyboards, like the Yamaha Motif series and Korg Kronos, as well as master controllers from M-Audio, Novation, CME, and others, build in control surface support. But even those without explicit control functions can sometimes serve as useful controllers, thanks to the wheels, data slider(s), footswitch, sustain switch, note number, and so on. As some sequencers allow controlling functions via MIDI notes, the keyboard can provide those while the knobs control parameters such as level, EQ, etc. Arturia's KeyLab 49 is part of a family of three keyboard controllers that also serve as control surfaces. Really inexpensive controllers. 
Korg's nanoKONTROL2 is a lot of controller for the money; it's basic, with volume, pan, mute, solo, and transport controls, but it's also Mackie-compatible. But if you're on an even tighter budget, remember that old drum machine sitting in the corner that hasn’t been used in the last decade? Dust it off, find out what MIDI notes the pads generate, and use those notes to control transport functions—maybe even arm record, or mute particular track(s). A drum machine can make a compact little remote if, for example, you like recording guitar far away from the computer monitor. The “recession special” controller. Most programs offer a way to customize QWERTY keyboard commands, and some can even create macros. While these options aren’t as elegant as using dedicated hardware controllers, tying common functions to key commands can save time and improve workflow. Overall, the hardware controllers designed for specific software programs will almost certainly be your best bet, followed by those with templates for your favorite software. But there are exceptions: While Yamaha’s Motif XS and XF series keyboards can’t compete with something like a Mackie Control, they serve as fine custom controllers for Cubase AI—which might be ideal if Cubase is your fave DAW. Now, let’s look at some specific issues involving control surfaces. MIDI CONTROL BASICS Most hardware control surfaces use MIDI as their control protocol. Controlling DAWs, soft synths, processors, etc. is very similar to the process of using automation in sequencing programs: In the studio, physical control motions are recorded as MIDI-based automation data, which, upon playback, control mixer parameters, soft synths, and signal processors. If you’re not familiar with continuous controller messages, they’re part of the MIDI spec and alter parameters that respond to continuous control (level, panning, EQ frequency, filter cutoff, etc.). Switch controller messages have two states, and cover functions like mute on/off. 
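To make the continuous-controller idea concrete, here's a small Python sketch: a 7-bit MIDI CC value (0-127) mapped onto a parameter range, plus a simple one-pole "lag" smoother of the kind some instruments use to round off the steps between adjacent values. Names are illustrative, not from any specific product:

```python
def cc_to_param(cc_value, lo=0.0, hi=1.0):
    """Map a 7-bit MIDI CC value (0-127) onto a parameter range.
    The result is inherently quantized to 128 discrete steps."""
    return lo + (hi - lo) * (cc_value / 127.0)

class LagSmoother:
    """One-pole lag filter: glides toward each new target value instead of
    jumping, smoothing the stair-steps between adjacent CC values."""
    def __init__(self, coeff=0.99):
        self.coeff = coeff     # closer to 1.0 = slower, smoother response
        self.state = 0.0

    def process(self, target):
        self.state += (1.0 - self.coeff) * (target - self.state)
        return self.state
```

Calling `LagSmoother.process()` once per audio block with the latest CC-derived value produces a continuous glide rather than 128 audible jumps.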
There are 128 numbered controllers per MIDI channel. Some are recommended for specific functions (e.g., controller #7 affects master volume), while others are general-purpose controllers. Controller data is quantized into 128 steps, which gives reasonably refined control for most parameters. But for something like a highly resonant filter, you might hear a distinct change as a parameter changes from one value to another. Some devices interpolate values for a smoother response. MAPPING CONTROLS TO PARAMETERS With MIDI control, the process of assigning hardware controllers to software parameters is called mapping. There are four common methods: Novation's low-cost Nocturn controller features their Automap protocol, which identifies plug-in parameters, then maps them automatically. In this screen shot, the controls are being mapped to Solid State Logic's Drumstrip processor for drums. “Transparent” mapping. This happens with controllers dedicated to specific programs or protocols: They’re already set up and ready to go, so you don’t have to do any mapping yourself. Templates. This is the next easiest option. The software being controlled will have default controller settings (e.g., controller 7 affects volume, 10 controls panning, 72 edits filter cutoff, etc.), and loading a template into the hardware controller maps the controls to particular parameters. MIDI learn. This is almost as easy, but requires some setup effort. In the software, you select a parameter and enable “MIDI learn” (typically by clicking on a knob or switch—ctrl-click on the Mac, right-click in Windows). Twiddle the knob you want to have control the parameter; the software recognizes what’s sent and maps it. Fixed assignments. 
In this case, either the controller generates a fixed set of controller messages, and you need to edit the target program to accept that particular set; or the target software has specific assignments it wants to see, and you need to program your controller to send them. THE “STAIR-STEPPING” ISSUE Rotating a “virtual front panel” knob in a soft synth may have higher resolution than controlling it externally via MIDI, which is limited to 128 steps of resolution. In practical terms, this means a filter sweep that sounds totally smooth when done within the instrument may sound “stair-stepped” when controlled with an external hardware controller. While there’s no universal workaround, some synthesizers have a “slew” or “lag” control that rounds off the square edges caused by transitioning from one level to another. RECONCILING PHYSICAL AND VIRTUAL CONTROLS Controllers with motorized faders offer the advantage of having the physical control always track what the corresponding virtual control is doing. But with any controller that doesn’t use motorized faders, one of the big issues is punching in when a track already contains control data. If the physical position of the knob matches the value of the existing data, no problem: Punch in, grab the knob, and go. But what happens if the parameter is set to its minimum value, and the knob controlling it is full up? There are several ways to handle this. Instant jump. Turn the knob, and the parameter jumps immediately to the knob’s value. This can be disconcerting if there’s a sudden and unintended change—particularly live, where you don’t have a chance to re-do the take! Match-then-change. Nothing happens when you change the physical knob until its value matches the existing parameter value. Once they match, the hardware control takes over. For example, suppose a parameter is at half its maximum value, but the knob controlling the parameter is set to minimum. 
As you turn up the knob, nothing happens until the knob matches the parameter value. Then as you continue to move the knob, the parameter value follows along. This provides a smooth transition, but there may be a lag between the time you start to change the knob and when it matches the parameter value. Add/subtract. This technique requires continuous knobs (i.e., data encoder knobs that have no beginning or end, but rotate continuously). When you call up a preset, regardless of the knob position, turning it clockwise adds to the preset value, while turning it counter-clockwise subtracts from the value. Motorized faders. This requires bi-directional communication between the control surface and software, as the faders move in response to existing automation values—so there’s always a correspondence between physical control settings and parameter values. This is the best option: Just grab the fader and punch. The transition will be both smooth and instantaneous. Parameter nulling. This is becoming less common as motorized faders become more economical. With nulling, there are indicators (typically LEDs) that show whether a controller’s value is above or below the existing value. Once the indicators show that the value matches (e.g., both LEDs light at the same time), punching in will give a smooth transition. IS THERE A CONTROLLER IN YOUR FUTURE? Many musicians have been raised with computers, and are perfectly comfortable using a mouse for mixing. However, it’s often the case that when you sit that person down in front of a controller, and they start learning how to actually use it, they can’t go back to the mouse. In some ways, we’re talking about the same kind of difference as there is between a serial and parallel interface: The mouse can only control one parameter at a time, whereas a control surface lets you move groups of controls, essentially turning your mix from a data-entry task into a performance. And I can certainly tell you which one I prefer! 
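Incidentally, the "match-then-change" behavior described earlier (often called "soft takeover" or "pickup" mode) is easy to model in software. Here's a hypothetical sketch; the class and method names are mine, not any vendor's API:

```python
class SoftTakeover:
    """Match-then-change (pickup) logic for a non-motorized knob: hardware
    moves are ignored until the knob crosses the stored parameter value,
    at which point the knob takes over, so the value never jumps."""
    def __init__(self, param_value):
        self.value = param_value       # current automation/parameter value
        self.picked_up = False
        self.last_knob = None

    def knob_moved(self, knob_value):
        if not self.picked_up:
            prev = self.last_knob if self.last_knob is not None else knob_value
            lo, hi = min(prev, knob_value), max(prev, knob_value)
            if lo <= self.value <= hi:   # knob touched or crossed the value
                self.picked_up = True
            self.last_knob = knob_value
        if self.picked_up:
            self.value = knob_value
        return self.value
```

So if the parameter sits at 64 and the knob starts at 0, moves to 10 and 40 are ignored; once the knob sweeps past 64, it takes over and tracks directly from then on.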
  8. Improve your mixes by avoiding these seven mixing taboos By Craig Anderton If you listen to a lot of mixes coming out of home and project studios, after a while you notice a definite dividing line between the people who know what they’re doing, and the people who commit one or more of the Seven Deadly Sins of Mixing. You don’t want to be a mixing sinner, do you? Of course not! So, check out these tips. 1. The Disorienting Room Space This comes from using too many reverbs: A silky plate on the voice, a big room on the snare, shorter delays on guitar . . . concert hall, or concert hell? Even if the listener can’t identify the problem, they’ll know that something doesn’t sound quite right because we’ve all logged a lifetime of hearing sounds in acoustical spaces, so we inherently know what sounds “right.” Solution: Choose one reverb as your main reverb that defines the characteristics of your imaginary “room.” Insert this in an aux bus. If you do use additional reverb on, say, voice, use this second reverb as a channel insert effect but don’t rely on it for all your vocal reverb; make up the difference by sending the vocal to the reverb aux bus to add in a bit of the common room reverb. The end result will sound much more realistic. 2. Failure to Mute All those little pops, snorks, hisses, and hums can interfere with a mix’s transparency. Even a few glitches here and there add up when multiplied over several tracks. Solution: Automate mutes for when vocalists aren’t singing, during the spaces between lead guitar solos, and the like. Automating mutes independently of fader-style level automation lets you use each for what it does best. Your DAW may even have some kind of DSP option that, like a noise gate, strips away all signals below a certain level and deletes these regions from your track (Fig. 1). Fig. 1: Sonar’s “Remove Silence” DSP has been applied to the vocal track along the bottom of the window. 3. 
"Pre-Mastering" a Mix You want your mix to “pop” a little more, so you throw a limiter into your stereo bus, along with some EQ, a high-frequency exciter, a stereo widener, and maybe even more . . . thus guaranteeing your mastering engineer can’t do the best possible job with a fantastic set of mastering processors (Fig. 2). Fig. 2: I was given this file to master, but what could possibly be done with a file that had already been compressed into oblivion? Solution: Unless you really know what you’re doing, resist the temptation to “master” your mix before it goes to the mastering engineer. If you want to listen with processors inserted to get an idea of what the mix will sound like when compressed, go ahead—but hit the bypass switch before you mix down to stereo (or surround, if that’s your thing). 4. Not Giving the Lead Instrument Enough Attention This tends to be more of a problem with those who mix their own music, because they fall in love with their parts and want them all to be heard. But the listener is going to focus on the lead part, and pay attention to the rest of the tracks mostly in the context of supporting the lead. Solution: Take a cue from your listeners: keep the lead up front, and mix the supporting parts so they frame the lead rather than compete with it. 5. Too Much Mud A lot of instruments have energy in the lower midrange, which tends to build up during mixdown. As a result, the lows and highs seem less prominent, and the mix sounds muddy. Solution: Try a gentle, relatively low-bandwidth cut of a dB or two around 300-500Hz on those instruments that contribute the most lower midrange energy (Fig. 3). Or, try the famous “smile” curve that accentuates lows and highs, which by definition causes the midrange to be less prominent. Fig. 3: Reducing some lower midrange energy in one or more tracks (in this case, using SSL’s X-EQ equalizer) can help toward creating a less muddy, more defined low end. 6. 
Dynamics Control Issues We’ve already mentioned why you don’t want to compress the entire mix, but pay attention to how individual tracks are compressed as well. Generally, a miked bass amp track needs a lot of compression to make up for variations in amp/cabinet frequency response; compression smooths out those anomalies. You also want vocals to stand out in the mix and sound intimate, so they’re good candidates for compression as well. Solution: Be careful not to apply too much compression, but too little compression can be a problem, too. Try increasing the compression (i.e., lower threshold and/or higher ratio) until you can “hear” the effect, then back off until you don’t hear the compression any more. The optimum position is often between these two extremes: Enough to make a difference, but not enough to be heard as an “effect.” 7. Mixing in an Acoustically Untreated Room If you’re not getting an accurate read on your sound, then you can’t mix it properly. And it won’t sound right on other systems, either. Solution: Even a little treatment, like bass traps, “clouds” that sit above the mix position, and placing near-field speakers properly so you’re hearing primarily their direct sound rather than any reflected sound can help. Also consider using really good headphones as a reality check. 
  9. Compressors are Essential Recording Tools - Here's How They Work By Craig Anderton Compressors are some of the most used, and most misunderstood, signal processors. While people use compression in an attempt to make a recording "punchier," it often ends up dulling the sound instead because the controls aren't set optimally. Besides, compression was supposed to become an antique when the digital age, with its wide dynamic range, appeared. Yet the compressor is more popular than ever, with more variations on the basic concept than ever before. Let's look at what's available, pros and cons of the different types, and applications. THE BIG SQUEEZE Compression was originally invented to shoehorn the dynamics of live music (which can exceed 100 dB) into the restricted dynamic range of radio and TV broadcasts (around 40-50 dB), vinyl (50-60 dB), and analog tape (40 dB to 105 dB, depending on tape type, speed, and the type of noise reduction used). As shown in Fig. 1, this process lowers signal peaks while leaving lower levels unchanged, then boosts the overall level to bring the signal peaks back up to maximum. (Bringing up the level brings up any noise as well, but you can't have everything.) Fig. 1: The first, black section shows the original audio. The middle, green section shows the same audio after compression; the third, blue section shows the same audio after compression and turning up the output control. Note how softer parts of the first section have much higher levels in the third section, yet the peak values are the same. Even though digital media have a decent dynamic range, people are accustomed to compressed sound. Compression has been standard practice to help soft signals overcome the ambient noise in typical listening environments; furthermore, analog tape has an inherent, natural compression that engineers have used (consciously or not) for well over half a century. There are other reasons for compression. 
With digital encoding, higher levels have less distortion than lower levels—the opposite of analog technology. So, when recording into digital systems (tape or hard disk), compression can shift most of the signal to a higher overall average level to maximize resolution. Compression can create greater apparent loudness (commercials on TV sound so much louder than the programs because of compression). Furthermore, given a choice between two roughly equivalent signal sources, people will often prefer the louder one. And of course, compression can smooth out a sound—from increasing piano sustain to compensating for a singer's poor mic technique. COMPRESSOR BASICS Compression is often misapplied because of the way we hear. Our ear/brain combination can differentiate among very fine pitch changes, but not amplitude. So, there is a tendency to overcompress until you can "hear the effect," giving an unnatural sound. Until you've trained your ears to recognize subtle amounts of compression, keep an eye on the compressor's gain reduction meter, which shows how much the signal is being compressed. You may be surprised to find that even with 6dB of compression, you don't hear much apparent difference—but bypass the sucker, and you'll hear a change. Compressors, whether software- or hardware-based, have these general controls (Fig. 2): Fig. 2: The compressor bundled with Ableton Live has a comprehensive set of controls. Threshold sets the level at which compression begins. Above this level, the output increases at a lesser rate than the corresponding input change. As a result, with lower thresholds, more of the signal gets compressed. Ratio defines how much the output signal changes for a given input signal change. For example, with 2:1 compression, a 2dB increase at the input yields a 1dB increase at the output. With 4:1 compression, a 16dB increase at the input gives a 4dB increase at the output. 
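The threshold/ratio arithmetic maps directly to code. Here's a minimal sketch of a compressor's static gain curve, working entirely in dB (a real compressor wraps attack and release smoothing around this calculation; the function name is mine):

```python
def compress_db(input_db, threshold_db=-8.0, ratio=4.0):
    """Static compression curve: signals below the threshold pass
    unchanged; above it, each dB of input change yields only
    1/ratio dB of output change."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

With `ratio=2.0`, a 2dB rise above the threshold comes out as a 1dB rise, matching the 2:1 example above; an "infinite" ratio would pin the output at the threshold no matter how hard you push the input.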
With "infinite" compression, the output remains constant no matter how much you pump up the input. Bottom line: Higher ratios increase the effect of the compression. Fig. 3 shows how input, output, ratio, and threshold relate. Fig. 3: The threshold is set at -8. If the input increases by 8dB (e.g., from -8 to 0), the output only increases by 2dB (from -8 to -6). This indicates a compression ratio of 4:1. Attack determines how long it takes for the compression to take effect once the compressor senses an input level change. Longer attack times let through more of a signal's natural dynamics, but those signals are not being compressed. In the days of analog recording, the tape would absorb any overload caused by sudden transients. With digital technology, those transients clip as soon as they exceed 0 dBFS. Some compressors include a "saturation" option that mimics the way tape works, while others "soft-clip" the signal to avoid overloading subsequent stages. Yet another option is to include a limiter section in the compressor, so that any transients are "clamped" to, say, 0dB. Decay (also called Release) sets the time required for the compressor to give up its grip on the signal once the input passes below the threshold. Short decay settings are great for special effects, like those psychedelic '60s drum sounds where hitting the cymbal would create a giant sucking sound on the whole kit. Longer settings work well with program material, as the level changes are more gradual and produce a less noticeable effect. Note that many compressors have an "automatic" option for the Attack and/or Decay parameters. This analyzes the signal at any given moment and optimizes attack and decay on-the-fly. It's not only helpful for those who haven't quite mastered how to set the Attack and Decay parameters, but often speeds up the adjustment process for veteran compressor users. Output control. As we're squashing peaks, we're actually reducing the overall peak level. 
This opens up some headroom, so increasing the output level compensates for any volume drop. The usual way to adjust the output control is to turn this control up until the compressed signal's peak levels match the bypassed signal's peak levels. Some compressors include an "auto-gain" or "auto makeup" feature that increases the output gain automatically. Metering. Compressors often have an input meter, an output meter for matching levels between the input and output, and most importantly, a gain reduction meter. (In Fig. 1, the orange bar to the left of the output meter is showing the amount of gain reduction.) If the meter indicates a lot of gain reduction, you're probably adding too much compression. The input meter in Fig. 1 shows the threshold with a small arrow, so you can see at a glance how much of the input signal is above the threshold. ADDITIONAL FEATURES You'll find the above functions on many compressors. The following features tend to be somewhat less common, but you'll still find them on plenty of products. Sidechain jacks are available on many hardware compressors, and some virtual compressors include this feature as well (sidechaining became formalized in the VST 3 specification, but it was possible to do in prior VST versions). A sidechain option lets you insert filters in the compressor's level-detection path to restrict compression to a specific frequency range. For example, if you insert a high-pass filter, only high frequencies trigger compression—perfect for "de-essing" vocals. The hard knee/soft knee option controls how rapidly the compression kicks in. With a soft knee response, when the input exceeds the threshold, the compression ratio is less at first, then increases up to the specified ratio as the input increases. With a hard knee curve, as soon as the input signal crosses the threshold, it's subject to the full amount of compression. Sometimes this is a variable control from hard to soft, and sometimes it's a toggle choice between the two. 
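In numbers, the two knee shapes differ only around the threshold. Here's a rough Python sketch; the threshold, ratio, and knee width are arbitrary example values, and the quadratic blend is one common way to model a soft knee, not how any specific unit does it:

```python
def hard_knee(input_db, threshold=-20.0, ratio=4.0):
    # Full ratio kicks in the instant the signal crosses the threshold.
    if input_db <= threshold:
        return input_db
    return threshold + (input_db - threshold) / ratio

def soft_knee(input_db, threshold=-20.0, ratio=4.0, knee_db=10.0):
    # Over a knee_db-wide region centered on the threshold, the effective
    # ratio climbs gradually from 1:1 up to the full setting.
    if 2 * (input_db - threshold) < -knee_db:
        return input_db
    if 2 * (input_db - threshold) > knee_db:
        return threshold + (input_db - threshold) / ratio
    return input_db + (1 / ratio - 1) * (input_db - threshold + knee_db / 2) ** 2 / (2 * knee_db)

# Right at the threshold, the hard knee hasn't reduced anything yet,
# while the soft knee has already eased in a little gain reduction:
print(hard_knee(-20.0), soft_knee(-20.0))
```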
Bottom line: use hard knee when you want to clamp levels down tight, and soft when you want a gentler, less audible compression effect. The link switch in stereo compressors switches the mode of operation from dual mono to stereo. Linking the two channels together allows changes in one channel to affect the other channel, which is necessary to preserve the stereo image. Lookahead. A compressor cannot, by definition, react instantly to a signal because it has to measure the signal before it can decide how much to reduce the gain. As a result, the lookahead feature delays the audio path somewhat so the compressor can "look ahead" and see what kind of signal it will be processing, and therefore, react in time when the actual signal hits. Response or Envelope. The compressor can react to a signal based on its peak or average level, but its compression curve can follow different characteristics as well—a standard linear response, or one that more closely resembles the response of vintage, opto-isolator-based compressors. COMPRESSOR TYPES: THUMBNAIL DESCRIPTIONS Compressors are available in hardware (usually a rack-mount design or for guitarists, a "stomp box") and as software plug-ins for existing digital audio-based programs. Following is a description of various compressor types. "Old faithful." Whether rack-mount or software-based, typical features include two channels with gain reduction amount meters that show how much your signal is being compressed, and most of the controls mentioned above (Fig. 4). Fig. 4: Native Instruments' Vintage Compressor bundle includes three different compressors modeled after vintage units. Multiband compressors. These divide the audio spectrum into multiple bands, with each one compressed individually (Fig. 5). This allows for a less "effected" sound (for example, low frequencies don't end up compressing high frequencies), and some models let you compress only the frequency ranges that need to be compressed. Fig. 
5: Universal Audio's Precision Multiband is a multiband compressor, expander, and gate. Vintage and specialty compressors. Some swear that only the compressor in an SSL console will do the job. Others find the ultimate squeeze to be a big-bucks tube compressor. And some guitarists can't live without their vintage Dan Armstrong Orange Squeezer, considered by many to be the finest guitar sustainer ever made. Fact is, all compressors have a distinctive sound, and what might work for one sound source might not work for another. If you don't have that cool, tube-based compressor from the '50s of which engineers are enamored, don't lose too much sleep over it: Many software plug-ins emulate vintage gear with an astonishing degree of accuracy (Fig. 6). Fig. 6: Cakewalk's PC2A, a compressor/limiter for Sonar's ProChannel module, emulates vintage compression characteristics. Whatever kind of audio work you do, there's a compressor somewhere in your future. Just don't overcompress—in fact, avoid using compression as a "fix" for bad mic technique or dead strings on a guitar. I wouldn't go as far as those who diss all kinds of compression, but it is an effect that needs to be used subtly to do its best. Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  10. Who's stealing your headroom? It may be the archenemy of good audio - DC Offset By Craig Anderton It was a dark and stormy night. I was rudely awakened at 3 AM by the ringing of a phone, pounding my brain like a jackhammer that spent way too much time chowing down at Starbucks. The voice on the other end was Pinky the engineer, and he sounded as panicked as a banana slug in a salt mine. "Anderton, some headroom's missing. Vanished. I can't master one track as hot as the others on the Kiss of Death CD. Checked out the usual suspects, but they're all clean. You gotta help." Like an escort service at a Las Vegas trade show, my brain went into overdrive. Pinky knew his stuff...how to gain-stage, when not to compress, how to master. If headroom was stolen right out from under his nose, it had to be someone stealthy. Someone you didn't notice unless you had your waveform Y-axis magnification up. Someone like...DC Offset. Okay, so despite my best efforts to add a little interest, DC offset isn't a particularly sexy topic. But it can be the culprit behind problems such as lowered headroom, mastering oddities, pops and clicks, effects that don't process properly, and other gremlins. DC OFFSET IN THE ANALOG ERA We'll jump into the DC offset story during the '70s, when op amps became popular. These analog integrated circuits pack a tremendous amount of gain in a small, inexpensive package with (typically) two inputs and one output. Theoretically, in its quiescent state (no input signal), the ins and out are at exactly 0.00000 volts. But due to imperfections within the op amp itself, sometimes there can be several millivolts of DC present at one of the inputs. Normally this wouldn't matter, but if the op amp is providing a gain of 1000 (60dB), a typical 5 mV input offset signal would get amplified up to 5000mV (5 volts). If the offset appeared at the inverting (out of phase) input, then the output would have a DC offset of –5.0 volts. 
A 5mV offset at the non-inverting input would cause a +5.0V DC offset. There are two main reasons why this is a problem. Reduced dynamic range and headroom. An op amp's power supply is bipolar (i.e., there are positive and negative supply voltages with respect to ground). Suppose the op amp's maximum undistorted voltage swing is ±15V. If the output is already sitting at, say, +5V, the maximum voltage swing is now +10/-20V. However, as most audio signals are symmetrical around ground and you don't want either side to clip, the maximum voltage swing is really down to ±10V—a 33% loss of available headroom. Problems with DC-coupled circuits. In a DC-coupled circuit (sometimes preferred by audiophiles due to superior low frequency response), any DC gets passed along to the next stage. Suppose the op amp mentioned earlier with a +5V output offset now feeds a DC-coupled circuit with a gain of 5. That +5V offset becomes a +25V offset—definitely not acceptable! ANALOG SOLUTIONS With capacitor-coupled analog circuits, any DC offset supposedly won't pass from one stage to the next because the capacitor that couples the two stages together can pass AC but not DC. Still, any DC offset limits dynamic range in the stage in which it occurs. (However, if the coupling capacitor is leaky or otherwise defective, some DC may make it through anyway.) There are traditionally two ways to deal with op amp offsets. Use premium op amps that have been laser-trimmed to provide minimum offset. Include a trimpot that injects a voltage equal and opposite to the inherent input offset. In other words, with no signal present, you measure the op amp output voltage while adjusting the trimpot until the voltage is exactly zero. Some op amps even provide pins for offset control so you don't have to hook directly into one of the inputs. 
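Backing up to the headroom example for a moment, the arithmetic is easy to verify. This toy Python calculation uses the article's example voltages (±15V rails, +5V offset); it just finds the largest swing that stays symmetrical around ground:

```python
def symmetric_headroom_volts(rail_volts, dc_offset_volts):
    """Largest symmetric swing around ground (in volts) that still
    clears both supply rails when the output idles at dc_offset_volts."""
    room_above = rail_volts - dc_offset_volts   # headroom toward the + rail
    room_below = rail_volts + dc_offset_volts   # headroom toward the - rail
    return min(room_above, room_below)

clean = symmetric_headroom_volts(15.0, 0.0)    # no offset: full +/-15 V
shifted = symmetric_headroom_volts(15.0, 5.0)  # +5 V offset: only +/-10 V
print(clean, shifted, f"{100 * (clean - shifted) / clean:.0f}% of the headroom gone")
```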
(Note: As trimpot settings can drift over time, if you have analog gear with op amps, sometimes it's worth having a tech check for offsets and re-adjust the trimpot setting if needed.) DIGITAL DC OFFSET In digital-land, there are two main ways DC offset can get into a signal: recording an analog signal that has a DC offset into a DC-coupled system, or, more commonly, inaccuracies in the A/D converter or conversion subsystem that produce a slight output offset voltage. As with analog circuits, a processor that provides lots of gain (like a distortion plug-in) can turn a small amount of offset into something major. In either case, offset appears as a signal baseline that doesn't match up with the "true" 0 volt baseline (Fig. 1). Fig. 1: With these two drum hits, the first one has a significant amount of DC offset. The second has been corrected to get rid of DC offset, and as more headroom is available, it can now be normalized for more level if desired. Digital technology has also brought about a new type of offset issue that's technically more of a subsonic problem than "genuine" DC offset, but nonetheless causes some of the same negative effects. As one example, once I transposed a sliding oscillator tone so far down it added what looked like a slowly-varying DC offset to the signal, which drastically limited the headroom (Fig. 2). Fig. 2: The top signal is the original normalized version, while the lower one has been processed by a steep low-cut filter at 20Hz, then re-normalized. Note how the level for the lower waveform is much "hotter." In addition to reduced headroom, there are two other major problems associated with DC offset in digitally-based systems. When transitioning between two pieces of digital audio, one with an offset and one without (or with a different amount of offset), there will be a pop or click at the transition point. Effects or processes requiring a signal that's symmetrical about ground will not work as effectively. 
For example, a distortion plug-in that clips positive and negative peaks will clip them unevenly if there's a DC offset. More seriously, a noise gate or "strip silence" function will need a higher (or lower) threshold than normal in order to be higher than not just the noise, but the noise plus the offset value. DIGITAL SOLUTIONS There are three main ways to solve DC offset problems with software-based digital audio editing programs. First, most pro-level digital audio editing software includes a DC offset correction function, generally found under a "processing" menu along with functions like change gain, reverse, flip phase, etc. This function analyzes the signal, and adds or subtracts the required amount of correction to make sure that 0 really is 0. Many sequencing programs also include DC offset correction as part of a set of editing options (Fig. 3). Fig. 3. Like many programs, Sonar's audio processing includes the option to remove DC offset from audio clips. Second, apply a steep high-pass filter that cuts off everything below 20Hz or so. (Even with a comparatively gentle 12dB/octave filter, a signal at 0.5Hz will still be down more than 60dB.) In practice, it's not a bad idea anyway to nuke the subsonic part of the spectrum, as some processing can interact with a signal to produce modulation in the below-20Hz zone. Your speakers can't reproduce signals this low and they just use up bandwidth, so nuke 'em. Third, select a 2-10 millisecond or so region at the beginning and end of the file or segment with the offset, and apply a fade-in and fade-out. This creates an envelope that starts and ends at 0. It won't get rid of the DC offset component within the file (so you still have the restricted headroom problem), but at least you won't hear a pop at transitions. CASE CLOSED Granted, DC offset usually isn't a killer problem, like a hard disk crash. In fact, usually there's not enough to worry about. 
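For the curious, the first and third of those fixes are easy to sketch in a few lines of Python. This is a toy illustration on a synthetic clip (the waveform, offset amount, and fade length are all made-up example values), not what any particular editor does internally:

```python
def remove_dc(samples):
    # Offset correction: subtract the average so the waveform's
    # baseline sits back on the true zero line.
    offset = sum(samples) / len(samples)
    return [s - offset for s in samples]

def edge_fades(samples, fade_len):
    # Short fade-in/fade-out so the clip starts and ends at zero;
    # this kills transition pops even if some offset remains inside.
    out = list(samples)
    n = min(fade_len, len(out) // 2)
    for i in range(n):
        gain = i / n
        out[i] *= gain
        out[-1 - i] *= gain
    return out

# A square-ish wave riding on a +0.25 offset: its peaks hit 0.75,
# wasting headroom that offset correction gets back.
clip = [0.25 + 0.5 * (-1) ** i for i in range(200)]
centered = edge_fades(remove_dc(clip), fade_len=8)
```

After the offset is subtracted, the example clip's peak drops from 0.75 to 0.5—exactly the reclaimed headroom described above—and the fades pin the first and last samples to zero.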
But every now and then, DC offset will rear its ugly head in a way that you do notice. And now, you know what to do about it.
  11. When it’s time to mix a recording, you need a strategy By Craig Anderton Mixing is not only an art, it’s the crucial step that turns a collection of tracks into a finished piece of music. A good mix can bring out the best in your music—it spotlights a composition’s most important elements, adds a few surprises to excite the listener, and sounds good on anything from a portable MP3 player with nasty earbuds to an audiophile’s dream setup. Theoretically, mixing should be easy: you just adjust the knobs until everything sounds great. But this doesn’t happen by accident. Mixing is as difficult to master as playing a musical instrument, so let’s take a look at what goes into the mixing process. POINTS OF REFERENCE Start by analyzing well-mixed recordings by top-notch engineers and producers such as Bruce Swedien, Roger Nichols, Shelly Yakus, Steve Albini, Bob Clearmountain, and others. Don’t focus on the music, just the mix. Notice how—even with a "wall of sound"—you can pick out every instrument because each element of the music has its own space. Also note that the frequency response balance will be uniform throughout the audio spectrum, with enough highs to sound sparkly but not screechy, sufficient bass to give a satisfying bottom end without turning the mix into mud, and a midrange that adds presence and definition. One of the best mixing tools is a CD player and a really well-mixed reference CD. Patch the CD player into your mixer, and A-B your mix to the reference CD periodically. If your mix sounds substantially duller, harsher, or less interesting, listen carefully and try to isolate the source of any differences. A reference CD also provides a guideline to the correct relative levels of drums, vocals, etc. Match the CD’s level to the overall level of your mix by matching the peak levels of both signals. 
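Matching peak levels is simple arithmetic. Here's a quick Python sketch; the peak figures are made-up examples, assuming levels normalized to a 0-1 full scale:

```python
import math

def peak_level(samples):
    """Highest absolute sample value in a clip."""
    return max(abs(s) for s in samples)

def gain_to_match_peaks_db(mix_peak, reference_peak):
    """dB of gain to apply to the mix so its peak matches the reference's."""
    return 20 * math.log10(reference_peak / mix_peak)

# A mix peaking at half of full scale needs about +6 dB to sit at the
# same peak level as a reference that just touches full scale:
print(round(gain_to_match_peaks_db(0.5, 1.0), 1))  # 6.0
```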
If your mix sounds a lot quieter even though its peaks match the reference CD’s peak levels, that probably means that the reference has been compressed or limited a fair amount to restrict the dynamic range. Compression is something that can always be done at the mastering stage—in fact, it probably should be, because a good mastering suite will have top-of-the-line compressors and someone who is an ace at applying them. PROPER MONITORING LEVELS Loud, extended mixing sessions are tough on the ears. Mixing at low levels keeps your ears "fresher" and minimizes ear fatigue; loud mixes may get your juices flowing, but they make it more difficult to hear subtle level variations. Many project studios have noise constraints, so mixing through headphones might seem like a good idea. Although headphones are excellent for catching details that you might not hear over speakers, they are not necessarily good for general mixing because they magnify some details out of proportion. It’s better to use headphones for reality checks. THE ARRANGEMENT Scrutinize the arrangement prior to mixing. Solo project studio arrangements are particularly prone to "clutter" because as you lay down the early tracks, there’s a tendency to overplay to fill up all the empty space. As the arrangement progresses, there’s not a lot of room for overdubs. Remember: the fewer the number of notes, the greater the impact of each note. As Sun Ra once said, "Space is the place." MIXING: THE 12-STEP PROGRAM Although there aren’t any rules to recording or mixing, until you develop your own mixing "style" it’s helpful to at least have a point of departure. So, here’s what has worked for me. You "build" a mix over time by making a variety of adjustments. There are (at least!) twelve major steps involved in creating a mix, but what makes mixing so difficult is that these steps interact. Change the equalization, and you also change the level because you’re boosting or cutting some element of the sound. 
In fact, you can think of a mix as an "audio combination lock" since when all the elements hit the right combination, you end up with a good mix. Let’s look at these twelve steps, but remember, this is just one person’s way of mixing—you might discover a totally different approach that works better for you. Step 1: Mental Preparation Mixing can be tedious, so set up an efficient workspace. If you don’t have a really good office chair with lumbar support, consider a trip to the local office supply store. Keep paper and a log book handy for taking notes, dim the lighting a little bit so that your ears become more sensitive than your eyes, and in general, psych yourself up for an interesting journey. Take periodic breaks (every 45-60 minutes or so) to "rest" your ears and gain a fresher outlook on your return. This may seem like a luxury if you’re paying for studio time, but even a couple minutes of down time can restore your objectivity and, paradoxically, complete a mix much faster. Step 2: Review The Tracks Listen at low volume to scope out what’s on the multitrack; write down track information, and use removable stick-on labels or erasable markers to indicate which sounds correspond to which mixer channels. Group sounds logically, such as having all the drum parts on consecutive channels. Step 3: Put On Headphones and Fix Glitches Fixing glitches is a "left brain" activity, as opposed to the "right brain" creativity involved in doing a mix. Switching back and forth between these two modes can hamper creativity, so do as much cleaning up as possible—erase glitches, bad notes, and the like—before you get involved in the mix. Listen on headphones to catch details, and solo each track. If you’re sequencing virtual tracks, this is the time to thin out excessive controller information, check for duplicate notes, and avoid overlapping notes on single-note lines (such as bass and horn parts). Fig. 1: Sony's Sound Forge can clean up a mix by "de-noising" tracks. 
Also consider using a digital audio editor to do some digital editing and noise reduction (although you may need to export these for editing, then re-import the edited version into your project). Fig. 1 shows a file being "de-noised" in Sony's Sound Forge prior to being re-imported. Low-level artifacts may not seem that audible, but multiply them by a couple dozen tracks and they can definitely muddy things up. Step 4: Optimize Any Sequenced MIDI Sound Generators With sequenced virtual tracks, optimize the various sound generators. For example, for more brightness, try increasing the lowpass filter cutoff instead of adding equalization at the console. Step 5: Set Up a Relative Level Balance Between the Tracks Avoid adding any processing yet; concentrate on the overall sound of the tracks—don’t become distracted by left-brain-oriented detail work. With a good mix, the tracks sound good by themselves, but sound their best when interacting with the other tracks. Try setting levels in mono at first, because if the instruments sound distinct and separate in mono, they’ll open up even more in stereo. Also, you may not notice parts that "fight" with others if you start off in stereo. Step 6: Adjust Equalization (EQ) EQ can help dramatize differences between instruments and create a more balanced overall sound. Fig. 2 shows the EQ in Cubase; in this case, it's being applied to a clean electric guitar sound. There's a slight lower midrange dip to avoid competing with other sounds in the region, and a lift around 3.7kHz to give more definition. Fig. 2: Proper use of EQ is essential to nailing a great mix. Work on the most important song elements first (vocals, drums, and bass) and once these all "lock" together, deal with the more supportive parts. The audio spectrum has only so much space; ideally, each instrument will stake out its own "turf" in the audio spectrum and when combined together, will fill up the spectrum in a satisfying way. 
(Of course, this is primarily a function of the tune’s arrangement, but you can think of EQ as being part of the arrangement.) One of the reasons for working on drums early in the mix is that a drum kit covers the audio spectrum pretty thoroughly, from the low thunk of the kick drum to the sizzle of the cymbal. Once that’s set up, you’ll have a better idea of how to integrate the other instruments. EQ added to one track may affect other tracks. For example, boosting a piano part’s midrange might interfere with vocals, guitar, or other midrange instruments. Sometimes boosting a frequency for one instrument implies cutting the same region in another instrument; to have vocals stand out more, try notching the vocal frequencies on other instruments instead of just boosting EQ on the voice. Think of the song as a spectrum, and decide where you want the various parts to sit. I sometimes use a spectrum analyzer when mixing, not because ears don’t work well enough for the task, but because the analyzer provides invaluable ear training and shows exactly which instruments take up which parts of the audio spectrum. This can often alert you to an abnormal buildup of audio energy in a particular region. If you really need a sound to "break through" a mix, try a slight boost in the 1 to 3kHz region. Don’t do this with all the instruments, though; the idea is to use boosts (or cuts) to differentiate one instrument from another. To place a sound further back in the mix, sometimes engaging the high cut filter will do the job—you may not even need to use the main EQ. Also, applying the low cut filter on instruments that veer toward the bass range, like guitar and piano, can help trim their low end to open up more space for the all-important bass and kick drum. 
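If you're curious what a single parametric band actually computes, here's a Python sketch using the widely circulated "Audio EQ Cookbook" biquad formulas. The 44.1kHz sample rate, +4dB gain, and Q of 1 are arbitrary example settings; the 3.7kHz definition lift mentioned earlier is used as the center frequency:

```python
import cmath
import math

def peaking_eq_coeffs(sample_rate, center_hz, gain_db, q=1.0):
    """Biquad (b, a) coefficients for one peaking EQ band, per the
    Audio EQ Cookbook; a is normalized so a[0] == 1."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def response_db(freq_hz, sample_rate, b, a):
    """Magnitude response of the biquad at freq_hz, in dB."""
    z = cmath.exp(-2j * math.pi * freq_hz / sample_rate)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A +4 dB lift centered at 3.7 kHz: full boost at the center,
# essentially flat far away from it.
b, a = peaking_eq_coeffs(44100, 3700, 4.0)
print(round(response_db(3700, 44100, b, a), 2))  # 4.0
```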
Step 7: Add Any Essential Signal Processing "Essential" doesn’t mean "sweetening," but processing that is an integral part of the sound (such as an echo that falls on the beat and therefore changes the rhythmic characteristics of a part, distortion that alters the timbre in a radical way, vocoding, etc.). Step 8: Create a Stereo Soundstage Now place your instruments within the stereo field. Your approach might be traditional (i.e., the goal is to re-create the feel of a live performance) or something radical. Pan mono instruments to a particular location, but avoid panning signals to the extreme left or right. For some reason they just don’t sound quite as substantial as signals that are a little bit off from the extremes. Fig. 3 shows the Console view from Sonar. Note that all the panpots are centered, as recommended in step 5, prior to creating a stereo soundstage. Fig. 3: When you start a mix, setting all the panpots to mono can pinpoint sounds that interfere with each other; you might not notice this if you start off with stereo placement. As bass frequencies are less directional than highs, place the kick drum and bass toward the center. Take balance into account; for example, if you’ve panned the hi-hat (which has a lot of high frequencies) to the right, pan a tambourine, shaker, or other high-frequency sound somewhat to the left. The same concept applies to midrange instruments as well. Signal processing can create a stereo image from a mono signal. One method uses time delay processing, such as stereo chorusing or short delays. For example, if a signal is panned to the left, feed some of this signal through a short delay and send its output to another channel panned to the right. However, it’s vital to check the signal in mono at some point, as mixing the delayed and straight signals may cause phase cancellations that aren’t apparent when listening in stereo. Stereo placement can significantly affect how we perceive a sound. 
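One way to see why that mono check matters: a dry signal plus a copy delayed by N samples forms a comb filter when the two are summed to one channel, with the first cancellation notch at sample_rate/(2N). A small Python sketch (the roughly 1ms delay and 44.1kHz rate are example values, not anything prescribed above):

```python
import math

def mono_sum_db(freq_hz, delay_samples, sample_rate):
    """Level of dry + delayed copy summed to mono, in dB relative to
    the dry signal alone. |1 + e^(-j*phase)| = 2*|cos(phase/2)|."""
    phase = 2 * math.pi * freq_hz * delay_samples / sample_rate
    mag = abs(2 * math.cos(phase / 2))
    return 20 * math.log10(mag) if mag > 0 else float("-inf")

sr, delay = 44100, 44          # ~1 ms delay: sounds wide in stereo...
first_null = sr / (2 * delay)  # ...but summed to mono, a deep notch
print(round(first_null), "Hz is the first cancellation frequency")
```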
Consider a doubled vocal line, where a singer sings a part and then doubles it as closely as possible. Try putting both voices in opposite channels; then put both voices together in the center. The center position gives a somewhat smoother sound, which is good for weaker vocalists. The opposite-channel vocals give a more defined, distinct sound that can really help spotlight a good singer. Step 9: Make Any Final Changes to the Arrangement Minimize the number of competing parts to keep the listener focused on the tune, and avoid "clutter." You may be extremely proud of some clever effect you added, but if it doesn’t serve the song, get rid of it. Conversely, if you find that a song needs some extra element, this is your final opportunity to add an overdub or two. Never fall in love with your work until it’s done; maintain as much objectivity as you can. You can also use mixing to modify an arrangement by selectively dropping out and adding specific tracks. This type of mixing is the foundation for a lot of dance music, where you have looped tracks that play continuously, and the mixer sculpts the arrangement by muting parts and doing major level changes. Step 10: Audio Architecture Now that we have our tracks set up in stereo, let’s put them in an acoustical space. Start by adding reverberation and delay to give the normally flat soundstage some acoustic depth. Generally, you’ll want an overall reverb to create a particular type of space (club, concert hall, auditorium, etc.) but you may also want to use a second reverb to add effects, such as a gated reverb on toms. But beware of situations where you have to drench a sound with reverb to have it sound good. If a part is questionable enough that it needs a lot of reverb, redo the part. Step 11: Tweak, Tweak, and Re-Tweak Now that the mix is on its way, it’s time for fine-tuning. If you use automated mixing, start programming your mixing moves. 
Remember that all of the above steps interact, so go back and forth between EQ, levels, stereo placement, and effects. Listen as critically as possible; if you don’t fix something that bothers you, it will forever haunt you every time you hear the mix. While it’s important to mix until you’re satisfied, it’s equally important not to beat a mix to death. Once Quincy Jones offered the opinion that recording with synthesizers and sequencing was like "painting a 747 with Q-Tips." A mix is a performance, and if you overdo it, you’ll lose the spontaneity that can add excitement. You can also lose that "vibe" if you get too detailed with any automation moves. A mix that isn’t perfect but conveys passion will always be more fun to listen to than one that’s perfect to the point of sterility. As insurance, don’t always erase your old mixes—when you listen back to them the next day, you might find that an earlier mix was the "keeper." In fact, you may not even be able to tell too much difference between your mixes. A veteran record producer once told me about mixing literally dozens of takes of the same song, because he kept hearing small changes which seemed really important at the time. A couple of weeks later he went over the mixes, and couldn’t tell any difference between most of the versions. Be careful not to waste time making changes that no one, even you, will care about a couple days later. Step 12: Check Your Mix Over Different Systems Before you sign off on a mix, check it over a variety of speakers and headphones, in stereo and mono, and at different levels. The frequency response of the human ear changes with level (we hear less highs and lows at lower levels), so if you listen only at lower levels, mixes may sound bass-heavy or too bright at normal levels. Go for an average that sounds good on all systems. 
With a home studio, you have the luxury of leaving a mix and coming back to it the next day when you’re fresh, after you’ve had a chance to listen over several different systems to decide if any tweaks need to be made. One common trick is to run off some reference CDs and see what they sound like in your car. Road noise will mask any subtleties, and give you a good idea of what elements "jump out" of the mix. I also recommend booking some time at a pro studio to hear your mixes. If the mix sounds good under all these situations, your mission is accomplished. Craig Anderton is Executive Editor of Electronic Musician magazine and Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  12. Why be normal? Use your footpedal to control parameters other than volume and wah By Craig Anderton A lot of guitar hardware multieffects, like the Line 6 POD HD500, Roland ME-70, DigiTech iPB-10 and RP1000, Vox ToneLab ST, and Zoom G3X (Fig. 1) have a footpedal you can assign to various parameters. Fig. 1: Many multieffects, like Zoom's G3X, have built-in pedals. However, if not, some have an expression pedal jack so you can still use a pedal with the effects. If you're into amp sims, you're covered there too: Native Instruments' Rig Kontrol has a footpedal you can assign to any amp sim's parameters, and IK Multimedia's StealthPedal (Fig. 2) also works as a controller for amp sim software, not just IK's own AmpliTube. Fig. 2: IK's StealthPedal isn't only a controller, but includes jacks for plugging in a second expression pedal, as well as a dual footswitch. In most multieffects, volume and wah are the no-brainer, default pedal assignments. However, there are a whole lot of other parameters that are well-suited to pedal control. Doing so can add real-time expressiveness to your playing, and variety to your sound. ASSIGNING PEDALS TO PARAMETERS Some multieffects make this process easy: They have patches pre-programmed to work with their pedals. But sometimes the choices are fairly ordinary and besides, the manufacturer's idea of what you want to do may not be the same as what you want to do. So, it pays to spend a little time digging into the manual so you can figure out how to assign the pedal to any parameter you want. Effects with a computer interface are usually the easiest for making assignments, and they're certainly easiest to show in an article due to the ease of taking screen shots. For example, with DigiTech's iPB-10, you can use the iPad interface to assign the expression pedal to a particular parameter. In Fig. 3, the pedal has been assigned to the Screamer effect Drive parameter. Fig. 
3: The iPB-10 pedal now controls the Screamer effect's Drive parameter. Note that you can set a minimum and maximum value for the pedal range; in this case, it's 8 and 58 respectively. This example shows the POD HD500 Edit program, set to the Controllers page. Here, the EXP-1 (main expression pedal) controller has been assigned to delay Feedback (Fig. 4). Fig. 4: It's easy to assign the HD500's pedal to various parameters using the POD HD500 Edit program. Note that like the iPB-10, you can set minimum and maximum values for the pedal range. Most amp sims have a "Learn" option. For example, with Guitar Rig, you can control any parameter by right-clicking on it and selecting "Learn" (Fig. 5). Fig. 5: The Chorus/Flanger speed control is about to "learn" the controller to which it should respond, like a pedal that generates MIDI controller data. With learn enabled, when you move a MIDI controller (like the StealthPedal mentioned previously), Guitar Rig will "learn" that the chosen parameter should respond to that particular controller's motion. Often these assignments are stored with a preset, so the pedal might control one parameter in one preset, and a different parameter in another. THE TOP 10 PEDAL TARGETS Now that we've covered how to assign a controller to parameters, let's check out which parameters are worth controlling. Some parameters are a natural for foot control; here are ten that can make a big difference to your sound. Distortion drive This one's great with guitar. Most of the time, to go from a rhythm to lead setting you step on a switch, and there's an instant change. Controlling distortion drive with a pedal lets you go from a dirty rhythm sound to an intense lead sound over a period of time. For example, suppose you're playing eighth-note chords for two measures before going into a lead. Increasing distortion drive over those two measures builds up the intensity, and slamming the pedal full down gives a crunchy, overdriven lead.
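The min/max scaling these units apply to a pedal is easy to picture in code. Below is a hypothetical sketch (not any unit's actual firmware): a linear map like the iPB-10's 8–58 Drive range above, plus an exponential map, which is how you'd want a frequency target such as a wah-style sweep to feel.

```python
# Hypothetical sketch of pedal-to-parameter scaling -- not any unit's
# actual firmware. Pedal position runs 0.0 (heel) to 1.0 (toe).

def pedal_to_param(position, lo, hi):
    """Linear map, like the iPB-10's min/max range settings."""
    position = min(max(position, 0.0), 1.0)  # clamp to the physical travel
    return lo + (hi - lo) * position

def pedal_to_freq(position, lo_hz=600.0, hi_hz=1800.0):
    """Exponential map: equal pedal travel = equal musical interval,
    which is how a wah-style frequency sweep should feel."""
    position = min(max(position, 0.0), 1.0)
    return lo_hz * (hi_hz / lo_hz) ** position

print(pedal_to_param(0.0, 8, 58))  # heel: 8.0
print(pedal_to_param(1.0, 8, 58))  # toe: 58.0
print(round(pedal_to_freq(0.5)))   # geometric midpoint of the sweep
```

Half travel on the linear map lands at the arithmetic middle (33 here); on the exponential map it lands at the geometric mean of the sweep, which the ear hears as "halfway."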
Chorus speed If you don't like the periodic whoosh-whoosh-whoosh of chorus effects, assign the pedal so that it controls chorus speed. Moving the pedal slowly, over a fairly narrow range, creates subtle speed variations that impart a more randomized chorus effect. This avoids having the chorus speed clash with the tempo. Echo feedback Long, languid echoes are great for accenting individual notes, but get in the way during staccato passages. Controlling the amount of echo feedback lets you push the number of echoes to the max when you want really spacey sounds, then pull back on the echoes when you want a tighter, more specific sound. Setting echo feedback to minimum gives a single slapback echo instead of a wash of echoes. Echo mix Here's a related technique where the echo effect uses a constant amount of feedback, but the pedal sets the balance of straight and echoed sounds. The main differences compared to the previous effect are that when you pull back all the way on the pedal, you get the straight signal only, with no slapback echo; and you can't vary the number of echoes, only the relative volume of the echoes. Graphic EQ boost Pick one of the midrange bands between 1 and 4 kHz to control. Adjust the scaling so that pushing the pedal all the way down boosts that range, and pulling the pedal all the way back cuts the range. For solos, boost for more presence, and during vocals, cut to give the vocals more "space" in the frequency spectrum. Reverb decay time To give a "splash" of reverb to an individual note, just before you play the note push the pedal down to increase the reverb decay time. Play the note, and it will have a long reverb tail. Then pull back on the pedal, and subsequent notes will have the original, shorter reverb setting. This works particularly well when you want to accent a drum hit. Pitch transposer pitch For guitarists, this is like having a "whammy bar" on a pedal.
The effectiveness depends on the quality of the pitch transposition effect, but the basic idea is to set the effect for pitch transposed sound only. Program the pedal so that when it's full back, you hear the standard instrument pitch, and when it's full down, the pitch is an octave lower. This isn't an effect you'd use every day, but it can certainly raise a few eyebrows in the audience as the instrument's pitch slips and slides all over the place. By the way, if the transposed sound quality is unacceptable, mix in some of the straight sound (even though this dilutes the effect somewhat). Pitch transposer mix This is a less radical version of the above. Program the transposer for the desired amount of transposition – octaves, fifths, and fourths work well – and set the pedal so that full down brings in the transposed line, and full back mixes it out. Now you can bring in a harmony line as desired to beef up the sound. Octave lower transpositions work well for guitar/bass unison effects, whereas intervals like fourths and fifths work best for spicing up single-note solos. Parametric EQ frequency The object here is to create a wah pedal effect, although with a multieffects unit, you have the option of sweeping a much wider range if desired. Set up the parametric for a considerable amount of boost (start with 10 dB), narrow bandwidth, and initially sweep the filter frequency over a range of about 600 Hz to 1.8 kHz. Extend this range if you want a wider wah effect. Increasing the amount of boost increases the prominence of the wah effect, while narrowing the bandwidth creates a more intense, "whistling" wah sweep. Increasing the output of anything (e.g., input gain, preamp, etc.) before a compressor This allows you to control your instrument's dynamic range; pulling back on the pedal gives a less compressed (wide dynamic range) signal, while pushing down compresses the signal.
This restricts the dynamic range and gives a higher average signal level, which makes the sound "jump out." Also note that when you push down on the pedal, the dynamics will change so that softer playing will come up in volume. This can make a guitar seem more sensitive, as well as increase sustain and make the distortion sound smoother. And there you have the top ten pedal targets. There are plenty of other options just waiting to be discovered—so put your pedal to the metal, and realize more of the potential in your favorite multieffects or amp sim. Craig Anderton is Executive Editor of Electronic Musician magazine.
  13. It's possible to get "hot" masters without losing dynamics by Craig Anderton I was driving along one of those Floridian roads that goes between the coasts, and is flatter than the Spice Girls without auto-tuning . . . in other words, a perfect place to crank up my car's CD player. As it segued from a recent CD into Simple Minds' "Real Life," which I hadn't heard in quite a while, I noticed it was somewhat quieter, so I turned up the volume. And in the process, I got to experience dynamics—like they used to have on CDs back in the 80s. Much has been said about the evils of overcompression, but we're so used to it that sometimes you need to hear great music, intelligently mixed without excessive compression, to remember what we're missing. Dynamics are an essential component of a tune's overall emotional impact. Yet some engineers kill those dynamics, because "everyone else does it," and they don't want their songs to sound "weak" compared to others. So we're stuck in a rut where each song has to be louder than the last one—listener fatigue, anyone? I sometimes wonder if the decline in sales of recorded music has something to do with today's mastering style, which makes music that although loud, is ultimately not that much fun to listen to. So what's an engineer to do? Compromise—find that sweet spot where you preserve a fair amount of dynamics, but also have a master that's loud enough to be "in the ballpark" of today's music. The following tips are designed to help you do just that. Maybe your tune won't be quite as loud as everyone else's, but I bet it will elicit a more emotional response from those willing to turn up their volume control a bit. NUKE THE SUBSONICS AND DC OFFSET Digital audio can record and reproduce energy well below 20Hz from sources like downward transposition/pitch-shifting, and DSP operations that allow control signals (such as fades) to superimpose their spectra onto the audio. While inaudible, they still take up headroom. 
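As a minimal sketch of this kind of headroom cleanup: a simple one-pole high-pass stands in for the much steeper filters you'd really use, and "centering" the waveform by subtracting its mean removes DC offset, a related headroom thief. The parameter values and signal are illustrative only.

```python
import numpy as np

# Illustrative sketch only: a one-pole high-pass stands in for the much
# steeper filter you'd use in practice, and DC offset is removed by
# subtracting the mean ("centering" the waveform around 0).

def remove_dc(x):
    return x - np.mean(x)

def highpass_1pole(x, cutoff_hz=20.0, sr=44100):
    """y[n] = r * (y[n-1] + x[n] - x[n-1]) -- a basic DC/subsonic blocker."""
    r = 1.0 - 2.0 * np.pi * cutoff_hz / sr
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        y[n] = r * (y[n - 1] + x[n] - x[n - 1])
    return y

# A -6 dBFS tone riding on a 0.2 DC offset wastes 0.2 of headroom:
sr = 44100
t = np.arange(4410) / sr                # 0.1 s
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.2
print(np.max(np.abs(x)))                # ~0.7 before centering
print(np.max(np.abs(remove_dc(x))))     # ~0.5 after -- headroom reclaimed
```

The 0.2 of offset comes straight off the peak level once the waveform is centered, which is exactly the "greater signal level for a given amount of headroom" payoff.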
You may be able to reclaim a dB or two by simply removing everything below 20Hz. However, note that if you can find individual tracks that contribute to a subsonics problem and do any needed fixes while mixing (Fig. 1), this eliminates the need to add filtering on the entire tune. Fig. 1: The low frequencies are being cut at 48dB/octave starting around 30Hz in a Cakewalk Sonar project track, thus eliminating subsonics before they get into the mix. Another culprit, DC offset, reduces headroom because positive or negative peaks are reduced by the amount of offset. Removing residual DC offset, using the "Remove DC offset" function found in most digital audio editors (Fig. 2) and DAWs, "centers" the waveform around the 0V point. This allows a greater signal level for a given amount of headroom. Fig. 2: Like many other programs, Sony's Sound Forge includes DSP to remove DC offset. DO YOU REALLY NEED MONDO BASS? As the ear is less responsive to bass frequencies, there's a tendency to crank up the bass, especially among those who lack mixing experience. Reducing bass can open up more headroom for other frequencies. To compensate for this and create the illusion of more bass:

- Use a multiband compressor on just the bass region. The bass will seem as loud, but take up less bandwidth.
- Try the Waves MaxxBass plug-in (Fig. 3; a hardware version is also available), or the Aphex Big Bottom process. MaxxBass isolates the signal's original bass and generates harmonics from it; psycho-acoustically, upon hearing the upper harmonics, your brain "fills in" the bass's fundamental. The Big Bottom process uses a different, but also highly effective, psychoacoustic principle to emphasize bass.

Fig. 3: The Waves MaxxBass isolates the signal's original bass and generates harmonics from it. You can then adjust the blend of the original bass with the bass contributed by the MaxxBass. FIND/SQUASH PEAKS THAT ROB HEADROOM Another issue involves peak vs. average levels.
To understand the difference, consider a drum hit. There's an initial huge burst of energy (the peak) followed by a quick decay and reduction in amplitude. You will need to set the recording level fairly low to make sure the peak doesn't cause an overload. As a result, there's a relatively low average energy. On the other hand, a sustained organ chord has a high average energy. There's not much of a peak, so you can set the record level such that the sustain uses up the maximum available headroom. Entire tunes also have moments of high peaks, and moments of high average energy. Suppose you're using a hard disk recorder, and playing back a bunch of tracks. Of course, the stereo output meters will fluctuate, but you may notice that at some points, the meters briefly register much higher than for the rest of the tune. This can happen if, for example, several instruments with loud peaks hit at the same time, or if you're using lots of filter resonance on a synth, and a note falls within that resonant peak. If you set levels to accommodate these peaks, then that reduces the song's average level. You can compensate for this while mastering by using limiting or compression, which brings the peaks down and raises the softer parts. However, if you instead reduce these peaks during the mixing process, you'll end up with a more natural sound because you won't need to use as much dynamics processing while mastering. The easiest way to do this is as you mix, play through the song until you find a place where the meters peak at a significantly higher level than the rest of the tune. Loop the area around that peak, then one by one, mute individual tracks until you find the one that contributes the most amount of signal. For example, suppose a section peaks at 0dB. You mute one track, and the peak goes to -2. You mute another track, and the section peaks at -1. You now mute a track and the peak hits -7. Found it! That's the track that's putting out the most amount of energy. 
Referring to Fig. 4, zoom in on the track, and use automation or audio processing to insert a small dip that brings the peak down by a few dB. Now play that section again, make sure it still sounds okay, and check the meters. In our example above, that 0dB peak may now hit at, say, -3dB. Proceed with this technique through the rest of the tune to bring down the biggest peaks. If peaks that were previously pushing the tune to 0 are brought down to -3dB, you can now raise the tune's overall level by 3dB and still not go over 0. This creates a tune with an average level that's 3dB hotter, without having to use any kind of compression or limiting. Fig. 4: (A) shows the original signal. In (B), the highest peak has been located and is about to be attenuated by 3dB using Steinberg Cubase's Gain function. (C) shows what happens after attenuation—it's now only a little higher than the other peaks. In (D), the overall signal has been normalized up to 0.00dB. Note how the signal has a higher average level than in (A)—all the other peaks are higher than they were before—but there was no need to use traditional dynamics processing. CHEAT! The ear is most sensitive in the 3-4 kHz range, so use EQ (Fig. 5) to boost that range by a tiny amount, especially in quiet parts. Fig. 5: A very broad, 0.5dB boost has been added at 3.2kHz in iZotope's Ozone 5. The tune will have more presence and sound louder. But be extremely careful, as it's easy to go from teeny boost to annoying stridency. Even 1dB of boost may be too much. If you still need something slightly hotter, bring on a level maximizer or high-quality multiband compressor. However, by implementing the level maximizing tricks mentioned above, you won't need to add much dynamics processing.
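The peak-dip-and-normalize move from Fig. 4 can be sketched with NumPy. This is an illustration of the arithmetic only (a real fix would use your DAW's gain automation, and the dip width and 3dB figure are arbitrary):

```python
import numpy as np

# Illustrative sketch of the Fig. 4 idea: dip the single biggest peak
# by a few dB, then raise the whole file so the new peak sits at 0 dBFS.

def db_to_lin(db):
    return 10 ** (db / 20.0)

def dip_biggest_peak(x, dip_db=3.0, width=64):
    """Attenuate a short region around the loudest sample by dip_db."""
    y = x.copy()
    i = int(np.argmax(np.abs(y)))
    y[max(0, i - width):i + width] *= db_to_lin(-dip_db)
    return y

def normalize(x, ceiling=1.0):
    """Raise the whole file so its highest peak sits at the ceiling."""
    return x * (ceiling / np.max(np.abs(x)))

# One rogue 0 dBFS peak in otherwise -6 dBFS material:
x = np.full(1000, 0.5)
x[500] = 1.0
y = normalize(dip_biggest_peak(x))
print(round(y[0], 3))   # 0.706 -- the body of the tune came up ~3 dB
```

The rogue peak came down 3dB, the normalize step gave that 3dB back to everything else, and no compressor ever touched the signal.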
If you've been adding, for example, 4 to 6 dB of maximization, you may be able to get equally satisfying results with only one or two dB of maximization, thus squashing only the highest peaks while leaving everything else pretty much intact. A final consideration involves mastering for the web. While some engineers add massive amounts of compression to audio that will be streamed, in practice data compression allows for a reasonable amount of dynamics. If you're streaming audio, then the sound quality is already taking quite a hit, so preserving dynamics can help make the music sound at least a little bit more natural. If you work with streaming audio, try the techniques mentioned above instead of heavy squashing, so you can judge whether the resulting sound quality is more satisfying overall. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine.
  14. Fix vocal pitch without nasty correction artifacts by Craig Anderton The critics are right: pitch correction can suck all the life out of vocals. I proved this to myself accidentally when working on some background vocals. I wanted them to have an angelic, “perfect” quality; as the voices were already very close to proper pitch anyway, I thought just a tiny bit of manual pitch correction would give the desired effect. (Well, that and a little reverb.) I was totally wrong, because the pitch correction took away what made the vocals interesting. It was an epic fail as a sonic experiment, but a valuable lesson because it caused me to start analyzing vocals to see what makes them interesting, and what pitch correction takes away. And that’s when I found out that the critics are also totally wrong, because pitch correction—if applied selectively—can enhance vocals tremendously, without anyone ever suspecting the sound had been corrected. There’s no robotic quality, it doesn’t steal the vocalist’s soul, and pitch correction can sometimes even add the kind of imperfections that make a vocal sound more “alive.” This article uses Cakewalk Sonar’s V-Vocal as a representative example of pitch correction software, but other programs like Melodyne (Fig. 1), Waves Tune (Fig. 2), Nectar (Fig. 3), and of course the grand-daddy of them all, Antares Auto-Tune (Fig. 4), all work fairly similarly. They need to analyze the vocal file, after which they indicate the pitches of the notes. These can all be quantized to a particular scale with “looser” or “tighter” correction, and often you can correct timing and formant as well as pitch. But more importantly, with most pitch correction software you can turn off automatic quantizing to a particular scale, and correct pitch with a scalpel instead of a machete. That's the technique we're going to describe here. Fig. 1: Celemony Melodyne Fig. 2: Waves Tune LT Fig. 3: iZotope Nectar's pitch correction module Fig. 
4: Antares Auto-Tune EVO BEWARE SIGNAL PROCESSING! Pitch correction works best on vocals that are "raw," without any processing; effects like modulation, delay, or reverb can make pitch correction at best glitchy and at worst, impossible. Even EQ, if it emphasizes the high frequencies, can create unpitched sibilants that confuse pitch correction algorithms. The only processing you should use on vocals prior to employing pitch correction is de-essing, as that can actually improve the ability of pitch correction to do its work. If your pitch correction processor inserts as a plug-in (e.g., iZotope's Nectar), then make sure it's before any other processors in the signal chain. WHAT TO AVOID The key to proper pitch correction use is knowing what to avoid, and the prime directive is don't ever use any of the automatic correction options—unless you specifically want that hard correction, hip-hop vocal effect (in V-Vocal, these are the controls grouped under the "Pitch Correction" or "Formant Control" boxes). Do only manual correction, and then, only if something actually sounds wrong. Avoid any "labor-saving" devices; don't use options that add LFO vibrato. In V-Vocal, I always use the pencil tool to change or add vibrato. Manual correction takes more effort to get the right sound (and you'll become best friends with your program's Undo button), but the human voice simply does not work the way pitch correction software works when it's on auto-pilot. By making all your changes manually, you can ensure that pitch correction works with the vocal instead of against it. DO NO HARM One of my synth programming "tricks" on choir and voice patches is to add short, subtle upward or downward pitch shifts at the beginning of phrases. Singers rarely go from no sound to perfectly-pitched sound, and the shifts add a major degree of realism to patches. Sometimes I'll even put the pitch envelope attack time or envelope amount on a controller so I can play these changes in real time.
Pitch correction has a natural tendency to remove or reduce these spikes, which is partially responsible for pitch-corrected vocals sounding “not right.” So, it’s crucial not to correct anything that doesn’t need correcting. Consider the “spikey” screen shot (Fig. 5), bearing in mind that the orange line shows the original pitch, and the yellow line shows how the pitch was corrected. Fig. 5: The pitch spikes at the beginning of the notes add character, as do the slight pitch differences compared to the “correct” pitch. Each note attack goes sharp very briefly before settling down to pitch, and “correcting” these removed any urgency the vocal had. Also, all notes except the last one should have been the same pitch. However, the first note being slightly flat, with the next one on pitch (it had originally been slightly sharp), and the next one slightly sharp, added a degree of tension as the pitch increased. This is a subtle difference, but you definitely notice a loss if the difference is “flattened” to the same pitch. In the last section the pitch center was a little flat; raising it up to pitch let the string of notes resolve to something approximating the correct pitch, but note that all the pitch variations were left in and only the pitch center was changed. The final note’s an interesting case: It was supposed to be a full tone above the other notes, but the orange line shows it just barely reached pitch. Raising the entire note, and letting the peak hit slightly sharp, gave the correct sense of pitch while the slight “overshoot” added just the right amount of tension. VIBRATO Another problem is where the vibrato “runs away” from the pitch, and the variations become excessive. Fig. 6 shows a perfect example of this, where the final held note was at the end of a long phrase, and I was starting to run out of breath. Referring to the orange line, I came in sharp, settled into a moderate but uneven vibrato, but then the vibrato got out of control at the end. Fig. 
6: Re-drawing vibrato retains the voice’s human qualities, but compensates for problems. Bearing in mind the comments on pitch spikes, note that I attenuated the initial spike a bit but did not flatten it to pitch. Next came re-drawing the vibrato curve for more consistency. It’s important to follow the excursions of the original vibrato for the most natural sound. For example, if the original vibrato went up too high in pitch, then the redrawn version should track it, and also go up in pitch—just not as much. As soon as you go in the opposite direction, the correction has to work harder, and the sound becomes unnatural. This emphasizes the need to use pitch correction to repair, not replace, troublesome sections. Also note that at the end, the original pitch went way flat as I ran out of breath. In the corrected version, the vibrato goes subtly sharp as the note sustains—this adds energy as you build to the next phrase. Again, you don’t hear it as “sharp,” but you sense the psycho-acoustic effect. MAJOR FIXES Sometimes a vocal can be perfect except for one or two notes that are really off, and you’re loath to punch. V-Vocal can do drastic fixes, but you’ll need to “humanize” them for best results. In the before-and-after screen shot (Fig. 7), the pitch dropped like a rock at the end of the first note, then overshot the pitch for the second note, and finally the vibrato fell flat (literally). The yellow line in the top image shows what typical hard pitch correction would do—flatten out both notes to pitch. On playback, this indeed exhibited the “robot” vibe, although at least the pitches were now correct. Fig. 7: The top image shows a hard-corrected vocal, while the lower image shows it after being “humanized.” The lower image shows how manual re-drawing made it impossible to tell the notes had been pitch-corrected. First, never have a 90-degree pitch transition; voices just don’t do that. Rounding off transitions prevents the “warbling” hard correction sound.
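The "no 90-degree transitions" rule can be sketched numerically: a hard correction is a step in the pitch curve, and a short moving average rounds it into the kind of glide a real voice makes. The numbers below are arbitrary; in practice you'd draw the ramp by hand with the pencil tool.

```python
import numpy as np

# Illustrative sketch: round a hard pitch-correction step into a glide.

def smooth_pitch(pitch_cents, ramp=50):
    """Replace hard steps with short ramps via a moving average."""
    kernel = np.ones(ramp) / ramp
    return np.convolve(pitch_cents, kernel, mode="same")

# A hard correction: an instant 200-cent jump between two notes.
step = np.concatenate([np.zeros(500), np.full(500, 200.0)])
smoothed = smooth_pitch(step)

print(step[500] - step[499])                        # 200.0 -- the 90-degree step
print(round(np.max(np.abs(np.diff(smoothed))), 1))  # 4.0 -- now a gentle ramp
```

After smoothing, the pitch passes through every intermediate value over the ramp instead of jumping, which is what kills the "warble."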
Also note that again, the pitch was re-drawn to track the original pitch changes, but less drastically. Be aware that often, the “wrong” singing is instinctively right for the song, and restoring some of the “wrongness” will enhance the song’s overall vibe. Shifting pitch will also change the formant, with greater shifts leading to greater formant changes. However, even small changes may sound wrong with respect to timbre. Like many pitch correction programs, V-Vocal also lets you edit the formant (i.e., the voice’s characteristic timbre). When you click on V-Vocal’s F button, you can adjust formant as easily as pitch (Fig. 8). Fig. 8: The formant frequency has been raised somewhat to compensate for the downward timbre shift caused by fixing the pitch. In the screen shot with formant editing, the upper image shows that the vibrato was not only excessive, but its pitch center was higher than the pitch-corrected version. The lower pitch didn’t exactly give a “Darth Vader” timbre, but didn’t sound right in comparison to the rest of the vocal. The lower image shows how the formant frequency was raised slightly. This offset the lower formant caused by pitch correction, and the vocal’s timbre ended up being consistent with the rest of the part. A REAL-WORLD EXAMPLE To hear these kinds of pitch correction techniques—or more accurately, to hear a song using the pitch correction techniques where you can’t hear that there’s pitch correction—check out the following music video. This is a cover version of forumite Mark Longworth’s “Black Market Daydreams” (a/k/a MarkydeSad and before that, Saul T. Nads), and there’s quite a bit of touch-up on my vocals. But listen, and I think you’ll agree that pitch correction doesn’t have to sound like pitch correction. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine.
  15. The scoop on making loops—in 11 steps By Craig Anderton Your drummer just came up with the rhythm pattern of a lifetime, or your guitarist played a rhythm guitar hook so infectious you think you might need to soak the studio in Clorox. And you want to use these grooves throughout a song, while cutting some great vocals on top. There’s something about a loop that isn’t the same as the part played over and over again . . . and vice-versa. Sometimes you want to maintain the human variations that occur from measure to measure, but sometimes you want consistent, hypnotic repetition. When it’s the latter, here’s how to create a loop—from start to finish. 1 CHOOSE YOUR PITCH If you plan to use a loop in different keys, realize that pitch transposition places more demands on a stretching algorithm than time stretching. One solution is to record the loop in two or more keys. Most stretch algorithms can handle three semitones up and down without sounding too unnatural. So, when I was recording loops for the “AdrenaLinn Guitars” loop library, I played each loop in E (to cover the range D–G) and Bb (for G#–C#). In cases where it wasn’t possible to obtain the same chord voicing in the two keys, I used DSP-based pitch transposition to create the alternate version. This feature is available in several programs, and while files processed with DSP aren’t stretchable, the sound quality is good enough that you can create a loop from the transposed version. 2 PLAY AGAINST A BACKING TRACK One of the easiest ways to create a loop involves grabbing part of a track from a multitrack recording. But when creating a loop from scratch, it’s difficult to give a good performance if you’re playing solo. Create a MIDI backing track to play against, and you’ll have a better feel. 3 RECORD AT A SLOWER TEMPO Stretched files sound better when sped up than slowed down, because it’s easier to remove audio and make a loop shorter than try to fill in the gaps caused by lengthening audio.
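Tip 1's arithmetic in a nutshell: a semitone of transposition is a factor of 2^(1/12), and two masters a tritone apart (E and Bb in the AdrenaLinn Guitars example) keep every key within the safe three-semitone range. The pitch-class helper below is just an illustration of that scheme, not part of any loop tool.

```python
# Illustrative only -- the E/Bb two-master scheme, expressed as
# pitch classes (C=0, C#=1, ... B=11; E=4, Bb=10).

def semitone_ratio(n):
    """Playback-rate ratio for a transposition of n semitones."""
    return 2.0 ** (n / 12.0)

def nearest_master(target, masters=(4, 10)):
    """Pick whichever master key is the shortest transposition away."""
    def dist(a, b):
        d = abs(a - b) % 12
        return min(d, 12 - d)   # wrap around the octave
    return min(masters, key=lambda m: dist(target, m))

print(round(semitone_ratio(3), 3))    # 1.189 -- three semitones up
print(round(semitone_ratio(-3), 3))   # 0.841 -- three semitones down
print(nearest_master(0))              # 10: C sits two semitones above Bb
```

No key ends up more than three semitones from E or Bb, which matches the D–G and G#–C# coverage mentioned above.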
Set the tempo for the right feel, and practice until you nail the part. But before hitting record, slow the tempo down (this is why I recommend a MIDI backing track—not only is it easy to change tempo, you can transpose pitch as needed, and quantize so you have a rhythmic reference). Typically, an Acidized/Apple Loops or REX loop can stretch (if properly sliced and edited) over a range of about –15% to +60% or higher. So, a 100 BPM loop will be viable from about 85 BPM to over 160 BPM. For really downtempo material, like some hip-hop and chill, consider cutting at 70 or 80 BPM instead. As a bonus, you may find it easier to play the part more precisely at the slower tempo; also, any timing errors will become less significant, the more you speed up the loop. 4 TO SWING OR NOT TO SWING There are two opposing viewpoints about whether to incorporate swing and other “grooves” in a loop, or go for rhythmic rigidity. Some feel that if a loop wants to swing, let it. Unless it has a huge swing percentage, it will usually play okay against something recorded without swing. However, modern DAWs often let you apply swing and groove templates to audio, so there’s a trend toward recording loops to a rhythmic grid so they can be modified within the DAW for swing and other grooves. 5 HOW MANY MEASURES? Although quite a few loops are one measure long, two-measure loops “breathe” better—the first measure is tension, the second is release. Four-measure loops work well for sounds that evolve over time. Eight- or sixteen-measure loops are more like “construction kits” which you can use in their entirety, but from which you can also extract pieces. It’s easy to shorten a long loop. For example, if you create a four-measure loop that builds over four measures but want to build over eight measures instead, split the loop in the middle, repeat the first two measures twice (to provide the first four measures), then play the full four-measure loop to complete the eight-measure figure (Fig.
1). Fig. 1: If you make a long loop, you can always cut it into smaller pieces. In this example using Sony Acid Pro 7, the original four-measure loop goes from measure 5 to measure 9. But its first two measures have been copied and pasted in measures 1 and 2, as well as measures 3 and 4. 6 CUTTING THE LOOP One of the best ways to create loops is to record for several minutes, so you have a choice of performances. Most DAWs let you create a loop bracket and slide it around to isolate particular portions of the track. You can also experiment with changing the loop length—you might find that what you thought would be a one-measure loop works well as a two- or four-measure loop, which gives some subtle, internal variations. After deciding on the optimum length, use the loop brackets to zero in on the best looping candidates. Say you’re recording rhythm guitar. Solo the track, and listen to the entire rhythm guitar part. Mark off regions (based on the number of measures you want to use) that would make the best loops. After locating the best one, cut the beginning and end to the beat. With human-played loops, neither the beginning nor end will likely land exactly on the beat. Zoom in on the loop beginning, and slide the track so that the loop’s beginning lands exactly at the beginning of a measure. Snap the cursor to the measure beginning, and do a split or cut. You’ll also need to cut at the end of a measure; if the loop extends past the measure boundary or ends before it by a little bit, turn off snap and cut at the end of the loop. Then turn snap back on, and use the DAW’s DSP stretching function to drag the end of the loop to the measure boundary. How to do this varies depending on the program, but it generally involves click-dragging the edge of the audio while holding down a modifier key, like Ctrl or Alt. 
If you hear a click when the loop repeats because there’s a level shift between the loop start and end, add a very short (3-10 ms) fade at the loop start and end. 7 PROS AND CONS OF AUDIO QUANTIZATION Now scan the loop for note attack transients and see if they line up properly with note divisions. Small timing differences are not a problem and, if done musically (e.g., a snare on a loop’s last beat hits just a shade late), will enhance the loop. But if a note is objectionably late or early, you can use an audio quantization function (like Ableton’s Warp as shown in Fig. 2, Sonar AudioSnap, Cubase Multitrack Quantization, and the like) to quantize the audio. Fig. 2: The upper waveform in Ableton Live has warp markers circled that mark the beginning of a transient, but which aren’t aligned to the beat. The lower waveform shows the results of moving the warp markers onto the beat. If this degrades the fidelity, another option is to isolate the section that needs to be shifted by splitting at the beginning and end, then sliding the attack into place. If this opens a problematic gap between the end of the note you moved and the beginning of the next note, try the following:

- Add a slight fade to the first note so it glides more elegantly into the gap.
- Copy a portion of the first note’s decay, and crossfade it with the note end to cover the gap.
- Use DSP stretching to extend the decay of the note you moved forward in time.

If the note was early and you shifted it later, then there will be a gap after the previous note, and the end of the note you moved might overlap the next note. If the gap is noticeable, deal with it as described above. As to the end, either:

- Shorten it so it butts up against the beginning of the next note.
- Crossfade it with the next note’s beginning if there’s no strong attack.

If you’ve edited the loop, you’ll need to make it one file again.
Bounce the region containing the loop to another track, bounce into the same “clip,” or export it and bring it back into the project.

8 CONSIDER SOME PROCESSING

A “dry” loop is the most flexible—if you add reverb, then the stretching process has to deal with that. Cut a dry loop instead, and add reverb once the loop is in your DAW. If an effect such as tempo-synced delay is an integral part of the loop, embed the effect in the file for a “plug and play” loop. Otherwise, add the effect during playback.

Some people “master” their loops with compression and EQ so the loops really jump out. But when you record other tracks (vocals, piano, etc.) and then master the song, if you want to squash the dynamics a bit, then the loop dynamics will be super-squashed; and if you add a bit of brightness, the loop will shatter glass. If there are response anomalies, I’ll often add a little EQ, and just enough limiting to tame any rogue peaks, but that’s it. Loops fit better in the track that way, and are more workable when it’s time to mix and master. You can always add processing more easily than you can take it away.

9 CHOOSE YOUR STRETCH METHOD

The three main stretchable audio formats are Acidized WAV files, Apple Loops, and REX files. REX files are arguably the most universally recognized, with Acidized WAV files a close second. Mac programs generally recognize Apple Loops, but few Windows programs do. Several programs on both platforms recognize Acidized files.

Different formats are best for different types of audio. REX files are optimum for percussive audio, as long as prominent sounds don’t decay over other sounds (e.g., a cymbal that lasts for a measure sounding at the same time as a 16th-note hi-hat pattern). A single-note bass line or simple drum part is the ideal candidate for REXing. WAV and Apple Loops aren’t always as good for percussive sounds as REX files, but are better with everything else—particularly sustained sounds.
Your software will likely influence your choice. Apple’s Apple Loops Utility (Fig. 3) is a free program for creating Apple Loops; you’ll need either Sony Acid or Cakewalk Sonar to Acidize WAV files. To create REX files, you’ll need Propellerhead Software’s ReCycle program.

Fig. 3: The Apple Loops Utility is a free program that allows optimizing the stretching characteristics of AIFF or WAV files, as well as tagging them for database retrieval.

10 CREATE AN ACIDIZED OR APPLE LOOPS VERSION

Acidized files and Apple Loops are structurally quite similar, and the techniques that help turn a file into a stretchable loop are similar as well. Basically, there need to be transient markers at the beginning of each attack transient to turn the loop into a series of slices, each of which represents a distinct “blob” of sound (e.g., kick+snare, bass note, or whatever). The programs themselves take an educated guess as to where these transients need to go, but manual optimization is almost always necessary to create a loop that stretches over the widest possible range. A non-optimized file will cause artifacts when stretched (e.g., doubled attack transients that sound like “flamming,” and/or a loss of some of the fullness from percussion). Optimization (Fig. 4) involves several steps.

Fig. 4: The upper waveform shows an untweaked version of a difficult-to-Acidize file in Sonar’s Loop Construction window. The lower waveform has been optimized—the markers with the purple handles have been either moved from their original positions, or added.

Existing strong transients should all have a marker at the transient’s precise beginning. Zoom in if needed to see the transient.
Secondary transients, such as those caused by a delay or flam, should have markers as well.
Remove spurious markers (i.e., ones that don’t fall on transients), as they can degrade the sound.
With sustained material, add a transient marker at a rhythmic interval like a quarter note or eighth note.
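To give a feel for how a program makes its "educated guess" at transient positions, here is a toy energy-jump detector. This is emphatically not Sonar's or Apple's algorithm—real detectors are far more sophisticated—just a minimal sketch of the underlying idea: flag the points where signal energy suddenly jumps.

```python
# Toy transient detector: flags frames whose energy jumps well above
# the previous frame. Frame size and ratio threshold are made-up values.

def frame_energy(samples, frame_len):
    """Energy (sum of squares) per fixed-length frame."""
    return [sum(s * s for s in samples[i:i + frame_len])
            for i in range(0, len(samples), frame_len)]

def find_transients(samples, frame_len=64, ratio=4.0):
    """Return sample offsets where energy jumps by at least `ratio`."""
    energies = frame_energy(samples, frame_len)
    hits = []
    for i in range(1, len(energies)):
        if energies[i] > energies[i - 1] * ratio and energies[i] > 1e-6:
            hits.append(i * frame_len)
    return hits

# Silence followed by a burst: the detector marks where the burst starts.
signal = [0.0] * 128 + [1.0] * 128
print(find_transients(signal))  # [128]
```

The manual cleanup steps above exist precisely because this kind of detector misses soft attacks and fires on decays and flams—hence moving, adding, and deleting markers by hand.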
This tells the DSP to create a crossfade to help make a more seamless transition; putting it on a beat means that other sounds will likely mask any sonic discontinuities that may result from the stretching process.

If you hear a “fluttering” effect during sustained notes, try adding another marker in the middle of the note. Sometimes adding a marker at the end of a note’s decay prevents roughness toward the note’s end.
Enter the root key for pitched loops. This allows the loop to follow key changes in the host program. For percussive parts, specify no root key so that only the tempo changes.

Transients are not always obvious. For example, a tom fill and cymbal crash might play simultaneously at the end of a drum loop, so you can’t see the individual tom transients. Listen to the part: If there’s a hit every 16th note, then just place a marker at every 16th note. If it’s mostly 16th notes but there are some hits that extend over an 8th note, add markers for the 16th notes but omit them for the sections that are longer.

11 REX FILE TIPS

If you want to create a REX file, import the loop into ReCycle. The basic principles of good stretching are the same as for Acidized/Apple Loops files—you want to identify where transients fall—but with REX files these are hard cuts (Fig. 5), not just markers for the DSP to reference. Creating good REX files is an art in itself that goes beyond the scope of this article, but the tips given above regarding Acidization and Apple Loops should help considerably.

Fig. 5: Once imported into ReCycle, you add markers at transients (indicated with the inverted triangles or lock icons) to create “slices.” The marker that splits the second chord in half is there for a reason—there are two eighth-note chords played in quick succession. Even though you can’t see the transient that marks the beginning of the second chord, it still needs to be marked so that it plays back at the right time.
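The earlier tip about placing a marker at every 16th note is easy to quantify. As a back-of-envelope illustration (the function and its defaults are mine, purely for the example—your editor computes this for you when snap is on), here are the marker times for a one-measure 4/4 loop at a given tempo:

```python
# Illustrative sketch: marker times (in seconds) at every 16th-note
# division of a loop, given tempo and length.

def sixteenth_note_grid(bpm, measures=1, beats_per_measure=4):
    """Times (seconds) of every 16th-note division in the loop."""
    seconds_per_sixteenth = (60.0 / bpm) / 4  # four 16ths per beat
    count = measures * beats_per_measure * 4
    return [i * seconds_per_sixteenth for i in range(count)]

# One 4/4 measure at 120 BPM: 16 markers, 0.125 seconds apart.
grid = sixteenth_note_grid(120)
print(len(grid), grid[1])  # 16 0.125
```

So at 120 BPM a 16th-note marker lands every 125 ms—tight enough that a slice boundary on each one rarely cuts into an audible attack.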
If you followed the above directions and optimized your loops, they should work with a variety of material over a wide range of tempos, while fitting perfectly into a song—and that’s what it’s all about.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  16. Does the XP MIDI Port Limitation Still Exist in Windows 7? It Sure Seems that Way . . .

by Craig Anderton

I don’t like to write articles that describe what may be a solution to what may be a problem, but I don’t really have a choice . . . let me explain.

Windows XP originally had a limit of 10 MIDI ports. If you exceeded that amount, MIDI devices simply wouldn’t show up as available in DAWs and other programs. I believe this was eventually increased to 32 ports, but still, if you exceeded the limit you needed to dive into the registry and delete unused or duplicate ports. Part of the problem was from Windows creating duplicate ports if you plugged a USB MIDI device into different USB ports. Remembering to plug a device into the same port each time you used it, and deleting any duplicates, was an easy way to free up ports.

I recently tried installing Korg’s KONTROL software and USB-MIDI driver for the nano- and microKEY series devices, and while Korg’s driver software showed the devices as existing and connected, the KONTROL software insisted they weren’t connected. This seemed like the problem I’d run into before with port limitations, when programs couldn’t access something that was connected. Google was of limited help, but the general consensus seemed to be that the port limitation problem still persisted in versions of Windows past XP, even though some thought there was an unlimited number of ports. Who knows? If someone reading this has a definitive answer, let me know so I can update this article.

Anyway, I tried the "XP registry diving" approach, but that didn’t work with Windows 7. However, on the Cakewalk forums, I found a very simple batch process that lets you see hidden devices in Device Manager. Simply type the following in Notepad and save it as a .BAT file (e.g., Hidden.BAT):

set devmgr_show_nonpresent_devices=1
start Devmgmt.msc

Right-click on the .BAT file, then choose Run as administrator from the context menu; this opens Device Manager.
Go View > Show hidden devices, then open Sound, Video, and Game Controllers. A little speaker icon to the left of each item will be solid if the device is connected, and grayed out if not. Note that the following picture shows two entries for the Line 6 POD HD500. With the HD500 plugged in, one driver was active, and the other was not. So I right-clicked on the grayed-out driver, and chose Uninstall. A dialog box showed a checkbox for deleting the driver software; I believe you need to leave it unchecked so as not to render the “real” port inoperable. I found multiple duplicates for multiple pieces of gear, and deleted them. After doing so, the Korg KONTROL software worked perfectly. So while I can’t guarantee this solved the problem, or if it’s the optimal way to do so, the problem was nonetheless resolved. Again, let me emphasize this all falls under the “gee, I dunno, I guess it works” category, so I’d welcome any comments from people who have a definitive answer for all this! Craig Anderton is Editorial Director of Harmony Central and Founding Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  17. Use your iPad to create a reference source for all your gear. It's easy, simple, and free!

by Craig Anderton

Although some people still like printed manuals, it’s great that so many manufacturers include PDF files with distribution media, or online as downloadable files. The search function alone makes PDFs handy, but of course, they also save costs and are environmentally responsible (if you really want a paper manual, you can always print out the PDF). With the iPad’s ability to conveniently store PDFs in a library, you can gather all this material in one place for easy reference.

If you have an older piece of gear without a PDF manual, scan the pages, then download Open Office from www.openoffice.org—a free (and excellent) office suite from Sun Microsystems. You can insert each scanned image as a page within a text document, then export it as a PDF.

THE iPAD CONNECTION

Go to the App Store and download iBooks, a free app that’s a host for buying books, but also has the option to store PDFs. There are several ways to transfer PDFs into iBooks; with some PDFs you access online, you’ll briefly see an “open in iBooks” option (Fig. 1). If this goes away, tap the document’s top right to restore it, and tap “open in iBooks.” This stores the manual in iBooks. This isn’t just a link to the online doc; if you have no Wi-Fi, you’ll still be able to read it.

Fig. 1: If you download a PDF document and can “open in iBooks” (see upper right), that automatically saves the file and makes it available for future reference.

If there is no “open in iBooks” option, or you’re grabbing a PDF you made, then email the file to yourself from your computer. Open your email program in the iPad, and download the file. When it opens, you’ll see the “open in iBooks” option.

EDITING IN IBOOKS

You can move, delete, and otherwise edit how your manuals are arranged. You can also create “Collections” of a particular type of gear, manufacturer, etc. (Fig. 2). Fig.
2: Tapping the Collections button creates another “bookshelf” you access by swiping. For example, I created a category for documentation for the Casio XW series of keyboards, including the manuals, appendices of sounds, and MIDI implementation (Fig. 3). Fig. 3: This collection consists of only Casio-related manuals. Finally, here’s a shot of the main bookshelf screen (Fig. 4). If it’s too difficult to read the manual “covers,” you can also choose to show a list of manuals. Fig. 4: This shows the pre-categorized manuals from the main bookshelf page. Pretty cool, eh? But credit where credit is due: Thanks to engineer/producer Peter Ratner for suggesting this idea. I’ve found it to be really helpful to just reach for the iPad when I have a question about a piece of gear. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  18. Song order and transitions are a crucial part of the recording and mastering processes

By Craig Anderton

Your songs are superbly mixed, expertly mastered, and ready to be unleashed on a public thirsting for the soul-stirring slices of artistic triumph that only you can deliver. But before you start thinking about trading in your Toyota Corolla for a Lamborghini, don’t forget the final step of the recording process—assembly. Although there’s talk about the “death of the CD,” the reality is that it’s still a common form of music distribution, particularly at a band’s merch table. And, it’s still the main way of distributing an entire album in one package.

The purpose of assembling a CD is to make sure all the disparate pieces hang together as a cohesive listening experience. There are several elements involved in assembling:

Running order. Which song should start? Which should close? Is there a particular order that is more satisfying than others?
Total length. There’s a reason why most pro bands cut and mix more songs than they’re going to use: It gives you the luxury of weeding out the weaker ones.
Transitions. The default space between songs on a standard Red Book CD is 2 seconds, but that’s not a law. Songs can butt right up against each other, or have a longer space if a breather is required.
Crossfades. Some songs were meant to intertwine their ends and beginnings, producing a seamless transition that immediately sucks the listener into the next track.

Let’s look at these issues in depth, but first, consider the tools you’ll use to assemble your CD.

ASSEMBLY TOOLS

The greatest thing ever for album assembly is the portable music player (which of course now includes smartphones). You can do your assembly, create an MP3 or AAC file, and listen in a variety of contexts so you can live with your work until you get it exactly as desired. The same could be said of recordable CDs that you can play in cars, over various stereo systems, and the like.
Either sure beats the old school options—acetate copies you could play only a couple times, or “safety” tapes with more hiss than an ancient PA mic preamp. Many programs will let you assemble cuts in order and burn a CD, but make sure the software supports Disk At Once (DAO) burning. Track At Once (TAO) burning means you’re stuck with a space between tracks, so you will not be able to do crossfades, or place markers during the applause in a live set without hearing a disturbing gap. My favorite multitrack programs with sophisticated Red Book CD assembling options are PreSonus Studio One Pro 2 and Magix Samplitude Pro X. Either one is adept at album assembly, but Studio One Pro 2 also has the unusual feature of integrating with its multitrack recording page (Fig. 1). Fig. 1: With Studio One Pro 2, if you make any changes in a multitrack mix, you can update the modified file that's being assembled on the mastering (project) page. Or, as this screen shot shows, you can update the files en masse if multiple changes have been made to the multitrack songs. What this means is that if while assembling an album you decide, for example, that the vocals are just a little too low on one cut, you can zip over to the multitrack project, make your changes, and they’re automatically reflected on the mastering page. Of course this works only with multitrack projects created in Studio One Pro, but still—it’s pretty slick. IS EVERYTHING IN ORDER? You may already think you have an optimum order, but keep an open mind. In particular, you only get one chance to make a good first impression, so the first few seconds of a CD are crucial. If you don’t grab the ear of the listener/program director/booking agent immediately, they’re going to move on. Sorry, but that song with the long, slow build that ends up with everyone in the house shaking their butts is probably better off as your closer than your opener. 
There are some exceptions; dance music often starts off with something more ambient to set a mood before the beat comes in. Or, you may intend your CD to be an experience that should be listened to from start to finish. That’s fine, but understand that these days, it’s by and large a singles-oriented world . . . the stronger your opener, the better the odds a listener will actually hear the rest of the CD. You also have to plan the overall flow. Will it build over time? Hit a peak in the middle, then cool down? Provide a varied journey from start to finish? Do you want to close with a quiet ballad that will add a sense of completion, or with a rousing number intended to take people to the next level? One of the best models for album assembly is, well, sex. Sometimes it starts off slow and teasing, then proceeds with increasing intensity. Or there might be that instant, almost desperate attraction, that starts off high-energy but over the course of time, evolves into something more gentle and spiritual. Or hey, maybe we’re just talking straight ahead lust from start to finish! In any event, think whether the CD is making love with your audience or not, and whether it follows that kind of flow. FUN WITH SPREADSHEETS When I assemble an album, I boot up Open Office and make a spreadsheet. Aside from title, the categories are key, tempo, core emotion (joy, revenge, discovery, longing, etc.), length, and lead instrument (male vocal, female vocal, instrumental, etc.). This can help you discover problems, like having three songs in a row that are all the same key, or which have wild tempo variations that upset the flow. For more information, check out an article I wrote about using spreadsheets to help optimize song orders. 
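The spreadsheet checks described above are mechanical enough to automate. Here is a hedged sketch (the song data, field names, and thresholds are invented for the demo, not from any real project) that scans a proposed running order for the problems mentioned: three same-key songs in a row, back-to-back instrumentals, and jarring tempo jumps.

```python
# Illustrative running-order sanity check; field names and the
# 40-BPM jump threshold are made-up values for the example.

def order_problems(songs, max_tempo_jump=40):
    """Return human-readable warnings about a proposed running order."""
    problems = []
    for i in range(2, len(songs)):
        if songs[i - 2]["key"] == songs[i - 1]["key"] == songs[i]["key"]:
            problems.append("three songs in a row in " + songs[i]["key"])
    for i in range(1, len(songs)):
        if songs[i - 1]["lead"] == songs[i]["lead"] == "instrumental":
            problems.append("back-to-back instrumentals at #%d" % (i + 1))
        if abs(songs[i]["bpm"] - songs[i - 1]["bpm"]) > max_tempo_jump:
            problems.append("big tempo jump into #%d" % (i + 1))
    return problems

order = [
    {"key": "A", "bpm": 100, "lead": "vocal"},
    {"key": "A", "bpm": 108, "lead": "instrumental"},
    {"key": "A", "bpm": 160, "lead": "vocal"},
]
print(order_problems(order))
# ['three songs in a row in A', 'big tempo jump into #3']
```

Whether a flagged item is actually a problem is a musical judgment, of course—the point, as with the spreadsheet, is simply to make the patterns visible.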
In one project I was able to pretty much start out strong, have the tempo increase over the course of the album (with a few dips in the middle to vary the flow), and have a general upward movement with respect to key, except for a few downward changes to add a little unpredictability. Although there were several instrumental songs, I never had one follow another immediately; they were there to break up strings of songs with vocals. As a result of all this planning, the album had a good feel—it followed a general pattern, but had some cool variations that kept the experience from becoming too predictable.

WHAT ABOUT LENGTH?

With vinyl, coming up with an order was actually a bit easier. Albums were shorter, so you only had to keep someone’s attention for about 35-40 minutes instead of 70 or more. The natural break between album sides gave the opportunity for two “acts,” each with an opener and closer.

Today some people seem to feel that if you don’t use all the available bits on a CD, you’re cheating the consumer. Nonsense. Many people don’t have an hour or more just to sit and listen to music anyway. As a consumer, I’d rather have 40 strong minutes that hang together than 30 minutes of all the best material “front-loaded” at the beginning, followed by 40 minutes of average material that peters out into nothing. As O.J. Simpson’s lawyer Johnnie Cochran once said, “Less CD time is surely no crime.” (Well okay, he didn’t say that, but you get the point.)

TRANSITIONS

I have to admit to a prejudice here, which is that I like a continuous musical flow more than a collection of unrelated songs. I’ve been doing continuous live sets most of my life, and that carries over into CDs. I want transitions between songs to feel like a smooth downshift on a Porsche as you take a curve, not something that lurches to a stop and then starts up again. As a result, I pay a lot of attention to crossfades and transitions.
On a CD I assembled years ago for the group Function, they had already decided on an order, and it was a good one. However, one song ended with a fading echo; while cool, this had such a sense of completeness that when the next song hit, you weren’t really ready. After wrestling with the problem a bit, I copied the decay, reversed it, and crossfaded it with the end of the tune. So the end result was that the tune faded out, but before it was gone, faded back in with the reversed echo effect. As reversed audio tends to do, this ended with an abrupt stop, which turned out to be the perfect setup to launch people right into a butt splice that started the next song.

Be alert for “song pairs” that work well together, then figure out a good way to meld them. One lucky accident was assembling a CD where one song ended with a percussive figure, and the song that followed it started with a different percussive figure. With a space between them, the transition just didn’t work. But I took the beginning of the second tune, matched it to the end of the previous tune, and crossfaded the two sections so that during the crossfade, the two percussion parts played together. Instead of a yawning, awkward gap between the tunes, the first tune pushed you into the second, which was simultaneously pulling you in, thanks to the crossfade.

Don’t be afraid to adjust the default space between songs, either (Fig. 2). If there’s a significant mood change, leave a little space. If there’s a long fade out, you might not want any space before the next song begins, lest the listener’s attention drift.

Fig. 2: In this transition, not only is there no space between two cuts, but a crossfade has been added. Note that the crossfade curves in Studio One Pro 2 can be customized to a linear, concave, or convex shape—whatever makes for the smoothest transition.
BURN, BABY, BURN Once you have everything figured out, test each transition (start playback about 20 seconds before the end of a song, then listen to about 20 seconds of the next song and see if the transition works), then listen from start to finish. If you don’t hear the need for any changes, fine. But burn a CD and live with it for a few days. Listen to it in the background, in your car, on an MP3 player while you’re doing the food shopping, whatever. Listen for parts where you lose interest, any awkward transitions, and other glitches. Next, make all necessary changes, then burn another CD or transfer to a portable music player, and start the process over. At some point, the various strands of the CD will hang together like a well-woven tapestry . . . and assembly is complete. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  19. Just how much of a DAW can you get for under $150? The answer might surprise you

$199.99 MSRP, $149.99 street

www.acoustica.com

by Craig Anderton

I was checking stats for Harmony Central’s YouTube channel and was shocked to see that our Mixcraft 6 video had 53,000 views, making it the second most-watched video in the last year—bested only by a gear interview with Rush’s Alex Lifeson. (And almost a year after its release, the Mixcraft 6 video is still in the top 10 every month.) What’s so special about DAW software from a relatively small company for it to garner that level of attention and curiosity?

Mixcraft doesn’t try to be a “Pro Tools killer,” nor is it so “lite” that it just floats away from lack of substance. It has always had the reputation for being inexpensive and easy to use, and pulled off the delicate balancing act of being powerful enough to be satisfying, yet intuitive enough not to be frustrating. Mixcraft 6 manages to keep that balancing act alive, despite adding more depth and power. As a result, Mixcraft has managed to acquire a cult following—a pretty large cult, actually.

When it first appeared, Mixcraft appealed primarily to musicians on a budget who didn’t want to deal with something more sophisticated and potentially confusing. But these days, Mixcraft is also picking up some new fans—people who just don’t need all the bells and whistles of more complex programs, and just want something that’s fast, stable, and easy to use. In fact for some types of projects, Mixcraft is the fastest program I’ve found for getting from start to finish . . . more on that later.

This review doesn’t need to go into excessive detail, because you can download a trial version and check the program out for yourself. However, like all software, you still need to invest some time into learning the ins and outs before you can decide whether it’s right for you or not.
So, we’ll concentrate on what Mixcraft has to offer, and then you can decide whether it might be the program you’ve been seeking.

DIFFERENT VERSIONS

Mixcraft comes in four different versions. This review focuses on Mixcraft Pro Studio 6, which is the line’s flagship. What differentiates it from the standard version of Mixcraft ($74.95 download, $84.95 boxed) are additional plug-ins and virtual instruments, so if you just subtract the Pro Studio 6 plugs (covered later), you’ll know what the standard version is all about. (Mixcraft 6 Home Studio, which lists for $49.95, limits the track count, includes only basic plug-ins, has no automation, and includes about 1/3 of the content included with the other versions. It’s not the droid you’re looking for.) Another version, to be introduced at Winter NAMM 2013, bundles a USB mic . . . details will be retrofitted to this review after the official announcement.

INSTALLATION

Mixcraft runs under Windows from version XP onward as a 32-bit program, although it also runs fine under 64-bit versions. CPU and memory requirements are relatively modest (1GHz and 2GB respectively), and its “footprint” is more like a slipper than a boot. Mixcraft 6 Pro Studio is available boxed or as a download, and copy protection is a simple activation code—no dongles or jumping through hoops.

BASICS

Mixcraft’s “lay of the land” isn’t significantly different from other DAWs (Fig. 1): It has tracks and buses, a mixer, accepts VST or DirectX plug-ins, offers tabbed views of various sections, and wraps all this in a “unified,” single-screen graphic interface where you can nonetheless undock selected elements if desired. However, if you look a little deeper, Mixcraft has some philosophical differences that relate mostly to creating a faster workflow.

Fig. 1: The main Mixcraft graphic user interface.

For example, MIDI, instrument, and video tracks have no structural distinction and are treated similarly.
In fact Mixcraft doesn’t even bother with MIDI tracks, on the assumption that you’ll be using them primarily for virtual instruments—insert a virtual instrument track, and it takes MIDI in and produces audio out. However, if you have something like a MIDI-aware plug-in, you can just pick up the MIDI from an existing virtual instrument, or insert a new one and de-select any instrument that’s loaded. Mixcraft also does ReWire, but treats ReWire devices as it would any other instrument plug-in.

Mixcraft’s instrument tracks also do something I haven’t seen in any other DAW (Fig. 2): when inserting the instrument, you can define volume, pan, keyboard range, transposition, velocity range, and outputs—so rather than inserting instruments and then defining splits and layers later in the course of the project, you can define any splits and layers from the get-go (as well as modify them later). This architecture also makes it easy to layer multiple virtual instruments.

Fig. 2: Mixcraft has a way to insert instruments that’s so simple and obvious that apparently, no one thought of it before.

However, MIDI’s transparency doesn’t mean it’s ignored. Mixcraft has tabbed sections for editing, and one of them covers MIDI editing—so when it comes to tweaking, MIDI is roughly on the same footing as audio.

Audio tracks offer automation lanes, as do MIDI tracks—but the latter are based on MIDI controller information. Audio track automation can be added with “rubber band,” line-style automation, but not recorded from a control surface; however, you can record MIDI controller data from a control surface to automate virtual instrument and effect plug-in parameters. It’s also possible to use a control surface for remote control of functions like track arming, transport control, loop toggle, insert track, etc. Mixcraft has an easy “MIDI learn” function for control surfaces.
Clip automation is also available for audio and MIDI clips, offering volume, pan, low-pass filter with resonance, and high-pass filter with resonance. There are other track types, like output tracks (essentially buses that can go to various interface outputs), aux/bus tracks for sends, submix (group) tracks, and a master track, which is typically where your stereo mix terminates. Mixcraft doesn’t do surround, but that’s hardly surprising given the price, or how many people actually work in surround.

LOOPOLOGY

Mixcraft isn’t an Ableton Live or FL Studio type of looping program, yet it incorporates looping in a painless and clever way. Acoustica has partnered with zplane to use their digital audio stretching algorithms; files are simply stretched to fit (the downside is that you can’t import REX files directly into Mixcraft). In fact, one of the very coolest features—and again, this is a “why don’t all programs do this?” feature—is that when you first bring a loop into Mixcraft, it asks if you want to conform the project tempo to the loop tempo or vice-versa; if it’s a pitched loop, you can also decide whether to conform the key to the project or the loop. Subsequent loops are then matched to that initial default. Loops can of course be “rolled out,” edited, and the like; I also like the “+1” button, where clicking creates one additional iteration.

Note also that Mixcraft reads tempo and key information from Acidized and GarageBand loops, so you can use these as you would in their respective programs, and they work identically to Mixcraft’s own loops. As if to drive the point home about looping, Mixcraft comes with a sound library of over 6,300 royalty-free loops and effects. What’s more, these aren’t “bonus filler” loops, but an eminently useable collection that spans a wide variety of genres, from Acid Techno to Zombie Rock.
They’re arranged pretty much as construction kits, but files are searchable, tagged, and categorized, making it easy to mix and match among different kits—especially because you can also sort based on tempo, key, instrument, etc. Fig. 3 shows what happens when you click on the Library tab in the Details area.

Fig. 3: The Library not only contains a wide selection of material, but makes it easy to find and use particular sounds and loops.

All this content comes with the boxed version, so you might think this would make for a hellacious download. But for the downloaded version, Mixcraft essentially loads “placeholders” for the various loops and samples; clicking on a loop’s play button downloads what you’ve selected to disk. Over time, as you audition more samples, you eventually end up with everything on your hard drive—although you can also download them all in one fell swoop, or one category at a time. You can also import your own loops and integrate them into the library structure.

I can’t emphasize enough how useful this content is, even for pros. Many times I need to come up with a quick music bed at (for example) trade shows when something needs to slide under the video coverage; I have yet to find a program that gets this done faster than Mixcraft.

COMPING

Here’s another feature where Mixcraft got it right. Each part can go into its own lane, and you can loop or punch within tracks to comp very specific sections. You can even punch within a loop, and choose whether new takes mute old takes or overdub them. While comping in general is a fairly sophisticated feature, Mixcraft makes it quite straightforward.

EDITING

There are four “Details” tabs for editing and other functions, and this entire section can be undocked. Undocking is primarily important for the mixer, as you can place it on a separate monitor in a two-monitor setup, allowing you to see more tracks in the main monitor.
Project is the most basic tab—it offers tempo, key, time signature, auto beat match on or off, the project folder location, and a notepad for entering what’s essentially project metadata. It also provides an alternate location for inserting individual effects into the master track, although you can do that directly at the master track as well (which offers the added benefit of effects chains, covered in the plug-ins section). The Sound tab is a little more involved (Fig. 4).

Fig. 4: The Sound tab showing an audio clip.

The screen shot (which shows Sound undocked) is pretty self-explanatory, except for the Noise Reduction option: this lets you isolate a “noise fingerprint,” then reduce noise matching that fingerprint by a selectable percentage. If the clip you’re editing is MIDI, then Sound shows MIDI data (Fig. 5).

Fig. 5: The Sound tab showing a MIDI clip.

This keeps improving with newer versions, and now includes several MIDI editing options, a controller strip you can assign to whatever controller you want to edit, primitive notation view, drum maps, snap, and the like. MIDI editing isn’t on a Cubase/Sonar/Logic level by any means, but gets the job done. The Mixer tab (Fig. 6) is your basic hardware mixer emulation (complete with virtual wood end panels!).

Fig. 6: Mixcraft’s mixer in action.

While it looks pretty cool, it does have limitations; the EQ is a fixed three-band design, and you can’t customize channel placement, strip width, etc. It’s definitely something I’d reserve for mixing, while sticking to the main track view when tracking and editing.

VIDEO

Okay, so Mixcraft is a surprising DAW. But it really doesn’t get more surprising than this: Mixcraft has more sophisticated video capabilities than any other DAW I’ve used. If any program has the right to call itself “direct from your garage to YouTube,” this is it.
You can load multiple video clips (with their associated audio) into a single video track, mix WMV and AVI formats (but no MP4 or MOV), and even do some editing like trimming, crossfading clips, and changing video and audio stream lengths independently. Furthermore, you can insert still images in the video track (JPG, BMP, PNG, GIF) to supplement the video, or create “slide show” videos by crossfading between images. Text “clips” can be inserted into a text lane and include fades, different background and text colors, and basic intro and outro text animations (like move and reveal). Topping it all off: 25 bundled video effects including brightness, posterize, color channel strength and inversion, emboss, and more (Fig. 7).

Fig. 7: As if video wasn’t enough, Mixcraft also includes automatable video effects.

These are added to the video track just like adding automation lanes to audio, with automatable video effect parameters. Sure, Mixcraft isn’t exactly Sony Vegas Pro—but if Vegas Pro had a baby, it might look somewhat like this.

PLUG-INS

The biggest difference between Mixcraft 6 Home Studio, Mixcraft 6, and Mixcraft Pro Studio 6 is the included plug-ins. I’m going to cop out of listing them all here, as Acoustica has a comparison chart on their site that lists the various plug-ins, as well as which version has which plug-ins. Suffice to say there’s a wide range of plug-ins that covers all the usual bases (Fig. 8), with the Pro Studio 6 version expanding the repertoire considerably—for example, you get a grand piano, two additional vintage synths, and a lot more effect plug-ins, including some mastering plugs from iZotope. If you’re happy with your existing collection of plug-ins, then the standard version of Mixcraft will take care of the rest of your DAW needs; but bear in mind that all the additional plug-ins essentially cost you $75, so you get quite a lot in return to add to your collection. Fig.
8: The Messiah virtual analog synthesizer is just one of many virtual instrument plug-ins.

Of course, Mixcraft can also load third-party plug-ins. The only problem I experienced was loading UA’s version 6.4 powered plug-ins, but similar problems with this version have also been reported with other 32-bit Windows programs; if UA or Acoustica comes up with a workaround or patch, I’ll update this review. In the “pleasant surprise” category, you can create effects chains, as well as save and load them. This even extends to a chain consisting of a virtual instrument with effects. What’s more, Mixcraft can handle instruments with multiple outputs—older versions couldn’t do that. In any event, if you find yourself needing more instrument plug-ins as well as the ability to use REX files, I’d recommend ReWiring Propellerhead Software’s Reason Essentials into Mixcraft. It’s a powerful combination, and still a major bargain from a financial standpoint.

WHAT’S NOT INCLUDED

As you’ve probably gathered by now, Mixcraft is truly a full-featured program. However, it is missing a few significant features that are often found in more expensive DAWs:

No recording of mixer fader movements. You can use automation lanes to draw/edit automation envelopes, but not record on-screen or control surface fader movements into these lanes.
No MIDI plug-ins.
No audio quantization.
No direct REX support, although you can insert instruments that play back REX files.
No VST3 support.

CONCLUSIONS

Mixcraft isn’t a cut-down version of a flagship program; it is the flagship program. As a result, only Mixcraft 6 Home Studio actually removes features to meet a sub-$50 price point, and the Pro Studio 6 version is more about adding extra features to the core program that, while welcome, may not be features all users would want (like mastering effects).
So whether you’re interested in Mixcraft 6 or Mixcraft 6 Pro Studio, you get a very complete program—and don’t forget about the video features—at what can only be considered a righteous price. What’s also interesting is how many “little touches” are included that show someone’s really thinking about what musicians need in a program. For example, every track has a built-in tuner you can access with one click. Simple? Yes. Effective? Yes. If you hover the mouse over a track’s FX button, you’ll see a list of inserted effects; right-clicking on it opens the GUIs for all effects, including ones that are part of an effects chain. And when you save a file, Mixcraft automatically generates a backup. I can’t believe all programs don’t do this, but at least Mixcraft got the memo that backups are a Good Thing. You can burn projects to CD, and also mix down to MP3, WMA, or OGG formats as well as WAV files—no separate encoder needed. Is Mixcraft for you? It’s easy enough to find out: Download the demo. I think you’ll be as surprised as I was at what this low-cost, efficient, and user-friendly DAW can do.

Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
20. Sony expands its wireless line with multiple digital offerings—including versions for guitar/bass and handheld mic, designed especially for musicians

DWZ-B30GB Guitar/Bass ($499.99 MSRP, $399.99 street)
DWZ-M50 Handheld Mic ($699.99 MSRP, $549.99 street)

by Craig Anderton

I was never a fan of analog wireless, because it sometimes had the potential to turn from “wireless” into “w1reL3 ss,” if you catch my drift. I also didn’t like the companding that was usually employed to overcome the inherently questionable signal-to-noise ratio. My initial experience with digital wireless was Line 6’s XD-V70 wireless mic, and it made me a believer. First, of course, was sound quality—no companding, just linear PCM digital audio. Second was what happened when you got out of range: It just stopped. There was no noise, chattering, weird fades, or artifacts; either the receiver picked up the audio, or it didn’t. Chalk up another area where digital has bested the analog world, although in the case of wireless, the tradeoff can be a higher price point. Now Sony has entered the affordable digital wireless arena with their DWZ line. We’ll look at the DWZ-B30GB for guitar/bass first, and then proceed to the DWZ-M50 handheld wireless mic for vocals.

DWZ-B30GB BASICS

This is a license-free 2.4GHz system that includes a bodypack transmitter, receiver, belt clip, guitar-to-bodypack cable, printed manual, and CD-ROM with manuals in English, French, Spanish, German, and Italian (Fig. 1).

Fig. 1: The package contents. Clockwise from top: AC adapter, receiver, cable, bodypack transmitter, belt/strap clip, CD-ROM with the manual in five languages, and printed manual in English.

The digital audio format is 24-bit linear PCM, with no compression or other processing, so the sound quality blows away the average analog wireless system. The body pack is about 2.5" x 3.75" x 0.75", with a stubby antenna protruding about 7/8" from the body.
It has two switches, for mic/instrument level and attenuation (0, -10, and -20dB). The instrument jack is an 1/8" type, which mates with the included 1/4"-to-1/8" cable (it’s about 32" long). The only other controls are switches for lock/unlock to prevent accidental changes of channel or power/muting, and channel select (complemented by a seven-segment LED readout to show the channel). Power comes from two AA cells; an LED indicator shows battery strength (but only two states—“good” and “almost dead”), while another LED shows the audio state—signal present, excessive level, weak, or mute enabled. In other words, it’s pretty easy to deal with. The receiver is light and compact—about 5-1/8" x 2-7/8" x 1-5/8". There’s a balanced XLR out (Fig. 2), an additional 1/4" main out jack (both can be used simultaneously), and a 1/4" tuner out jack.

Fig. 2: You can send audio to the front-of-house mixer from the XLR, while driving a guitar amp from the 1/4" output and feeding a tuner with the second 1/4" jack.

A very nice touch is that muting audio at the bodypack doesn’t mute the tuner output, so you can still tune no matter what. A switch chooses between narrow and wide RF modes (more on this later), with one six-position rotary switch for Channel, and an eight-position rotary switch for Cable Tone. The latter lets you match the wired and wireless sounds as closely as possible by using high-cut filtering to emulate the loading effect of different length cables on your pickups; the switch is calibrated in meter lengths, from 1 to 25 (but really, any guitarist who uses a 25m cable is certainly the target market for a wireless system!). There are also several indicator/status LEDs. Power comes from either the included 12V adapter, a 9V negative-tip input (e.g., from a pedalboard power source), or a 9V transistor radio type battery.
The pedalboard power feature is great when you have a wired connection from the pedalboard back to your amp, but want to be liberated from patching into your pedalboard. With alkaline batteries, Sony estimates about 10 hours’ battery life for the belt pack, and 3.5 hours for the receiver. Note that both units also have mystery USB micro-B connectors; these aren’t referenced in the documentation, but Sony confirmed they’re included for potential software updates.

DWZ-B30GB OPERATION

It’s very easy to get the system going. There are two RF modes: Wide Band (optimized to reduce interference to other wireless equipment) and Narrow Band (optimized for avoiding interference from other wireless equipment). You do need to stick with one mode or the other when using multiple units on different channels. For the bodypack, you choose one mode or the other on power-up—hold the channel select button down while turning on power, choose the mode, then do a long press to confirm the mode and set the channel. If you want to change the channel, another long press lets you do so, and short presses cycle through the channels. Normally I wouldn’t go into this level of detail in a review—this is more the province of manuals, right?—but I wanted to get across what’s involved in doing setup, as it’s pretty painless. At the receiver end, just switch-select the mode, then spin the channel dial until it matches what you set on the bodypack. There are six channels total, and as long as the transmitter and receiver are set to the same mode and channel, there’s not much that can go wrong other than a dead battery or going out of range. As to the Cable Tone function, it’s both weird and brilliant—weird because I’m always trying not to load down my guitar, but brilliant because cords do load down guitars with passive pickups, and that has become part of some musicians’ sound. Now they can dial in the desired amount of degradation.
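The physics behind Cable Tone is simple: cable capacitance forms a low-pass filter with the pickup’s source impedance, so a longer cable means a lower cutoff. Here is a rough sketch of that relationship; the capacitance-per-meter and impedance figures are illustrative assumptions, not Sony’s actual model (a real passive pickup is an inductive source, which also produces a resonant peak this one-pole model ignores):

```python
import math

# Rough cable-loading model: pickup source impedance plus cable capacitance
# form a one-pole low-pass filter. Both constants below are assumed values.
CABLE_PF_PER_M = 100.0        # typical guitar cable: ~100 pF per meter
PICKUP_IMPEDANCE = 10_000.0   # effective source impedance in ohms (assumed)

def cable_cutoff_hz(length_m):
    """Approximate -3 dB point of the cable's low-pass loading effect."""
    c_farads = length_m * CABLE_PF_PER_M * 1e-12
    return 1.0 / (2.0 * math.pi * PICKUP_IMPEDANCE * c_farads)

for meters in (1, 5, 10, 25):
    print(f"{meters:2d} m cable -> cutoff ~{cable_cutoff_hz(meters):,.0f} Hz")
```

The point of the sketch is the trend, not the exact numbers: each step up on the Cable Tone switch corresponds to more simulated capacitance, pulling the high-cut frequency down.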
THE DWZ-M50 MICROPHONE

The DWZ-M50 is somewhat more ambitious and expensive than the DWZ-B30GB, but is equally easy to use and also works in the 2.4GHz band with 24-bit PCM audio. Here’s what’s included (Fig. 3).

Fig. 3: The package contents. Clockwise from top: AC adapter, receiver, CD-ROM with the manual in five languages, printed manual in English, cable, handheld microphone, and the two antennae. The mic stand clip is in the center.

Let’s start with the cardioid (unidirectional) dynamic mic. Even with batteries it feels a little lighter than an SM58, despite an overall slightly larger diameter and a body that extends about 3.25" beyond that of an SM58. However, when you consider that the SM58 is invariably wedded to an XLR connector at the end, and of course the Sony isn’t (hey, it’s wireless!), then practically speaking the Sony is only about 1.25" longer due to its protruding, stubby antenna. I’m not sure what mic capsule Sony is using, but it’s in the same sonic league as popular stage-oriented dynamic mics. I did find it necessary to use a wind screen (as I do with all mics), and being a dynamic, I appreciated the 5-band EQ included in the receiver so I could boost the highs just a bit. The mic has a removable/interchangeable wind screen, and a removable/interchangeable capsule (specified as needing a 31.3mm diameter and a 1.0mm thread pitch; Sony says the mic is compatible with their CU-C31, CU-F31, and CU-F32 mic capsules). Unscrewing the element lets you access an attenuator with settings of 0, -6, and -12dB. Furthermore, you can unscrew the mic grip to reveal the lock/unlock slide switch, channel display, channel selector button, and (like the guitar system) a USB micro-B connector.
The power/muting button is always accessible, and the battery/muting indicator is always visible; like the DWZ-B30GB, the battery indicator displays one of two states: good, or “you-better-put-in-new-batteries-soon.”

THE DWZ-M50 RECEIVER

The receiver is larger than the one for the DWZ-B30GB, although it has the same complement of output jacks (including—yes—a USB micro-B connector); one difference is that the XLR is switchable between mic and line levels (Fig. 4). There are also connectors for the two antennae. The receiver can’t be battery-powered, but uses the included 12V (positive tip) adapter.

Fig. 4: The receiver’s rear panel has balanced XLR and 1/4" jacks, as well as all other connectors.

The front panel is dominated by a large and extremely readable color LCD, and all adjustments are menu-driven from a variety of menus (Fig. 5). Again, you can choose between wide and narrow band operation, but channel setup works somewhat differently than you might expect; instead of choosing a channel on the mic and having the receiver home in on it, you can have the receiver scan for the optimum channel, or scan for clear channels and display which ones have low, moderate, or high interference. In either case, you then set the mic channel to match. You can also select channels manually, but I don’t see any reason not to let the receiver do the work for you unless you’re using multiple units.

Fig. 5: The display prior to turning on the mic.

Goodies in addition to the graphic EQ include the option to set whether the aux/tuner out jack passes or blocks signal when the mic is muted, and the ability to optimize the remaining transmitter battery time display for alkaline, Ni-MH, or lithium batteries. In use, the display shows the selected channel, signal strength for each antenna, audio levels, estimated remaining battery time for the transmitter, and whether the equalizer is on or off (Fig. 6). Fig.
6: The display indicates signal strength, audio levels, and other parameters.

CONCLUSIONS

I tested both systems for range. It’s important to find a location that’s not next to a major interference source; out of curiosity, I set the receiver up within a few inches of a wireless modem, and not surprisingly the receiver couldn’t find a clear channel. Moving it just a few feet gave a couple of clear channels, and a little further away, all the channels were available with minimal interference. Under real-world conditions, when using both the guitar and mic transmitters indoors with various objects in between them and the receivers (as well as some random RF interference), I was able to get a 100% reliable connection at 70 feet away. I ran out of open space at that point, but Sony says the maximum line-of-sight range can be up to 200 feet for the DWZ-B30GB and 300 feet for the DWZ-M50. When I put three walls between the transmitter and receiver at 70 feet, the connection was no longer reliable, which, based on prior experience, was expected. The squelching when going from signal to no signal wasn’t as elegant as some more expensive systems, but thanks to digital technology, as long as I was in range the sound quality didn’t change—no cut-outs or pops. You probably can sing from the balcony seats if you’re line-of-sight and there’s not a lot of interference; for typical distances—i.e., anywhere on a big stage—you’re good to go. It’s clear Sony’s intention was to combine performance with cost-effectiveness. Of course, being digital, the DWZ systems start off with an inherent advantage; but the implementation is also noteworthy.
The guitar/bass version is simpler, less expensive, and slightly easier to set up, but it also includes clever options, like the cable emulator and the ability to run the receiver from a pedalboard’s power supply in case you want to go wireless to your pedalboard rather than your amp (or if you just don’t want to carry around one more AC adapter). The mic is equally adept at performing its duties, but with a somewhat heftier feel (and price). Again, there are extras—like the graphic EQ, excellent display, and ability to use other capsules, as well as somewhat greater range. It also helps that the mic “feels” right, with sound quality comparable to “industry standard” mics; the battery life is excellent, too. This is Sony’s first foray into affordable digital wireless for musicians, but they got it right from a technical standpoint, as well as in terms of the user interface. The bottom line is that as long as the power sources are doing their thing, you’re not in a super-dirty RF environment, and the transmitter and receiver are set to the same channel, you really can’t go wrong.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
21. If you're getting started in desktop mastering, these five tips will serve you well

by Craig Anderton

Mastering is a specialized skill; but if you want to be able to master your own material, the only way you’ll get good at it is to do it as much as possible. While we’d need a book to truly cover desktop mastering (I like Steve Turnidge’s Desktop Mastering book so much I endorsed it), these five essential tips will make your life a lot easier, regardless of your level of expertise.

Save all of a song’s plug-in processor settings as presets. After listening to the mastered version for a while, if you decide to make “just one more” slight tweak—and the odds are you will—it will be a lot easier if you can return to where you left off. (For analog processors, take a photo of the panel knob positions.) Saving successive presets makes it easy to return to an earlier version.

With loudness maximizers, never set the “ceiling” (maximum level) to 0dB. Some CD pressing plants will reject CDs if they consistently hit 0dB for more than a certain number of consecutive samples, as it’s assumed that indicates clipping. Furthermore, any additional editing—even just crossfading the song with another during the assembly process—could increase the level above 0. Don’t go above -0.1dB; -0.3dB is safer. Setting an output ceiling (i.e., maximum output level) below 0dB will ensure that a CD duplicator doesn't think you've created a master with distortion. Typical values are -0.1dB to -0.5dB.

Halve that change. Even small changes can have a major impact—add one dB of boost to a stereo mix, and you’ve effectively added one dB of boost to every single track in that mix. If you’re fairly new to mastering, after making a change that sounds right, cut it in half. For example, if you boost 3dB at 5kHz, change it to 1.5dB. Live with the setting for a while to determine if you actually need more—you probably don’t.

Bass management for the vinyl revival.
With vinyl, low frequencies must be centered and mono. iZotope Ozone has a multiband image widener, but pulling the bass range width fully negative collapses it to mono. Another option is to use a crossover to split off the bass range, convert it to mono, then mix it back with the other split. Narrowing the bass frequencies can make a more "vinyl-friendly" recording; in Ozone, for example, the bass region (Band 1) can be narrowed to mono with a width setting of -100.0%.

The “magic” EQ frequencies. While there are no rules, problems involving the following frequencies crop up fairly regularly.

Below 25Hz: Cut it—subsonics live there, and virtually no consumer playback system can reproduce those frequencies anyway.
300-500Hz: So many instruments have energy in this range that there can be a build-up; a slight, broad cut helps reduce potential “muddiness.”
3-5kHz: A subtle lift increases definition and intelligibility. Be sparing, as the ear is very sensitive in this range.
15-18kHz: A steep cut above these frequencies can impart a warmer, less “brittle” sound to digital recordings.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
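The crossover trick from the bass-management tip above—split off the low band, collapse it to mono, recombine with the rest—can be sketched in a few lines. This is a simplified illustration using plain Butterworth filters via scipy; the crossover frequency and filter order are my assumptions, and a mastering-grade tool would use phase-matched (e.g., Linkwitz-Riley) crossovers:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def mono_ize_bass(stereo, sample_rate, crossover_hz=120.0):
    """Collapse content below crossover_hz to mono; leave the rest stereo.

    stereo: float array of shape (2, n_samples). The 120 Hz crossover and
    4th-order filters are illustrative choices, not mastering defaults.
    """
    lp = butter(4, crossover_hz, btype="low", fs=sample_rate, output="sos")
    hp = butter(4, crossover_hz, btype="high", fs=sample_rate, output="sos")
    low = sosfilt(lp, stereo, axis=1)
    high = sosfilt(hp, stereo, axis=1)
    low_mono = low.mean(axis=0)        # sum L and R bass to mono
    return high + low_mono             # same mono bass under each channel

# Example: a 60 Hz tone panned hard left comes out essentially centered.
sr = 44100
t = np.arange(sr) / sr
stereo = np.vstack([np.sin(2 * np.pi * 60 * t), np.zeros_like(t)])
out = mono_ize_bass(stereo, sr)
```

Because the two filters here aren’t phase-matched, summing the bands back together isn’t perfectly transparent around the crossover point—which is exactly why dedicated mastering tools handle this internally.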
22. So is this a crazy or brilliant idea? Better read the entire review before making up your mind

$999.99 MSRP, $499.99 street
www.peavey.com
www.autotuneforguitar.com

by Craig Anderton

When Gibson introduced their Robot self-tuning technology, I took a lot of flak on forums for defending the idea. A typical comment was “I already know how to tune a guitar, that’s a really stupid idea,” to which my response was “yes, but can you tune all six strings perfectly in under 15 seconds?” In my world, time is money. Sure, I can tune a guitar. But when I was recording sample and loop libraries with guitar, I’d spend 30-40% of my time tuning, not playing, because libraries have to be perfectly in tune. To pick up a guitar, pull up a knob, strum, and get back to work was a revelation. And as a side benefit, being able to do alternate tunings live in the blink of an eye, and get back to perfect tuning without making the audience wait, were powerful recommendations for automatic tuning. Which brings us to the AT-200. It’s based on an entirely different approach and technology compared to Robot tuning, but accomplishes many of the same goals—and has its own unique attributes that are made possible only by clever application of DSP. Robot tuning works by using electronics to monitor the string pitch, and servo motors to tune the strings physically by turning the machine heads. The AT-200 is based on Antares’ Auto-Tune—yes, the same vilified/praised technology used on vocalists to do everything from turning their voices into machine-like gimmickry to touching up a vocal line so transparently and subtly you don’t even know it’s being used. Sure, Auto-Tune is used to make lousy singers sound bearable. But it’s also a savior for great singers who nail the vocal performance of a lifetime except for that one flat note at the end of a phrase. With the AT-200, Auto-Tune uses DSP-based pitch transposition to correct each string’s audio output so it sounds in tune (Fig. 1).
As a result, the physical string itself can be out of tune, but it doesn’t matter; what you hear coming out of the amp is in tune. This leads to a disconnect for some people, because the physically vibrating string may not match what comes out of your amp (this also happens with the Line 6 Variax when you do alternate tunings; Robot technology doesn’t do this, because it’s adjusting the actual string pitch).

Fig. 1: The board that serves as the AT-200’s brain.

This is a little bizarre at first, but it simply means turning up the amp to where it’s louder than the strings (not too hard, given that the AT-200 is a solid-body guitar). In the studio, if you’re using headphones while laying down a part, you won’t hear the strings anyway. As a result there can be times when your brain is saying “it’s not in tune” while your ears are telling you “it’s in tune.” Believe your ears! If you tune close enough to begin with, Auto-Tune doesn’t have to work too hard, and the most you’ll hear is a chorusing effect if the strings are slightly off-pitch. There’s a sonic difference between the Auto-Tuned sound and that of the straight pickups; the level is lower, and the sound lacks some of the treble “snap” of the magnetic pickups (I really like the pickups, by the way). However, what you don’t hear are the artifacts typically associated with pitch-shifting. When recording, I simply increased the input level on the interface and added some high-frequency shelving to compensate. More importantly, the “native” Auto-Tuned sound needs to be fairly neutral to allow for the upcoming guitar emulations; if there’s too much “character” that’s weighted toward a specific guitar, then you have to “undo” that before you can start emulating other guitar sounds.

BUT IF YOU THINK THAT’S ALL THERE IS TO IT . . .

This might seem like a good time to stop reading if you have other things to do—okay, there are signal processors that tune each string, great, I get it. But keep reading.
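In outline, per-string correction of this kind snaps each detected pitch to the nearest equal-tempered note, and releases its grip when a pitch drifts far enough to look like a deliberate bend. Here is a toy sketch of that logic; the 30-cent window and the overall behavior are my assumptions for illustration, not Antares’ actual algorithm:

```python
import math

A4 = 440.0  # reference pitch

def nearest_note_hz(freq_hz):
    """Snap a frequency to the nearest 12-TET pitch (A4 = 440 Hz)."""
    semitones = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones / 12)

def correct(freq_hz, window_cents=30.0):
    """Correct a near-target pitch; pass a wide deviation through as a bend."""
    target = nearest_note_hz(freq_hz)
    cents_off = 1200 * math.log2(freq_hz / target)
    return target if abs(cents_off) <= window_cents else freq_hz

print(correct(445.0))  # slightly sharp A: snapped to 440.0
print(correct(452.0))  # ~47 cents off: treated as a bend, passed through
```

The real system of course works on the audio signal itself rather than on a single frequency number, and tracks each string continuously—but the snap-within-a-window idea is the core of why fretted chords come out perfectly intonated while bends and vibrato are left alone.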
One of the side benefits is there’s perfect intonation (what Antares calls “Solid-Tune™”) as you play. You know those chords with really difficult fingerings where you end up pushing a string slightly sharp? No more, as long as you strum the chord after fretting (if the pitch changes after strumming but remains within a small pitch window, the AT-200 will correct it; otherwise it will think you’re bending, and not correct it). It’s freakish to play a guitar where no matter how difficult the fingerings or where you are on the neck, the intonation is perfect. Not only is this aesthetically pleasing, but there’s a “domino effect” with distortion: You hear the same kind of “focused” distortion normally associated with simply playing tonics and fifths. Note that it’s not doing Just Intonation; everything still relates to the Western 12-tone scale (but I’d love to see an add-on for different intonations). If you think this would cause problems with bends or vibrato, Antares has figured that out. If a pitch is static, Auto-Tune will correct it. But as soon as the pitch starts to move outside of a small pitch window because you’re bending a note or adding vibrato, the correction “unlocks” automatically for that string. You simply don’t run into situations where Auto-Tune tries to correct something you don’t want corrected. The system also allows for alternate tunings, as long as the tuning involves shifting down (future add-ons are slated to address alternate tunings where pitches are shifted up from standard). Auto-Tune works based on the pitch at the nut, but you can fool it into thinking the nut is somewhere else. For example, suppose you want a dropped D tuning. Fret the second fret on the sixth string (F#), strum the strings, and initiate tuning. Auto-Tune will “think” the F# is the open E, and tune F# to E. So now when you play the open E string, you’ll hear a D, as the string is transposed down a whole step. It gets better. Want that heavy metal drop tuning?
Barre on, for example, the fourth fret while tuning, and now whatever you play will be transposed down four semitones. Being a wise guy, I tried this on the 12th fret and—yes, I was now playing bass. What’s more, it actually sounds like a bass. Say what? Or try this: fret the 12th fret on only the 5th and 6th strings. Now when you play chords, you’ll have one helluva bottom end. The manual gives suggested fingerings to create various alternate tunings—open G, baritone, DADGAD, open tunings, and the like. The only caution with alternate tunings is that you need to press lightly on the string when engaging the Auto-Tune process. If you press too hard and the string goes slightly sharp, Auto-Tune will obligingly tune those fretted strings slightly flat to compensate.

WHAT ABOUT THE GUITAR?

Of course, all the technology in the world doesn’t matter if the guitar is sketchy. It seems Peavey wanted to avoid the criticisms the original Variax endured (“great electronics, but what’s with the funky guitar?”). Obviously Line 6 did course corrections fairly quickly with subsequent models, and the recent James Tyler Variax is a honey of a guitar by any standards. But Peavey needed to walk the fine line between a guitar you’d want to play, and a price you’d want to pay. They chose the basic Predator ST “chassis,” which is pretty much Peavey’s poster child for cost-effectiveness. Read the reviews from owners online; I’ve seen several where someone bought a Predator as a replacement or second guitar, but ended up using it as their main axe. The general consensus—which includes me—is that the Predator is a highly playable, fine-sounding guitar whose quality belies its price, with solid action and out-of-the-box setup. Not surprisingly, so is the AT-200. Spec-wise, it has a bolt-on, North American rock maple neck with a 25.5" scale, 24 frets, 15.75" radius, and rosewood fingerboard (Fig. 2).

Fig. 2: The AT-200 features a bolt-on neck.
The body is solid basswood, with a quilted maple cap; available finishes are black and candy apple red. The pickups are humbuckers with alnico 5 magnets (Fig. 3), and one of the highly welcome AT-200 features is that you can use it like a regular guitar—if the batteries die during the gig, just pull up on the tone knob and the pickups go straight to the audio output.

Fig. 3: Pickups and the complement of controls.

Other features are a three-way pickup selector, and string-through-body construction for maximum sustain (Fig. 4).

Fig. 4: Detail of the bridge pickup and bridge; note the string-through-body construction.

The tuners are decent. They’re diecast types with a 15:1 gear ratio, mounted on a functional but plain headstock (Fig. 5). The guitar doesn’t come with a case, so factor that into the price; also figure you’ll want the breakout box, described later.

Fig. 5: AT-200 headstock and tuners.

EASE OF USE

The guitar ships with a removable “quick start” overlay and, frankly, it could double as the manual (Fig. 6).

Fig. 6: This pretty much tells you everything you need to know to get up and running.

You make sure four AA cells are inserted (see Fig. 7; alkalines last about nine hours); plugging in the guitar turns on the electronics. Push down on the Tone control to activate the Auto-Tune technology, strum all six strings, and push down on the volume knob to initiate tuning. Done. Yes, it’s that simple. If you want Auto-Tune out of the picture, pull up on the Tone knob.

Fig. 7: The battery compartment is closed, and to the right of the exposed cavity with the electronics.

THE FUTURE

I never advise buying a product for what it “might” do, only for what it does, because you never know what the future will bring. That said, though, it’s clear Peavey and Antares have plans. There’s a clear division of labor here: Peavey provides the platform, while Antares provides the software.
In addition to the standard 1/4" audio output, the AT-200 has an 8-pin connector + ground that connects to an upcoming breakout box. This is expected early in Q1 2013, and is slated to sell for under $100. It will provide power to the guitar so you don’t need batteries, as well as an audio output. There will also be MIDI for use with external MIDI footswitches for tasks like preset selection, as well as for doing updates. If you want to do updates but don’t want the breakout box, a “MIDI update cable” with the 8-pin connector on one end and MIDI on the other will cost $13 and allow doing updates from your computer. At the Antares end of things, this is a software-based platform, so there are quite a few options. They’ve already announced an upcoming editor for live performance that runs on iOS devices; it lets you specify pickup sounds, alternate tunings, pitch shifting, “virtual capo” settings, and the like. I saw this software in prototype form at a press event that introduced the technology, so I would imagine it’s coming very soon. Antares has also announced AT-200 Software Feature Packs that add optional-at-extra-cost capabilities in three versions—Essential, Pro, and Complete. For example, Essential includes processing for three different guitar sounds, Pro has six, and Complete has nine unique guitar voicings as well as bass. They also include doubling options (including 12-string), various tunings, and the like. These are all described on the www.autotuneforguitar.com web site. ROBOT OR AUTO-TUNE? This review wouldn’t be complete without a comparison. Both work and both are effective, but they’re fundamentally different. The biggest difference is that with the Robot system, because it works directly on tuning physical strings, “what you hear is what you get.” With alternate tunings, the guitar is actually tuned to those tunings. Also, the audio output is the sound of the string; there’s no processing. 
As a result, there’s zero difference between the sound made by the guitar and the sound coming out of the amp. Robot tuning is for those who prioritize tonal purity, and are willing to pay for the privilege. Auto-Tune trades off the physical string/resulting sound disconnect for more flexibility. You’ll never be able to tune physical strings up or down an octave, but you can do that with virtual strings—and the tuning process is close to instant. Although the audio is processed, the impact on the sound is minimal—but still, there’s a layer of electronics between the string and you. On the other hand, that’s also what allows for emulating different characteristic guitar sounds. What’s surprising, though, is that there’s no discernible latency. (Well, there has to be some; laws of physics, and all that. But it’s not noticeable, and I’m very sensitive to timing.) Furthermore, the fact that this processing doesn’t add artifacts to the guitar’s tone is, to me, an even more impressive technical accomplishment than changing pitch. BELIEVE IT With apologies to Peavey and Antares, there’s something about this concept that makes you want to dismiss it. C’mon . . . Auto-Tune on a guitar? Taking out-of-tune strings and fixing them? Perfect intonation no matter where you play? Add-on software packs? What the heck does this have to do with my PRS or Les Paul or Strat? Now, those are real guitars! Except for one thing: the AT-200 is a real guitar (Fig. 8). Unless you notice the 8-pin connector, you’d never know there was anything different about this guitar. Play it, and it plays like a guitar . . . and it feels and looks like a guitar. All the magic is “under the hood,” and you don’t know it’s there until you start playing. The ease of use is off the hook. If it takes you more than a minute or two to be up and running, you might want to consider a different career. Fig. 8: The AT-200 doesn’t exactly look like a high-tech marvel . . . which is one of its strengths. 
Yes, it’s priced so that those getting serious about guitar can afford an AT-200, and derive the benefits of not having to hassle with tuning or worry about intonation. But I suspect a lot of veterans will add this to their guitar collection as well. After I got used to Robot tuning, it was always strange to go back to guitars where I just couldn’t push a button and be in tune. After getting used to the AT-200, it’s disorienting to go back to guitars that don’t have perfect intonation. Nor is it like vocals, where using Auto-Tune arbitrarily to create perfect pitch takes the humanity out of the performance; with guitar chords, out-of-tune notes just sound . . . well, wrong, and well worth fixing. And if you want to bend and slide, go ahead—the correction will wait in the wings until you want it again. Overall, this is a surprising, intelligent, and novel application of technology with extraordinarily practical consequences. After seeing prototypes, I expected to think the AT-200 was clever; I didn’t expect to think it was brilliant . . . but it is. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  23. Check out this collection of tips and techniques from one of today’s most prolific UK Garage producers by Jeremy Sylvester Garage has been around since the 1990s, but it continues to influence other EDM genres as well as retain its own following. Whether you’re interested in creating “pure” Garage music using UK Garage loops or want to incorporate some of its elements in other forms of music, the following tips should help get you off to a good start. THE GROOVE Drums are the backbone of any Garage production, and a solid drum groove is the most essential element in any UK Garage track. Before getting into choosing your sounds, remember that timing is everything. Shuffling, swung beats give UK Garage its unique stamp—so when building your drum pattern, it’s important to set your quantize/swing groove to between 50-56% (Fig. 1). This will set the tone for the rest of the elements added later on. Fig. 1: Setting a little swing for MIDI grooves or quantized audio grooves gives more of a Garage “feel.” This screen shot shows the Swing parameter in Logic Pro's Ultrabeat virtual drum machine. BUILDING THE DRUM KIT Creating good drum patterns requires a good drum kit, so let’s start with the kick drum. Spend time searching for good sounds; for 4x4 Garage tracks, a strong, punchy kick drum that’s not too bass-heavy, with some midrange presence, is the perfect starting point for any groove. This will leave some headroom for when you start to look for bass sounds to create bass line patterns later on; you don’t want the kick to take over the low end completely. Once you’ve decided on a kick (of course, with DAWs you can always change this later on), search for a nice crispy clap. If it has too much sustain, try to take some release off it and shorten its length. You want it to sound quite short and sharp, but not too short, as you still want to hear its natural sound. Next, begin to add all of the other elements for your pattern. 
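The 50-56% swing range mentioned above has a concrete meaning: at 50%, each pair of 16th notes splits the time evenly (straight timing); as the percentage rises, every off-beat 16th lands progressively later within its pair. Here's a rough sketch of that timing math—my own illustration of MPC-style swing, not any particular DAW's implementation:

```python
# Apply MPC-style swing to a grid of 16th-note onsets (in beats,
# where one beat = four 16ths of 0.25 beats each).
def swing_16ths(onsets_in_beats, swing_pct):
    """Delay every off-beat 16th; 50 = straight, 56 = UKG shuffle."""
    swung = []
    for t in onsets_in_beats:
        step = round(t / 0.25)          # which 16th-note grid slot
        if step % 2:                    # off-beat 16ths get pushed late
            pair_start = (step - 1) * 0.25
            # The off-beat lands swing_pct% of the way through the
            # 0.5-beat pair instead of exactly halfway.
            swung.append(pair_start + 0.5 * swing_pct / 100.0)
        else:                           # down-beat 16ths stay put
            swung.append(t)
    return swung
```

At 56%, each off-beat 16th arrives 0.03 beats later than straight time—small on paper, but clearly audible as shuffle at typical Garage tempos.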
It’s very important to keep the groove simple, with enough space in the groove to add all your other sounds later on. Lots of people make the mistake (myself included!) of over-complicating the drums—as they say, less is more. The key is to make sure every element of your pattern has a distinct role, so that every drum element is there for a reason. When programming drums, imagine you are a “drummer” and concentrate on how a drummer plays to help you construct patterns. Another good tip is to make several patterns, all slightly different, to give your overall groove some variety. Also, keep your hi-hats neat and tidy; you don’t want them to sound undefined and “soupy.” PLACEMENT AND EFFECTS Keep the kick drum and other bass parts in mono, with other drum elements (such as hi-hats) in stereo to give the groove a nice spread. Maintaining bass frequencies in mono is particularly important if you ever expect a track to appear on vinyl. Resist temptation, and keep effects on the drums to a bare minimum. Too many effects (such as reverb) can drown out the groove and make it too wet, which sacrifices the energy of the drums. This will be very noticeable over a club sound system, more so than in the studio. Additionally, try playing around with the pitch of the sounds (Fig. 2). De-tuning kick drums or percussive elements of your groove will bring another dimension to your pattern and completely change the overall vibe. Fig. 2: Most samplers and drum modules (this screen shot shows Native Instruments' Battery) provide the option to vary pitch for the drum sounds. CHORDS, STABS, AND MELODIES As well as the groove drum pattern, another important element of UK Garage is the melodic structure. If, like many people, you don’t play keyboard, then you can always use one-shots/hits to help you. One-shots can be in the form of short keyboard chord hits, bass notes, percussive sounds, or synth stabs. 
When adding melodic elements to create a pattern, listen to the drum groove you have and work with it, not against it. The rhythmic pattern of your melody must complement the groove; in other words, the drum pattern and melody line must “talk to each other,” and the melody must become part of the groove. Try using lowpass filters automated by an envelope, as well as effects, to manipulate the sound and create movement; then add reverb for depth and warmth. Use parameter controls over velocity maps, for example, to control cutoff and decay and add variations. This will create shape, and adding some compression will really bring out some new life in your sound. If you are going for a rhythmic UK Garage 4x4 style, space is important. When I mentioned “less is more” above, it really applies here. Picture a melody in your head and imagine how people will be dancing to it. This will determine the way you create your melodic groove pattern. UKG melodic patterns tend to be “off-beat” grooves, not straight-line groove patterns. This is what gives Garage its unique style and vibe. When choosing sounds, look for rich harmonic sounds; some good options are obscure jazzy chords, deep house chord stabs, or even sounds sampled from classic keyboard synths (such as Korg’s M1 keyboard for those classic organ and house piano patches). ARRANGEMENT When arranging your song, always keep the DJ in mind and imagine how he/she will be mixing your track within their DJ set. The intro is very important for DJs, as this allows them enough room to mix your track into another. Make your arrangement progress in 16-bar sections, so the DJ and the clubber know when to expect changes within the song. Within each of these sections, some elements of the groove may consist of 1-, 2-, 4-, or 8-bar repeating patterns. These elements tend to move around by adding, removing, or altering every four or eight bars. 
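The velocity-map idea above—letting how hard a note is struck control filter cutoff—can be sketched as a simple mapping. This is a hypothetical illustration (the function name and frequency range are my own inventions); the real mapping lives inside whatever synth or sampler you're using:

```python
# Map MIDI velocity (1-127) to a lowpass cutoff in Hz so that
# harder hits open the filter and sound brighter.
def velocity_to_cutoff(velocity, min_hz=200.0, max_hz=8000.0):
    """Exponential velocity-to-cutoff map.

    An exponential curve is used because we perceive brightness
    roughly logarithmically; a linear map would make the low
    velocities feel almost identical.
    """
    v = max(1, min(127, velocity)) / 127.0   # normalize to (0, 1]
    return min_hz * (max_hz / min_hz) ** v
```

With these numbers, velocity 127 opens the filter fully to 8 kHz while the softest notes sit just above 200 Hz—so accented chord stabs automatically cut through while ghost notes stay dark.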
Breakdowns tend to be in the middle of the track, so if you have a track that is six minutes long, you can drop the breakdown around the three-minute mark. There is no hard and fast rule to this, so use your imagination; this is intended only as a guide. You could also have a mini-breakdown on either side of this, for instance, right after the intro and just before the first major section of the song when everything is in. Be imaginative, and experiment with different arrangement ideas. You could start with drums, then lead into some intro vocals and then the mini drop, or you could start with a non-percussive intro that builds up into a percussive drum section and then goes into the song’s main section; it’s totally up to you and depends on the elements you have within your song. It’s also a good idea to finish the final section of your song with drums. This is something a DJ really likes, as it once again allows them to start mixing in another track within their DJ set. VOCALS AND VOCAL CHOPS Garage is known for its very percussive vocal chops; this is an essential part of the genre, especially when you are doing “dub” versions. You can use various kinds of MIDI-based samplers and software instruments to do this. Back in the day, Akai samplers were very popular—you would chop up and edit sounds within the device, map them across a keyboard, and play them manually. Nowadays there are many different ways of doing this, with instruments such as Ableton Live’s Simpler or Logic’s EXS24 being the most popular. Another option is to slice a file (e.g., the REX format; see Fig. 3), then map the individual slices to particular notes. Fig. 3: Slicing a file and mapping the slices to MIDI notes makes it easy to re-arrange and play vocal snippets on the fly, or drop them into a production. Furthermore, you can often re-arrange slices within the host program. 
In this screen shot from Reason, the original REX file mapping is on the right; the slice assignments have been moved around in the version on the left. Play around with vocals by chopping up samples at every syllable. You could have a short vocal phrase of 5-6 words, but once it's chopped up and edited you can create double or even triple the number of samples; this lets you manipulate the phrase in any way you want, even completely disguising the original vocal hook. Map these vocals across a keyboard or matrix editor, and have fun coming up with interesting vocal groove patterns over your instrumental groove pattern. Also try adding effects and filters, and play around with the sound envelopes in much the same way you would with the one-shot chord sounds (as explained earlier). Treat the vocals as a percussive element of the track, but listen to the melody and lyrical content so they still make sense with what the track is about. It’s a good idea to program 4-5 variations from which you can choose. I hope you find these tips useful; now go make some great music! This article is provided courtesy of Producer Pack, who produce a wide variety of sample and loop libraries. This includes the Back to 95 Volume 3 library from the article's author, Jeremy Sylvester.
  24. This highly cost-effective controller makes an auspicious debut $299.99 MSRP, $199.99 street samsontech.com by Craig Anderton Keyboard controllers are available in all flavors—from “I just want a keybed and a minimal hit on my wallet” to elaborate affairs with enough faders and buttons to look like a mixing console with a keyboard attached. Samson’s Graphite 49 falls between those two extremes—but in terms of capabilities leans more toward the latter, while regarding price, leans more toward the former. It’s compact, slick, cost-effective, and well-suited to a wide variety of applications onstage and in the studio. OVERVIEW There are 49 full-size, semi-weighted keys and in addition to velocity, Graphite 49 supports aftertouch (it’s quite smooth, and definitely not the “afterswitch” found on some keyboards; see Fig. 1). Fig. 1: Applying and releasing what seemed like even pressure to me produced this aftertouch curve. Controllers include nine 30mm faders, eight “endless” rotary encoders, 16 buttons, four drum pads, transport controls, octave and transpose buttons, mod wheel, and pitch bend (Fig. 2). Fig. 2: There are dedicated left-hand controls for octave, transpose, pitch bend, and mod wheel (click to enlarge). Connectors consist of a standard-sized USB connector, 5-pin MIDI out, sustain pedal jack, and jack for a 9V adapter—generally not needed as Graphite 49 is bus-powered, but if you’re using it with something like an iPad and Camera Connection Kit that offers reduced power, an external tone module, or other hardware where you're using the 5-pin MIDI connector instead of USB, you’ll need an AC adapter. One question I always have with attractively-priced products is how they’ll hold up over time. This is of course difficult to test during the limited time of having a product for review, but apparently UPS decided to contribute to this review with some pro-level accelerated life testing. 
The box containing Graphite 49 looked like it had been used as a weapon by King Kong (against what, I don’t know); it was so bad that the damage extended into the inner, second box that held Graphite 49. Obviously, the box had not only been dropped, but smashed into by something else . . . possibly a tractor, or the Incredible Hulk. But much to my surprise, Graphite worked perfectly as soon as I plugged it in. I did take it apart to make sure all the ribbon connectors were seated (and took a photo while I was at it—see Fig. 3), but they were all in place. Pretty impressive. Fig. 3: Amazingly, Graphite 49 survived UPS’s "accelerated life testing" (click to enlarge). OPERATIONAL MODES Graphite 49 is clearly being positioned as keyboard-meets-control surface, and as such, offers four main modes. Performance mode is optimized for playing virtual synthesizers or hardware tone modules, and gives full access to the various hardware controllers. Zone mode has a master keyboard orientation, with four zones to create splits and layers; the pitch bend, modulation, and pedal controllers are in play, but not the sliders, rotaries, and button controllers. Preset mode revolves around control surface capabilities for several popular programs, and is a very important feature. Setup mode is for creating custom presets or doing particular types of edits. There’s a relationship among these modes; for example, any mode you choose will be based on the current preset. So, if you create a preset with Zone assignments and then go to Performance mode without changing presets, Performance mode will adopt Zone 1’s settings. PRESET MODE: DAW CONTROL Although many keyboards now include control surface capabilities, Graphite 49 provides a lot of options at this price in the form of templates for popular programs (Fig. 4). Unfortunately, though, the control surface capabilities are under-documented; the manual doesn’t even mention that Graphite 49 is Mackie Control-compatible. 
However, it works very well with a variety of DAWs, so I’ve written a companion article (don't miss it!) with step-by-step instructions for using Graphite 49 and similar Mackie Control-compatible devices with Apple Logic, Avid Pro Tools, Ableton Live, Cakewalk Sonar, Propellerhead Reason, MOTU Digital Performer, Sony Acid Pro (also Sony Vegas), Steinberg Cubase, and PreSonus Studio One Pro. (I found that Acid Pro and Vegas didn’t recognize Graphite 49 as a Mackie Control device, but they both offer the option to choose an “emulated” Mackie Control device, and that works perfectly.) Fig. 4: Graphite 49 contains templates for multiple DAWs, including Ableton Live (click to enlarge). The faders control level, the rotaries edit pan, and the buttons usually control solo and mute, but with some variations based on how the DAW’s manufacturer decided to implement Mackie Control (for example, with Logic Pro the button that would normally control solo instead controls record enable). The Bank buttons change the group of 8 channels being controlled (e.g., from 1-8 to 9-16), while the Channel buttons move the group one channel at a time (e.g., from 1-8 to 2-9), and there are also transport controls. (Note that as Pro Tools doesn’t support Mackie Control, you need to select HUI mode, which doesn’t support the Bank and Channel shifting.) Reason works somewhat differently, as Graphite 49 will control whichever device has the focus—for example, if SubTractor is selected, the controls will vary parameters in SubTractor, and if the Mixer 14:2 is selected, then Graphite 49 controls the mixer parameters the same way it controls the mixers in other DAWs. However, Reason 6, which integrates the “SSL Console” from Record, treats each channel as its own device; therefore Graphite 49 controls one channel at a time with that particular mixer. 
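Under the hood, "Mackie Control-compatible" simply means the keyboard speaks a well-known MIDI dialect. For example, by the widely published Mackie Control convention, fader positions travel as 14-bit pitch-bend messages with each of the eight channel faders on its own MIDI channel. A sketch of the raw bytes involved—based on that public convention, not on anything Samson-specific:

```python
# Encode a Mackie Control-style fader move as raw MIDI bytes.
def mcu_fader_bytes(fader_index, position):
    """fader_index: 0-7 (fader 1 = index 0); position: 0-16383 (14-bit)."""
    status = 0xE0 | (fader_index & 0x0F)   # pitch-bend status + MIDI channel
    lsb = position & 0x7F                  # low 7 bits of the position
    msb = (position >> 7) & 0x7F           # high 7 bits of the position
    return bytes([status, lsb, msb])

# Fader 1 at mid-travel: pitch bend on MIDI channel 1, value 8192.
print(mcu_fader_bytes(0, 8192).hex())  # → e00040
```

Because this is an open, DAW-agnostic dialect rather than a proprietary one, even a DAW without a dedicated Graphite template can usually respond to the keyboard through a generic (or "emulated") Mackie Control device.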
I tested all the programs listed above with Graphite 49, but there are additional presets for Nuendo, Mackie Tracktion, MK Control (I’m not quite sure what that is), Adobe Audition, FL Studio, and Magix Samplitude. There are also 14 user-programmable presets, and a default, general-purpose Graphite preset. This preset provides a good point of departure for creating your own presets (for example, when coming up with control assignments for specific virtual instruments). The user-programmable presets can’t be saved via SysEx, but 14 custom presets should be enough for most users. The adoption of the Mackie Control protocol is vastly more reassuring than, for example, M-Audio’s proprietary DirectLink control for their Axiom keyboards, which usually lagged behind current software versions. We’ll see whether these presets can be updated in the future, but it seems that the “DAW-specific preset element” relates mostly to labeling what the controls do, as the Mackie protocol handles the inherent functionality. There’s also a certain level of “future-proofing” because you can create your own presets; if some fabulous new DAW comes out in six months, a little button-pushing has you covered. CREATING YOUR OWN PRESETS Editing custom assignments follows the usual cost-saving arrangement of entering Setup mode, then using the keyboard keys (as well as some of the hardware controls) to enter data. Thankfully, the labels above the keys are highly legible—it seems that in this case, the musicians won out over the graphic designers. The relatively large and informative display (Fig. 5) is also helpful. Fig. 5: When you adjust various parameters, the display gives visual confirmation of the parameter name and its value (click to enlarge). Although I’d love to see Samson develop a software editor, the front-panel programming is pretty transparent. CONTROLS AND EDITS Let’s take a closer look at the various controls, starting with the sliders (Fig. 6). Fig. 
6: There are nine 30mm sliders. While 30mm is a relatively short throw, the sliders aren’t hard to manipulate, and their size contributes to Graphite 49’s compact form factor (click to enlarge). One very important Graphite 49 feature is that there are two virtual banks of sliders—essentially doubling the number of physical controls. For example, the sliders could control nine parameters in a soft synth, but then with a bank switch, they could control another nine parameters. Even better, the rotaries and buttons (Fig. 7), as well as the pads, also have two banks to double the effective number of controls. Fig. 7: The rotary controls are endless encoders. Note that there are 16 buttons, and because there are two banks, that’s 32 switched controls per preset. Speaking of pads (Fig. 8), these provide comfortably big targets that respond not only to velocity, but also aftertouch. Fig. 8: The pads are very useful for triggering percussion sounds, as well as repetitive sounds like effects or individual notes. Rather than describe all the possible edits, some of the highlights are choosing one of seven velocity curves as well as three fixed values (individually selectable for the keyboard and pads), reversing the fader direction for use as drawbars with virtual organ instruments, assigning controls to the five virtual MIDI output ports, changing the aftertouch assignment to a controller number, and the like. Don’t overlook the importance of the multiple MIDI output ports. In its most basic form, this allows sending the controller data for your DAW over one port while using another port to send keyboard notes to a soft synth—but it also means that you can control multiple parameters in several instruments or MIDI devices within a single preset. Finally, the bundled software—Native Instruments’ Komplete Elements—is a much-appreciated addition. 
I’m a huge fan of Komplete, so it was encouraging to see that NI didn’t just cobble together some throwaway instruments and sounds; Elements gives you a representative taste of what makes the full version such a great bundle. A lot of “lite” versions are so “lite” they don’t really give you much incentive to upgrade, but Elements will likely leave you wanting more because what is there is quite compelling. CONCLUSIONS I’ve been quite impressed by Graphite 49, and very much enjoy working with it. The compact form factor and light weight make it very convenient to use in the studio, and UPS (along with the keyboard’s inherent capabilities) proved to my satisfaction that Graphite 49 would hold up very well for live performance. In some instances when my desktop was covered with devices I was testing, I simply put Graphite 49 on my lap. There are few, if any, keyboard controllers that could fit on my lap so easily while offering this level of functionality. My only significant complaint is that I feel the documentation could be more in-depth—not necessarily because there’s a problem with the existing documentation, but because I suspect that Graphite 49’s cost-effective pricing will attract a lot of newbies who may not be all that up to speed on MIDI. Veterans who are familiar with MIDI and have used controllers will have no problem using Graphite 49, but it would be a shame if newbies didn’t take full advantage of Graphite 49’s considerable talents because they didn’t know how to exploit them. Samson is a new name in controllers; my first experience with their line was Carbon, which I also reviewed for Harmony Central. Its iPad-friendly design and exceptionally low cost got my attention, but Graphite 49 gives a clearer picture of where the company is heading: not just inexpensive controllers, but cost-effective ones that are suitable for prosumer and pro contexts. 
Graphite 49’s full-size keys, compact footprint, comfortable keybed, control surface capabilities, and pleasing aesthetic design are a big deal—and at this price, you’re also getting serious value. I’d be very surprised if Samson doesn’t have a hit on their hands. Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  25. I like things that solve problems . . . $32.99 MSRP, $19.99 street www.planetwaves.com by Craig Anderton Reviewing a guitar strap may seem ridiculous, but this isn’t your normal guitar strap—here’s why. Have you ever had a strap slip off an end pin? I sure have. Fortunately, thanks to quick reflexes developed in my errant youth by playing excessive amounts of pinball, I was usually able to grab the guitar before it went crashing to the floor. Except twice: Once when it happened to a blonde Rickenbacker 360 12-string, which was heartbreaking, and once with a Peavey Milano. (Fortunately, it landed on its end pin and survived unscathed. Then again, this guitar has survived Delta Airlines’ baggage handlers on transcontinental trips, so it’s proven inherently indestructible.) Since then, I’ve tried various arcane ways of holding straps to end pins—the kind of strap that’s screwed in between the end pin and guitar, custom straps, and the like. They all worked, but had some kind of limitation—usually that it was hard to remove the strap to use on a different guitar, or before slipping the guitar in its case. IT’S A LOCK Then I got turned on to the Planet Waves Planet Lock Guitar Strap. I don’t know who at the company thinks up these weirdly genius things, but it’s pretty cool. Each end of the strap has an open and closed position. In the open position, a rotating disc exposes an opening (Fig. 1). The large hole fits over the end pin head; then you pull on the strap end so that the end pin’s bevelled section fits in the small hole. Fig. 1: The strap end in the open position. Rotating a clickwheel/thumbwheel rotates the disc around the end pin’s bevelled section (Fig. 2), gripping it firmly. The disc doesn’t have to rotate around it completely in order to be effective. Fig. 2: The strap end in the closed position. The clickwheel has ratchets to hold it in place. 
If you want to remove the strap end, you simply push a release button; this allows rotating the disc to the open position so you can slide the strap off the end pin. ADDITIONAL OPTIONS There are several variations on the strap I reviewed, including multiple styles (Fig. 3). Fig. 3: Different Planet Lock strap styles. There’s also a slightly more costly polypropylene version, and a Joe Satriani model. Although the strap works with most end pins, it doesn’t work with all of them. If you have incompatible end pins, Planet Waves will send you a set of guaranteed-to-work end pins (black, gold, or silver; see Fig. 4) if you send them a copy of your store receipt and $2.50 shipping/handling. Fig. 4: Universal end pins for the Planet Lock strap. These are also available for sale individually for $7.99 street if you have multiple guitars with incompatible end pins, and want to use the strap with them. However, these end pins weren’t designed specifically for the Planet Lock strap, so they’ll work with other straps as well. INDEED . . . IT’S A LOCK For $20 there’s not much to complain about, except that the strap lacks heavy padding (and also, that it didn’t exist when I bought my Rickenbacker). However, the 2” width distributes weight evenly, and I haven’t found it tiring to wear for hours at a time. But more importantly, I don’t have to worry about the guitar turning into a runaway and crashing to the floor. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.