Everything posted by Anderton

  1. Here’s how to make your phaser sound more dramatic by Craig Anderton Today’s software allows for techniques that, while possible to implement with hardware effects, would be a hassle to do—like turbocharging any ordinary phase shifter plug-in. The object of the technique presented in this article is to create the Godzilla of phasers, with a far more pronounced phasing effect and a wider stereo image. Any recording software works for this technique as long as you can switch polarity (commonly called phase) on audio channels. Old-school hardware mixers had a phase switch, and many software programs include a phase switch in their virtual mixers. If not, the program may include a plug-in that reverses phase (Fig. 1). Fig. 1: The lower left shows Ableton Live’s Invert utility plug-in inserted into a track, with the phase reversed for both the left and right channels. To the right, Presonus Studio One Professional’s “Mixtool” plug-in is reversing (inverting) channel polarity. Here’s the step-by-step procedure for turbocharging your phaser. 1. Copy your main guitar audio track to create a second, identical track. Some programs will have a “duplicate” or “clone” command, or you may be able to simply copy/drag the audio into an additional track. 2. Insert your phaser plug-in into the main track. 3. Reverse the second track’s polarity, and turn its fader all the way down. 4. Choose a phaser sound you like. 5. Start playback. As the guitar plays, slowly bring up the fader for the second, copied track. As you raise the fader, the phaser effect will become more dramatic, and you’ll hear a wider stereo image. Adjust the fader for the desired sound (Fig. 2). This technique’s “secret sauce” is that the out-of-phase, dry audio cancels out any elements in the phase-shifted sound that aren’t being modified by the phaser. So, when the levels of the dry signals are equal and out-of-phase, all that’s left is the purely phase-shifted sound. Fig. 2: This shows the turbocharged phaser setup in Cakewalk Sonar’s Console view. Track 1 (left) has a phaser effect inserted; the duplicated channel on the right (Track 2) has the polarity reverse switch enabled (circled in red). Due to the cancellation, the overall level will be somewhat lower. Changing the faders individually might upset the balance between the two tracks; the optimal solution is to group the two channel faders after you’ve found the right setting. That way when you change levels, they’ll both change together. And note that this technique works with other effects as well. You can turn compressors into expanders, and get some pretty amazing reverb sounds—hmm, sounds like we might need another article. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
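As a footnote for the technically inclined, here is a minimal NumPy sketch (all names hypothetical, with a static all-pass filter standing in for a real, swept phaser) of why the inverted dry track isolates the phase-shifted component:

# Why the inverted dry copy "turbocharges" a phaser: a basic phaser output
# is an equal mix of the dry signal and an all-pass (phase-shifted) copy.
# Summing in an inverted dry signal at matching level cancels the dry
# component, leaving only the phase-shifted portion.
import numpy as np

def allpass(x, a=0.6):
    """First-order all-pass filter: shifts phase but not amplitude."""
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = -a * xn + x1 + a * y1
        x1, y1 = xn, y[n]
    return y

rng = np.random.default_rng(0)
dry = rng.standard_normal(4096)            # stand-in for the guitar track
phaser_track = 0.5 * (dry + allpass(dry))  # track 1: 50/50 dry/wet phaser
inverted_dry = -0.5 * dry                  # track 2: polarity-flipped copy

mix = phaser_track + inverted_dry          # raise track 2's fader to match
print(np.allclose(mix, 0.5 * allpass(dry)))  # True: only the wet part remains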
  2. Almost all your gear has a ground connection in common—find out how this affects both AC and audio by Craig Anderton Ground is the reference for all things electrical in your studio. Think that makes the subject of grounding important? Yup. There are two main, related areas of grounding: the ground associated with your AC wiring, and the signal grounds associated with your audio gear. GROUND AND AC With 3-wire AC sockets, there are three wires that connect back to the building’s main power panel, each with a standardized color: the hot wire (red or black), neutral wire (white), and ground wire (bare or green). The hot wire is the one delivering the voltage, and the one you don’t want to touch under any circumstances. After electricity passes through its load, it returns to the source through the neutral wire. The ground connection is separate from the neutral, and provides a zero voltage reference for your entire system. The casing of most of your gear connects to ground as a safety measure; follow your house wiring back far enough, and you’ll probably find a copper stake driven into the ground, or a connection to a steel frame in large buildings. The safety aspect comes about because if there is some malfunction within your gear, the ground line provides the quickest path for electricity to flow to ground instead of going through, say, your body. You want the ground line from your studio to have the lowest possible resistance to ground, so that it truly is at zero volts, but not just for safety reasons. Here’s why. Suppose there’s a refrigerator on the same circuit that draws a lot of current. When the compressor switches on and off, it generates large spikes that can get into the ground line. If the ground line has no resistance and connects directly to ground, any spikes go to ground and stay out of your equipment. But if those spikes can travel along the ground line, which can happen if the ground wire isn’t truly zero ohms, then this noise may show up on the chassis of something like a mic preamp, and get into its circuitry through various signal paths. (This should not be confused with a ground loop, which we’ll discuss later; we’re simply talking about noise on the ground line.) To make matters worse, noise also travels to your gear through the hot line. Although this tends to be “dirtier” than ground lines, it has less opportunity to work its way into a circuit because it gets absorbed by the power supply—or at least, it should. Short of hiring a licensed electrician to install wiring dedicated to your studio—which will likely set you back a few thousand dollars—there’s not much you can do about your existing wiring, except to make sure that the current requirements are considerably less than what the electrical service feeding your studio puts out. As much of today’s gear is pretty low-powered, it’s doubtful you’ll put too much stress on your electrical lines. Fortunately, there are many ways to filter and condition AC, as well as isolate your gear from it. Furman is one of the bigger names in power conditioning; Equi=Tech actually balances the AC lines (through a process too complex to summarize here, but check out their white papers). Monster also makes power conditioners that are intended for consumer applications (e.g., large screen TVs) but are suitable for use in the studio. I’ve measured the residual noise of gear hooked up to these types of devices, and I do believe the hype: It’s lower than “raw” AC.
Some of these devices can also maintain a constant voltage, and eliminate spikes on the AC line that can be murder on gear; and they can also handle pretty decent-sized loads. These are very effective devices, but they’re also not cheap. Are they overkill for your situation? Maybe. But if you live in areas that are subject to lightning strikes or significant power supply fluctuations, they’re inexpensive compared to the cost of replacing gear. On the other end of the spectrum, don’t get sucked into buying inexpensive barrier strips with “surge protection.” These often have a cheap protection device that might work once, then blow (sort of like a fuse), except you have no way of knowing if it’s blown or not...and if it is, it won’t provide protection any more. At the very least, if you use anything with a microprocessor you should feed its AC line with an uninterruptible power supply (UPS). They provide isolation from the AC line and protect against brownouts and blackouts, and can also promote more reliable operation—there are times when the voltage cuts out so briefly you don’t notice it, but your microprocessor does (Fig. 1). Fig. 1: Furman's F1000-UPS uninterruptible power supply voltage regulator/power conditioner is a heavy-duty unit designed to provide significant protection. But not all UPS devices are the same. Try to get one with a replaceable battery; otherwise you’ll just have to throw it out when the battery eventually loses the ability to hold a charge. And if you have any kind of phone line going into your computer (e.g., modem), make sure you get a UPS that also protects the phone line. More computer motherboards are fried from spikes getting in through the phone or DSL connections than the AC line—which I can verify from the time there was a lightning strike a couple hundred feet from my studio. The drawback of a UPS compared to the high-priced power conditioners is that usually only a few outlets will be battery backed (typically for computer and monitor, so you can shut down your computer in case of a blackout). Other outlets (intended for printers and such) will often have surge protection, but this likely will not be as rigorous as expensive conditioners, nor will the voltage be stabilized. But no matter what you use, here’s one crucial tip: Never clip off the ground prong from a 3-prong plug to reduce noise, nor use a 3-to-2-prong “cheater” adaptor in the belief that it will eliminate ground loops (our next topic of discussion). That ground line is a safety feature, and I’ve seen nasty and potentially lethal shocks result from touching a grounded piece of gear (like a mic connected to a PA) with one hand, while touching another one with a lifted ground. You’ve been warned! Besides, there are other places in the signal path to deal with ground loop issues. GROUND LOOPS A ground loop means there is more than one ground path available to a device. In Fig. 2, one path goes from device A to ground via the AC power cord’s ground terminal, but A also sees a path to ground through the shielded cable and AC ground of device B. Fig. 2: A ground loop can form when a device “sees” two different paths to ground. Because ground wires have some resistance, there can be a voltage difference between the two ground lines, thus causing small amounts of current to flow through the ground. This signal may couple into the hot conductor because they’re so close together. The loop can also act like an antenna for hum and radio frequencies.
Furthermore, many components inside a single piece of gear connect to ground. If that ground is “dirty,” this noise can get picked up by the circuit. Ground loops cause the most problems with high-gain circuits like mic preamps, as massive amplification of even a couple millivolts of noise can be objectionable. But even with lower-gain situations, ground loops can be a problem. For example, suppose your keyboard is plugged into an amp with XLR outs, and these feed into a PA mixer as well. The keyboard sees a path to ground through the amp, and another through the PA. What with all the gear on stage, dimmer circuits in lights, maybe a computer or two, and so on, the stage is a pretty “dirty” environment—so the signals floating along the ground lines may be fairly high level and produce an audible buzz. There are several possible fixes, all of which involve managing how a signal flows (or doesn’t flow) to ground. THE SINGLE PLUG SOLUTION You can solve some ground loop problems by plugging all equipment into the same grounded AC source, which attaches all ground leads to a single ground point (e.g., a barrier strip that feeds an AC outlet through a short cord). However, make sure that the AC source is not overloaded, and conservatively rated to handle the gear plugged into it. THE BROKEN SHIELD SOLUTION A solution for some stubborn ground loop problems involving unbalanced lines is to isolate the piece of gear causing the problem, then disconnect the ground lead (shield) at only one end of one or more of the audio patch cords between it and other devices. The inner conductor is still protected from hum by a shield connected to ground, yet there’s no completed ground path between the two devices except for AC ground. If you make your own cables, wire up a few “ground loop-buster” cords with a disconnected shield at one end. Mark them plainly; if used as conventional cords, you’ll likely encounter various problems. If you’re using a balanced line setup and DI box, you’ll often find a ground lift switch (especially if the DI box has an audio transformer) that accomplishes the same type of effect as breaking the shield in unbalanced lines. THE AUDIO ISOLATION TRANSFORMER SOLUTION Using a 1:1 audio isolation transformer is more elegant than simply breaking the shield, and still interrupts the ground connection while carrying the signal. For a commercial implementation, check out Ebtech’s rack mount units or how Radial Engineering implements some of their DI boxes. However, these are just two of many options. THE “WHAT, ME WORRY?” SOLUTION Disclaimer: The following is technically the wrong way to do things. So, don’t blame me if you try it and it works. Instead of trying to manage your grounds, make as many ground connections as possible—connect rack units to metal rack cases, run wires between the ground connections on various rack frames, run more wires from the rack frame to the metal case of barrier strips, and so on. Your system ends up with so many ground lines that the overall resistance to ground drops to just about nothing (remember Ohm’s law—putting resistors in parallel lowers the resistance). Electrically speaking, this is called a ground plane. Although it’s not supposed to work, in relatively simple studios it can. But just forget I ever said anything, okay? In a previous home studio that had lots of analog gear with digital clocks, there was all kinds of noise on the ground lines. Trying to do the star ground thing just didn’t fly. 
Finally I just connected wires between racks, between mixer and racks, to ground terminals on barrier strips, and all was well. Again, I don’t want to imply this is the way to go, but I don’t want people to spend a lot of time and effort when it’s not necessary, either. NUTS AND VOLTS It's important to remember that electricity can be lethal. If you plan to deal with AC power, it's important to consult with a professional. You'd be nuts to play around with volts if you aren't 100% sure you know what you're doing! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
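As a quick numerical footnote to the “ground plane” idea above, here is the parallel-resistance arithmetic (in Python, with purely illustrative values) showing why every extra ground connection lowers the overall resistance to ground:

# Resistances in parallel combine as the reciprocal of the sum of
# reciprocals, so each additional ground path lowers the total.
def parallel(*ohms):
    """Combined resistance of several ground paths in parallel."""
    return 1.0 / sum(1.0 / r for r in ohms)

print(parallel(0.5))                 # one 0.5-ohm ground path -> 0.50 ohm
print(parallel(0.5, 0.8))            # add a second path       -> ~0.31 ohm
print(parallel(0.5, 0.8, 0.3, 1.2))  # several paths            -> ~0.13 ohm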
  3. Take Your Sound Quality Up Another Notch without Busting Your Budget Too Badly By Craig Anderton We all know the usual litany of ways to improve your sound: "Buy a better mic," "buy a better preamp," "buy a better..." Yeah, you get the idea. (My personal favorite is "write a better song," but that's beyond the scope of this article.) However, there are a lot of simple fixes you can do to improve your sound - sometimes dramatically - that involve little or no money. And even if your ship has come in, these tips are still well worth following if you want to bring your recorded sound to a higher level. Change Your Recording Sample Rate to 88.2kHz...Maybe I'm not entirely convinced this will result in an audible difference, but some people swear they can hear an improvement with sample rates above 44.1kHz. The question is whether the improvement is worth the extra storage space (and computer horsepower) - and if so, which sample rate to use. Few think that 176.4 or 192kHz is worth the effort, not just because of reduced track count, but also because of other issues, such as plug-ins not being designed to work at those rates. While the high sample rate buzzword is 96kHz, sample rate converting down to the 44.1kHz of a Red Book CD requires some fancy math - your 96kHz signal gets divided by 2.176870748299319727891156462585. Sure, today's sample rate converters should be able to handle the calculations without roundoff errors, but you might get better results by recording at 88.2kHz (Fig. 1). Fig. 1: MOTU's Digital Performer is one of many programs that's happy to work with an 88.2kHz sample rate. 88.2kHz provides virtually all the practical benefits of going with 96kHz, and some people think material recorded at 88.2kHz sounds better than material recorded at 96kHz by the time it ends up on a CD. In any event, give 88.2kHz a shot; if it sounds better to your ears, go for it. 24: Not Just a TV Show We mean "24" as in bit resolution for recording. If you're still recording with 16 bits, flick that bit resolution switch in your host program now. Yes, your files will take up more space. But storage continues to get less expensive, and computers are fast enough to process this extra data without giving too much of a hit to your track count and ability to use plug-ins. 24 bits provides more headroom while recording, better dynamic range, and more "footroom" as well. Besides, with most converters, to obtain an "honest" 16 bits of resolution you need at least 20-bit converters anyway. Clean Your Contacts Oxidized contacts can definitely affect sound quality in a couple of ways. One is that resistance can build up, which is equivalent to putting a resistor in series with your cable. A more insidious problem is when crystallization builds up, and the crystallized contacts act like the little diodes in crystal radios, ready to detect RF and inject it into your system as low-level hash. The more patch points and mechanical switches you can get rid of, the better. For those that remain, use contact cleaner like Caig's DeoxIT to keep your connections clean. This can make a big difference, especially if you have a lot of patching in your studio. Use The Highest Internal Resolution Your Host Allows If you're piling on the tracks, doing complex mixes with a ton of automation, and want reverb tails to decay into nothingness, higher internal resolution can make a difference. For example, clicking on a check box within Cakewalk Sonar switches its audio engine over to 64-bit precision (Fig. 2).
Fig. 2: Here's the switch that turns on Cakewalk Sonar's 64-bit Double Precision audio engine. As with high sample rates, higher audio engine resolutions are controversial, as some people say there's no significant difference. Regardless, it's worth a try - this doesn't really stress out your computer, even if you're using a 32-bit operating system. If it sounds better...use it. Note that this is not the same as the bit resolution used for recording; it's the resolution used when calculating levels, EQ, and other processes within host DAW software. Deaden Anything That Resonates Or Vibrates At the very least, resonances can be annoying. But even worse, you may mistake them for part of the sound coming out of your speakers. Bundle cables, caulk gaps, tighten down screws (and of course, turn off the snares on snare drums that aren't in use!). With Digital, Consider -6.0dB as 0dB Most digital meters, while more accurate than their inertia-ridden analog counterparts, measure the instantaneous level of the samples that make up the signal - not the actual signal level that results from interpolating those samples. So, it's entirely possible that the actual level is several dB higher than what the meter indicates, which means your signals could easily be going into clip-land occasionally without your knowing it. (Note that Fig. 3 shows an easy solution to seeing those clips: SSL offers a downloadable meter plug-in for Mac or Windows called X-ISM that indicates inter-sample clipping.) Fig. 3: SSL's X-ISM plug-in for measuring inter-sample distortion. It works and it's free - what's not to like? Granted, some will say "Use your ears; if you don't hear it, who cares?" But while you may not hear distortion on a single track, add together a bunch of mildly clipped tracks, and something may sound "wrong" - even if you can't identify the exact cause of the problem. So, give your peaks a little breathing room and treat -6 as max. Besides, with 24-bit resolution, you're just throwing away an extra bit if you're recording 6dB lower. This won't make any significant difference in sound quality. Enable High-Res Mode on Plug-Ins and Soft Synths High clock speeds and dual-core processors have pretty much put an end to the days of underpowered CPUs, but their legacy continues in many plug-ins that offer "high-quality" and "low-quality" options, with the latter placing less stress on your CPU. But you shelled out for that shiny new computer specifically to stress out your CPU, so seek out those "quality" switches (Fig. 4) and turn them all up to the max quality possible. Fig. 4: In Propellerheads' Reason, disable the Low BW option in SubTractor and Dr. Rex. But with the NN-XT and NN-19, enable High Quality Interpolation. Roll Off the Subsonics This has been mentioned numerous times over the years, but just in case you missed it, use a sharp low-cut filter to roll off all unneeded bass frequencies (Fig. 5). Fig. 5: Sonar's Quad EQ has a highpass filter whose slope can be set as steep as 48dB/octave. For example, if the lowest fundamental in a track is 100Hz, start rolling off below that. Getting rid of unnecessary lows can help open up the sound of a mix. Ditch Your Mic's Foam Wind Screen They're great for keeping spit from your lead singer out of the mic during live performance, but they affect the high frequency response and just plain don't sound that good. If you don't mic really closely, your mic has a low-frequency rolloff switch, and your singer doesn't get out of control, you may not even need a windscreen - try it.
But if you do, get one of those round mesh models (Fig. 6) instead of using a foam "mic condom." Fig. 6: Mesh pop filters cost more, but they'll protect your mic while preserving its sound quality. Decouple Near-Field Monitors from Their Stands We'll assume you've already placed your near-fields on stands, and made sure there aren't reflective surfaces between the speakers and your ears (e.g., desktops, mixing consoles, etc.). But you still may have problems because of sound coupling from the speakers to the stands, which then causes other surfaces to vibrate. Although you can buy decoupling pads, the cheapest solution is to gather together some of those thick, neoprene promotional mouse pads you never use anyway and put them between the speakers and stands. However, a far better and more effective solution is the Primacoustic Recoil Stabilizer line (Fig. 7). Fig. 7: Primacoustic's Recoil Stabilizer delivers excellent decoupling at a reasonable price - it may seem like snake oil, but unlike something like high sample rates you really can hear a difference. If there was a lot of coupling going on, decoupling the speakers will result in a more focused, tighter sound, with much more defined bass and clearer highs. Do a Shootout with Your EQ Plug-Ins Not all EQ plug-ins sound the same. Run some signal sources through several different EQs at extreme, but identical, settings; for example, try boosting treble while processing crash cymbals, and determine which EQ gives the "sweetest" sound. Try boosting upper mids with vocals to find out which vocals get "harsher" and which ones simply get more present. Also experiment with cutting extreme amounts of mids to find out how various EQs hold up. Richer, More Realistic Reverbs In a real acoustic space, reverb consists of millions of reflections. No matter how hard a reverb algorithm tries, it can only approximate that degree of complexity. Even convolution reverbs, while very realistic, cannot duplicate the sound of a real acoustic space - only simulate it. One quick fix is to run two reverbs in parallel. For example, if you have a really good hall sound, run it in parallel with a plate sound (Fig. 8). Each reverb will tend to "fill in the cracks" in the other one's sound, producing a more complex and satisfying reverb effect. Fig. 8: Running two reverbs in parallel or even in series can yield a much smoother, richer sound than relying on a single reverb. Here, IK's CSR Hall and CSR Plate are inserted in parallel FX buses within Steinberg's Cubase. Also try combining reverbs in series; a lot depends on the types of reverbs you're using. At some point while you're experimenting, you'll likely find a perfect combination of the two. Save both presets, because you'll likely want to use them again. Cheap Gear + Quality Converters = Expensive Sound It stands to reason that a $300 box isn't going to include a $1,000 D/A converter on board, but many effects do include a digital out - and with quality conversion, you can hear what a device really sounds like. Of course, if you have a suitable digital audio interface, you can feed the effect's digital out directly to your computer. But sometimes, getting a little analog mojo into the signal chain - especially if it's high-quality analog mojo - can add a character to the sound you won't obtain by going digital-to-digital. How Much Latency is Too Much? One common trick is to lower latency while recording, then kick it up to a higher value when mixing. 
This causes less stress on the CPU, thus allowing more plug-ins and virtual instruments to run, as well as providing the bandwidth to handle complex automation and other tasks. However, some engineers swear that using more latency than you really need doesn't help the sound, because it causes buffer timing issues that have the same kind of effect as using a loose clock signal: Smearing of the sound, and narrowing of the soundstage. I don't know of any hard proof about this, but there's enough anecdotal evidence floating around that this concept deserves a closer look. Meanwhile, it's probably a good idea to use no more buffering than is really needed, even if you're mixing. Dithering Differences Different dithering algorithms sound subtly different. But when you're dealing with something that's happening around the noise floor, it's hard to quantify exactly what's happening. To judge how dithering affects the sound, record something acoustic with a long decay, like a decaying piano chord with the sustain pedal up. Next, cut just the end of each track (say, where the signal dips below -65dB or so), turn the volume way up while being very careful to make sure no high-level noises can get into the mix, then apply various types of dithering with different noise levels and noise shaping. Decide which one works best for you - assuming you actually need any; with today's high-resolution recording and computing options, dithering may do nothing more than add a layer of noise you don't really need. Using Noise Reduction When There Really Isn't Any Noise Taken on a track-by-track basis, you may not hear any hiss in a project. But add together a bunch of tracks with low-level hiss, and it's not so low-level any more. Fortunately, today's noise reduction algorithms (such as the noise reduction tools in Sony Sound Forge and iZotope RX3) do a superb job of minimizing noise while maintaining transparency of sound. The less noise they need to get rid of, the better the sound quality. So if you're using noise reduction to reduce noise that sits around, say, -65dB or so, you can bring the noise down to -80dB with virtually no audible degradation. The key to good noise reduction is to take a "fingerprint" of only the noise. Often you can find this at the head of a track, or during silences in the middle. Subtract this from your audio using a noise reduction tool that supports this type of operation, and do this for all your tracks; you may be startled by the kind of clarity this imparts to the final mix - it's like removing a scrim in a theater production. Clean Up Your AC Power These last two options aren't low cost, but they're worth mentioning anyway if you're more concerned about pushing performance than pinching pennies. I always knew that having properly conditioned power was good for your equipment, but never really believed it made an actual sonic difference until I reviewed the Equi=Tech balanced power system for EQ magazine several years ago. Taking residual noise measurements with and without the Equi=Tech revealed a few dB less noise with the Equi=Tech in use. It's a relatively costly way to shave a couple dB, but every little bit helps - and good power filtering/conditioning helps promote happier, longer-lived gear anyway. Digital Distortion: From Harsh to Creamy Not all distortion is bad - just ask a guitarist. However, some digitally-generated distortion can have a harsh quality, even when it's not supposed to (such as with amp simulation software).
One "magic bullet" I've found for smoothing out sounds like power chords is the Declick and Decrackler processor in iZotope's RX3 (Fig. 9; Adobe Audition's Click and Pop remover also works). Seriously. Fig. 9: Smooth out intentional digital distortion with iZotope's RX Declick and Decrackle processing. Depending on how heavily you apply it, it smoothes out the spiky stuff. However, don't assume that more is better - sometimes a light amount of decrackling is all you need. RX2 has a handy "Output Clicks Only" button so you can hear only what's being removed. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
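As an aside, the inter-sample clipping issue behind the "treat -6 dB as 0 dB" tip is easy to demonstrate numerically. This NumPy sketch (illustrative only; not a substitute for a true-peak meter like X-ISM) builds a full-scale sine whose samples happen to miss the waveform peaks, so a sample-peak meter reads about 3 dB low:

# A full-scale sine at fs/4, phased so the samples fall 45 degrees away
# from the peaks: the sample peak reads about -3 dBFS even though the
# underlying waveform reaches 0 dBFS between the samples.
import numpy as np

fs = 44100
n = np.arange(1024)
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

sample_peak_db = 20 * np.log10(np.max(np.abs(x)))
print(round(sample_peak_db, 1))        # about -3.0 dBFS

# Upsample 4x via frequency-domain zero padding to estimate the true peak
# between the original samples.
X = np.fft.rfft(x)
X_up = np.zeros(len(X) * 4 - 3, dtype=complex)
X_up[:len(X)] = X
x_up = np.fft.irfft(X_up, n=len(x) * 4) * 4
true_peak_db = 20 * np.log10(np.max(np.abs(x_up)))
print(round(true_peak_db, 1))          # close to 0 dBFS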
  4. By Craig Anderton Touch: It’s what separates the great from the almost-great. It’s what every bass player wants, and some manage to find . . . but few have ever really defined. Nonetheless, in addition to the unquantifiable, there is the quantifiable aspect of touch—so let’s investigate. TUNING Tuning is crucial with bass, perhaps even more so than with other instruments, because of the low frequencies involved. A bass is a resonant system, where even if you’re only playing one string at a time, you can’t avoid having occasional, sympathetic resonances from the other strings. Even slightly out-of-tune strings will create slow, rolling beat frequencies. This is very different from, say, guitar, where slightly out-of-tune high notes create more of a chorusing effect; chorusing on a bass robs the part of its power. CHOOSING THE RIGHT NOTES One of the great things about bass (or guitar, for that matter) is that you can play the same note in different places on the neck to obtain different timbres. Those who play samplers know how valuable “round robin” note assignment can be, where hitting the same note repeatedly triggers different samples to avoid the “machine-gun notes” effect. Bass has “round robin” assignments too, but you do the assigning. An obvious example is open versus fretted strings. For example, when going from an A to a D, don’t necessarily go from one open string to another, but play the D on the A string. That mutes the A string so its vibrations don’t interfere with the D string, and the contrast with the decay can shape the sound as well—going from an open string to a fretted string shortens the decay and “closes down” the line, whereas going to an open string leaves the line more “open” because of the extra sustain. Fretted notes tend to draw less attention in a mix than open strings, and this can also be used to good advantage. During the verse, try playing fretted notes to give more support to the vocals; but for the chorus, use open strings as much as possible. PICKUP HEIGHT The distance of the pickups from the strings makes a big difference in how your touch interacts with the bass, because pickups follow the inverse square law, where output drops off rapidly with increasing string distance. Placing the pickups further away makes a heavy touch seem lighter and the overall sound less percussive, while placing the pickups closer to the strings makes a light touch seem heavier and emphasizes percussive transients. I have two preferences with pickups. First, I usually set the neck pickup a bit lower (further from the strings) than the bridge pickup (Fig. 1). This isn’t just to balance out levels; I tend to pluck just below the neck pickup, so having it a bit lower accommodates the extra string excursion. Second, I like to angle the pickups so that they’re a bit further away from the lower strings, and closer to the higher strings. I tend to slam the lower strings harder, so this pickup placement evens out the string levels somewhat, even before they hit any kind of amp or compression. Fig. 1: Where you set the pickup height in relation to the strings can make a big difference in the overall touch. In any event, if you haven’t experimented with pickup height, spend some time recording your bass with the pickups at various heights.
You might be surprised how much this can influence not only your tone, but the effects of your “touch.” THE TONE CONTROL IN YOUR FINGERS There are many ways to play bass strings: Pushing down with fingers, using a pick, pulling up and slapping, plucking with the fingers . . . and each one gives a different tonal quality, from smooth and round to twangy and percussive. Match your picking technique as appropriate to the song, and your “touch” will augment the arrangement. You can make your bass lie demurely in the background, or push its way to the front, just by what’s in your fingers. TOUCH MEETS ELECTRONICS Touch also works in conjunction with whatever electronics the bass first sees in the signal chain. The bass reacts differently to your touch depending on whether it first sees a straight preamp, a preamp with saturation, a tube amp, or a solid-state amp. I go for a bit of saturation in a preamp (as long as it’s soft, smooth saturation—not hard clipping), as that tends to absorb some of the percussive transients, giving a smoother tone that works well with subsequent compression. But that’s because with the kind of music I play, the bass tends to be mostly supportive. In small ensemble situations where the bass takes a more prominent role (e.g., jazz trios), a clean preamp will preserve those transients better, letting the bass “take over” a bit more in the mix. If you’re feeding a compressor, its settings have a huge influence on touch. With lots of compression, you can pluck the string softly for a muted tone, but the volume level will still be relatively high due to the compression. Hit the string harder, and if the compressor has a fast attack, the compression will absorb the percussive transient, making the tone more docile. If the compressor has a slow attack, that initial transient will pop through. In this situation, touch doesn’t only involve working with the bass, but with the electronics as well. Before we sign off, remember this: The bass doesn’t exist in a vacuum, and your touch interacts with every aspect of it—strings, frets, pickups, and downstream electronics. Optimize these for your touch, and you’ll optimize your bass sound. Acknowledgement: Thanks to Brian Hardgroove, bassist/bandleader for Public Enemy, for his contributions to this article. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
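As a rough numerical footnote to the pickup-height discussion above (assuming the idealized inverse-square behavior mentioned in the article; real pickups only approximate it), here is why small height changes alter "touch" so much:

# Relative output of an idealized pickup at a few hypothetical heights,
# normalized to a 2.0 mm reference.
string_to_pickup_mm = [1.5, 2.0, 3.0, 4.0]
reference_mm = 2.0

for d in string_to_pickup_mm:
    relative_output = (reference_mm / d) ** 2   # inverse-square falloff
    print(f"{d:.1f} mm: {relative_output:.2f}x the output at {reference_mm} mm")
# Moving from 2 mm to 3 mm drops output to ~0.44x; moving to 1.5 mm boosts
# it to ~1.78x, so momentary string excursions toward the pickup (hard
# plucks) are exaggerated when the pickup sits closer.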
  5. By Craig Anderton Cakewalk’s cross-platform, step-sequencing-oriented synthesizer has a ton of hidden features and shortcuts. Here are some favorites; the numbers correspond to the numbers in the screen shot. 1 BETTER SOUND QUALITY Each Element has a Quality parameter that defaults to Std. If the patch uses pitch sweeps, change this to Hi to minimize aliasing. To further minimize aliasing, click on the Options button (the Screwdriver icon toward the upper right) and check “Use sinc interpolation when freezing/rendering.” 2 RAPTURE MEETS MIDI GUITAR Click on the Options button. Check “Set Program as Multitimbral” so Rapture elements 1-6 receive MIDI channels 1-6, which can correspond to guitar strings 1-6. For the most realistic feel where playing a new note cuts off an existing note sounding on the same string, set each element’s Polyphony to 0 (monophonic with legato mode), and Porta Time to 0.0. 3 ENABLING PORTAMENTO Portamento is available only if an Element’s Polyphony = 0. If Polyphony = 1, only one voice can sound (monophonic mode), but without legato or the option to add portamento. 4 MULTI OPTION DETAILS An Element’s Multi option can thicken an oscillator without using up polyphony. However, it works only with short wavetables, not longer samples or SFZ files. 5 ACCEPTABLE FILE FORMATS Each Element can consist of a WAV, AIF, or SFZ multisample definition file. SFZ files can use WAV, AIF, or OGG files. Samples can be virtually any bit depth or sample rate, mono or stereo, and looped or one-shot. 6 MELODIC SEQUENCES When step sequencing Pitch, quantize to semitones by snapping to 12 levels or 24 levels (right-click in the sequencer to select). If you simply click within the step sequencer, each time you type “N” it generates a new random pattern. 7 CHAINING ELEMENTS FOR COMMON FX You can route an oscillator (with its own DSP settings) through the next-higher-numbered Element’s EQ and Effects by right-clicking on the lower-numbered Element number and selecting “Chain to Next Element.” (You can’t do this with Element 6 because there is no higher-numbered element.) 8 KNOB DEFAULT VALUES Double-click on a knob to return it to its default value. 9 THE PROGRAMMER’S FRIEND: THE LIMITER When programming sounds with high resonance or distortion, enable the Limiter to prevent unpleasant sonic surprises. 10 FIT ENVELOPE TO WINDOW If the envelope goes out of range of the window, click on the strip just above the envelope graph, and choose Fit. 11 SET ENVELOPE LOOP START POINT Place the mouse over the desired node and type “L” on your QWERTY keyboard. Similarly, to set the Loop End/Sustain point, place the mouse over a node and type “S.” 12 CHANGE AN ENVELOPE LINE TO A CURVE Click on an envelope line segment, and drag to change the curve. 13 CHANGE LFO PHASE Hold down the Shift key, click on the LFO waveform, and drag left or right. 14 CHOOSING THE LFO WAVEFORM Click to choose the next higher-numbered waveform or right-click to choose the next lower-numbered waveform. But it’s faster to right-click above the LFO waveform display, and choose the desired LFO waveform from a pop-up menu. 15 PARAMETER KEYTRACKING The Keytracking window under the LFO graph affects a selected parameter (Pitch, Cut 1, Res 1, etc.) based on the keyboard note. Adjust keytracking by dragging the starting and ending nodes. Example: If Cut 1 is selected and the keytracking line starts low and goes high, the cutoff will be lower on lower keys and higher with higher keys. 
If the line starts high and goes low, the cutoff will be higher on lower keys and lower with higher keys. 16 CHANGE KEYTRACKING CURVE Click on the Keytrack line and drag up or down to change the shape. 17 CHOOSE AN ALTERNATE TUNING Click on the Pitch button for the Element you want to tune. Click in the Keytrack window and select the desired Scala tuning file. ADDING CUSTOM LFO WAVEFORMS Store WAV files (8 to 32-bit, any sample rate or length) in the LFO Waveforms folder (located in the Rapture program folder). Name each WAV consecutively, starting with LfoWaveform020.wav, then LfoWaveform021.wav, etc. SMOOTHER HALL REVERB If you select Large Hall as a Master FX, create a smoother sound by loading the Small Hall into Global FX 1 and the Mid Hall into Global FX 2. Trim the reverb filter cutoffs to “soften” the overall reverb timbre. THE MOUSE WHEEL The wheel can turn a selected knob up or down, change the level of all steps in a step sequence, scroll quickly through LFO waveforms, zoom in and out on envelopes, and more. Hold the Shift key for finer resolution, or the Ctrl key for larger jumps. FINEST KNOB RESOLUTION Use the left/right arrow keys to edit a knob setting with five times the resolution of just clicking/dragging with the mouse. NEW LOOK WITH NEW SKINS In the Rapture folder under Program Files, the Resources folder has bit-mapped files for Rapture graphic elements (e.g., background, knobs, etc.). Modify these to give Rapture a different look. COLLABORATING ON SOUNDS To exchange files with someone who doesn’t have the same audio files used for an SFZ definition file, send the audio files separately and have your collaborator install them in Rapture’s Sample Pool library. This is where Rapture looks for “missing” sample files. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
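As a companion to the "Adding Custom LFO Waveforms" tip above, here is a small Python sketch (standard library only; the waveform shapes are just examples) that generates single-cycle WAV files and names them consecutively per the convention described. Move the resulting files into Rapture's LFO Waveforms folder:

# Generate single-cycle LFO shapes as mono 16-bit WAV files named
# LfoWaveform020.wav, LfoWaveform021.wav, etc., as the article describes.
import math
import struct
import wave

SAMPLE_RATE = 44100
CYCLE_LEN = 2048  # samples per cycle

def write_wav(path, samples):
    """Write a mono 16-bit PCM WAV from floats in the range -1..1."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

shapes = {
    "ramp": [2 * i / CYCLE_LEN - 1 for i in range(CYCLE_LEN)],
    "half_sine": [math.sin(math.pi * i / CYCLE_LEN) for i in range(CYCLE_LEN)],
}

for num, (name, data) in enumerate(shapes.items(), start=20):
    filename = f"LfoWaveform{num:03d}.wav"
    write_wav(filename, data)
    print(f"wrote {filename} ({name})")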
  6. By Craig Anderton The purpose of a vocal is to connect with your audience, but clearly, not all singers do. The reason for that lack of connection is often a reflection of a lack of connection within the singer—if the singer doesn’t bond with the vocal, there’s no way the audience is going to bond with the singer. This can be a particular problem in the studio, where there’s no audience to prompt you to remain connected to the vocal; so here are ways to prompt yourself. In the process, you’ll connect better with your listeners. FUSION: IT’S THE PACKAGE As obvious as it may sound, reflect on the fact that music and lyrics are a single package: One of a vocalist’s main tasks is to integrate the two into a single experience. To put it in tech terms, music and lyrics are each separate data streams, and the singer multiplexes them into a single, cohesive statement. Or, think of the melody as the carrier, and words as the modulator. In any event, the point is never to emphasize one element at the expense of the other. Aretha Franklin is an outstanding example of someone who fuses lyrics and melody into a single entity. Bob Dylan is another one, whose quirky lyrics match his quirky voice; or consider Bob Marley, whose vocals were sometimes closer to a percussive instrument. For some examples of people who don’t fuse music and lyrics, just tune in to any American Idol show where they’re auditioning singers. Some of them are so into screaming and overemoting with their voices that they forget that they’re also supposed to be telling a story. Sometimes I almost feel you could go up to these people, say “What were you singing about?” and they wouldn’t be able to tell you. SURPRISE—YOU’RE A SALESPERSON When you’re singing, you’re a salesperson—because you need to sell the listener on the idea that you believe in what you’re singing, that you know how to sing, and that you’re worth listening to. They say the best salespeople are those who believe in the product they’re selling, and that includes singing. But this doesn’t just mean confidence; plenty of lousy singers truly believe they’re great. Of course, believing in yourself never hurts, but believing in the song is key. There’s no point in singing lyrics you don’t believe in, whether it’s a cover song or something you wrote. If you ever find yourself “going through the motions” when singing a song, strike it from your repertoire or album. AUDIO “EYE CONTACT”: INTIMACY When singing live, eye contact is crucial for establishing a connection with the audience. When I go on stage, the most amazing thing is all those eyes looking at me—which immediately makes me want to look into the eyes of everyone there. We’re human; we long for contact and communication, and singing to people means you not only have to believe in the song, you have to believe that someone else does, too. But how can you possibly simulate that in the studio? Although you can’t make eye contact with your listener, you can increase intimacy in two ways: Use the proximity effect to add bass and warmth, and/or use compression or limiting (Fig. 1) to make you sound “closer” to the listener. Fig. 1: A good compressor is just one way to help create a more intimate sound, thus providing a better connection with the listener. Clockwise from top: Native Instruments' VC160 and VC76 compressors, Waves V Comp, Nomad Factory Model FA-770, Universal Audio LA-2A. Generally, intimacy implies a natural, close-up sound—something almost conversational in nature (although possibly a loud conversation!).
But intimacy has other facets. Getting back to the “fusion-of-two-data-streams” concept, sometimes the way the voice connects is by being distant and ethereal—sounding more like a voice from inside the listener, rather than being outside the listener (for a prominent example of this style, think Enya). It’s even possible to combine both; this is something Dido does well, with a voice that’s both evocative and conversational. WAIT UNTIL PLAYBACK BEFORE YOU JUDGE YOURSELF! When you cut a vocal, you must turn off the internal critic that apparently lives in just about every artist’s head. Don’t attempt to judge yourself when you sing. Don’t think “On my next take, I need to do that phrase better.” It’s harder to turn this off than you might think, because self-judgment is something that happens almost subconsciously—you’ll probably find that once you become conscious of that internal critic, first you’ll curse me for making you aware of something you now can’t ignore, and second, you’ll discover that it’s hard to turn off. But you must turn it off. Remember, you’re selling that vocal to the listener, not just yourself. Put everything you have into projecting that vocal outward. Listen to yourself only enough to make sure you’re on pitch; put all your energies into your voice. It’s like baseball: You don’t look at the bat, you look at the ball and you naturally move the bat to hit it. Always keep the end listener in mind, and your vocal will flow naturally toward that goal. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  7. Reduce Amp Sim Harshness with De-Essing by Craig Anderton Feeding too much treble into an amp sim set for a distorted sound can lead to a harsh, brittle timbre due to distorting the high frequencies. Although pulling back on your guitar’s tone control can reduce highs, the tradeoff is often a more muffled sound. Fortunately, “de-esser” processors provide an “intelligent” way to reduce the highs entering an amp sim. A de-esser’s main purpose is to reduce vocal sibilants (“s” sounds) by compressing only high frequencies, thus lowering the level of sibilants while leaving the rest of the vocal untouched. When placed between a guitar and amp sim, it reduces high frequencies from your axe when they’re prominent, but otherwise doesn’t affect your signal. As a bonus, the compression adds a little additional smoothness and sustain. Most DAWs have either a compressor that can provide a de-essing function (see Fig. 1), a multiband compressor (see Fig. 2), a dedicated de-esser module (see Fig. 3), or several of these options. If not, you can add a third-party de-essing plug-in. Fig. 1: PreSonus’s Studio One Pro doesn’t have a dedicated de-esser, but its compressor can do de-essing. Fig. 2: Ableton Live's Multiband Compressor, like other multiband compressors, can serve as a de-esser by compressing only the high frequencies. Fig. 3: Clockwise from upper left: MOTU’s MasterWorks multiband compressor, Nomad Factory Blue Tubes de-esser, Pro Tools’ Digirack de-esser, and Waves’ Renaissance de-esser. A compressor that can de-ess typically has an internal sidechain that puts a filter in the compression detection path, thus filtering out only high frequencies for compression. For example, in Studio One Pro’s compressor, the sidechain section shown in Fig. 1 is in the lower right. It’s set to Internal Sidechain Filter, with a low cut filter that compresses everything above 1.88kHz (ratio 20:1, threshold -48dB). This module also has a “Listen Filter” button that lets you monitor what’s being filtered. This makes it easy to zero in on the frequency range you want to compress. Dedicated de-essers generally have a subset of a full-blown compressor’s controls, with at least Frequency and Threshold parameters. Adjusting works similarly with all de-essers when applied to an amp sim: 1. Start off with no filtering 2. Set a threshold that's considerably lower than the high-frequency peaks, then slowly extend the high-frequency range that’s being compressed. Note that with some de-essers the threshold control works in “reverse,” with higher settings producing more compression. 3. As you listen to the amp sim output, at some point the sound will become sweeter as the highs start being compressed. If the compression effect is too obvious, raise the threshold and/or reduce the ratio (if present) to give a subtler effect. When you're done, your reward will be a much sweeter amp sim sound. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
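For the curious, here is a simplified NumPy sketch of the split-band idea behind de-essing, not any particular plug-in's algorithm; the 1880 Hz crossover echoes the 1.88kHz example above, and all parameter values are illustrative:

# Split the signal at a crossover frequency, track the high band's
# envelope, and turn down only the high band when it exceeds the
# threshold, then recombine the bands.
import numpy as np

def de_ess(x, sr=44100, crossover_hz=1880.0, threshold=0.1, ratio=4.0):
    # one-pole low-pass splits the signal into low and high bands
    a = np.exp(-2 * np.pi * crossover_hz / sr)
    low = np.zeros_like(x)
    for n in range(len(x)):
        low[n] = (1 - a) * x[n] + a * (low[n - 1] if n else 0.0)
    high = x - low

    # envelope follower on the high band (fast attack, slower release)
    env = np.zeros_like(x)
    atk, rel = np.exp(-1 / (0.001 * sr)), np.exp(-1 / (0.05 * sr))
    for n in range(len(x)):
        prev = env[n - 1] if n else 0.0
        coeff = atk if abs(high[n]) > prev else rel
        env[n] = coeff * prev + (1 - coeff) * abs(high[n])

    # gain-reduce only the high band above the threshold
    gain = np.ones_like(x)
    over = env > threshold
    gain[over] = (threshold / env[over]) ** (1.0 - 1.0 / ratio)
    return low + high * gain

In a DAW you would of course do this with the stock compressor or de-esser as described above; the sketch is only meant to show what those controls are doing.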
  8. Your data is the most important part of your computer...back it up! By Craig Anderton Mac OS X has many useful features, and one of them is the ability to save your data using the CD/DVD-ROM burning options built into the operating system itself. If your Mac has a SuperDrive (internal, or external connected via USB), you can back up data easily to DVD-ROM and CD-ROM optical media. This article is based on using the Mavericks operating system; however, the DVD-burning utility hasn’t changed much over the years, so the following will likely work with your current Mac OS X version. Begin by inserting a blank, recordable DVD-ROM into your Mac’s optical drive. A dialog box appears; select Open Finder, then click on OK (Fig. 1). If you want a blank DVD-ROM to always Open Finder when inserted, click “Make This Action the Default.” Fig. 1: If this dialog box doesn’t appear, the Mac hasn’t recognized the disc (e.g., it could be defective, or a Blu-Ray). When the “Untitled DVD” disc icon appears on your desktop, double-click on the icon. This opens up the empty DVD-ROM window where you drag the files you want to burn (Fig. 2); dragging them creates aliases of the files in the window. Fig. 2: Drag the files you want to burn to the empty DVD-ROM window. Note that you can edit the alias names in the DVD-ROM window without altering the original files, and the DVD-ROM will be burned with the edited names. Next, select Burn Disc from the drop-down menu, or click on the Burn button toward the window’s upper right corner (Fig. 3). Fig. 3: The window’s Burn button is the most convenient way to continue the burning process. Now you can name the disc, and specify the burn speed. The utility defaults to maximum speed, but I generally choose a speed one level slower just to give the burning process a little slack—I’ve never had a coaster when using the slightly slower speed, although that may just be coincidence. Also note that you can save the contents in a “Burn Folder.” This holds aliases for a particular collection of files to be burned (such as all the cuts in a compilation). Finally, click on Burn (Fig. 4). Fig. 4: After naming and specifying the speed, you’re ready to go. A progress bar shows the status of the burning process. When it’s finished, the DVD-ROM is done—and your data is backed up. Don’t you feel just a little more secure now? Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  9. It’s the Achilles Heel of computer-based recording... By Craig Anderton Lurking deep within your computer is a killjoy for anyone who wants to play software synthesizers in real time, or play instruments (such as guitar) through processing plug-ins: Latency — the delay your computer introduces between the time you hit a note on a keyboard, and when you hear it come out of the speakers. But look at it from the computer’s point of view. Even the most powerful processor can only do so many millions of calculations per second; when it’s busy scanning your keyboard, checking its ports, shuffling data in and out of RAM, and generally sweating its little silicon butt off, you can understand why it sometimes has a hard time keeping up. To avoid running out of audio, the computer sticks some of the incoming audio in a buffer, which is like a savings account for your audio: When the computer is so busy elsewhere that it can’t deal with audio, it makes a “withdrawal” from the buffer instead. The larger the buffer, the less likely the computer will run out of audio data when it needs it. But a larger buffer also means that the audio is held in the buffer for a longer period of time before moving on, which is the genesis of latency. MINIMIZING LATENCY The first step in minimizing delay is, unfortunately, the most expensive one: a processor upgrade. Today’s multi-GHz processors are so fast they actually travel backward in time! Well, not really, but massive computational power is a Good Thing. The second step involves drivers, little pieces of code that provide communications between your computer and sound card (or USB/FireWire interface). Don’t let their size fool you — they are the data gatekeepers, and how efficiently they do their task greatly affects latency. Steinberg devised the first low-latency driver mode for audio interfaces, based on their ASIO (Audio Stream Input/Output) drivers. These tied in closely with the CPU, bypassing various layers of both Mac and Windows operating systems. At that time the Mac used Sound Manager, and Windows used something that seemed to change names every few weeks, but was equally unsuited to musical needs. Cards that supported ASIO were essential for serious musical applications; ASIO led to ASIO2, which was even better. Eventually, Apple and Microsoft wised up. Microsoft brought forth the WDM driver mode, which was light years ahead of their previous efforts. And starting with OS X, Apple gave us Core Audio, which tied in even more closely with low-level operating system elements (Fig. 1). Fig. 1: The Preferences from MOTU's Digital Performer, which is in the process of testing out an Avid interface. It's being set to the interface's lowest available buffer value of 128 samples. Microsoft offers other low-latency protocols, but on Windows, ASIO remains the de facto low-latency standard. Thanks to these driver improvements, it’s now possible to obtain latencies under 10 ms with a decent processor and an audio interface that supports low-latency drivers like ASIO, WDM, or Core Audio. THE DIFFERENT TYPES OF LATENCY Be aware that when you see a latency figure, it may have nothing to do with reality. Latency may simply express the amount of reserve storage the buffers have, which will be a low figure. But there's also latency involved in converting analog to digital and back again (about 1.2ms at 44.1kHz), as well as latency caused by other factors in a computer-based system and its associated hardware.
A more realistic figure is the total round-trip latency, or the total delay from input to output. For example, latency may be 1.5ms for the sample buffers, but the real latency incorporates that figure and hardware latencies. This could add up to something like 5ms for input latency and 4ms for output latency, giving a total round-trip latency of around 9ms (Fig. 2). Fig. 2: This shows the audio preferences setting from Cakewalk Sonar. The panel to the right sets the buffer size in Roland's VS-700 interface; in this case, it's 64 samples. To the left, Sonar displays this delay, as well as the input, output, and total round-trip latency. Also note that although we’ve expressed latency in milliseconds, some manufacturers specify it in samples. This isn’t as intuitive, but it’s not hard to translate samples to milliseconds. This involves delving into some math, but if the following makes your head explode, don’t worry and just remember the golden rule of latency: Use the lowest setting that gives reliable audio operation. In other words, if the latency is expressed in milliseconds, use the lowest setting that works. If it’s specified in samples, you still use the lowest setting that works. Now, the math: with a 44.1kHz sampling rate, there are 44,100 samples taken per second. So each sample is 1/44,100th of a second long, or about 0.023 ms. So if the buffer latency is 256 samples, at 44.1 kHz that means a delay of 256 X 0.023 ms—about 5.8 ms. A final complication is that the interface reports its latency to the computer, which is how the host calculates the latency figures it displays. However, this reporting is not always accurate. This isn't some kind of conspiracy, and the figure shouldn't be too far off, but the takeaway is to believe your ears. If one set of hardware sounds like it's giving lower latency but the specs indicate otherwise, your ears are probably right. WHY "DIRECT MONITORING" ISN'T ALWAYS THE ANSWER You may have heard about an audio interface feature called “direct monitoring,” which supposedly reduces latency. And it does, but only for audio input signals (e.g., mic, hardware synth, guitar, etc.). It does this by sending the input signal directly to the audio output, essentially bypassing the computer. When you’re playing software synthesizers, or any audio through plug-ins (for example, guitar through guitar amp emulation plug-ins), turn direct monitoring off. What you want to hear is being generated inside the computer, so shunting the audio input to the output is not a solution. You’ll typically find direct monitoring settings in one of two places: An applet that comes with the sound card, or within a DAW program. HOW LOW CAN YOU GO? It will always take a finite amount of time to convert analog to digital at the input, and digital to analog at the output. Unfortunately, though, ultra-low latency settings (or higher sampling rates, for that matter) make your computer work harder, so you’ll be limited as to how many software synthesizers and plug-ins can run before your computer goes compu-psycho. You’ll know your computer is going too far when the audio starts to sputter, crackle, or mute. As latency will continue to be a part of our musical lives for the foreseeable future, before closing out let’s cover some tips on living with latency. Set your sample buffers to the highest comfortable value. For me, 5 ms is sufficiently responsive, and makes the computer happier than choosing 2 or 3 ms.
When you're starting a project, you can usually set latency lower than when mixing, after you've inserted a bunch of plug-ins. If you want to lay down a guitar or soft synth part using plug-ins, try to do so early in the recording process. Sometimes there are two latency adjustments: A Control Panel for the audio interface sets a minimum amount of latency, and the host can increase from this value if needed. Or, the host may "lock" to the control panel setting. Seek out and download your audio interface's latest drivers. Dedicated programmers are mainlining Pepsi and eating pizza as we speak so that we can have more efficient audio performance — don't disappoint them. If you have multiple soft synths playing back at once, use your program's "freeze" function (if available) to disconnect some synths from the CPU. Or, render a soft synth's output as a hard disk audio track (then remove the soft synth), which is far less taxing on our little microchip buddies. Hint: If you retain the MIDI track driving the soft synth, which places virtually no stress on your CPU, you can always edit the part later by re-inserting the soft synth. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
10. When it's time to take a stretch, REX files might be the answer By Craig Anderton "The recording's great, but can you change the tempo a bit?" This question has been posed since the days of steam-powered multitrack recorders, and there's seldom a pretty answer. Here's the problem: if you recorded a killer solo into a digital audio recording program at, for example, 120 BPM, changing the tempo will cause the part to be out of sync. Two measures at 120 BPM will last more than two measures at faster tempos, and less than two measures at slower tempos. And if you're into using loops, it's also a problem if the perfect loop for your 133 BPM song chugs along at 125 BPM. One way to deal with this is to re-record the part. But hey, we have technology! Digital signal processing algorithms can lengthen digital audio by adding data, or shorten it by deleting data. Unfortunately it's difficult to do this without compromising the sound quality, especially with substantial timing shifts. Another option is MIDI-based recording. For example, suppose you record a kick drum trigger on each beat of a measure at 120 BPM. If you change the sequencer tempo to 130 BPM, the trigger still hits on each beat – the kick parts simply occur closer together. If you slow down the tempo, the kicks hit further apart. Great – but not everything lends itself to being recorded as (or converted into) MIDI data. ENTER RECYCLE Several years ago, Propellerhead Software devised an extremely clever time-stretching solution that applies MIDI thinking to digital audio (Fig. 1). Their ReCycle program takes a piece of digital audio, cuts it into small segments ("slices"), notes where in the measure(s) these segments occur, then creates a MIDI file that triggers the segments at the appropriate time within the measure(s). Fig. 1 shows a screen shot of the program. Note that in addition to the time-stretching capabilities, it also includes EQ, basic envelope shaping, pitch transposition, gating, and more. As to how it does its stretching, Fig. 2 shows a simple drum loop. ReCycle separates the drum part into individual "slices," with each slice triggered by a MIDI note recorded in a sequence. Fig. 2: The upper waveform is a 133.33 BPM drum loop in ReCycle; note the slices. The middle waveform has been converted into a REX file, slowed to 120 BPM, and converted back to WAV. Note the gap between slices. The lower waveform is the same REX file, but sped up to 145 BPM. When the sequencer plays back the MIDI notes at the original tempo, the various slices all play back in succession, at the original timings. With a slower tempo, the triggers that play back the segments are further apart; at a faster tempo, the triggers occur closer together. As a result, the rhythmic relationship of the slices remains intact, because MIDI triggers always occur where they were recorded in a measure, regardless of tempo. In other words, MIDI data always follows musical time (bars:beats:ticks) instead of absolute time (hr:min:sec). The main advantage compared to DSP time-stretching is that the audio quality remains untouched—nothing is added or deleted, just triggered at a different time. Thus, it's theoretically (and often practically) possible to have ReCycle files whose fidelity is indistinguishable from the original, even when time-stretched (there are limitations, though, which we'll cover later). ReCycle saves the digital audio slices and MIDI information in a single file called a REX file.
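To make the slice-and-trigger idea concrete, here's a minimal Python sketch (my own illustration, not Propellerhead's format or code). Slice start points are stored in beats, so recalculating their start times in seconds at a new tempo is just multiplication; because the audio inside each slice keeps its fixed length, you can also see how slowing down opens gaps between slices and speeding up runs them into each other:

# Each slice: (start position in beats, fixed audio length in seconds).
# These hypothetical slices butt up against each other at the original 120 BPM.
slices = [(0.0, 0.25), (0.5, 0.25), (1.0, 0.25), (1.5, 0.25)]

def schedule(slices, bpm):
    seconds_per_beat = 60.0 / bpm
    result = []
    for i, (start_beat, length_s) in enumerate(slices):
        start_s = start_beat * seconds_per_beat
        if i + 1 < len(slices):
            next_start_s = slices[i + 1][0] * seconds_per_beat
            gap_s = next_start_s - (start_s + length_s)   # positive = silence, negative = overlap/cutoff
        else:
            gap_s = 0.0
        result.append((round(start_s, 3), round(gap_s, 3)))
    return result

print("120 BPM:", schedule(slices, 120))   # slices line up exactly, no gaps
print("100 BPM:", schedule(slices, 100))   # gaps open up between slices
print("145 BPM:", schedule(slices, 145))   # slice tails get cut off by the next trigger

The trigger times always land on the same beats, which is exactly the point; only the joins between slices change.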
There are several types of REX files, but the most important, and recent, is the REX2 format (these types of files have a .RX2 suffix). Unlike earlier mono-only ReCycle formats, REX2 files are true stereo. Interestingly, REX files usually take up much less space than their WAV equivalents. WHERE DO YOU FIND THEM? Several sample CD companies produce REX file collections for use in programs that support the format. You can also use ReCycle to create your own REX files out of audio files. Note that you don't necessarily need ReCycle; you can follow the same concept by cutting audio into slices, and triggering them from MIDI. However, ReCycle makes the process painless, and includes several features that optimize and enhance the process of converting a file into REX format. Although some speculated that the REX file would drift into obscurity as programs like Acid (which offer on-the-fly time-stretching) appeared and the fidelity of DSP-based stretching algorithms improved, that has not been the case. If anything, the REX file format is growing in popularity, as more people recognize the advantages it can offer over "audio only"-based time-stretching techniques. WHO SUPPORTS IT? Several products can read and interpret REX files. The most developed example is Propellerhead's own Reason, which includes a sophisticated REX file player module (for example, you can change the pitch of each slice). Several programs have an elegant REX file implementation; for example, with Steinberg Cubase and Cakewalk Sonar, drag the file into the Arranger screen or Track View respectively, and the programs know what to do with the MIDI and audio. Speed up or slow down the tempo, and the REX file data goes along for the ride – no tweaks required. Current versions of most other DAWs, including Logic and Performer, also make REX file support painless. SO WHAT'S THE CATCH? REX files can be the perfect solution for many time-stretching problems. But like all other time-stretch options, REX files have their limitations. Here are the main ones. Slices have a finite, unchangeable length. Therefore, slowing down the tempo creates a gap between slices. If the sound decays before the slice ends, there's no problem. If the sound level is relatively high at the splice point, ReCycle has a function that extends a sound's decay; however, it's not effective with all types of signals. Similarly, speeding up cuts off the end of segments. Fortunately, this isn't much of a problem because psycho-acoustically, you're more interested in hearing the new sound that's cutting off the old sound rather than the old sound itself. REX files don't work well with sustained sounds, or very complex sounds (e.g., a part with lots of repeating delays). Ideally, slices would have a quick attack time, and consist of a single "block" sound (like a guitar chord, synth bass note, or drum hit). This makes for an unambiguous slice – you know where it starts, and where it ends. It's much harder, and sometimes impossible, to find good slices with a sustained sound. Another problem is that each slice produces a discontinuity in the sound. If it occurs during a quiet part, this isn't really an issue. But if there's sound going on, you'll often hear a pop or tick. Careful editing of the slice boundaries can minimize or eliminate clicks, but for sustained sounds, it's not always possible to hide the slice transition. Creating a really good REX file can be tedious.
Something like a drum loop or synth bass part is easy to “rexify,” but increasingly complex sounds become increasingly difficult to convert into the REX format. REX files are much better at shifting rhythms convincingly than pitch (ReCycle does allow for changing pitch). If you want to create a melodically-oriented REX file that works in any key, consider “multisampling”—e.g., recording versions in the keys of E, A, and C. Then the file needs only a little pitch shifting to work in a different key, which will sound more realistic than larger changes. USING THEM If you record a sound you plan to convert to REX format, consider your target tempo range. A typical file will still sound fine if shifted down around 5-10%, or up by 20-40%. So, to cover a 120 – 145 BPM range, I usually record the file at 125 BPM. In many cases, these loops can work well down to 100 BPM and as high as 160 (and sometimes, even beyond). REX files lend themselves to experimentation. For example, one ReCycle function can save a file as a collection of individual digital audio slices. You can bring these into a digital audio program and move the slices around, change their pitches, add envelopes, etc. ReCycle 2.0 has some built-in processing, but this is specialized and doesn’t compare to what a digital audio editor can do. For example, I made some cool sample-and-hold sounds by taking a sustained chord, putting slices every eighth note, then saving each section. I then brought the sections into Wavelab and applied a different resonant filter frequency to each one. After combining them back together into a single file, the end result was a useful loop. REX files aren’t the only way to do time-stretching, but for certain applications, they are the ideal choice. Add them to your bag of tricks, and the next time someone wants to change the tempo, you’ll be able to cope a little better. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
11. Get hands-on control over your DAW by Craig Anderton The Mackie Control became such a common hardware controller that most DAWs included "hooks" that allow them to be controlled by Mackie's hardware. But that also created another trend—other hardware controllers emulating the Mackie protocol so that non-Mackie controllers could work with these same DAWs because, from the DAW's standpoint, they appeared identical to the Mackie Control. These controllers hook up through MIDI. So, the basic procedure for having a DAW work with a Mackie-compatible device is: Assign a MIDI input to receive messages from the controller. If the controller is bi-directional (e.g., it has moving faders so they need to receive position data from the DAW), you'll need to assign a MIDI output as well; this may also be the case if the DAW expects to see a bi-directional controller. Choose Mackie Control as a control surface within the DAW itself. If a program says there's no Mackie Control connected (e.g., Acid Pro), there will often be an option to tell the program it's an emulated Mackie Control. Any controller faders usually control channel level, while rotaries control panpots. Buttons typically handle mute or solo, but may handle other functions, like record enable; this depends on how the DAW interprets the Mackie Control data. Also, there are typically Bank shift up/down and Track (also called Channel) shift up/down buttons (labeled Page and Data respectively in the Graphite 49). The Bank buttons change the group of 8 channels being controlled (e.g., from 1-8 to 9-16), while the Track buttons move the group one channel at a time (e.g., from 1-8 to 2-9). Many controllers have transport buttons as well (play, stop, rewind, etc.). This article tells how to set up a basic Mackie Control that doesn't use motorized faders. The Mackie Control protocol is actually quite deep, and some programs allow for custom assignments for various controller controls. That requires much more elaboration, so we'll just concentrate on the basics here. We'll use Samson's Graphite 49 controller as our typical Mackie Control, but these same procedures work with pretty much any Mackie Control-compatible device. Note that the Graphite 49 has five virtual MIDI ports, and all remote control data is transmitted over Graphite's virtual MIDI port #5. This allows the other ports to carry data like keyboard notes and controller positions to instruments and other MIDI-aware software. We'll assume you've loaded the preset that corresponds to the programs listed below. However, note that you may be able to call up a different preset for slightly different functionality. For example, if a preset's upper row of buttons controls solo, you can often switch that row to record enable by calling up a preset that assigns the upper row to record enable (e.g., the Logic preset). APPLE LOGIC PRO Graphite 49 looks like a Logic Control; as that's the default controller, you usually won't have to do any setup. However, if this has been changed for some reason, go Logic Pro > Preferences > Control Surfaces > Setup. In the Setup window, click the New pop-up menu button and choose Install. Click on the Mackie Logic Control entry, click on the Add button, click OK, and you're done. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49's upper switches control Record Enable, and the lower switches control Mute. AVID PRO TOOLS Go Setup > MIDI > Input Devices. Make sure MIDIIN5 (Samson Graphite 49) is checked, then click OK.
Then go Setup > Peripherals. Click the MIDI Controllers tab. For Type, choose HUI. Set Receive From to MIDIIN5 (Samson Graphite 49). Send To must be set to something, so choose MIDIOUT2 (Samson Graphite 49). The faders, rotaries, and Transport buttons work as expected. Graphite 49's upper switches control Solo, and the lower switches control Mute. However, the Bank and Channel buttons don't work with the HUI protocol. ABLETON LIVE In Options > Preferences, choose MackieControl for Control Surface, and set Input to MIDIIN5 (Samson Graphite 49); Output doesn't need to be assigned. In the MIDI Ports section, turn Remote On for the input that says MackieControl Input MIDIIN5 (Samson Graphite 49). The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49's upper switches control Solo, and the lower switches control Track Activator buttons. CAKEWALK SONAR In Edit > Preferences > MIDI Devices, set the MIDI In port to MIDIIN5 (Samson Graphite 49) and the MIDI Out port to MIDIOUT2 (Samson Graphite 49). Click Apply. Click on Control Surfaces under MIDI, then click the Add New Controller button in the upper right. For Controller/Surface, choose Mackie Control and verify that the Input and Output Ports match your previous MIDI port selections. Click OK, click Apply, click Close. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49's upper switches control Solo, and the lower switches control Mute. MOTU DIGITAL PERFORMER Go Setup > Control Surface Setup. Click the + sign to add a driver, and select Mackie Control. Under Input Port, choose Samson Graphite 49 Controller (channel 1). Click OK. The faders, rotaries, and Transport buttons work as expected. Graphite 49's upper switches control Solo, and the lower switches control Mute. PRESONUS STUDIO ONE PRO Under Studio One > Options > External Devices, choose Add. Select Mackie Control. Set Receive From to MIDIIN5 (SAMSON Graphite 49). Send To can be set to None. Click on OK, then click on OK again. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49's upper switches control Solo, and the lower switches control Mute. PROPELLERHEAD REASON Mackie Control works somewhat differently with Reason from a conceptual standpoint, because until Record was integrated with Reason in Version 6, Reason was not a traditional DAW. As a result, Graphite sends out specific control signals that apply to whatever device has the focus. It's easiest if you also use Graphite 49 as the master keyboard controller; go Options > Surface Locking, and for Lock to Device, select Follow Master Keyboard. Also, create a track for any device you want to control, including processors or devices like the Mixer 14:2. When you click on that track, Graphite 49 will control the associated device. If you choose an Audio Track, slider S1 controls level, the F1 button controls solo, F9 controls mute, and rotary E8 controls pan. For example, if the 14:2 Mixer has the focus, the faders, rotaries, and buttons work as expected (as does the transport), although Bank and Channel Shift commands aren't recognized. If SubTractor has the focus, the controls affect various SubTractor parameters.
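If you want to take some of the guesswork out of discovering what a controller actually sends as you move its faders, knobs, and buttons, a MIDI monitor helps. Here's a minimal sketch using Python's mido library (my own tooling choice; none of these DAWs require it), and the port name shown is only an example, so substitute whatever name your system reports:

import mido   # requires the mido package plus a backend such as python-rtmidi

# See what input ports exist, so you can find the controller's remote-control port
print(mido.get_input_names())

# Open that port and print every message the controller sends while you move its controls.
with mido.open_input("MIDIIN5 (Samson Graphite 49)") as port:
    for message in port:
        print(message)

Fader moves, button presses, and knob turns each show up as distinct message types, which makes it easy to confirm what's actually reaching the DAW.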
There's a bit of trial and error involved with the various devices to find which Graphite 49 controls affect which parameters; you can always create custom presets to control specific instruments, but this goes beyond the scope of this article, as it involves delving into Reason's documentation and assigning specific controls to specific MIDI channels and controller numbers. Go Edit > Preferences and click the Control Surfaces tab. Click the Add button; select Mackie as the manufacturer, and Control for the model. Under input, select MIDIIN5 (Samson Graphite 49). For output, select MIDIOUT2 (Samson Graphite 49). Click OK, and make sure Standard is checked. Note that you can also lock the Graphite 49 to a specific device so that it will control that device, regardless of which track is selected. Go Options > Surface Locking and choose the device to be locked. SONY ACID PRO Under Options, check External Control. Under Options > Preferences, click the MIDI tab, check the MIDIIN5 (Samson Graphite 49) box under "Make these devices available for MIDI input," then click Apply. In the External Control and Automation tab, under Available Devices choose Mackie Control and click on Add. Double-click in the Status field and in the dialog box that opens, in the Device Type field choose Emulated Mackie Control Device. Select MIDIIN5 (Samson Graphite 49) for the MIDI input if it is not already selected. Click on OK, then click on OK in the next dialog box. The faders, rotaries, and Transport buttons work as expected, but only the first eight channels can be controlled; Bank and Track shifting aren't possible. Graphite 49's upper switches control Solo, and the lower switches control Mute. SONY VEGAS PRO The procedure is identical to Acid Pro, except that the Status field in the External Control and Automation page updates correctly after selecting Emulated Mackie Control Device instead of saying "No Mackie Devices Detected." Note that only audio channels are controlled. STEINBERG CUBASE Go Devices > Device Setup. Click the + sign in the upper left corner and select Mackie Control from the pop-up menu. Under MIDI Input, select MIDIIN5 (Samson Graphite 49) then click on Apply. Click OK. The faders, rotaries, and Transport buttons work as expected. Graphite 49's upper switches control Solo, and the lower switches control Mute. However, I couldn't figure out how to get Cubase to recognize Graphite 49's Bank and Channel buttons; if anyone knows, please add a comment, and I'll modify this article. Cubase offers a very cool feature: If you check Enable Auto Select, when you move a Graphite 49 fader it automatically selects that channel. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
12. Not all distortions are created equal... by Craig Anderton Amp sims have slowly but surely gained acceptance over the years. Although some guitarists will always prefer using tubes, it's also true that amp sims can often provide sounds that are difficult, if not impossible, to obtain in the physical world, and that has inspired many guitarists to start using them. But another factor is that amp sims are not just "plug and play." It takes a certain amount of effort to get them to sound good—which can sometimes involve adding EQ before and/or after the sim, avoiding particular amp/cabinet combinations (or choosing "golden" combinations), and so on. But one of the most important elements is gain-staging, and here's why. Many guitarists experience bad amp sim tone because they don't realize there's the potential for two types of distortion within modules like amp and cabinet emulators: the "good" amp distortion we know and love, and the "nasty" digital distortion that results from not setting levels correctly inside the sim. KNOW YOUR DISTORTION With analog technology, if you overload an amp input you just get more distortion. Because it's analog distortion, it sounds fine—just more distorted. But if you overload a digital amp's input, remember that digital technology has a fixed, and unforgiving, amount of headroom. If you don't exceed that headroom, the amp sim will sound as the designers intended. But if your signal crosses that threshold, the result is ugly, non-harmonic distortion. Never go "into the red" with digital audio—unless you're scoring a Mad Max sequel, and want to conjure up visions of a post-apocalyptic society where the music totally sucks. SETTING INTERFACE INPUT LEVELS To avoid digital distortion, it's important to optimize levels as you work your way from input to output. The most important gain setting is the audio interface's input gain control, which will often be complemented by a front panel clipping LED. Adjust this so that the guitar isn't overloading your audio interface, which will likely have a small mixer application with metering so you can verify levels (just note that the application's fader isn't what's controlling the input level—it's the interface's hardware level control). If distortion happens this early in the chain, then it will only get worse as it moves downstream. Set the audio interface preamp gain so the guitar never goes into the red (Fig. 1), no matter how hard you hit the strings. Be conservative, as changing pickups or playing with the controls might change levels. You can always increase the gain at the sim's input. Fig. 1: The metering for TASCAM's US-366 interface shows that the guitar input (Analog 1) level control is set so the input levels are avoiding overload. AMP SIM INPUT TRIM Your sim will likely have an input meter and level control; adjust this so that the signal never hits the red. Going one step further, Peavey's ReValver includes an input "Learn" function (Fig. 2). Click on Learn, then play your guitar with maximum force. Fig. 2: ReValver's Learn function automatically prevents the input and/or output from being overloaded.
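Conceptually, a learn function of this sort just scans the incoming audio for its highest peak and works out how much trim keeps that peak safely below full scale. Here's a minimal Python sketch of the idea, with a made-up safety margin; it's an illustration of the principle, not ReValver's actual algorithm:

import numpy as np

def learn_trim(samples, margin_db=-3.0):
    # Suggest the gain change (in dB) that puts the loudest peak at margin_db below 0 dBFS
    peak = np.max(np.abs(samples))
    if peak == 0:
        return 0.0
    peak_db = 20 * np.log10(peak)     # peak level relative to full scale (1.0 = 0 dBFS)
    return margin_db - peak_db        # negative = cut, positive = boost

# Example: a take whose peaks reach about 1.2 times full scale needs roughly -4.6 dB of trim
take = np.random.uniform(-1.2, 1.2, 44100)
print(f"Suggested input trim: {learn_trim(take):.1f} dB")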
Learn analyzes your signal, then automatically sets levels so that the peaks of your playing don’t exceed the available input headroom. Beautiful. TRIMMING LEVELS WITHIN THE AMP Like their real-world equivalents, amp sims can be high gain devices—high enough to overload their headroom internally. This is where many guitarists take the wrong turn toward bad sound by turning up the master volume too high. The cabinets in Native Instruments’ Guitar Rig include a volume control with Learn function (Fig. 3); for sims without a Learn function, like IK’s AmpliTube, you’ll find a meter—adjust the module’s volume control so there’s no overload. Fig. 3: Guitar Rig has a Learn function for optimizing internal amp levels. SETTING OUTPUT STAGE LEVELS The final stage where level matters is the output. AmpliTube has an additional level control and meter to help you keep things under control, while Guitar Rig has a special “Preset Volume” output module with a Learn function that matches levels among patches, but also prevents distortion. ReValver offers an additional output Learn function. If you set gains properly through the signal chain from interface input to final output, you’ll avoid the kind of bad distortion that ruins what the good distortion brings to the party. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
13. Mastering isn't only about working with stereo mixes by Craig Anderton In recording, mastering polishes your stereo or surround mix, typically using EQ and dynamics processors, to optimize the sound quality. Similarly, recording engineers often add these processors to recorded amp sounds in the studio, creating a more "produced" sound. With amp sims, using EQ and dynamics after the sim itself can make a huge difference in the overall "sweetness"—so let's look at getting better sounds through post-sim EQ. EQ: MAKING THE CUT Real amps don't have a lot of energy above 5kHz because of the physics of cabinets and speakers, but amp sims don't have physical limitations. Even if the sim is designed to reduce highs, you'll often find high-frequency artifacts, particularly if you run the sim at lower sample rates (e.g., 44.1kHz). Many EQs have a lowpass filter function that attenuates levels above a certain frequency. Set this for around 5-10kHz, with a steep rolloff (specified in dB/octave; 12dB/octave is good, 24dB/octave is better). Vary the frequency until any high-frequency "buzziness" goes away. Similarly, it's a good idea to trim the very lowest bass frequencies. Physical cabinets—particularly open-back cabinets—have a limited low frequency response; besides, recording engineers often roll off the bass a bit to give a "tighter" sound. A quality parametric EQ will probably have a highpass filter function. As a guitar's lowest string is just below 100Hz, set the frequency for a sharp low-frequency rolloff around 80Hz or so to minimize any "mud." REMOVE ANNOYING RESONANCES Amp sims can do remarkably faithful amp emulations—warts and all. But the recording process sometimes "smoothes out" those warts a bit, due to miking, mic position, and other factors. Another consideration: Different amps sound different with various pickups, strings, etc. An amp sim that sounds great with one guitar might not sound right with another one. As a result, I've found that certain guitar/amp sim combinations produce what I call "annoying frequencies"—resonances that add a fizzy, peaky, unpleasant sound. Fortunately, you can get rid of these pretty easily with a parametric equalizer. The following presents the basics; there's also a more detailed article called "How to Make Amp Sims Sound More Analog," with lots of audio examples, if you really want to get into the subject. 1. Turn down your monitors because there may be some really loud levels as you search for the annoying frequency (or frequencies). 2. Enable a parametric equalizer stage. Set a sharp Q (resonance), and boost the gain to at least 12dB. 3. Sweep the parametric frequency as you play. There will likely be a frequency where the sound gets extremely loud and distorted—more so than any other frequencies. Zero in on this frequency. 4. Now use the parametric gain control to cut gain, thus reducing the annoying frequency. 5. Similarly, check for and reduce other annoying frequencies, if present.
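If you like to prototype this kind of surgery outside a DAW, the cut itself is just a peaking ("bell") filter with negative gain. Here's a minimal Python sketch based on the widely used audio-EQ cookbook biquad formulas; the 1.4kHz trouble spot and the test signal are hypothetical, and this isn't any particular plug-in's code:

import numpy as np
from scipy.signal import lfilter

def peaking_cut(x, fs, f0, gain_db, q):
    # Peaking (bell) EQ biquad; negative gain_db cuts the chosen frequency
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], x)

fs = 44100
t = np.arange(fs) / fs
# Stand-in signal: a 1.4kHz "annoying" resonance riding on a 200Hz fundamental
guitar = np.sin(2 * np.pi * 1400 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
tamed = peaking_cut(guitar, fs, f0=1400, gain_db=-8.0, q=8.0)

The same function with a positive gain and a sharp Q is the sweep-and-boost hunt described in steps 2 and 3.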
PUTTING IT ALL TOGETHER Here are three examples (Figs. 1-3) of these EQ techniques being applied to particular guitar sounds. Fig. 1: The piezo pickup on this guitar has a peak at 1.39kHz; cutting gain at that frequency improves the sound dramatically. Also note the high and low frequency rolloffs, and another cut around 500Hz. Fig. 2: This EQ response takes out the midrange for a "scooped" sound. It might not look like any signal could make it through this, but it actually sounds very smooth. Fig. 3: In addition to cutting frequencies and rolling off the highs and lows, there's an upper midrange boost so that the guitar cuts better through a mix. When you're done, between the high/low frequency trims and the midrange cuts, your amp sim should sound smoother, creamier, and more realistic. Now throw a little compression on the guitar for a hotter sound, and enjoy your new tone! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
14. Get "inside the waveform" when you need to do sophisticated editing By Craig Anderton Did you ever wish you could "get inside" a piece of digital audio and edit out just one specific element? For example, suppose you have a great drum loop, but the kick is just plain wrong and you wish you could substitute a different one. If only you could delete the kick and leave everything else alone... Actually, you can—with results that range from stunningly effective to, well, not so stunningly effective. With Spectrum Editing, you don't just alter amplitude, but can define specific frequency ranges and process only those selected ranges. We'll show how this process works by eliminating the kick from a drum loop, but the same principle applies to emphasizing or de-emphasizing particular frequency ranges. This is particularly useful with drum loops, due to the drum hits often being fairly isolated, and occupying different frequency ranges. It's possible to do things like emphasize only a drum's transient, accent the snare backbeat, and in some cases, remove a sound entirely from a part. Start by selecting Spectrum in the waveform window. The usual amplitude display will be replaced by a multi-colored display that shows lower frequencies at the bottom, higher frequencies toward the top, and uses color and brightness to indicate levels (Fig. 1). Fig. 1: Wavelab's Spectrum display. To the right of the Loudness tab, there's a wrench; you can click this to edit the spectrogram display. Unless you're working mostly with high frequencies, click on this and select Logarithmic Frequency Scale (Fig. 2). Then click Apply and OK. This will make it easier to see the kick drum. Fig. 2: The log scale makes it easier to edit lower frequencies. Type S to choose the Spectrum Selection tool (it's also available on the toolbar, to the right of the time selection button). Zoom in if needed to see what's happening at the various frequencies more easily. Study the waveform while playing it to correlate sound to shapes and colors. For example with Wavelab, red is the loudest level, then it goes through the spectrum (orange, yellow, green, blue, indigo, violet) to softer levels, with dark violet being the softest. So in this example, a yellow blob at a low frequency (toward the bottom of a window) shows a kick drum. Draw a rectangle around the part you want to delete (Fig. 3). Fig. 3: Isolate the kick. In the edit area above the waveform, Surgery should be selected so you can choose the desired "Processing of the selection." In this case we want to Damp the level; here the level will be reduced by 48dB. Under Filter Settings, Bandpass is the right choice because we want to remove only the selected frequencies. You can also choose the filter steepness, and a crossfade time between the processed and unprocessed sections (Fig. 4). Fig. 4: This is where you specify how you want to process the area you've selected. After making the desired settings, click on Apply. As if by magic, the kick is pretty much gone; repeat for additional kicks. Note that because the selection tool is a rectangle, you may need to "carve away" at various frequency components rather than expect to delete the entire kick with one rectangle (Fig. 5). Fig. 5: The yellow blobs indicating the kick are gone, and the loop sounds virtually kickless. Spectrum Editing has many other uses. You can isolate just a transient, and increase its gain to give more attack.
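Under the hood, this kind of edit amounts to a gain change applied to a rectangle of the signal's time-frequency representation. Here's a minimal Python sketch using a short-time Fourier transform to damp a region by 48dB; it illustrates the principle rather than Wavelab's actual processing, and the frequency range, time range, and test signal are all made-up stand-ins:

import numpy as np
from scipy.signal import stft, istft

def damp_region(x, fs, f_lo, f_hi, t_lo, t_hi, gain_db=-48.0):
    # Attenuate everything inside a frequency/time rectangle by gain_db
    f, t, z = stft(x, fs, nperseg=2048)
    region = np.ix_((f >= f_lo) & (f <= f_hi), (t >= t_lo) & (t <= t_hi))
    z[region] *= 10 ** (gain_db / 20.0)
    _, y = istft(z, fs, nperseg=2048)
    return y

fs = 44100
drum_loop = np.random.randn(2 * fs)                        # stand-in for a two-second loop
kickless = damp_region(drum_loop, fs, 40, 150, 0.0, 0.12)   # damp a low-frequency "kick" region

A real spectral editor adds crossfading and gentler filtering around the selection, which is why it sounds cleaner than this blunt rectangle.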
Also note you have processing options other than Damp, like Dispersion and Fades, and you can play around with the filter as well—there are options for low-pass and high-pass filters as well as a bandpass filter. You can even set the filter steepness (in dB/octave) and adjust the crossfade time. If you have some particularly demanding editing to do, like removing a single cough in the middle of a live acoustic performance, it can take some time to juggle all these parameters appropriately. But when all else fails, Spectrum Editing can accomplish if not miracles, at least feats that seem pretty miraculous. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
15. Nobody likes maintenance—but at least you can make the process a little more painless by Craig Anderton Maintenance is a necessary part of life. Sure, it's more fun to spend time creating rather than maintaining, but paradoxically, a well-maintained studio will actually let you create more because it will run smoothly and cause fewer interruptions. Some aspects of maintenance are obvious, like backing up your data or updating software. Others are less obvious, such as those that deal with organization—which is really just another facet of maintenance—or little hardware tweaks. Then again, there's also preventive maintenance, which can really make a difference. So, here are some tips I've learned over the years that have helped my studio run just a bit less chaotically than normal. MAINTAIN QUALITY CONTROL Many electronic component failures occur within the first 72 hours of operation. This problem, called infant mortality, can be minimized by "burning in" electronic devices for at least 72 hours. However, burning in is a time-consuming process, and not many companies burn in gear prior to it leaving the factory; instead, they offer a 90 day warranty so that you can do the quality control. Therefore, when you first get a new piece of equipment, run it continuously for a few days to weed out any failures before the warranty period is up. MAINTAIN A LITTLE MOISTURE One day after walking across the carpet, I touched my computer keyboard and—ooops, instant file delete! Apparently, the static electricity charge had been sufficient to alter the data in the computer (luckily, no chips were blown in the process). Fortunately, there are a number of accessories that prevent static build-up, such as anti-static floor mats on which you can place your chair as you mix, and humidifiers. Use them! MAINTAIN YOUR MANUALS With today's complicated gear and operating systems, you need instant access to manuals. PDFs are a great way to store lots of data in a minimal amount of space, especially if you have an iPad—you can create your own manual library (Fig. 1). Fig. 1: A manual library created on Apple's iPad. The few rare book-size printed manuals can go on a bookshelf, but many manuals for signal processors and other smaller pieces of gear are sheets of paper folded over once and stapled in the middle. Remove the staples, use a paper cutter to separate the pages, get a 3-ring punch and voilà—a manual ready for 3-ring binding. If you really need a printed version instead of a PDF, load up on the ink cartridges, hit Office Depot for a ream of paper, and rock on. But again, you'll want to punch the pages and put them in a 3-ring binder. For those weird little manuals and sheets of paper that aren't 8.5 X 11, you can buy 3-ring punched pouches at any office supply store. Slip the manual in the pouch, and put the pouch in the binder. File alphabetically by company. Pouches are also good for those little plastic "cheat sheets" and other inserts that come with gear. While we're on the subject of manuals, if they're printed, mark 'em up! (You can even do this with PDFs.) If there's an errata sheet, make the corrections in the manual. Write down tech support phone numbers, serial number, date acquired, any bugs you find, and other useful info. MAINTAIN YOUR CONTROLS Your gear likes to be used. About once a month flip all the switches, press each pushbutton, play each key on a synth, rotate any controls, and slide any sliders over their full travel several times.
Many of these parts have self-wiping contacts, and using them prevents oxidation. The one exception is membrane keyboards—these have a tendency to fail by shorting out, and are rated at a certain number of operations. Because they are hermetically sealed and usually use conductive plastic, they are not as subject to oxidation problems. Jack contacts also need to be "worked," as many jacks use switching contacts that can oxidize. Plug and unplug a plug several times into all jacks, not just for the benefit of switching contacts, but also to keep the various pins and contacts free of corrosion. This tip applies to patchbays (yes, people still use them) as well. MAINTAIN A DUST-FREE ENVIRONMENT Gear doesn't like dust, which is just one reason you don't find a lot of recording studios on Mars. Hopefully your gear has covers that are made out of a non-porous material, such as plastic; but also be sure to keep a small, "Dustbuster"-type vacuum cleaner around (Fig. 2) to remove dust from tables, desks, and other surfaces—but never use it to vacuum the insides of gear, especially computers. For these, compressed air is a better option. However, make sure you read the instructions on the can before you start spraying around indiscriminately. Fig. 2: Black and Decker's "Dustbuster" vacuum cleaners can help keep dust under control in the studio. Endust for Electronics is designed specifically for keeping dust off electronic gear, and it works. As an experiment I got a can and wiped one keyboard in the rack with Endust for Electronics, and another keyboard with a soft, damp cloth. After a week, the keyboard treated with Endust was still pretty dust-free, whereas the untreated 'board had a layer thick enough to write "wash me." That was enough to convince me as to the product's efficacy—studio supply places should bundle this with a Dustbuster for those who want to keep dust to a minimum. One other dust tip: If you've just finished building a studio space, take a rubber mallet and whack the ceilings and walls. It will shake loose a bunch of dust that would take months to float down otherwise. MAINTAIN YOUR CONTACTS If you don't have a database in your computer, you should. You can add as many fields as you want—not just phone numbers and addresses but birthdays, names of significant others, whether they received the latest mailing about your studio, how fast they pay their bills, email or web page addresses, etc. Keeping all your records in one easily updatable place can be extremely convenient, especially if it's in a laptop or other "instant-on" type of computer. Just don't forget to back up this information religiously! MAINTAIN YOUR POWER Make sure that all electrical outlets are properly wired and grounded; I recently talked to a composer who had a couple of amplifiers break down due to inadequate wiring. The gauge of wire apparently wasn't thick enough, which caused a voltage drop that simulated "brown-out" conditions and overstressed the amp. And while we're on the subject of AC power, keep all cords routed away from foot traffic areas. More than one device has been destroyed because someone tripped over a cord and took down a piece of equipment with it. An uninterruptible power supply (Fig. 3) will maintain a constant source of AC power in the face of brownouts, blackouts, and UFOs flying overhead (those pesky little things can cause all sorts of power problems). Fig. 3: APC (American Power Conversion) manufactures a variety of uninterruptible power supplies.
Seriously, though, if you’ve ever had the power fail during a write operation to a hard drive, you’ll appreciate the protection this type of device affords. Think of it as insurance…it may be not be cheap, but without it, you could lose anything from your next hit record to pieces of gear. MAINTAIN YOUR SPEAKERS This is a preventive maintenance technique: insert a fast-blow, low amperage fuse (I use 1/2 Amp types) in series with your speaker. Golden-ear types will tell you that pushing current through that little tiny piece of wire will degrade the sound; tell them that if they’re willing to pay for blown tweeters, you’ll follow their advice. Just remember not to use slow-blow fuses, as they won’t blow until it’s too late. MAINTAIN YOUR COOL Many companies with otherwise fine engineers don’t seem to have a good handle on thermal design. Then again, many people unwittingly defeat what intelligent thermal design there might be. For example, equipment should never be set up where it can receive the full impact of the sun’s rays (even when filtered through window glass), and vent holes should never be obstructed. If there are vent holes in the bottom of a piece of equipment, make sure the device sits on a hard surface where air can flow freely into the holes. If you have equipment built in a rack cabinet or recessed into a wall, adequate ventilation is a must; adding a small fan (the ones designed for use with computers are generally quiet) can minimize heat build-up. Another consideration with rack mount equipment is to stagger heat-producing equipment. If there’s a hot-running power amp at the bottom of the rack cabinet, leave one rack space above it for air to circulate. Assuming that other heat-producing rack units are sufficiently light, mount them toward the top of the rack so that as the heat rises upwards, it doesn’t “cook” other units in the rack. But sometimes you don’t need a fan. I have one piece of gear that I get along with very well, but it used to be incapable of working above 85°F. So, I did some thermal engineering the company didn’t do. First, I felt around for heat build-up; the whole rear panel of the device would get very warm, so I simply removed it. This allowed plenty of air to circulate around the back. I then took off the cover and touched the outside of each IC and power transistor package. Some of them seemed excessively hot. Not wanting to add a fan (the last thing any studio needs is more devices that make noise), it seemed like a good idea to beef up the heat sinks that help the semiconductors dissipate heat. For the transistors, I added an aluminum plate that carried heat away from the top of the package. (Incidentally, in the process of doing this I found that one of the power transistors had not been screwed down sufficiently to make good contact with its existing heat sink.) For the ICs, I used thermally conductive epoxy to attach small finned heat sinks (available from electronics supply houses) to the tops of the IC packages. Lo and behold, all thermal problems went away—even when the ambient temperature hit 105 degrees during a recent heat wave. (Yes, I know you shouldn’t run computer-based gear in that kind of heat; but I don’t have air conditioning and a deadline was looming larger than fear of breakdown.) Since the capacitors sitting next to these semiconductors are no longer being baked by the heat, their lives should be extended as well. MAINTAIN YOUR SANITY IN THE STUDIO Actually, I don’t have any tips on this one. Maybe someday... 
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  16. Is your monitoring setup honest with you about your music? by Craig Anderton All the effort you put into recording, overdubbing, and mixing is for nothing if your monitoring system isn’t honest about the sounds you hear. The issue isn’t simply the speakers; the process of monitoring is deceptively complex, as it involves your ears, the acoustics of the room in which you monitor, the amp and cables that drive your monitors, and the speakers themselves. All of these elements work together to determine the accuracy of what you hear. If you’ve ever done a mix that sounded great on your system but fell apart when played elsewhere, you’ve experienced what can go wrong with the monitoring process - so let's find out how to make things right. HEARING VARIABLES Ears are the most important components of your monitoring system. Even healthy, young ears aren’t perfect, thanks to a phenomenon quantified by the Fletcher-Munson curve (Fig. 1). Fig. 1: The Fletcher-Munson curve indicates how the ear responds to different frequencies. Simply stated, the ear has a midrange peak around 3-4kHz that’s associated with the auditory canal’s resonance, and does not respond as well to low and high frequencies, particularly at lower volumes. The response comes closest to flat response at relatively high levels. The “loudness” control on hi-fi amps attempts to compensate for this by boosting the highs and lows at lower levels, then flattening out the response as you turn up the volume. Another limitation is that a variety of factors can damage your ears — not just loud music, but excessive alcohol intake, deep sea diving, and just plain aging. I’ve noticed that flying temporarily affects high frequency response, so I wait at least 24 hours after getting off a plane before doing anything that involves critical listening. The few times I’ve broken that rule, mixes that seemed perfectly fine at the time played back too bright the next day. It’s crucial to take care of your hearing so at least your ears aren’t the biggest detriment to monitoring accuracy. Always carry the kind of cylindrical foam ear plugs you can buy at sporting good stores so you’re ready for concerts, using tools (the impulse noise of a hammer hitting a nail is major!), or being anywhere your ears are going to get more abuse than someone talking at a conversational level. (Note that you should not wear tight-fitting earplugs on planes. A sudden change in cabin pressure could cause serious damage to your eardrums.) You make your living with your ears; care for them. ROOM VARIABLES As sound bounces around off walls, the reflections become part of the overall sound, creating cancellations and additions depending on whether the reflections are in-phase or out-of-phase compared to the source signal reaching your ears. These frequency response anomalies affect how you hear the music (Fig. 2). Fig. 2: If a reflection is out of phase with the original signal, there will be some degree of cancellation. Also, placing a speaker against a wall seems to increase bass. This is because any sounds emanating from the rear of the speaker, or leaking from the front (bass frequencies are very non-directional), bounce off the wall. Because a bass note’s wavelength is so long, the reflection will tend to reinforce the main wave (Fig. 3). Fig. 3: Most anomalies with room acoustics happen at low frequencies. As the walls, floors, and ceilings all interact with speakers, it’s important that speakers be placed symmetrically within a room. 
Otherwise, if (for example) one speaker is 3 feet from a wall and another 10 feet from a wall, any reflections will be wildly different and affect the response. The subject of acoustically treating a room is beyond the scope of this article. Hiring a professional consultant to “tune” your room with bass traps and similar mechanical devices could be the best investment you ever make in your music. WHAT ABOUT TUNING A ROOM WITH GRAPHIC EQUALIZATION? Some studios use graphic equalizers to “tune” rooms, but this is not necessarily a cure-all. Equalizer-based room tuning involves placing a mic where you would normally mix, feeding pink noise or test tones through a system, and tuning an equalizer (which patches in as the last device before the power amp) for flat response. Several companies make products to expedite this process, such as RTAs (Real Time Analyzers) that include the noise generator, along with calibrated mic and readout. You then diddle the sliders on a 1/3 octave graphic EQ to compensate for anomalies that show up on the readout. Some devices combine the RTA and EQ for one-stop analysis and equalization. While this sounds good in theory, there are two main problems: If you deviate from the “sweet spot” where the microphone was placed, the frequency response will change. Heavily equalizing a poor acoustical space simply gives you a heavily-equalized, poor acoustical space. However, newer methods of room tuning have been developed that take advantage of computer power, such as JBL’s MSC-1 and IK Multimedia’s ARC (Fig. 4). Fig. 4: IK Multimedia’s ARC is a more evolved version of standard room tuning; it’s effective over a wider listening area than older methods, and has a more sophisticated feature set. The most important point to remember about any kind of electronic room tuning is that like noise reduction, which works best on signals that don’t have a lot of noise, room tuning works best on rooms that don’t have serious response anomalies. It’s best to make corrections acoustically to minimize standing waves, check for phase problems, experiment with speaker placement, and learn your speaker’s frequency response. Once you have your room as close to ideal as possible, a device like ARC can make it even better. NEAR-FIELD MONITORS Traditional studios have large monitors mounted at a considerable distance (6 to 10 ft. or so) from the mixer, with the front flush to the wall, and an acoustically-treated control room to minimize response variations. The “sweet spot” — the place where room acoustics are most favorable — is designed to be where the mixing engineer sits at the console. However in smaller studios, where space and budget are at a premium, near-field monitors have become the standard way to monitor (Fig. 5). Fig. 5: There are tons of options for near-field monitors; KRK’s Rockit series monitors have been very popular for project studios. With this technique, small speakers sit around 3 to 6 feet from the mixer’s ears, with the head and speakers forming a triangle (Fig. 6). The speakers should point toward the ears and be at ear level; if slightly above ear level, they should point downward toward the ears. Fig. 6: Near-field monitor placement is important to achieve the most accurate monitoring. Near-field monitors minimize the impact of room acoustics on the overall sound, as the speakers’ direct sound is far louder than the reflections coming off the room surfaces. 
They also do not have to produce a lot of power because of their proximity to your ears, which also relaxes the requirements for the amps feeding them. However, placement in the room is still an issue. If placed too close to the walls, there will be a bass build-up. Although you can compensate with EQ (or possibly controls on the speakers themselves), the build-up will be different at different frequencies. High frequencies are not as affected because they are more directional. If the speakers are free-standing and placed away from the wall, back reflections from the speakers bouncing off the wall could affect the sound. You’re pretty safe if the speakers are more than 6 ft. away from the wall in a fairly large listening space (this places the first frequency null point below the normally audible range), but not everyone has that much room. My crude solution is to mount the speakers a bit away from the wall on the same table holding the mixer, and pad the walls behind the speakers with as much sound-deadening material as possible. Nor are room reflections the only problem; with speakers placed on top of a console, reflections from the console itself can cause inaccuracies. To get around this problem, I use a relatively small main mixer, so the near-fields fit to the side of the mixer, and are slightly elevated. This makes as direct a path as possible from speaker to eardrum. ANATOMY OF A NEAR-FIELD MONITOR Near-field monitors are available in a variety of sizes and at numerous price points. Most are two-way designs, with (typically) a 6” or 8” woofer and smaller tweeter. While a 3-way design that adds a separate midrange driver might seem like a good idea, adding another crossover and speaker can complicate matters. A well-designed two-way system is better than a so-so 3-way system. Although larger speaker sizes may be harder to fit in a small studio, the increase in low-frequency accuracy can be substantial. If you can afford (and your speaker can accommodate) an 8” speaker, it’s worth the stretch. There are two main monitor types, active and passive. Passive monitors consist of only the speakers and crossovers, and require outboard amplifiers. Active monitors incorporate any amps needed to drive the speakers from a line level signal. With powered monitors, the power amp and speaker have hopefully been tweaked into a smooth, efficient team. Issues such as speaker cable resistance become moot, and protection can be built into the amp to prevent blowouts. Powered monitors are often bi-amped (e.g., a separate amp for the woofer and tweeter), which minimizes intermodulation distortion and allows for tailoring the crossover points and frequency response for the speakers being used. If you hook up passive monitors to your own amps, make sure they have adequate headroom. Any clipping generates gobs of high-frequency harmonics, and sustained clipping can burn out tweeters. SO WHICH MONITOR IS BEST? You’ll see endless discussions on the net as to which near-fields are best. In truth, the answer may rest more on which near-field works best with your listening space and imperfect hearing response. How many times have you seen a review of a speaker where the person notes with amazement that some new speaker “revealed sounds not heard before with other speakers”? This is to be expected. The frequency response of even the best speakers differs sufficiently that some speakers will indeed emphasize different frequencies compared to other speakers, essentially creating a different mix. 
Although it’s a cliché that you should audition several speakers and choose the model you like best, you can’t choose the perfect speaker, because such an animal doesn’t exist. Instead, you choose the one that colors the sound in the way you prefer. Choosing a speaker is an art. I’ve been fortunate enough to hear my music over some hugely expensive systems in mastering labs and high-end studios, so my criterion for choosing a speaker is simple: whatever makes my “test” CD sound the most like it did over the big-bucks speakers wins. If you haven’t had the same kind of listening experiences, book 30 minutes or so at some really good studio (you can probably get a price break since you’re not asking to use a lot of the facilities) and bring along one of your favorite CDs. Listen to the CD and get to know what it should sound like, then compare any speakers you audition to that standard. One caution: if you’re comparing two sets of speakers and one set is even slightly louder than the other, you’ll likely choose the louder one as sounding better. To make a valid comparison, match the speaker levels as closely as possible. A final point worth mentioning is that speakers have magnets which, if placed close to CRT screens, can distort the display. Magnetically shielded speakers solve this problem, although this has become much less of an issue as LCD screens have pretty much taken over from CRTs. LEARNING YOUR SPEAKER AND ROOM Ultimately, because your own listening situation is imperfect, you need to “learn” your system’s response. For example, suppose you mix something in your studio that sounds fine, but sounds bass-heavy in a high-end studio with accurate monitoring. That means your monitoring environment is shy on the bass, so you boosted the bass to compensate (this is a common problem in project studios with small rooms). With future mixes, you’ll know to mix the bass lighter than normal. Compare midrange and treble as well. If vocals jump out of your system but lay back in others, then your speakers might be “midrangey.” Again, compensate by mixing midrange-heavy parts back a little bit. You also need to decide on a standardized listening level to help combat the influence of the Fletcher-Munson curve. Many pros monitor at low levels when mixing, not just to save one’s ears, but also because if something sounds good at low volume, it will sound great when you really crank it up. However, this also means that the bass and treble might be mixed up a bit more than they should be to compensate for the Fletcher-Munson curve. So, before signing off on a mix, check the sound at a variety of levels. If at loud levels it sounds just a hair too bright and boomy, and if at low levels it sounds just a bit bass- and treble-light, that’s probably about right. WHAT ABOUT HEADPHONES? Musicians on a budget often wonder about mixing over headphones, as $100 will buy a quality set of headphones, but not much in the way of speakers. Although mixing exclusively on headphones isn’t recommended by most pros, keep a good set of headphones around as a reality check (not the open-air type that sits on your ear, but the circumaural kind that totally surrounds your ear). Sometimes you can get a more accurate bass reading using headphones than you can with near-fields, and when “proofing” your tracks, phones will show up imperfections you might miss with speakers. Careful, though: it’s easy to blast your ears with headphones and not know it. 
SATELLITE SYSTEMS “Satellite” systems use tiny monitors (which can’t really produce adequate bass on their own) in conjunction with a subwoofer, a fairly large speaker that’s fed from a frequency crossover so that it reproduces only the bass region. This speaker usually mounts on the floor, against a wall; placement isn’t overly critical because bass frequencies are relatively non-directional. Although satellite-based systems can make your computer audio sound great or allow a less intrusive hi-fi setup in a tight living space, I wouldn’t mix a major label project over them. Perhaps you could learn these systems over time as well, but I personally have difficulty with the disembodied bass for critical mixes. However, using subwoofers with monitors that have decent bass response is another matter (Fig. 7). Fig. 7: The PreSonus Temblor T10 active subwoofer has a crossover that’s adjustable from 50 to 300 Hz. The response of near-field monitors often starts to roll off around 50-100 Hz, which diminishes the strength of sub-bass sounds. Sounds in this region are a big part of a lot of dance music, and it’s important to know what’s going on down there. In this case, the subwoofer simply gives a more accurate indication of the bass region sound. STRENGTH IN NUMBERS Before signing off on a mix, listen through a variety of systems — car stereo speakers, hi-fi bookshelf speakers, big-bucks studio speakers, boom boxes, headphones, etc. This gives an idea of how well the mix will translate over a variety of systems. If the mix works, great — mission accomplished. But if it sounds overly bright on 5 out of 8 systems, pull back the brightness just a bit. The mastering process can compensate for some of this, but mastering works best with mixes that are already good. Many “pro” studios will have big, expensive speakers, a pair of near-fields for reality testing, and some “junk” speakers sitting around to check what a mix will sound like over something like a cheap TV. Switching back and forth among the various systems can help “zero in” on the ultimate mix that translates well over any system. The more you monitor, the more educated your ears will become. Also, the more dependent they will become on the speakers you use (some producers carry their favorite monitor speakers to sessions so they can compare the studio’s speakers to speakers they already know well). But even if you can’t afford the ultimate monitoring setup, with a bit of practice you can learn your system well enough to produce a good-sounding mix that translates well over a variety of systems – which is what the process is all about. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  17. Lock your bass part to the drums with a few hard disk editing tweaks By Craig Anderton The highest compliment you can pay to a rhythm section is that they’re “tight,” so this article looks at a hard disk editing technique that can tweak a bass part to give a rhythm section that's tighter than a low E string tuned up an octave. VISUAL BASICS One advantage of computer-based, multitrack hard disk recording systems is that you can see multiple tracks simultaneously. For example, you can place the drum and bass tracks next to each other to compare where their notes land. Referring to the upper drum track (blue) in Fig. 1, it's easy to see where the beats fall; note the obvious spikes where different hits occur. Fig. 1: The lower bass notes (in red) lag the drum notes (above, in blue). Now look closely at the lower bass track (red)—the bass attacks clearly lag a bit behind the drums. Of course, many timing variations are welcome as they can contribute to a song's groove. What we're concerned about is reining in any bass notes whose timing is sufficiently different from the drums to sound “off.” GOTTA SPLIT, GOTTA MOVE The solution for locking bass and drums together is to isolate any bass note whose timing is “off,” then move it so that its attack lines up with the beat. Unlike traditional quantizing, which shifts notes to a rhythmic grid, with this technique you have the option to move notes so that they follow the drums exactly, even if the drums themselves lead or lag the beat a bit. Isolating the notes is simple. Most hard disk recording programs offer some sort of split function, where you can divide a piece of audio into separate sections. Upon isolating a note, you can then move it into position. You’ll generally want to make sure that any “snap” or quantize function is off, so you can split the audio anywhere, not just at predetermined rhythmic points. CORRECTING FOR LATE NOTES Referring to Fig. 2, we'll now correct for a bass note that's late compared to the beat. Here are the steps. Fig. 2: Adding splits around the note makes it easy to move. 1. Add three split points. From left to right, these are: The beat where the note attack should fall The beginning of the note to be moved (in Fig. 2, the space between these first two split points is highlighted in black) The beginning of the next bass note. 2. Delete the highlighted space between the desired note start and the current note start. 3. Move the note to the left so that its attack lines up with the correct beat (Fig. 3). Fig. 3: The bass note has now moved forward and lines up perfectly with the drum beat. Now we have a new problem: moving the note earlier has opened up a slight space in front of the next note. You may want to move this note as well, but let's assume it sounds fine as is. Although most of the time whatever else is playing will mask any gap between notes, if you hear any kind of abrupt, audible glitch, there are a couple of possible fixes. The simplest is to add a quick fade toward the end of the note you moved (highlighted in gray in Fig. 4, with a diagonal line showing the fade itself). This makes the gap less abrupt, but may not work if you need the note to sustain up to the beginning of the next note. Fig. 4: Adding a fadeout ensures that the note doesn't have an abrupt ending. Lengthening a note instead of shortening it requires more complex surgery. The following technique may or may not work, but if you're persistent, you can usually achieve success. 1. 
Split the note so that the sustained tail is separate from the attack. The object is to isolate the note's most consistent, sustained portion. 2. Copy the sustained portion (some programs allow you to copy a region without having to split it first). 3. Crossfade the end of the note that needs to be lengthened with the beginning of the sustained segment. If needed, you can then add a fade to the end of the conjoined note, or have it butt up against the beginning of the next note. Crossfading can often make the grafted note segment sound like it was part of the original note. CORRECTING FOR EARLY NOTES If a note comes in ahead of the beat, split the audio just ahead of that note, and again just ahead of the next note. As we’re going to be moving the note later in time, and don’t want the end of the note being moved to overlap the next note, use the program’s non-destructive “slip” or trim editing function to shorten the end of the note being moved. If the note butts up against the beginning of the next note, odds are it will sound okay. If it doesn't, or if there's an audible click or pop, you may need to add a very quick fade to the end of the note you moved. DON’T OVERDO IT It’s possible to get sucked into being way too concerned about little timing errors. Don’t be – just fix the notes that sound wrong, not the notes that look wrong. You’ll still end up with an ultra-tight rhythm section – and certainly anyone reading this article knows how much that can improve a tune’s entire sound. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
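As an addendum to the technique above: before reaching for the split tool, it can help to measure how far the bass actually drifts from the drums. The sketch below (Python with numpy and soundfile; the file names, threshold, and 50 ms minimum gap are assumptions, not part of the original technique) finds rough note onsets in each track and prints how many milliseconds each bass attack lags or leads the nearest drum hit, which makes it easier to decide which notes "sound wrong" enough to fix:

```python
# Rough onset-lag measurement between a drum track and a bass track.
# File names and thresholds are placeholders; tune them for your material.
import numpy as np
import soundfile as sf

def onsets(path, threshold=0.2, min_gap_s=0.05):
    audio, rate = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)            # fold to mono
    env = np.abs(audio)
    win = max(1, int(0.005 * rate))           # ~5 ms smoothing window
    env = np.convolve(env, np.ones(win) / win, mode="same")
    env /= env.max()
    above = env > threshold
    hits, last = [], -min_gap_s * rate
    for i in range(1, len(above)):
        if above[i] and not above[i - 1] and i - last > min_gap_s * rate:
            hits.append(i)
            last = i
    return np.array(hits) / rate              # onset times in seconds

drums = onsets("drums.wav")
bass = onsets("bass.wav")
for t in bass:
    nearest = drums[np.argmin(np.abs(drums - t))]
    print(f"bass note at {t:7.3f}s is {1000 * (t - nearest):+6.1f} ms vs. nearest drum hit")
```

Treat the printout as a guide only; as the article says, fix the notes that sound wrong, not the ones that merely look wrong.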
  18. Just because you're faking it doesn't mean you have to sound fake... by Craig Anderton I travel, so I stay in a lot of hotels. This means that in the last decade, I’ve seen 9,562 musicians singing/playing to a drum machine, and 3,885 synth duos where a couple of musicians play along with a sequencer or sampler. I’ve even been in that position myself a few times. Audiences have come to accept drum machines, but one person on stage being backed up by strings, horns, pianos, and ethereal choirs rings false, and the crowd knows it. Yet you don’t want to lose the audience due to monotony. Unless you’re a spellbinding performer, hearing the same voice and guitar or keyboard for an entire evening can wear out your welcome. In the process of playing live, I’ve learned a bit about what does — and doesn’t — work when doing a MIDI-based act. Hopefully some of the following ideas will apply to your situation too. SEQUENCERS: NOT JUST NOTES One way to avoid resorting to “fake” sounds is to maximize the “real” sounds you already have. As a guitar player, that involves processing my guitar sound. Switching between a variety of timbres helps keep interest up without having to introduce new instruments. However, this creates a problem: using footswitches and pedals to change sounds diverts your attention from your playing, since you now have to worry about hitting the right button at the right time. For me, the solution is using amp sims that can accept MIDI continuous controllers to change several parameters independently. This is where a sequencer really shines — in addition to driving instrument parts, it can generate MIDI messages that change your sound automatically, with no pedal-pushing required. Amp sims running on a laptop are often ideal for this application because they tend to have very complete MIDI implementations, but many processors (Fig. 1) also accept continuous controller commands. If not, they will likely be able to handle program changes, which can still be useful. Fig. 1: Line 6’s POD HD500 can accept MIDI continuous controller commands that change selected parameters in real time. For example, on one of my tunes the sequencer sends continuous controller data to a single program to vary delay feedback, delay mix, distortion drive, distortion output, and upper midrange EQ. As the song progresses, the various settings “morph” from one setting to another — rhythm guitar with no delay, low distortion drive, and flat EQ all the way to lead guitar with delay, lots of distortion, and a slight upper midrange boost. Within the main guitar solo itself, the delay feedback increases until the solo’s last note, at which point it goes to maximum so the echo “spills” over into the following rhythm part. Not only does this sound cool, it adds an interactive element. It’s not human beings, but still, I can play off some changes. What’s more, it doesn’t seem fake to the audience because all the sounds have a direct correlation to what’s being played. It’s true that using a sequencer ties you to a set arrangement, with very few exceptions. However, although sections of the song are limited to a certain number of measures, you can nonetheless play whatever you want within those measures, so solos can still be different each time you play them. THE VOCAL ANGLE I really like the DigiTech and TC-Helicon series of processors for live vocals. Being able to generate harmonies is cool enough, but there’s a lot of MIDI power in some of these boxes (Fig. 
2), and you can do the same type of MIDI program or continuous controller tricks as those mentioned above for guitar. Fig. 2: DigiTech’s Vocalist Live Pro can use MIDI continuous controller and program changes to alter a wide range of parameters. Once again, even though you’re generating a big sound it’s all derived from your voice, so the audience can correlate what it hears to what’s seen on stage. THE SAMPLER CONNECTION A decent sampler (or workstation with sampling capabilities; see Fig. 3) that includes a built-in MIDI sequencer is ideal as a live music backup companion. It can hold any kind of drum sounds, hook up to external storage for fast loading and saving of sounds and songs, and generate the continuous controller data needed to control signal processors with its sequencer. Fig. 3: Yamaha’s Motif XF isn’t just a fine synthesizer/workstation, but includes flash memory for storing and playing back custom samples. Samplers are also great because you can toss in some crowd-pleasing samples when the natives get restless. A few notes from a TV theme song, a politician making a fool of himself, a bit from a 50s movie — they’re all fun. And to conserve memory you can usually get away with sampling them at a pretty low sampling frequency. When sampling bass parts for live use, it’s often best to avoid tones that draw a lot of attention to themselves, like highly resonant synth bass or slap bass. A round, full line humming along in the background fills the space just fine. PLAYING WITH MYSELF When I switch over to playing a lead after playing rhythm guitar, it leaves a pretty big hole. To fill the space without resorting to sequencing other instruments, I sample some power chords and rhythm licks from my guitar, and sequence them behind solos. This doesn’t sound too fake because the audience has already heard these sounds, so they just blend right in. Furthermore, the background sounds don’t have to be mixed very high. Adding just a bit creates a texture that fills out the sound nicely. MULTI-INSTRUMENTALISTS One of my favorite solo acts is a multi-instrumentalist in Vancouver named Tim Brecht who plays guitar, keyboards, drums, flute, and several percussion instruments during the course of his act (he also does some interesting things with hand puppets, but that’s another story). So when the sequenced drums play, people can accept it because they know he can play drums. Similarly, on some songs I’ll play a keyboard part instead of guitar. This not only provides a welcome break, but when I sequence the same keyboard sound as a background part later on, it’s no big deal because the audience has already been exposed to it and seen me play it. FOR BETTER DRUMS, USE A DRUMMER Okay, maybe you can’t convince your favorite drummer friend to come along to the gig. But if you can have a real drummer program your drum sequences, it really does make a difference. MIDI GUITAR? I’m seeing more people using MIDI guitar live (Fig. 4), but not in heavy-metal or techno bands: these are typically solo acts in places like restaurants. Fig. 4: Fishman's TriplePlay retrofits existing guitars for MIDI, and transmits the signals wirelessly to a computer. They use MIDI guitar because again, it reduces the fake factor. Even if you’re playing other instrument sounds, people can see that what you’re playing is creating the sound. 
Some changes can be more subtle, like triggering a sampler with a variety of different guitar samples so you can go from acoustic, to electric, to 12-string, just by calling up different patches. Being able to layer straight guitar and synthesized sounds is a real bonus, as it reinforces the fact that the synth sounds relate to the guitar. IT’S THE MUSIC THAT MATTERS All of these tips have one goal: to make it easier to play live (in spite of the technology!), and to avoid sounding overly fake. People want to see you jumping around and having a good time, not loading sequences and fiddling with buttons. The less equipment you have to lug around, the better — both for reliability and minimal setup hassles. When MIDI came out, it changed my performance habits forever. If nothing else, I haven’t done a footswitch tap dance while balancing on a volume pedal in years — and I hope never to do one again! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
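The continuous controller "morphing" described in the article above doesn't have to come from a full sequencer; any scriptable MIDI source can send the same messages. Here's a minimal sketch using the mido library (the port name, CC numbers, and timings are all hypothetical assumptions, so check your processor's MIDI implementation chart before mapping anything) that steps through a song timeline and changes delay feedback and distortion drive in the way the article describes:

```python
# Minimal sketch of sequencer-style control morphing: send MIDI CC messages
# on a timeline to an amp sim or hardware processor. Port name and CC
# assignments are placeholders; map them to whatever your processor expects.
import time
import mido   # requires a MIDI backend such as python-rtmidi

PORT_NAME = "Amp Sim MIDI In"          # hypothetical port name
CC_DELAY_FEEDBACK = 12                 # hypothetical CC assignments
CC_DISTORTION_DRIVE = 13

# (seconds into the song, controller number, value 0-127)
events = [
    (0.0,  CC_DELAY_FEEDBACK,   0),    # rhythm part: no delay
    (0.0,  CC_DISTORTION_DRIVE, 30),
    (32.0, CC_DISTORTION_DRIVE, 100),  # solo starts: more drive
    (32.0, CC_DELAY_FEEDBACK,   60),
    (60.0, CC_DELAY_FEEDBACK,   127),  # last note: echoes spill over
]

with mido.open_output(PORT_NAME) as port:
    start = time.time()
    for when, cc, value in events:
        time.sleep(max(0.0, when - (time.time() - start)))
        port.send(mido.Message("control_change", channel=0, control=cc, value=value))
```

In practice you'd sync the timeline to your backing sequence rather than the system clock, but the message traffic is the same either way.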
  19. Layering makes guitars sound bigger—or does it? Let's find out the complete story by Craig Anderton Ever since multitrack recording became commonplace, guitar players have been doubling guitar parts—and even tripling, quadrupling, and more to build up layer upon layer of sound. Rumor has it that even the Sex Pistols, those prototypical purveyors of punk, double-tracked the guitar parts as they thumbed their collective noses at British society. But more is not always more, and two heads are not always better than one. Although many guitar players think layering can give a bigger sound, that’s not necessarily the case. Also, there’s more than one way to layer guitars—and it’s important to choose the method that works best for the task at hand. IS THIS LAYER REALLY NECESSARY? The more “space” there is around a part, the more impact it has. Layering can make a bigger sound, but also, a less defined one. As a friend of mine (Line 6’s Mark Williams) once said, “As soon as you put on that second guitar part, you’re going in the wrong direction.” While that’s not always true, I understand his point: A single guitar part has definition, and can stand out in a track as a distinct, individually articulated sound. Layers are more indistinct, as the parts will usually “mesh” with each other. If you want a rhythm guitar part to stand out, layering is probably not a good idea. But if you want the rhythm part to sit back further in a track, layering will take off the “sharp edges” and make the overall sound more diffuse. On the other hand, leads respond differently to layering. Because the parts tend to be highly defined in the sense of playing mostly single notes, layering will indeed make the sound bigger without taking too much away from the part itself. Just remember to ask yourself whether a part really needs to be layered, and if it doesn’t, don’t do it. And if you do layer a part, try taking the layer out during mixdown to see whether the song works better with or without it. BIGGER SOUNDS THROUGH LAYERING Create two layered tracks, with each one going into a different setup—for example, different cabs with different mikings. Then, pan them oppositely (of course, this is very easy to do in the “virtual world” with amp sims—see Fig. 1). Fig. 1: Two Waves G|T|R amp/cab/mic combinations are processing two tracks of guitar to create a big, layered sound. Layering in this manner preserves a distinctive character with each sound, which lets them stand on their own—and the two parts will multiply into something bigger. Often when mixing with other instruments (e.g., piano) playing, I’ll pan one amp cab left and the other to center, with the left piano panned center and right piano panned right. This makes for a big, distinct soundstage. SMALLER SOUNDS THROUGH LAYERING As alluded to earlier, layering can give sounds that sit better in the background. To do this, layer by overdubbing the same sound, using the same guitar, panned to the same position. The parts will tend to blur into more of a texture, and if mixed at moderate levels, will sit in a track as more of a background part than a foreground one. To place the layered sound even further back, roll off the highs just a bit for warmth, then reduce the low end a little to leave more room for the bass and kick—this gives the textured part its own sonic space. In mixes where there aren’t a lot of other instruments playing, you can mix the guitar tracks up and skip rolling back the highs and lows. 
This gives a full, churning sound that can drive a song hard when mixed in at a relatively high level yet not distract too much from the other parts. ARTIFICIAL LAYERING ADT (Automatic Double Tracking) effects attempt to produce the sound of a player doing an overdub by adding a slight amount of delay, and modulating it to change the delay dynamically to avoid creating an exact duplicate of the sound (Fig. 2). Fig. 2: Waves’ Abbey Road Reel ADT plug-in emulates the original ADT effect added to many Beatles recordings. However, simply using delay without modulation can also be very effective. Copy the part to two tracks, pan one track left and the other right, and run one of them through a delay of around 15-20 ms with no feedback. Another option is to use a stereo delay effect that allows for different delay adjustments for each channel (Fig. 3). Fig. 3: In this example using the Sonitus fx:delay, setting a delay in one channel of a stereo delay (with no feedback) but not the other produces a wide, layered sound. Because the two parts are identical, they remain distinct and don’t “mesh” with each other, but the delay produces a wide stereo image. There is one caution: Always check the part in mono to make sure no cancellations are occurring. DISSIMILAR LAYERING If you really want to push a song’s chorus, don’t reach for another overdub of fuzzed-out power chords—grab an acoustic guitar, and layer it instead. The percussive, bright nature of the acoustic will serve as a perfect complement to the distorted power chord sludge (Fig. 4). Combining clean acoustic and distorted electric guitars worked for Led Zeppelin—it can work for you, too. Fig. 4: Two electric guitars playing power chords are layered in the top two tracks, and panned oppositely. The layered acoustic guitar part (bottom track) adds a bright, percussive quality on top of the power chords. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
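Following up on the mono caution in the article above: a fixed 15-20 ms delay creates predictable comb filtering when the two channels collapse to mono, so you can see in advance where the dips will fall and how much level you stand to lose. This sketch (Python with numpy and soundfile; the 18 ms delay and file name are assumptions) builds the delayed "layer," sums it to mono, and reports the level change plus the first few predicted notch frequencies:

```python
# Check what a simple delay-based "layer" does when summed to mono.
# Delay time and file name are placeholders.
import numpy as np
import soundfile as sf

DELAY_MS = 18.0
audio, rate = sf.read("guitar_part.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

delay_samples = int(rate * DELAY_MS / 1000)
delayed = np.concatenate([np.zeros(delay_samples), audio])
dry = np.concatenate([audio, np.zeros(delay_samples)])

mono_sum = 0.5 * (dry + delayed)      # what a mono playback system hears

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

print(f"dry level:      {rms_db(dry):6.1f} dBFS")
print(f"mono-sum level: {rms_db(mono_sum):6.1f} dBFS")

# Comb-filter nulls land at odd multiples of 1/(2 * delay time)
first_null = 1000.0 / (2 * DELAY_MS)
print("predicted notch frequencies (Hz):",
      [round(first_null * (2 * k + 1), 1) for k in range(5)])
```

If the mono sum drops noticeably or the notches land on musically important frequencies, try a different delay time or back off the level of the delayed channel.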
  20. Create your own drum loops, and you don't have to settle for what other people think would work well with your music by Craig Anderton Sure, there are some great sample libraries and virtual instruments available with fabulous drum loops. But it always seems that I want to customize them, or do something like add effects to everything but the kick drum. Fortunately, many libraries also include samples of the individual drums used to create the loops, so you can always stick ’em in your sampler and overdub some variations. But frankly, I find that process a little tedious, and decided that it would be easier (and more fun) in the long run just to make my own customizable drum loops. However, there’s more than one way to accomplish that task; I tried several approaches, and here are some that worked for me. ASSEMBLING FROM SAMPLES For loops you can edit easily, you can import samples into a multitrack hard disk recording program, arrange them as desired at whatever tempo you’d like, bounce them together to create loops, then save the bounced tracks as WAV or AIFF file types for use in tunes. Although you can create loops at any tempo, if you plan to turn them into “stretchable” loops with Acidization or REX techniques, I recommend a tempo of 100BPM (see the article “How to Create Your Own Loops from an Audio File”). Let’s go through the process, step-by-step. 1. Collect the drum samples for your loop (Fig. 1) and create the tracks to hold them. Before bringing in any samples, consider saving this project as a template to make life easier if you want to make more loops in the future. (In addition to sample libraries, there’s an ancient, free Windows program called Stomper that can generate some very cool analog drum sounds.) Fig. 1: A template set up in Cakewalk Sonar for one-measure loops. Samples (from the Discrete Drums Series 1 library) can be dragged from the browser (right pane) into the track view. 2. In your DAW, set the desired tempo and “snap” value (typically 16th notes, but your mileage may vary). Even if you plan to “humanize” drum hits instead of having them snap to a grid, I find it’s easier to start with them snapped, and then add variations later. 3. Import and place samples (Fig. 2). Fig. 2: The samples have all been placed to create the desired loop. Volume and pan settings have also been set appropriately. I prefer to place each sound on its own track, although sometimes it’s helpful to spread the same sound on different tracks if specific sounds need to be processed together. For example, if you have a techno-type loop with a 16th note high-hat part and want to accent the hats that fall on each quarter note, place these on their own track while the other hats go on a separate track. That way it’s easy to lower the level of the non-accented hats without affecting the ones on the quarter notes. 4. Bounce and save. This is the final part of the process. One option is to simply bounce all parts together into a mono or stereo track that you can save as a WAV or AIFF file. But I also make a stereo mix of all sounds except kick in case I want to replace the kick in some applications, or add reverb to all drums except the kick. I’ll often save a separate file for each drum sound as well, and all these variations go into a dedicated folder for that loop. THE VALUE OF VARIATIONS The advantage of giving each sound its own file is that it allows lots of flexibility when creating variation loops. 
Here are a few examples: Slide a track back and forth a bit in time for “feel factor” applications. For example, move the snare ahead in time for a more “nervous” feel, or behind the beat for a more “laid back” effect. Change pitch in a digital audio editor (this assumes you can maintain the original duration) to create timbral variations. Copy and paste to create new parts. For example, a common electronica fill is to have a snare drum on every 16th note that increases linearly in level over one or two measures. If a snare is on 2 and 4, you can copy and offset the track until you have a snare on every 16th note. Premix the tracks together, fade in the level over the required number of measures, and there's your fill. Drop out individual tracks to create remix variations. Having each sound on its own track makes it easy to drop out and add in parts during the remix process. Create “virtual aux busses” by bouncing together only the sounds you want to process. Suppose you want to add ring modulation to the toms and snare, but nothing else. Mute all tracks except toms and snare, premix them together, import the file into a digital audio editing program capable of doing ring modulation, save the processed file, then import it in place of the existing tom and snare tracks. TRICKS WITH COMPLETE LOOPS After you have a collection of loops, it’s time to string them together and create the rhythm track. Here are some suggested variations. Copy a loop, then transpose it down an octave while preserving duration. This really fattens up the sound if you mix the transposed loop behind the main loop. When trying to match loops that aren’t at exactly the same tempo, I generally prefer to shift pitch to change the overall length rather than use time compression/expansion, which usually messes with the sound more (especially with program material). This only works if the tempo variation isn’t too huge. Take a percussion loop (e.g., tambourines, shakers, etc.) that's more accent-oriented than rhythmic, then truncate an eighth-note or quarter note from the beginning. Because the loop duration will be shorter than the main loop, it repeats a little sooner each time the main loop goes around, thus adding variations. If you can't loop individual tracks differently, then copy and paste the truncated loop and place the beginning of the next loop up against the end of the previous loop. Copy, offset, and change levels of loops to create echo effects. Eighth and sixteenth-note echoes work well, but sometimes triplets are the right tool for the job. APPLIED LOOPOLOGY Of course, using drum loops can get a lot more involved than this, such as mixing and matching loops from different sample libraries and such. However, one problem is that loops from different sources are often equalized differently. Now’s a good time to use your digital audio editor or DAW’s spectrum analysis option to check the overall spectral content of each loop, so you can quickly compensate with equalization. Sure, you can do it by ear too, but spectrum analysis can sometimes save you some time by pointing out where the biggest differences lie. Well, those are enough tips for now. The more creative you get with your loops, the more fun you (and your listeners) will have. Happy looping! looping! looping! looping! looping! looping! Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
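If you'd rather script the "place samples on a grid and bounce" step from the article above than drag clips by hand, the idea translates directly into a few lines of code. Here's a minimal sketch (Python with numpy and soundfile; the sample file names and patterns are placeholders, and it assumes all the hits share the same 44.1 kHz sample rate) that assembles a one-bar, 100 BPM loop from kick, snare, and hat samples on a 16th-note grid and writes it as a WAV:

```python
# Assemble a one-bar drum loop at 100 BPM from individual hits on a 16th-note grid.
# Sample file names and patterns are placeholders; all hits assumed to be 44.1 kHz.
import numpy as np
import soundfile as sf

BPM, STEPS, SR = 100, 16, 44100
step_len = int(round(60.0 / BPM / 4 * SR))       # samples per 16th note
bar = np.zeros(step_len * STEPS)

def load(path):
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)                 # fold to mono for simplicity
    return data

pattern = {                                      # 1 = hit on that 16th note
    "kick.wav":  [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0],
    "snare.wav": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
    "hat.wav":   [1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0],
}

for path, steps in pattern.items():
    hit = load(path)
    for i, on in enumerate(steps):
        if on:
            start = i * step_len
            end = min(start + len(hit), len(bar))
            bar[start:end] += hit[: end - start]

bar /= max(1.0, np.abs(bar).max())               # normalize to avoid clipping
sf.write("loop_100bpm.wav", bar, SR)
```

Because each sound lives in its own pattern line, the same script lends itself to the variation tricks described earlier, such as nudging one instrument's offsets or rendering a version with the kick muted.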
  21. Why live in the real world, when the virtual one often lets you do a whole lot more? By Craig Anderton I like amp sims for a lot of reasons, but one of the main ones is it’s possible to create setups that would be really difficult to create in the physical world (or at least require having a chiropractor on retainer—along with a hefty bank account). As just one example, it’s easy to stack multiple cabinets, feed them with different amps and effects, and create various kinds of stereo imaging. One way to create a stack takes advantage of the fact that recording through amp sims records a dry track—the effects are added on playback. So you can duplicate this dry track, and add another amp sim setup in parallel. While this is easy to do, it means you don’t hear the stacked sound until you’ve actually laid down a track. It’s often better to hear the stacked sound as you play, as that can influence your part as well as how the part (and sound) fits in with the track as a whole. PARALLEL DISCOVERY One option is to split your guitar into two tracks, set up amp sims, monitor through them, and record into both tracks simultaneously. You don’t have to use a Y-cord—simply set each track’s source to the input where the guitar signal enters your interface. An even simpler option is to use an amp sim that allows for parallel amp paths. With IK Multimedia’s AmpliTube series, there are 8 signal routing options; Routing 2 creates two separate, parallel chains. The Line 6 POD Farm has a Dual button that creates two different signal chains, and Overloud’s TH2 also has two separate signal chains (Fig. 1). Fig. 1: Overloud’s TH2 amp sim allows for two signal chains, but also includes a frequency crossover for sending different frequency ranges to the two chains. Peavey’s ReValver and Native Instruments’ Guitar Rig both offer “splitter” modules for their “virtual racks” (Fig. 2). These let you split the input signal into two paths, where you can insert whatever amps, speakers, etc. you want. Then, the splits go into an output mixer for mixing and panning. (However, note that Guitar Rig lets you put splits within splits, while ReValver Mk III is limited to one split module per rack.) Fig. 2: Peavey’s ReValver sends the signal through a splitter, which feeds two chains with “BluesMaker” heads but different cabinet and miking effects. The module at the bottom controls the mix, pan, and phase of the splits. Waves G|T|R has stereo amps, which provide the same basic function as stacked amps. However, if you want a parallel path where you can add effects and such, then you’ll need to use two tracks, and two instances of G|T|R. STACK APPLICATIONS Stacking gives you all kinds of sonic options; here are a few of them. What’s better than two stacks? Three stacks, of course! Place a chorused acoustic-type sound in the center with power chord sounds left and right, and you’ll end up with a huge sound. Add drums and bass...done. The frequency crossover modules in Guitar Rig and TH2 are very useful, because they let you do "bi-amping." The lows can go to one stack for heavy distortion, while the highs go to a second stack with less distortion to avoid harshness (see the sketch following this article). Bass works well with stacks, as one stack can handle effects while the other reproduces the full range of the bass. And speaking of bass—if you mix a stereo rhythm guitar part so the channels are panned oppositely, then this opens up a space in the middle for the bass (the traditional stereo placement for bass is center). 
A rhythm guitar part going through two separate stacks which are panned oppositely is almost like having two guitars, but with the focus of a single guitar part. You can make the stereo width even more dramatic by delaying one of the stack signals by 10-25ms. Insert tempo-synced effects set to different rhythmic values in the two chains. This can give a really interesting feeling of motion, as well as enhance the spread of stereo effects. If there’s another instrument in the guitar’s general frequency range (like keyboard), and the guitar is set up in stereo, pan one guitar channel to center and the other right or left to “tilt” the guitar toward one side of the stereo field. Pan the other instrument oppositely in the stereo field. Now both instruments fill the stereo field, but don’t interfere with each other. Get the picture? You can have a lot of fun with stacked amps, and you won't even have to lug around those big wooden boxes and stack them on top of each other. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
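Here's the "bi-amping" sketch mentioned above: lows to a heavily driven chain, highs to a cleaner one. It's a rough prototype only (Python with numpy, scipy, and soundfile; the 250 Hz crossover, drive amounts, and file name are assumptions), not a substitute for the crossover modules in an amp sim, but it shows the signal flow:

```python
# Rough "bi-amping" split: different saturation for lows and highs.
# Crossover frequency, drive settings, and file name are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

CROSSOVER_HZ = 250.0
audio, rate = sf.read("guitar_di.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

low_sos = butter(4, CROSSOVER_HZ, btype="lowpass", fs=rate, output="sos")
high_sos = butter(4, CROSSOVER_HZ, btype="highpass", fs=rate, output="sos")
lows = sosfiltfilt(low_sos, audio)
highs = sosfiltfilt(high_sos, audio)

heavy = np.tanh(8.0 * lows)      # lows: heavy saturation
light = np.tanh(2.0 * highs)     # highs: gentler drive to avoid harshness

mix = 0.6 * heavy + 0.8 * light
mix /= max(1.0, np.abs(mix).max())
sf.write("guitar_biamped.wav", mix, rate)
```

Panning the two bands to opposite sides instead of summing them gives the stereo "stack" effect described in the article.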
  22. To your CPU, plug-ins can be the ultimate power trip By Craig Anderton You gotta love effects and virtual instrument plug-ins, but they’ve changed the rules of mixing. In the hardware days, the issue was whether you had enough hardware to deal with all your tracks. Now that you can insert the same plug-in into multiple tracks, the question is whether your processor can handle all of them. Does it matter? After all, mixing is about music, balance, and emotional impact—not processing or synthesizer/sampler sounds. But it’s also about fidelity, because you want good sound. And that’s where Mr. Practical gets into a fight with Mr. Power. THE PLUG-IN PROBLEM Plug-ins require CPU power. CPUs can’t supply infinite amounts of power. Get the picture? Run too many plug-ins, and your CPU will act like an overdrawn bank account. You’ll hear the results: Audio gapping, stuttering, and maybe even a complete audio engine nervous breakdown. And in a cruel irony, the best-sounding plug-ins often drain the most CPU power. This isn’t an ironclad rule; some poorly-written plug-ins are so inefficient they draw huge amounts of power, while some designers have developed ultra-efficient algorithms that sound great and don’t place too many demands on your CPU. But in general, it holds true. Fortunately, modern CPUs are quite powerful and don't place the same kind of processing limitations on plug-ins we used to have to endure in the past. Regardless, everything has limits; and sometimes you'll need to use an older machine, or a laptop that doesn't have the same power capacity as your desktop. Also, lower CPU power means you can spend the power you do have on other things, like low latency or having several programs open simultaneously. So the bottom line is if you need to use lots of plug-ins in your mix, you want as much available power as possible. Here are the Top Ten tips to help you make that happen. 1 UPGRADE YOUR CPU Let’s get the most expensive option out of the way first. Because plug-ins eat CPU cycles, the faster your processor can execute commands, the more plug-ins it can handle. Although there are a few other variables, as a rule of thumb higher clock speeds = more power for plug-ins. Cool bonus: Pretty much everything else will happen faster, too. 2 INCREASE LATENCY And in the spirit of equal time, here’s the least expensive option: Increase your system latency. When you’re recording, especially if you’re doing real-time processing (e.g., playing guitar through a guitar amp simulation plug-in) or playing soft synths via keyboard, low latency is essential so that there’s minimal delay between playing a note and hearing it. However, that forces your CPU to work a lot harder. Mixing is a different deal: You’ll never really notice 10 or even 25ms of latency. The higher the latency, the more plug-ins you’ll be able to run. Some apps let you adjust latency from a slider, found under something like “Preferences.” Or, you may need to adjust it in an applet that comes with your sound card or audio interface (Fig. 1). Fig.1: A latency increase might be just the ticket to running more plug-ins. This applet controls a Line 6 interface; the ASIO buffer size is being increased from 128 samples to 512 samples. 3 USE HARDWARE-BASED PLUG-INS When CPUs didn't have the power they have today, outboard DSP cards from companies like Digidesign (now Avid) and Creamware were a popular solution for adding more power to a computer system. 
It was assumed that over time, native systems would become so powerful that extra hardware assistance wouldn't be needed, but that didn't take into account that software developers were more than happy to use the extra power to create more powerful plug-ins. As a result, we still have external hardware solutions like audio interfaces with built-in DSP, or "heavy hitters" like Universal Audio's Powered Plug-Ins and Sonic Core's SCOPE system (formerly from Creamware). These either insert into your computer (like Universal Audio's UAD-2 OCTO), or connect to it via a fast interface, like FireWire or Thunderbolt. The cards run their own proprietary plug-ins (although the SCOPE system enjoys third-party support), so the plugs don’t load down the host CPU—the hardware does the heavy lifting (Fig. 2). Fig. 2: Universal Audio's Satellite card connects to desktops or laptops via FireWire to boost the computer's processing capabilities - as well as deliver some fine-sounding plug-ins. Although these boards will eventually say no mas! as well, one advantage compared to CPU-based processing is you have a finite, known amount of power so you can “red-line” the DSP without fear. With your CPU, sometimes running too close to the edge will cause a meltdown when the CPU has to perform that one extra function. Cool bonus: Hardware-based plug-ins are often platform-independent. 4 AUX BUS BEATS INSERTS Inserting one effect in an aux bus is much more efficient than inserting multiple instances of an effect in multiple tracks (Fig. 3). Of course, there are some cases where an effect must be limited to a single track. But for something like reverb, which tends to draw a lot of juice, see if it isn’t possible to do the aux bus option instead. Fig. 3: In this screen shot from Ableton Live, tracks 7 and 9 are sending signal into Send A. This feeds a single reverb plug-in, which is more efficient than inserting separate reverbs as an insert effect in both tracks 7 and 9. Sometimes, even EQ can work as a bus effect; this may let you use a high-quality "mastering" EQ that takes a lot of CPU instead of individual, lower-power EQs. For example, suppose you miked a bunch of acoustic percussion, and feel all the percussion tracks need to be brightened up a bit. Send them to a stereo bus, and insert a single EQ into that bus. 5 TURN OFF ANYTHING THAT’S NOT NEEDED Anything that’s active is making demands on your CPU. Using only one band of a four-band EQ? See if you can turn off the other bands. Even input and output drivers drain your CPU. When you’re mixing, you probably don’t need any of your sound card’s input drivers to be active (with an exception we’ll cover next), and you need only one output driver—go ahead and disable the rest (Fig. 4). Fig. 4: Disabling any I/O that's not needed will save CPU power. In this example, all of Roland's VS-700 interface I/O is turned off except for the main stereo outputs. Although any one of these changes won't make much difference, when added together the difference can be significant. 6 GET HARDWARE INTO THE ACT Reverb is one of the most CPU-intensive effects. A native, high-quality convolution reverb that sounds good will show no mercy to your CPU, which is why some of the best reverbs come from hardware-based plug-ins. But you can also use an external hardware processor. Dedicate one of your sound card output buses to feeding the reverb, and bring it back into an input. 
Although there will be some latency going into the reverb, think of it as free pre-delay - you probably won't even notice a difference. (If you do, then record the reverb to a track and shift it ahead in time.) Cool bonus: several programs make using external hardware pretty painless (Fig. 5), and compensate for latency. Fig. 5: Cubase is one of several programs that makes it easy to use external effects. 7 SEND “STEMS” TO A MIXER A digital mixer can be an important adjunct to a DAW setup, not only because it’s useful while tracking, but because it can also serve as a control surface if you can send individual channels or even "stems" (groups of channels) from your DAW to the mixer. One way to do this is if the mixer has a FireWire or USB connection suitable for connecting to your computer (e.g., PreSonus StudioLive series), but mixers with ADAT light pipe inputs are also suitable if your audio interface has ADAT light pipe outputs. Either return the mixer output back to the host, or with some projects, you can do your mixing in the mixer itself, using that old school “move the faders” technique. Cool bonus: The outboard mixer’s aux bus is an ideal place for putting a reverb. And, you get to mix with real faders. 8 FREEZE YOUR TRACKS Soft synths, especially ones that sound good, can really suck power; “mastering quality” signal processing plug-ins also like to drink at the CPU power bar. So, use your host’s “freeze” function to convert tracks that use real-time plug-ins into hard disk tracks, which are far more efficient. 9 USE SNAPSHOT AUTOMATION Plug-ins aren’t the only elements that stress out your CPU: Complex, real-time automation can also eat CPU cycles. So, simplifying your automation curves will leave more power available for the CPU to run plugs. Your host may have a “thinning” algorithm; use it, as you generally don’t need that much automation data to do the job (particularly if you did real-time automation with fader moves, which will add quite a few extraneous automation nodes). But the ultimate CPU saver is using snapshot automation (which in many cases is all you really need anyway) instead of continuous curves. This process basically takes a “snapshot” of all the settings at a particular point on the DAW’s timeline, and when the DAW passes through that time, the settings are recalled and applied. 10 CHECK YOUR PLUG-IN’S AUTOMATION PROTOCOL Our last tip doesn’t relate to saving CPU power, but to preserving sound quality. Many plug-ins and soft synths offer multiple ways to automate: By recording the motion of on-screen controls, driving with MIDI controller data, using host automation (like VST), etc. However, not all automation methods are created equal. For example, moving panel controls may give higher internal resolution than driving via MIDI, which may be quantized into 128 steps. Using the right automation can make for smoother filter sweeps, less stair-stepping, and other benefits. Okay . . . there are your Top Ten tips, but here’s a bonus one: Any time you start to insert a plug-in, ask yourself if you really need to use it. A lot of people start their mix a track at a time, and optimize the sound for that track by adding EQ, reverb, etc. Then they bring in other tracks and optimize those. Eventually, you end up with an overprocessed, overdone sound that’s just plain annoying. Instead, try setting up a mix first with your instruments more or less “naked.” Only then, start analyzing where any problems might lie, then go about fixing them. 
Often tracks that may not sound that great in isolation mesh well when played together. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
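Tip 2's buffer-size tradeoff is easy to put numbers on: latency per buffer is just the buffer size divided by the sample rate. The quick sketch below (Python; the buffer sizes and 44.1 kHz rate mirror typical interface settings, and the note about converter overhead is an approximation) shows why jumping from 128 to 512 samples is painless when you're only mixing:

```python
# Convert audio buffer sizes to per-buffer latency at a given sample rate.
SAMPLE_RATE = 44100          # Hz
for buffer_size in (64, 128, 256, 512, 1024):
    latency_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:5d} samples -> {latency_ms:5.1f} ms per buffer")
# Round-trip monitoring latency is roughly double this figure, plus a small
# fixed amount for the converters themselves.
```

At 44.1 kHz, 128 samples works out to about 2.9 ms and 512 samples to about 11.6 ms, which is well under the threshold where a mix engineer would notice anything.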
  23. Keep your tracking session on an even keel with these tips for smoother sessions By Craig Anderton As the all-important first step of the recording process, laying down tracks is crucial. No matter how well you can mix and master, you’re hosed if the tracks aren’t good. But tracking is an elusive art. Some feel it’s pretty much a variation on performing; others step-enter tracks via MIDI, one note at a time. Yet regardless of how you approach tracking, you want to create a recording environment where inspiration can flourish — troubleshooting your setup in the middle of the creative process can crush your muse. There are valid psychological reasons why this is so, based on the way that our brain processes information; suffice it to say you don’t want to mix creative flights of fancy with down-to-earth analytical thinking. So, let’s investigate a bunch of tips on how to track as efficiently — and creatively — as possible. 1 HAVE EVERYTHING READY TO GO I’m a fanatic when miking an acoustic instrument: I need one person to adjust the mics, and another to play the instrument, while I listen in the control room. But I also want all this setup to be done before the session begins, so the artist can be as fresh as possible. True, sometimes it’s necessary to make some compensations due to differences in “touch,” but those compensations don’t take very long. 2 CREATE A SCREEN LAYOUT THAT’S OPTIMIZED FOR TRACKING Most sequencers let you save specific “views” or window sets (Fig. 1). For example, you certainly don’t need to do waveform editing when you’re tracking (and if you do, we need to talk!). Fig. 1: Logic was one of the first DAWs to really exploit screen presets. As you’ll likely not be sitting right next to your computer as you play an instrument, go for large fonts, big readouts, wide instead of narrow channel strips—anything that makes the recording and track assignment process more obvious. 3 ZERO THE CONSOLE If you’re using a hardware mixer, center all the EQ controls, turn all the sends to zero, make sure anything that can be bypassed is in the bypass mode, and so on. Many mixer modules have some kind of reset option; take advantage of them. You want to make sure that any changes you make start from a consistent point, as well as ensure that there aren’t any spurious noise contributions (like from an open mic preamp). 4 LEARN SOFTWARE SHORTCUTS Anytime you can hit a keyboard key instead of moving a mouse, you’re saving time and effort, and staying in the right-brain (creative) frame of mind. For example, if you don’t use the top octave of an 88-note keyboard much, your software might allow you to assign these keys to the record buttons on the first 12 channels of your tracking setup—or at the very least, use the top few notes for transport control. 5 CONTROLLERS CAN BE A BEAUTIFUL THING Once upon a time in a galaxy far, far away, DigiTech made a guitar processor called the GNX4. One of its features was “hands-free recording” when used with Cakewalk hosts like Sonar, where you could initiate playback, record, arm tracks, create new tracks, and other operations simply by pushing footswitches. While intended for guitar players, I found it very helpful for general recording applications and never abandoned my quest for footswitches. Fig. 2: The three jacks toward the right are for two footswitches and an expression pedal. The footswitches default to transport functions, but can be reassigned. 
If you have a MIDI keyboard, chances are you can use a sustain pedal to do something useful, like initiate recording. The Mackie Control Universal Pro (Fig. 2) has two footswitch jacks, which default to start/stop and record, and you can take this to the max with X-Tempo Designs’ wireless POK footswitch bank. 6 KNOW WHEN TO TAKE A BREAK If someone cutting a track starts running into a wall, it’s seldom worth continuing. It’s better to take a break and let the player (that means you, too!) come back refreshed and with a slightly different perspective. 7 TAKE ADVANTAGE OF LOOP RECORDING Loop recording, also called composite recording (Fig. 3), can help put together the perfect performance. For more information on loop recording, check out this article. Fig. 3: Sonar X3's "speed comping" merges loop recording with keyboard navigation. But loop recording is something best done at one time. If you record a bunch of takes, edit the best parts together, then try to add more parts, the newer takes seldom match up well with the older ones. If you need to add more parts, consider starting over or make sure you record enough takes in the first place. 8 DON’T EDIT WHILE YOU TRACK Because you read all the way to the end, your reward is the most important tip here. With loop recording, it might be tempting to edit the parts together right after recording them. But don’t — that can really disrupt the session’s flow if more tracking is on the agenda. As long as you know that you have enough good takes to put together a part, move on. The same applies to any editing. Even with MIDI, I’ll usually leave a track “as is,” and use real-time MIDI plug-ins (which don’t alter the file) to do any quantization if a part has some rough spots. Tracking is tracking; editing is editing. Do just enough editing (if needed) so that other players have something decent to follow, and worry about doing any polishing later. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
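If your DAW responds to MIDI Machine Control, the sustain-pedal idea from tip 5 above takes only a few lines of glue code. Here's a minimal sketch using the mido library (the port names are hypothetical, the 0x06 byte is the MMC "record strobe" command, and your DAW has to be configured to receive MMC or this will do nothing) that watches a keyboard's sustain pedal and fires a record command each time the pedal goes down:

```python
# Turn a sustain pedal (CC 64) into a "start recording" trigger via MMC.
# Port names are placeholders; the DAW must be set to respond to MMC.
import mido   # requires a MIDI backend such as python-rtmidi

KEYBOARD_PORT = "My Keyboard"        # hypothetical input port
DAW_PORT = "DAW MMC In"              # hypothetical virtual/loopback port

RECORD_STROBE = mido.Message("sysex", data=[0x7F, 0x7F, 0x06, 0x06])

with mido.open_input(KEYBOARD_PORT) as inport, mido.open_output(DAW_PORT) as outport:
    for msg in inport:
        if msg.type == "control_change" and msg.control == 64 and msg.value >= 64:
            outport.send(RECORD_STROBE)      # pedal down -> punch in
```

Many DAWs also let you map a raw CC or note directly to the record command, which may be simpler than MMC; check your host's remote-control options first.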
  24. Here's how to transfer the signals from your analog world into the computer's digital world By Craig Anderton A guitar, mic, or electric piano doesn’t just plug into a computer the way it would plug into an amplifier. This is because instruments are analog devices that generate a continuously varying audio signal, while a computer is a digital device that processes data. So, the audio must be converted into data before the computer can use it, and then must be converted back into audio so we can hear it – we can’t “hear” data any more than a computer can “hear” a microphone. THE COMPUTERIZED SIGNAL CHAIN Here’s a “map” (Fig. 1) of the computerized signal chain: Fig. 1: Block diagram of the computerized signal chain. Signal source (instrument) > Preamp (usually necessary) > Analog to Digital Converter (translates analog in to digital out) > Computer > Digital to Analog converter (translates digital in to analog out) > Monitoring system Let’s explain each block in general terms. Signal source (instrument): This is the analog output from your instrument or other electronic signal generator (CD player, Minidisc, etc.). Preamp: Many instruments, such as guitar, electric bass, microphones, old electric pianos, and the like, put out low-level signals that need to be amplified in order to provide enough level for subsequent stages. Analog-to-Digital converter: This is built within a device called an audio interface. The audio interface accepts an analog input, and converts it to a data format the computer can understand. Its input will be audio connectors, and its output will be a digital signal cable that hooks into the computer. Think of the interface as a “bridge” between the analog and digital worlds. Computer: This runs the software that processes the data. That software can be anything from a sequencer that acts like the computerized equivalent of a tape recorder, to guitar amp simulations that turn your computer into a virtual amp, cabinet, and pedalboard. And of course, those are just two examples! There are complete virtual studios (like Propellerheads’ Reason), sophisticated tuners like Peterson’s Strobosoft, entry-level programs for beginners, DJ tools, and much more. Digital-to-Analog converter: This converts the processed data back to analog so we can feed it into a monitoring system and hear it. It’s often built into the same audio interface that sends the signal into the computer. Note: Sometimes the audio interface is referred to as “I/O” because it provides the input to the computer and takes the computer’s output. Monitoring system: This could be a headphone jack in your audio interface, an amplifier with a set of speakers, powered speakers, a PA system, or anything that lets us hear the results of what the computer does. THE CONVERSION PROCESS Now we need to make a brief journey into the world of digital audio theory. Although this isn’t the world’s most entertaining subject, understanding a bit about how the process works will allow you to make intelligent decisions when setting up your computer-based studio, as well as make better quality recordings…and that’s exactly what we’re trying to do. Analog-to-digital conversion is crucial, as the conversion has to be extremely accurate for the highest possible signal quality. (Digital-to-analog conversion is equally important, but technically easier to do.) As a result, your ultimate sound quality depends to a huge extent on the conversion process. 
Conversion takes advantage of the fact that all sound, from a dog bark to a symphony orchestra, is simply a change in air pressure that varies over time. Expressed in electrical terms, this change in air pressure is analogous to a voltage that changes over time. For example, when air pressure hits a dynamic microphone, a diaphragm moves back and forth, and generates a voltage because the diaphragm cuts across a magnetic field. A guitar pickup works similarly, but instead of responding to the vibrations of air, it responds to the vibrations of your guitar strings. They also cut across a magnetic field and generate a voltage. However, a computer cannot understand a changing voltage unless it’s presented as a series of numbers. So, a converter works by taking a series of “snapshots” of an incoming analog voltage, measuring the voltage of each snapshot, then converting that number into digital data (binary numbers) the computer can understand. This process is called sampling. To accurately convey the input signal level, a converter takes thousands of snapshots every second. This is called the sampling rate. A CD uses audio that was sampled at 44,100 times a second (44,100 Hertz, or 44.1 kHz), and that’s the most common sample rate that most digital studios use. However, there are other popular sample rates, including 48,000 Hz (48 kHz), 88,200 Hz (88.2 kHz), 96,000 Hz (96 kHz), and even higher (Fig. 2). Fig. 2: The upper window shows a waveform sampled at 44.1 kHz, while the lower one is sampled at 96 kHz. Clearly, the lower window has greater resolution; but practically speaking, this doesn’t make a huge difference in the sound quality if the rest of the system uses quality components and engineering practices. Lower sample rates are used too, but mostly for “lo-fi” applications like answering machines, toys, and the like. 44,100 Hz is considered the minimum sampling rate for professional audio applications. There is currently a big debate about whether rates higher than 44,100 Hz yield a significant sonic improvement. The only reason there’s a debate is that you don’t get something for nothing, otherwise everyone would use the highest sample rate possible. One tradeoff is that all the numbers that make up digital audio have to be stored somewhere, and obviously, the more samples per second, the more data there is to be stored. Another tradeoff is that if you double the sample rate, the computer has to work harder because it has to deal with twice as much data. So, you may not be able to record as many tracks, or insert as many plug-ins (virtual signal processors) as you might want. Higher sample rates have some other advantages aside from sound quality, but these are mostly technical in nature and probably not worth the time to examine them. So here’s my summary on sample rates, but note that some audio professionals (and certainly, some marketing departments of companies trying to sell devices with higher sample rates!) would disagree: 44.1 kHz. This “lowest common denominator” sample rate works just fine (CDs sound okay, right?) and is the default standard for most people. 48 kHz. Although not used much for purely audio products, a lot of video projects run audio at 48 kHz. If you do audio for video, you might be requested to provide music in this format. Otherwise, the slightly higher sampling rate doesn’t offer much of a sound quality improvement (if any) compared to 44.1 kHz. 88.2 kHz. Many people claim this sounds better than 44.1kHz, while others don’t hear much of a difference. 
There is currently a big debate about whether rates higher than 44.1 kHz yield a significant sonic improvement. The only reason for the debate is that you don't get something for nothing; otherwise, everyone would use the highest sample rate possible. One tradeoff is that all the numbers that make up digital audio have to be stored somewhere, and obviously, the more samples per second, the more data there is to store. Another tradeoff is that if you double the sample rate, the computer has to work harder because it has to deal with twice as much data. So, you may not be able to record as many tracks, or insert as many plug-ins (virtual signal processors), as you might want.

Higher sample rates have some other advantages aside from sound quality, but these are mostly technical in nature and probably not worth examining here. So here's my summary on sample rates, but note that some audio professionals (and certainly, some marketing departments of companies trying to sell devices with higher sample rates!) would disagree:

44.1 kHz. This "lowest common denominator" sample rate works just fine (CDs sound okay, right?) and is the default standard for most people.

48 kHz. Although not used much for purely audio products, a lot of video projects run audio at 48 kHz. If you do audio for video, you might be asked to provide music in this format. Otherwise, the slightly higher sampling rate doesn't offer much of a sound quality improvement (if any) compared to 44.1 kHz.

88.2 kHz. Many people claim this sounds better than 44.1 kHz, while others don't hear much of a difference. If it sounds better to your ears and your gear can handle it, 88.2 kHz is a good choice for audio work, as it's easy to convert back down to a CD's 44.1 kHz rate.

96 kHz. This is the most common "high" sample rate, and is sometimes used for DVDs and other high-end audio recording processes. Like 88.2 kHz, if you can hear a difference and your gear is up to the task, it's probably worth using…although, as with 48 kHz versus 44.1 kHz, 96 kHz provides no significant advantage over 88.2 kHz.

176.4 and 192 kHz. I don't think these ultra-high sample rates are worth the effort or extra storage, although of course, some people don't agree.

Another aspect of the conversion process is called bit resolution. This simply states the accuracy with which the converter can measure the input signal. A good analogy is a mosaic: The more tiles in a mosaic of a fixed size, the more defined the picture will be. CDs are 16-bit, which means that audio voltages are defined with an accuracy of 1 part in 65,536. However, most audio programs let you record with 24-bit resolution, which provides an accuracy of about 1 part in 16,000,000. Obviously, this is a lot more precise – assuming you're also using converters with 24 bits of resolution. As with higher sample rates, higher bit resolution requires more storage: a 24-bit file is 50% larger than a 16-bit file, assuming they use the same sample rate. Unlike the controversy over higher sample rates, though, few dispute that 24-bit recording sounds better than 16-bit recording.

If you can't decide whether to use higher sample rates or higher bit resolution, the latter will make a greater sonic difference. Most people record at 44.1 kHz with 24-bit resolution, although some use higher sample rates. Just remember that ultimately, the music you play will always be the most important element of all.
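If you want to see what these choices cost you in storage, here's a back-of-the-envelope calculator (a sketch added for illustration, not from the original article). It assumes raw, uncompressed mono audio and ignores file-format overhead.

```python
# Rough uncompressed file-size math for one minute of mono audio (Python).

def megabytes_per_minute(sample_rate, bit_depth, channels=1):
    bytes_per_second = sample_rate * (bit_depth / 8) * channels
    return bytes_per_second * 60 / 1_000_000

for rate in (44100, 48000, 88200, 96000):
    for bits in (16, 24):
        size = megabytes_per_minute(rate, bits)
        print(f"{rate / 1000:>5.1f} kHz / {bits}-bit: {size:5.1f} MB per minute")

# At any given sample rate, the 24-bit figure is 50% larger than the 16-bit one,
# and doubling the sample rate doubles the amount of data to store and process.
```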
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

  25. Build the perfect part by recording multiple takes, then editing them into a single, composite take

by Craig Anderton

Composite recording lets you record multiple takes of a part, usually in quick succession so you can get into a "groove," then edit the best sections together into a single, cohesive composite part (e.g., take one's beginning, take three's end, and take two's middle). Different DAWs handle the mechanics differently, but there are two crucial artistic factors:

Don't obsess. I've seen engineers piece together parts on an almost note-by-note basis, but a great part evolves over time. Generally, the longer the chunks you piece together, the more coherent the part. Pasting together too many little bits can create a Frankenpart.

You can't create a great take; you record a great take. Editing exists to give something that's already magical the extra 10% to make it transcendent—it's not about turning garbage into better garbage.

THE COMPOSITE RECORDING PROCESS

Define the region you want to loop (i.e., where playback or recording jumps back to the beginning after reaching the end, then continues). Allow a few measures before and after the part so you can "regroup" after each take. With most DAWs, turning on looping prior to hitting record is sufficient to enable loop recording. Some DAWs place each take in its own track, while others create separate layers within one track.

Here's why it's good to record no more than a half dozen takes at a time:

You won't have to wade through too many takes when trying to locate the best sections.

If you can't get a good performance in six or seven takes, you may need to practice the part more, or re-think it.

It's good to hear what you've done before committing to too many tracks, so you don't waste your time if the takes are going in the wrong direction.

EDITING

After recording, compare takes to choose the best sections. Various programs employ different workflows; I use Sonar's "speed comping" mode to assemble a collection of "good" clips (see Fig. 1). Selecting a clip automatically mutes other clips in the same column, and you can use the arrow keys to navigate among rows and columns.

Fig. 1: With Cakewalk Sonar's composite recording, the highlighted clips have been selected, while the gray clips are muted. The waveform at the top shows the overall composite waveform from the various selected clips.

Once you've isolated the clips, delete anything you don't want to keep. For smooth transitions between sections, you may need to add fades at the beginning or end of the "winning" clips. Sometimes crossfading two clips yields the best transition (see the crossfade sketch at the end of this article). Finally, bounce all the best bits into a single track.

DOUBLING, TOO

Doubling your part can create a more animated sound. Although you can do this electronically, "real" playing usually sounds better. Check if any other takes are equal to, or almost as good as, the chosen takes. If so, drag the "secondary" takes to another track to create the doubled part. Another option is to learn the composite part, then play it again to create the doubled track.

In any event, always remember to choose individual phrases based on musical continuity, not just musical perfection. The technology should serve you—not the other way around.
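As a concrete picture of what a crossfade between two comped clips does, here's a minimal sketch (added for illustration, not the author's code, and not specific to any DAW). It assumes the two sections have already been loaded as floating-point sample arrays at the same sample rate; the variable names are invented for the example.

```python
# Equal-gain linear crossfade between the end of one clip and the start of the next (Python/NumPy).
import numpy as np

def crossfade(clip_a, clip_b, fade_samples):
    """Overlap the last fade_samples of clip_a with the first fade_samples of clip_b."""
    fade_out = np.linspace(1.0, 0.0, fade_samples)   # clip A ramps down
    fade_in = np.linspace(0.0, 1.0, fade_samples)    # clip B ramps up
    overlap = clip_a[-fade_samples:] * fade_out + clip_b[:fade_samples] * fade_in
    return np.concatenate([clip_a[:-fade_samples], overlap, clip_b[fade_samples:]])

# Example: a 10 ms crossfade at 44.1 kHz uses 441 samples.
# composite = crossfade(take_one_section, take_two_section, fade_samples=441)
```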
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.