Everything posted by Anderton

  1. Amp sims aren't only about distortion...

By Craig Anderton

I’ve seen several comments online that amp sims are okay for distorted sounds, but not clean ones. However, it’s very easy to get good, clean guitar sounds, sometimes even with that “tube sparkle”...you just have to know these six secrets.

1. Record at an 88.2 or 96kHz sample rate. The lack of “cleanliness” you hear might not be due to excessive levels that cause clipping, but aliasing or foldover distortion. Recording at a higher sample rate can minimize the odds of this happening (note that several guitar amp sims offer an “oversampling” option that accomplishes the same basic result, even if the project’s base sample rate is 44.1 or 48kHz).

2. Choose the right amp model. This may seem obvious, but not all clean models are as clean as expected. For example, many “clean” emulations have a little bit of crunch, just like the original. Some sim manufacturers create clean amps that aren’t designed to emulate classic amps (Fig. 1); try these first.

Fig. 1: AmpliTube 3’s Custom Solid State Clean model doesn’t have to emulate anything, so it’s designed to be as clean as possible.

3. Turn down the drive, turn up the master. It’s possible to get cleaner sounds with some amp models by dialing back dramatically on the input drive control, and boosting the output level to compensate (Fig. 2).

Fig. 2: POD Farm 2’s Blackface Lux model can give clean sounds that ooze character. Here’s how: Turn down the amp Drive and input gain, turn the amp Volume all the way up, and set the output gain high enough to give a suitable output level.

4. Compress or limit on the way into the amp. Building on the previous tip, if you’re pulling down the level, the guitar might sound wimpoid. Insert some compression or limiting between the guitar and amp model to keep peaks under control, and allow getting a higher average level to the amp without distortion.

5. Watch your headroom. Guitars have a huge dynamic range, so don’t let the peaks go much above -6 to -10dB if you want to stay clean. Yes, we’re used to making those little red overload LEDs wink at us, but that’s not a good strategy with digital audio—especially these days, when 24-bit resolution gives you plenty of dynamic range.

6. Beware of inter-sample clipping. With most DAWs, you can go well into the red on individual channels because their audio engines have virtually unlimited headroom (thanks to 32-bit floating-point math or better, in case your inner geek wondered). However, when those signals hit the output converters to become audio, headroom goes back to the real world of 16 or 24 bits, and any overloads may turn into distortion. So if the meters don’t show clipping you’re okay, right? Not so fast. Most meters measure the actual values of the digital waveform’s samples, prior to reconstruction into analog. But that reconstruction process might create signal peaks that are higher than the samples themselves, and which don’t register on your meters (Fig. 3). Fortunately, you can download SSL’s free X-ISM metering plug-in, which shows inter-sample clipping, from the Solid State Logic web site.

Fig. 3: Waves’ G|T|R is set to a clean amp. The DAW’s master output meter (left) shows that the signal is just below clipping, but SSL’s X-ISM meter that measures inter-sample distortion shows that clipping has actually occurred.
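For the curious, here’s a rough illustration of tip 6 in Python (NumPy and SciPy assumed): oversampling a signal approximates what a converter’s reconstruction filter does, so comparing the oversampled peak to the plain sample peak reveals inter-sample overs that ordinary meters miss. This is a sketch of the concept, not X-ISM’s actual algorithm.

import numpy as np
from scipy.signal import resample_poly

def true_peak_db(signal, oversample=4):
    """Return (sample peak, approximate inter-sample peak) in dBFS."""
    sample_peak = np.max(np.abs(signal))
    upsampled = resample_poly(signal, oversample, 1)   # mimics reconstruction
    inter_sample_peak = np.max(np.abs(upsampled))
    to_db = lambda x: 20 * np.log10(max(x, 1e-12))
    return to_db(sample_peak), to_db(inter_sample_peak)

# A high-frequency sine shows the effect: the samples themselves never land
# on the waveform's true crest, so the plain sample peak under-reads it.
t = np.arange(48000)
sig = 0.99 * np.sin(2 * np.pi * 11025 * t / 48000 + 0.7)
print(true_peak_db(sig))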
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. It's a stereo world, so it's time for your guitar to join in

by Craig Anderton

With very few exceptions, electric guitar outputs have been mono since the dawn of time. This made sense when the main purpose of guitar players (aside from picking up members of the opposite sex) was to take an amp to a gig and plug into it. But with more guitar players opting for stereo in the studio, and sometimes even for live use, it’s natural to want to turn that mono output into something with a wider soundstage. So, here are six tips (one for each string, of course) about how to obtain stereo from mono guitars.

But first, our most important tip: Don’t automatically assume a guitar part needs to be stereo; sometimes a focused, mono guitar part will contribute more to a mix than stereo. On occasion, I even end up converting the output from a stereo effect back into mono because it ends up making a major improvement.

1 EFFECTS THAT SYNTHESIZE STEREO

Reverb, chorusing, stereo delay, and other effects can often synthesize a stereo field from a mono input. This is particularly effective with reverb, as the dry guitar maintains its mono focus while reverb billows around it in stereo. Some delays offer choices for handling stereo—like ping-pong delay, where each delay bounces between the left and right channels, LCR (left/center/right, with three separate taps for left, center, and right delay times), and the ability to set different delay sounds for the two channels.

2 EQUALIZATION

I wrote an article for Harmony Central regarding “virtual miking” for acoustic guitar parts (particularly nylon string guitar), which uses EQ to split a mono guitar part into highs on the right, lows on the left, and the rest in between. As this needs only one mic there are no phase cancellation issues, yet you still hear a stereo image. Another EQ-based option uses a stereo graphic EQ plug-in. In one channel, set every other band to full cut and the remaining bands to full boost; in the other channel, set the same bands oppositely (Fig. 1). For a less drastic effect, don’t cut/boost as much (e.g., try -6dB and +6dB respectively). There’s a sketch of this technique after tip 5 below.

Fig. 1: A graphic equalizer plug-in can provide pseudo-stereo effects.

3 DOUBLE DOWN ON THE CABS

With hardware amps, split the guitar into two separate cabinets and mic them separately to create two channels. Doing so “live” will usually create leakage issues unless you have two isolated spaces, but re-amping takes care of that problem because you can create the other channel during mixdown. Remember to align the two tracks so that they don’t go out of phase with each other.

4 CREATE A VIRTUAL ROOM

Many amp sims include “virtual rooms” (Fig. 2) with a choice of virtual mics and mic placements. These can produce a sophisticated stereo field, and are great for experimentation.

Fig. 2: MOTU’s Digital Performer includes several guitar-oriented effects, as well as virtual rooms for both guitar and bass with multiple miking options and cabinets.

5 PARALLEL PROGRAM PATHS

Amp sims often create stereo paths from a mono input. For example, IK’s AmpliTube has several stereo routing options, while Native Instruments’ Guitar Rig includes a “split mix” module that divides a mono path into stereo. You can then insert amps and effects as desired into each path, and at the splitter’s output, set the balance between them and pan them in the stereo field (Fig. 3).

Fig. 3: Although you can use Guitar Rig to create mono effects, its signal path is inherently stereo. This makes it easy to convert mono sounds to stereo.
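As promised in tip 2, here’s a minimal Python sketch (NumPy/SciPy assumed) of the complementary-EQ idea. The band centers, widths, and depth are arbitrary starting points rather than settings from any particular plug-in. Because each band is added to one channel and subtracted from the other, the two channels fold back down to the original signal in mono.

import numpy as np
from scipy.signal import butter, sosfilt

def pseudo_stereo(x, fs, centers=(125, 250, 500, 1000, 2000, 4000), depth=0.5):
    """Split mono float signal x into L/R by steering alternate bands oppositely."""
    left, right = x.copy(), x.copy()
    for i, f0 in enumerate(centers):
        sos = butter(2, [f0 / 1.4, f0 * 1.4], btype="bandpass", fs=fs, output="sos")
        band = depth * sosfilt(sos, x)
        if i % 2 == 0:          # even-numbered bands lean left...
            left += band
            right -= band
        else:                   # ...odd-numbered bands lean right
            left -= band
            right += band
    return np.stack([left, right])   # L + R sums back to 2x: mono-safe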
6 DELAY

My favorite plug-in for this is the old standby Sonitus fx: Delay, because it has crossfeed as well as feedback parameters. Crossfeed can help create a more complex sound by sending some of one channel’s signal into the other (Fig. 4).

Fig. 4: The ancient Sonitus fx: Delay is excellent for creating a stereo spread from a mono input. Here it’s used as part of a custom FX chain in Cakewalk Sonar to add width to guitar parts.

However, there are plenty of other options. One is to duplicate a mono guitar track, then process the copy through about 15-40 ms of delay, wet sound only (no dry). Pan the two tracks oppositely for a wide stereo image. Make sure you check the mix in mono; if the guitar sounds thinner, re-adjust the delay setting until the sound regains its fullness.
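Here’s a bare-bones NumPy rendering of that duplicate-and-delay trick, including the mono fold-down you should always check; the 23 ms value is just one point in the suggested 15-40 ms range.

import numpy as np

def widen(x, fs, delay_ms=23.0):
    """Widen mono float signal x: dry hard left, delayed copy hard right."""
    d = int(fs * delay_ms / 1000)
    dry = np.concatenate([x, np.zeros(d)])
    delayed = np.concatenate([np.zeros(d), x])   # wet only, no dry signal
    left, right = dry, delayed                   # panned to opposite sides
    mono = 0.5 * (left + right)                  # audition this for thinning
    return np.stack([left, right]), mono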
  3. If you want analog sounds in a digital age, try these simple techniques

by Craig Anderton

Fancy signal processors aren’t always necessary to emulate some favorite guitar sounds and effects. In today’s digital world, a variety of programs and effects can be made to do your bidding. Want proof? Check out these five examples.

VINTAGE WA-WA EFFECTS

Many people try to obtain a vintage wa sound simply by sweeping a highly resonant parametric EQ set to bandpass response. This doesn’t work because vintage analog wa pedals have steep response rolloffs that reduce both high and low frequencies, but there is a way to use modern parametric EQs to re-create this effect (Fig. 1). Copy the guitar track so you have two “cloned” tracks set to the same level. In track 1, insert a parametric EQ set to bandpass (peak/dip) mode with about 6dB gain and Q (resonance) of around 8. Flip track 2 out of phase. Sweep the EQ over a range of about 200Hz – 2.2kHz. (There’s a sketch of this trick after Fig. 3, below.)

Fig. 1: The mixer channel on the left is going through a parametric stage of EQ. The channel on the right doesn’t go through an equalizer, but is flipped out of phase (the phase button is circled in red).

Throwing one track out of phase causes the high and low frequencies to cancel, so all you hear is the filtered midrange sound—just like a real wa-wa.

ADDING AMBIENT “AIR”

Recording guitar direct can be simple and produce a clean sound, but sometimes it’s too clean because there aren’t any mics to pick up the room reflections that give a sense of realism. To model these reflections, feed your guitar track through a multi-tap delay plug-in, or send it to at least two stereo buses with stereo delays where you can set independent delay times for the two channels. Next, set the delay times for short, prime number delays (e.g., 3, 5, 7, 11, 13, 17, 19, and 23 milliseconds) to avoid resonant build-ups. Four delays is often all you need; I generally use 7, 11, 13, and 17ms, or 13, 17, 19, and 23 ms, depending on the desired room size (Fig. 2). (See the sketch at the end of this article.)

Fig. 2: Finding delay lines that can give short, precise delays isn’t that easy, but Native Instruments’ Guitar Rig—shown here using two splits, each with its own stereo delay—can do the job.

More delays provide a more complex ambience, but sometimes a simple ambience effect actually works better. If you want more “air,” try adding some feedback within the delay, but make sure it’s not enough to hear individual echoes. Experiment with the delay levels and pans, then mix the delayed sound in at a low level.

THE CLOSED-BACK TO OPEN-BACK CABINET TRANSFORMATION

With open-back cabinets, low-frequency waveforms exiting through the cabinet back partially cancel the low-frequency waveforms coming out the front. Emulate this effect by reducing bass somewhat; a low-frequency shelving filter works well, as does a high-pass filter.

OUT-OF-PHASE PICKUP EMULATION

Don’t have an out-of-phase switch? You can come close with a studio-type EQ (Fig. 3). Select both pickups at the guitar itself, and feed its output into a mixer channel. For the EQ, dial in a notch filter around 1,200Hz with a fairly broad Q (0.6 or so) and severe cut—around -15 to -18dB. Use a high shelf to boost about 8dB starting at 2kHz, and a low shelf to cut by -18dB starting at 140Hz. Tweak as needed for your particular guitar and pickups. Boost the level—like a real out-of-phase switch, this thins out the sound.

Fig. 3: The Sonitus EQ set to emulate the sound of out-of-phase guitar pickups.
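And here’s the promised sketch of the wa-wa cancellation trick (NumPy/SciPy assumed): one clone passes through a resonant peaking boost, the other is flipped in polarity, and summing them cancels everything except the boosted midrange. Sweep center_hz between roughly 200Hz and 2.2kHz for the actual wa effect; the gain and Q values mirror the settings suggested above.

import numpy as np
from scipy.signal import iirpeak, lfilter

def cancellation_wah(x, fs, center_hz=800.0, q=8.0, gain_db=6.0):
    """Sum of a peaking-EQ'd clone and a phase-flipped clone of mono x."""
    g = 10 ** (gain_db / 20)
    b, a = iirpeak(center_hz, q, fs=fs)          # resonant band at center_hz
    track1 = x + (g - 1) * lfilter(b, a, x)      # clone 1: dry + ~6dB peak boost
    track2 = -x                                  # clone 2: flipped out of phase
    return track1 + track2                       # highs/lows cancel; band remains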
THE BIG BASS ROOM BUILD-UP

When a cabinet’s close to a wall, bass waves bouncing off the wall reinforce the waves coming out the cab’s front. This can produce a “rumble” due to walls and objects resonating, which EQ can’t imitate. For a killer rumble, split your guitar signal through an octave divider, then follow the octave divider with a lowpass EQ set to cut highs starting at 120Hz; this muddies the bass frequencies further. Then, mix the octave sound about -15dB below the main signal—just enough to give a “feel” of super-low bass.
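Finally, the sketch promised in the “air” tip: a NumPy rendering of four prime-spaced taps, each with its own pan, mixed in well below the dry signal. The tap times come straight from the article; the pan positions and the roughly -14dB level are arbitrary starting points to experiment with.

import numpy as np

def room_air(x, fs, taps_ms=(7, 11, 13, 17), pans=(0.2, 0.8, 0.35, 0.65), level=0.2):
    """Add short prime-spaced reflections around mono float signal x."""
    n = len(x) + int(fs * max(taps_ms) / 1000)
    left, right = np.zeros(n), np.zeros(n)
    left[: len(x)] += x                          # dry signal up the middle
    right[: len(x)] += x
    for ms, pan in zip(taps_ms, pans):
        d = int(fs * ms / 1000)                  # prime delays avoid build-ups
        left[d : d + len(x)] += level * (1 - pan) * x
        right[d : d + len(x)] += level * pan * x
    return np.stack([left, right])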
  4. Whether for mixing or synth programming, touchscreens are having a major impact

by Craig Anderton

Mixers used to be so predictable: Sheet metal top, faders, knobs, switches, and often, a pretty hefty price tag. Sure, DAWs started including virtual mixers, but unless you wanted to mix with a mouse (you don’t), then you needed a control surface with . . . a sheet metal top, faders, knobs, switches, and a slightly less hefty price tag.

Enter the touchscreen—and the paradigm changed. Costly and noisy moving faders have been replaced by the touch of a finger on a screen, and the controller’s digital soul provides more functionality at lower cost. And if your application can talk to a wireless network, iOS devices can provide wireless control.

GENERAL CONTROL SURFACES

iPads now replace expensive mechanical control surfaces. For example, Far Out Labs’ ProRemote for the iPad is Mackie Control Universal-compatible, and offers up to 32 channels (16 simultaneous on an iPad) with metering and 100mm “virtual moving faders.” Ableton Live fans can use Liine’s Griid, a control surface for Live’s clip grid, while Neyrinck’s V-Control Pro serves Pro Tools users but is compatible with several other programs as well. The cross-platform DAW Remote HD from EUM Lab supports the Mackie Control and HUI control surface protocols, and handles pretty much any DAW that can respond to those protocols.

MIXER-SPECIFIC CONTROL SURFACES

PreSonus is big on straddling the hardware/software worlds with their StudioLive mixers. First came Virtual Studio Live software for computer control; then SL Remote (Fig. 1), which links the computer to iOS devices for wireless mixer remote control. Yes, you can play a CD over your sound system, and tweak your mixer to optimize the sound as you walk around the venue—or control your own monitor mix, EQ, compression, and a lot more from onstage.

Fig. 1: PreSonus provides extensive software support for their StudioLive mixers, including an iPad remote and personal monitoring app for all iOS devices.

PreSonus also introduced QMix, an iPhone/iPod touch app that basically replaces personal monitoring systems by letting you monitor from the mixer itself through their ingenious “wheel of me”—dial in the proportion of your channel to the rest of the mix (“more me!”). Lots of companies, including high-end ones, like iPad control—Yamaha, Allen & Heath, Behringer, Soundcraft, MIDAS, and others provide remotes for their digital mixers.

iPAD ASSISTANCE

Some mixers use the iPad as an accessory. Behringer’s XENYX USB Series mixers include an iPad dock; the mixer can send signal both to and from the iPad—use effects processing apps, spectrum analyzers, record into GarageBand, and the like. Alto Professional’s MasterLink Live mixer also has an iPad dock, with the iPad used for mix analysis, recording, and replacing a bunch of rack gear with iPad-controlled signal processing.

MIXER MEETS RECORDING

Why stop with mixing? The Alesis iO Mix looks like a dock, but it’s a four-channel recorder with an iPad control surface. Take the concept even further with WaveMachine Labs’ brilliant Auria, which packs a full-function 48-track recorder, with a complete mixer interface and plug-ins from PSP Audioware, into an iPad. It works with several tested interfaces; this sounds like science fiction, but it really works.

Windows 8 enabled multi-touch for compatible laptops and touch monitors, and Cakewalk’s SONAR adapted the technology to a DAW environment (Fig. 2).
Mixing with a touchscreen monitor is an interesting experience—I found it worked best if I laid the monitor on my desk, tilted it up at a slight angle like a regular mixer surface, and combined “swiping” for general moves and mixing along with a mouse for precise changes.

Fig. 2: Starting with Windows 8, Cakewalk SONAR supported touchscreen control.

In the “huge and not exactly cheap” touchscreen category there’s Slate Pro Audio’s Raven MTX (available exclusively from GC Pro), which has not only the same functionality as the big hardware mixers of old, but pretty much the same size as well. And for DJs, SmithsonMartin’s Emulator ELITE is a tour de force of touch control for programs like Native Instruments’ Traktor and Ableton Live.

TOUCHSCREEN “SUPERMIXERS”

Mackie’s DL1608 (Fig. 3) builds a rugged, pro-level hardware mixer exoskeleton around an iPad brain—although you can also slip out the iPad for wireless remote control.

Fig. 3: Mackie’s DL1608 builds a hardware exoskeleton around an iPad brain, with the hardware handling all I/O and audio mixing/processing.

It’s a serious mixer with the Mackie pedigree: 16 Onyx preamps with +48V phantom power, balanced outs (XLR mains, 1/4” TRS for the six auxes), and hardware DSP for the mixing and hardware effects—the iPad is solely about control. Each input has 4-band EQ, gate, and compression; the outputs have a 31-band graphic EQ and compressor/limiter, along with global reverb and delay. If you don’t need as many inputs, the 8-channel DL806 also offers iPad control.

Line 6’s StageScape M20d (Fig. 4) uses a custom 7” touchscreen for visual mixing based on a graphic, stage-friendly paradigm with icons representing performers or inputs; touching an icon opens up the channel parameters and DSP (including parametric EQs, multi-band compressors, feedback suppression on every input, and more).

Fig. 4: The Line 6 StageScape M20d uses a custom touch screen whose icons represent an actual stage setup rather than simply showing conventional channel strips.

There are also four master stereo effects engines with reverbs, delays, and a vocal doubler. You can even do multi-channel recording to a computer, SD card, or USB drive, and it accepts an iPad for remote control. Like the Mackie, it’s serious: 12 mic/line ins (with automatic mic gain setting), four additional mic ins, and balanced XLR connectors for the auto-sensing main and monitor outputs. But the M20d also incorporates the L6 LINK digital networking protocol, so the mixer can communicate with Line 6’s StageSource speakers for additional setup and configuration options.

ARE WE THERE YET?

Although touch control hasn’t quite taken over the world yet, it’s making rapid strides in numerous areas. Of course smart phones and iPads started the trend, but we’re now seeing applications from those consumer items creeping into our recording- and live performance-oriented world. Granted, sometimes touch isn’t the perfect solution—there’s something about grabbing and moving a hardware fader that’s tough to beat—so the future will likely be a continuing combination of tactile hardware and virtual software.
  5. Don’t give up on that garage sale special yet!

by Craig Anderton

So you finally tracked down an ultra-rare, ultra-retro Phase Warper stomp box manufactured back in the mid-’70s. Not surprisingly, it doesn’t seem to work very well (if at all); sitting unused in someone’s garage for over a decade has taken its toll. But if you know a few basic procedures, you can often restore that antique and give it a new life. Here are some ways that have worked well for me to restore vintage effects.

OXIDATION ISSUES

One of your biggest problems will likely be oxidation, where metal surfaces become corroded due to stuff in the air (whether pollution in LA or salt spray in Maine). Oxidation shows up as scratchy sounds in pots, intermittent problems with switches, and occasional circuit failure. Fortunately, chemicals called contact cleaners can solve a lot of these problems. I’ve had good luck with DeoxIT from Caig Laboratories, who also make an Audio Survival Kit with cleaners for plastic faders and contact restoration as well as cleaning, but there are many other types (such as “Blue Shower” contact cleaner). Here are some ways you’d typically use contact cleaners.

Scratchy pots. Pots work by having a metal wiper rub across a resistive strip, so the pot can become an “open circuit” if oxidation or film prevents these from making contact. To solve this, spray a small amount of contact cleaner into the pot’s case. With unsealed rotary pots, there’s usually an opening next to the pot’s three terminals (Fig. 1).

Fig. 1: The red line points to an opening in the pot where you can squirt contact cleaner (photo by Petteri Aimonen).

Slider (fader) pots have an obvious opening. Sealed pots are more difficult to spray; sometimes the pot can be disassembled, sprayed, and reassembled, and sometimes you can dribble contact cleaner down the side of the pot’s shaft and hope some of it makes it to the innards. Once sprayed, you have to rotate the pot several times to “smear” the cleaner, and also flush away the gunk it’s dissolving. After rotating it about 20 times or so, spray in a little more contact cleaner. If the problem returns, spray again and see if that solves things. However, at some point a pot’s resistive element becomes so worn that no contact cleaner can restore it—you then need to replace the pot with one of equivalent value.

Incidentally, people often forget that trimpots need attention too—even more so, given that they’re more exposed than regular pots. Spray them the way you would regular pots, but be very careful not to spray any trimpots that adjust internal voltages. If you have any doubts, it’s probably best to leave trimpots alone.

IC sockets. IC sockets are also subject to oxidation. A quick fix is to simply take an IC extractor (these cost about $3), clamp its sides around the chip, and pull up very slightly on the chip (Fig. 2)—just enough to loosen it, about 1/16”.

Fig. 2: An IC extractor can pull an IC out of its socket, but that’s not what you want to do—just pull up very slightly. This picture shows a digital chip so it’s easier to see the pins; older effects boxes will likely have smaller analog chips.

Spray some contact cleaner sparingly on the IC’s pins. Now push the IC back into its socket. Repeat this pull-push routine one more time, and the scraping of the chip pins against the socket, in conjunction with the cleaner, should have cleaned things enough to make good electrical contact.
Afterward, it’s important to check that all the IC pins are not bent and go straight into the socket (Fig. 3).

Fig. 3: Verify that the pins are not bent or compromised before re-applying power.

However, use extreme caution—IC pins are fragile, which is why you don’t want to pull the chip out too far, nor do this procedure too often. If you destroy an ancient IC, you may not be able to find a replacement.

Toggle switches. Rotary and pushbutton switches respond best to contact cleaners, but toggle switches are often sealed. These are not worth attempting to disassemble, but you may luck out and find a switch that does have some openings where you can squirt some contact cleaner. As with pots, work the switch several times to spread the cleaner.

Other connectors. Some effects used nylon “Molex” connectors or similar multipin connectors. Connector pins in general can develop oxidation, and are also candidates for spraying. Sometimes they lift right up from their sockets, but often there are little plastic hooks or tabs to hold the connector in place. If you encounter resistance while trying to remove the connector, don’t force it—look for whatever might be impeding its movement.

Battery connectors. Because these connectors carry the most current of anything in the effect, any oxidation here can be a real problem. Spray the connector, and snap/unsnap a battery several times. Two other battery tips: Check the battery connector tabs that mate with the battery’s positive terminal; if they don’t make good contact with the battery, push inward on the connector tabs with pliers or a screwdriver to encourage firmer contact. And if the battery has leaked over the connector, forget about trying to salvage it—solder in a new connector.

BLOW IT AWAY

Most older effects come free with large amounts of dust. Take the effect outside, plug a vacuum cleaner’s hose into the exhaust end, let the vacuum blow for a minute or so to clear out any dust stuck in the hose, then blow air on the effect to get rid of as much dust as possible. If you don’t do this, cleaning your pots and connectors may end up being a short-term solution as dust shakes loose over time and works its way back into various components.

LOOSE SCREWS

While you still have the unit apart, check whether any internal screws are loose—especially if they’re holding circuit boards in place. Enough vibration can loosen screws, and that could mean bad ground connections (many vintage effects use screws to provide an electrical path between circuit board and ground, or panel and ground). Try to turn each screw to determine if there’s any play. If there is, before tightening the screw check to see if there’s a lockwasher between the nut and the panel or other surface. If not, add a lockwasher before tightening the screw—providing the lockwasher teeth don’t contact something they shouldn’t.

THOSE #@$$#^ FOOTSWITCHES

Many old stomp boxes used push-on, push-off DPDT footswitches that were expensive then, and are even more expensive (and difficult to find) now. One source for replacements is Stewart-MacDonald’s Guitar Shop Supply.

ELECTROLYTIC CAPACITORS

Electrolytic capacitors (Fig. 4), which tend to have a blue or black “jacket” and are polarized (i.e., they have a + and – end, like a battery), contain a chemical that dries up over time.

Fig. 4: The two capacitors on the right are typical electrolytic capacitors. The three on the left are variations on ceramic capacitors.
With very old effects, or ones that have been subject to environmental extremes (e.g., being on the road with a rock and roll band), it can make a major sonic difference to replace old electrolytic capacitors with newer ones of the same value and voltage rating. Note that ceramic capacitors (which are usually disc-shaped), tantalum caps (like electrolytics, but generally smaller for a given value and with a lower voltage rating), and polyester caps like Orange Drops or mylar capacitors don’t dry up, and last a long time.

SAFER POWER

Many older AC-powered boxes did not use fuses or three-conductor AC cords. Although I’m loath to modify a vintage box too much, making a concession to safety is a different matter. Fig. 5 shows wiring for a two-wire cord compared to a fused, three-wire type. A qualified technician should be able to modify your effect to use a three-wire power cord.

Fig. 5: The 3-wire cord’s ground typically connects to the effect’s main ground point (usually located near the power supply).

Good luck! Your toughest tasks will be finding obsolete parts such as old analog delay chips and custom-made optoisolators, and dealing with effects where the manufacturer sanded off the IC identification (a primitive form of copy protection). But once you restore an effect, it’s a great feeling…and when it’s closer to like-new condition, it will probably sound better as well.
  6. Hidden features and shortcuts for Cakewalk’s Rapture

By Craig Anderton

Cakewalk’s Rapture, a cross-platform, step-sequencing-oriented synthesizer, has a ton of hidden features and shortcuts. Here are some favorites; the numbers correspond to the numbers in the screen shot.

1 BETTER SOUND QUALITY

Each Element has a Quality parameter that defaults to Std. If the patch uses pitch sweeps, change this to Hi to minimize aliasing. To further minimize aliasing, click on the Options button (the screwdriver icon toward the upper right) and check “Use sinc interpolation when freezing/rendering.”

2 RAPTURE MEETS MIDI GUITAR

Click on the Options button. Check “Set Program as Multitimbral” so Rapture elements 1-6 receive MIDI channels 1-6, which can correspond to guitar strings 1-6. For the most realistic feel, where playing a new note cuts off an existing note sounding on the same string, set each element’s Polyphony to 0 (monophonic with legato mode), and Porta Time to 0.0.

3 ENABLING PORTAMENTO

Portamento is available only if an Element’s Polyphony = 0. If Polyphony = 1, only one voice can sound (monophonic mode), but without legato or the option to add portamento.

4 MULTI OPTION DETAILS

An Element’s Multi option can thicken an oscillator without using up polyphony. However, it works only with short wavetables, not longer samples or SFZ files.

5 ACCEPTABLE FILE FORMATS

Each Element can consist of a WAV, AIF, or SFZ multisample definition file. SFZ files can use WAV, AIF, or OGG files. Samples can be virtually any bit depth or sample rate, mono or stereo, and looped or one-shot.

6 MELODIC SEQUENCES

When step sequencing Pitch, quantize to semitones by snapping to 12 levels or 24 levels (right-click in the sequencer to select). If you simply click within the step sequencer, each time you type “N” it generates a new random pattern.

7 CHAINING ELEMENTS FOR COMMON FX

You can route an oscillator (with its own DSP settings) through the next-higher-numbered Element’s EQ and Effects by right-clicking on the lower-numbered Element number and selecting “Chain to Next Element.” (You can’t do this with Element 6 because there is no higher-numbered element.)

8 KNOB DEFAULT VALUES

Double-click on a knob to return it to its default value.

9 THE PROGRAMMER’S FRIEND: THE LIMITER

When programming sounds with high resonance or distortion, enable the Limiter to prevent unpleasant sonic surprises.

10 FIT ENVELOPE TO WINDOW

If the envelope goes out of range of the window, click on the strip just above the envelope graph, and choose Fit.

11 SET ENVELOPE LOOP START POINT

Place the mouse over the desired node and type “L” on your QWERTY keyboard. Similarly, to set the Loop End/Sustain point, place the mouse over a node and type “S.”

12 CHANGE AN ENVELOPE LINE TO A CURVE

Click on an envelope line segment, and drag to change the curve.

13 CHANGE LFO PHASE

Hold down the Shift key, click on the LFO waveform, and drag left or right.

14 CHOOSING THE LFO WAVEFORM

Click to choose the next higher-numbered waveform, or right-click to choose the next lower-numbered waveform. But it’s faster to right-click above the LFO waveform display, and choose the desired LFO waveform from a pop-up menu.

15 PARAMETER KEYTRACKING

The Keytracking window under the LFO graph affects a selected parameter (Pitch, Cut 1, Res 1, etc.) based on the keyboard note. Adjust keytracking by dragging the starting and ending nodes. Example: If Cut 1 is selected and the keytracking line starts low and goes high, the cutoff will be lower on lower keys and higher with higher keys. If the line starts high and goes low, the cutoff will be higher on lower keys and lower with higher keys.
16 CHANGE KEYTRACKING CURVE

Click on the Keytrack line and drag up or down to change the shape.

17 CHOOSE AN ALTERNATE TUNING

Click on the Pitch button for the Element you want to tune. Click in the Keytrack window and select the desired Scala tuning file.

ADDING CUSTOM LFO WAVEFORMS

Store WAV files (8 to 32-bit, any sample rate or length) in the LFO Waveforms folder (located in the Rapture program folder). Name each WAV consecutively, starting with LfoWaveform020.wav, then LfoWaveform021.wav, etc. (See the sketch at the end of this article.)

SMOOTHER HALL REVERB

If you select Large Hall as a Master FX, create a smoother sound by loading the Small Hall into Global FX 1 and the Mid Hall into Global FX 2. Trim the reverb filter cutoffs to “soften” the overall reverb timbre.

THE MOUSE WHEEL

The wheel can turn a selected knob up or down, change the level of all steps in a step sequence, scroll quickly through LFO waveforms, zoom in and out on envelopes, and more. Hold the Shift key for finer resolution, or the Ctrl key for larger jumps.

FINEST KNOB RESOLUTION

Use the left/right arrow keys to edit a knob setting with five times the resolution of just click/dragging with the mouse.

NEW LOOK WITH NEW SKINS

In the Rapture folder under Program Files, the Resources folder has bit-mapped files for Rapture graphic elements (e.g., background, knobs, etc.). Modify these to give Rapture a different look.

COLLABORATING ON SOUNDS

To exchange files with someone who doesn’t have the same audio files used for an SFZ definition file, send the audio files separately and have your collaborator install them in Rapture’s Sample Pool library. This is where Rapture looks for “missing” SFZ files.
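Here’s the sketch promised in the custom-LFO tip: a short Python example (NumPy/SciPy assumed) that generates a single-cycle shape and writes it using the sequential naming described above. The folder path is an assumption; point it at your own Rapture install.

import numpy as np
from scipy.io import wavfile

folder = r"C:\Program Files\Cakewalk\Rapture\LFO Waveforms"  # assumed install path

n = 2048                                    # one cycle; any length is allowed
ramp = np.linspace(-1.0, 1.0, n, endpoint=False)
shape = np.sign(ramp) * ramp ** 2           # a "squared ramp"; any shape works
wav16 = np.int16(shape * 32767)             # 16-bit sits within the 8-32-bit range
wavfile.write(folder + r"\LfoWaveform020.wav", 44100, wav16)  # next: ...021.wav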
  7. Get hands-on control over your DAW

by Craig Anderton

The Mackie Control became such a common hardware controller that most DAWs included “hooks” to allow them to be controlled by Mackie’s hardware. But that also created another trend—other hardware controllers emulating the Mackie protocol so that non-Mackie controllers could work with these same DAWs; from the DAW’s standpoint, they appear identical to the Mackie Control. These controllers hook up through MIDI.

So, the basic procedure for having a DAW work with a Mackie-compatible device is: Assign a MIDI input to receive messages from the controller. If the controller is bi-directional (e.g., it has moving faders, so they need to receive position data from the DAW), you’ll need to assign a MIDI output as well; this may also be the case if the DAW expects to see a bi-directional controller. Choose Mackie Control as a control surface within the DAW itself. If a program says there’s no Mackie Control connected (e.g., Acid Pro), there will often be an option to tell the program it’s an emulated Mackie Control.

Any controller faders usually control channel level, while rotaries control panpots. Buttons typically handle mute or solo, but may handle other functions, like record enable; this depends on how the DAW interprets the Mackie Control data. Also, there are typically Bank shift up/down and Track (also called Channel) shift up/down buttons (labeled Page and Data respectively on the Graphite 49). The Bank buttons change the group of 8 channels being controlled (e.g., from 1-8 to 9-16), while the Track buttons move the group one channel at a time (e.g., from 1-8 to 2-9). Many controllers have transport buttons as well (play, stop, rewind, etc.).

This article tells how to set up a basic Mackie Control that doesn’t use motorized faders. The Mackie Control protocol is actually quite deep, and some programs allow custom assignments for the various controls. That requires much more elaboration, so we’ll just concentrate on the basics here.

We’ll use Samson’s Graphite 49 controller as our typical Mackie Control, but these same procedures work with pretty much any Mackie Control-compatible device. Note that the Graphite 49 has five virtual MIDI ports, and all remote control data is transmitted over Graphite’s virtual MIDI port #5. This allows the other ports to carry data like keyboard notes and controller positions to instruments and other MIDI-aware software.

We’ll assume you’ve loaded the preset that corresponds to the programs listed below. However, note that you may be able to call up a different preset for slightly different functionality. For example, if a preset’s upper row of buttons controls solo, they can often control record enable if you call up a preset where the upper row of buttons controls record enable (e.g., the Logic preset).

APPLE LOGIC PRO

Graphite 49 looks like a Logic Control; as that’s the default controller, you usually won’t have to do any setup. However, if this has been changed for some reason, go Logic Pro > Preferences > Control Surfaces > Setup. In the Setup window, click the New pop-up menu button and choose Install. Click on the Mackie Logic Control entry, click on the Add button, click OK, and you’re done. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Record Enable, and the lower switches control Mute.

AVID PRO TOOLS

Go Setup > MIDI > Input Devices. Make sure MIDIIN5 (Samson Graphite 49) is checked, then click OK.
Then go Setup > Peripherals. Click the MIDI Controllers tab. For Type, choose HUI. Set Receive From to MIDIIN5 (Samson Graphite 49). Send To must be set to something, so choose MIDIOUT2 (Samson Graphite 49). The faders, rotaries, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute. However, the Bank and Channel buttons don’t work with the HUI protocol.

ABLETON LIVE

In Options > Preferences, choose MackieControl for Control Surface, and set Input to MIDIIN5 (Samson Graphite 49); Output doesn’t need to be assigned. In the MIDI Ports section, turn Remote On for the input that says MackieControl Input MIDIIN5 (Samson Graphite 49). The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Track Activator buttons.

CAKEWALK SONAR

In Edit > Preferences > MIDI Devices, set the MIDI In port to MIDIIN5 (Samson Graphite 49) and the MIDI Out port to MIDIOUT2 (Samson Graphite 49). Click Apply. Click on Control Surfaces under MIDI, then click the Add New Controller button in the upper right. For Controller/Surface, choose Mackie Control and verify that the Input and Output Ports match your previous MIDI port selections. Click OK, click Apply, click Close. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute.

MOTU DIGITAL PERFORMER

Go Setup > Control Surface Setup. Click the + sign to add a driver, and select Mackie Control. Under Input Port, choose Samson Graphite 49 Controller (channel 1). Click OK. The faders, rotaries, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute.

PRESONUS STUDIO ONE PRO

Under Studio One > Options > External Devices, choose Add. Select Mackie Control. Set Receive From to MIDIIN5 (SAMSON Graphite 49). Send To can be set to None. Click OK, then click OK again. The faders, rotaries, Bank, Track, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute.

PROPELLERHEAD REASON

Mackie Control works somewhat differently with Reason from a conceptual standpoint because, until Record was integrated with Reason in Version 6, Reason was not a traditional DAW. As a result, Graphite sends out specific control signals that apply to whatever device has the focus. To set up, go Edit > Preferences and click the Control Surfaces tab. Click the Add button; select Mackie as the manufacturer, and Control for the model. Under input, select MIDIIN5 (Samson Graphite 49). For output, select MIDIOUT2 (Samson Graphite 49). Click OK, and make sure Standard is checked.

It’s easiest if you also use Graphite 49 as the master keyboard controller; go Options > Surface Locking and for Lock to Device, select Follow Master Keyboard. Also, create a track for any device you want to control, including processors or devices like the Mixer 14:2. When you click on that track, Graphite 49 will control the associated device. If you choose an Audio Track, slider S1 controls level, the F1 button controls solo, F9 controls mute, and rotary E8 controls pan. If the 14:2 Mixer has the focus, the faders, rotaries, and buttons work as expected (as does the transport), although Bank and Channel Shift commands aren’t recognized. If SubTractor has the focus, the controls affect various SubTractor parameters.
There’s a bit of trial and error involved with the various devices to find which Graphite 49 controls affect which parameters. You can always create custom presets to control specific instruments, but this goes beyond the scope of this article, as it involves delving into Reason’s documentation and assigning specific controls to specific MIDI channels and controller numbers. Note that you can also lock the Graphite 49 to a specific device so that it will control that device regardless of which track is selected: go Options > Surface Locking and choose the device to be locked.

SONY ACID PRO

Under Options, check External Control. Under Options > Preferences, click the MIDI tab, check the MIDIIN5 (Samson Graphite 49) box under “Make these devices available for MIDI input,” then click Apply. In the External Control and Automation tab, under Available Devices choose Mackie Control and click on Add. Double-click in the Status field and in the dialog box that opens, in the Device Type field choose Emulated Mackie Control Device. Select MIDIIN5 (Samson Graphite 49) for the MIDI input if it is not already selected. Click on OK, then click on OK in the next dialog box. The faders, rotaries, and Transport buttons work as expected, but only the first eight channels can be controlled, and it is not possible to do Bank or Track shifting. Graphite 49’s upper switches control Solo, and the lower switches control Mute.

SONY VEGAS PRO

The procedure is identical to Acid Pro, except that the Status field in the External Control and Automation page updates correctly after selecting Emulated Mackie Control Device instead of saying “No Mackie Devices Detected.” Note that only audio channels are controlled.

STEINBERG CUBASE

Go Devices > Device Setup. Click the + sign in the upper left corner and select Mackie Control from the pop-up menu. Under MIDI Input, select MIDIIN5 (Samson Graphite 49), then click Apply. Click OK. The faders, rotaries, and Transport buttons work as expected. Graphite 49’s upper switches control Solo, and the lower switches control Mute. However, I couldn’t figure out how to get Cubase to recognize Graphite 49’s Bank and Channel buttons; if anyone knows, please add a comment, and I’ll modify this article. Cubase offers a very cool feature: If you check Enable Auto Select, when you move a Graphite 49 fader it automatically selects that channel.
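For the curious, every setup above rides on the same underlying MIDI data. Here’s a rough Python sketch, using the mido library, of the kind of messages a Mackie-compatible controller sends: faders as 14-bit pitch-bend messages (one MIDI channel per fader) and transport buttons as note events. The port name and Play note number are assumptions based on commonly published Mackie Control mappings, so verify them against your own device before relying on this.

import mido

out = mido.open_output("MIDIOUT2 (Samson Graphite 49)")  # assumed port name

def send_fader(fader_index, position):
    """Move fader 0-7; position runs 0.0 (bottom) to 1.0 (top)."""
    value = int(position * 16383) - 8192      # map to the 14-bit pitch-bend range
    out.send(mido.Message("pitchwheel", channel=fader_index, pitch=value))

PLAY_NOTE = 0x5E  # commonly cited Mackie Control note number for Play (assumed)

def press_play():
    out.send(mido.Message("note_on", note=PLAY_NOTE, velocity=127))
    out.send(mido.Message("note_off", note=PLAY_NOTE, velocity=0))

send_fader(0, 0.75)  # move channel 1's fader to three-quarters
press_play()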
  8. Is your monitoring setup honest with you about your music?

by Craig Anderton

All the effort you put into recording, overdubbing, and mixing is for nothing if your monitoring system isn’t honest about the sounds you hear. The issue isn’t simply the speakers; the process of monitoring is deceptively complex, as it involves your ears, the acoustics of the room in which you monitor, the amp and cables that drive your monitors, and the speakers themselves. All of these elements work together to determine the accuracy of what you hear. If you’ve ever done a mix that sounded great on your system but fell apart when played elsewhere, you’ve experienced what can go wrong with the monitoring process—so let’s find out how to make things right.

HEARING VARIABLES

Ears are the most important components of your monitoring system. Even healthy, young ears aren’t perfect, thanks to a phenomenon quantified by the Fletcher-Munson curves (Fig. 1).

Fig. 1: The Fletcher-Munson curves indicate how the ear responds to different frequencies.

Simply stated, the ear has a midrange peak around 3-4kHz that’s associated with the auditory canal’s resonance, and does not respond as well to low and high frequencies, particularly at lower volumes. The response comes closest to flat at relatively high levels. The “loudness” control on hi-fi amps attempts to compensate for this by boosting the highs and lows at lower levels, then flattening out the response as you turn up the volume.

Another limitation is that a variety of factors can damage your ears—not just loud music, but excessive alcohol intake, deep sea diving, and just plain aging. I’ve noticed that flying temporarily affects high frequency response, so I wait at least 24 hours after getting off a plane before doing anything that involves critical listening. The few times I’ve broken that rule, mixes that seemed perfectly fine at the time played back too bright the next day.

It’s crucial to take care of your hearing so at least your ears aren’t the biggest detriment to monitoring accuracy. Always carry the kind of cylindrical foam ear plugs you can buy at sporting goods stores so you’re ready for concerts, using tools (the impulse noise of a hammer hitting a nail is major!), or being anywhere your ears are going to get more abuse than someone talking at a conversational level. (Note that you should not wear tight-fitting earplugs on planes. A sudden change in cabin pressure could cause serious damage to your eardrums.) You make your living with your ears; care for them.

ROOM VARIABLES

As sound bounces around off walls, the reflections become part of the overall sound, creating cancellations and additions depending on whether the reflections are in-phase or out-of-phase compared to the source signal reaching your ears. These frequency response anomalies affect how you hear the music (Fig. 2).

Fig. 2: If a reflection is out of phase with the original signal, there will be some degree of cancellation.

Also, placing a speaker against a wall seems to increase bass. This is because any sounds emanating from the rear of the speaker, or leaking from the front (bass frequencies are very non-directional), bounce off the wall. Because a bass note’s wavelength is so long, the reflection will tend to reinforce the main wave (Fig. 3).

Fig. 3: Most anomalies with room acoustics happen at low frequencies.

As the walls, floors, and ceilings all interact with speakers, it’s important that speakers be placed symmetrically within a room. Otherwise, if (for example) one speaker is 3 feet from a wall and another 10 feet from a wall, any reflections will be wildly different and affect the response.
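To put a number on the cancellation in Fig. 2, consider a reflected path that’s longer than the direct path: the extra travel time creates comb-filter nulls at odd multiples of a base frequency. A back-of-envelope Python sketch, assuming a speed of sound of about 343 m/s:

def reflection_nulls(extra_m, count=5, c=343.0):
    """First few cancellation frequencies (Hz) for a path difference in meters."""
    delay_s = extra_m / c
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

# A reflection path 1 m longer than the direct path puts the first null
# near 172 Hz, squarely in the low end where rooms misbehave most.
print([round(f) for f in reflection_nulls(1.0)])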
The subject of acoustically treating a room is beyond the scope of this article. Hiring a professional consultant to “tune” your room with bass traps and similar mechanical devices could be the best investment you ever make in your music.

WHAT ABOUT TUNING A ROOM WITH GRAPHIC EQUALIZATION?

Some studios use graphic equalizers to “tune” rooms, but this is not necessarily a cure-all. Equalizer-based room tuning involves placing a mic where you would normally mix, feeding pink noise or test tones through the system, and tuning an equalizer (which patches in as the last device before the power amp) for flat response. Several companies make products to expedite this process, such as RTAs (Real Time Analyzers) that include the noise generator, along with a calibrated mic and readout. You then diddle the sliders on a 1/3-octave graphic EQ to compensate for anomalies that show up on the readout. Some devices combine the RTA and EQ for one-stop analysis and equalization.

While this sounds good in theory, there are two main problems: If you deviate from the “sweet spot” where the microphone was placed, the frequency response will change. And heavily equalizing a poor acoustical space simply gives you a heavily-equalized, poor acoustical space.

However, newer methods of room tuning have been developed that take advantage of computer power, such as JBL’s MSC-1 and IK Multimedia’s ARC (Fig. 4).

Fig. 4: IK Multimedia’s ARC is a more evolved version of standard room tuning; it’s effective over a wider listening area than older methods, and has a more sophisticated feature set.

The most important point to remember about any kind of electronic room tuning is that, like noise reduction (which works best on signals that don’t have a lot of noise), room tuning works best on rooms that don’t have serious response anomalies. It’s best to make corrections acoustically to minimize standing waves, check for phase problems, experiment with speaker placement, and learn your speaker’s frequency response. Once you have your room as close to ideal as possible, a device like ARC can make it even better.

NEAR-FIELD MONITORS

Traditional studios have large monitors mounted at a considerable distance (6 to 10 ft. or so) from the mixer, with the front flush to the wall, and an acoustically-treated control room to minimize response variations. The “sweet spot”—the place where room acoustics are most favorable—is designed to be where the mixing engineer sits at the console. However, in smaller studios, where space and budget are at a premium, near-field monitors have become the standard way to monitor (Fig. 5).

Fig. 5: There are tons of options for near-field monitors; KRK’s Rokit series monitors have been very popular for project studios.

With this technique, small speakers sit around 3 to 6 feet from the mixer’s ears, with the head and speakers forming a triangle (Fig. 6). The speakers should point toward the ears and be at ear level; if slightly above ear level, they should point downward toward the ears.

Fig. 6: Near-field monitor placement is important to achieve the most accurate monitoring.

Near-field monitors minimize the impact of room acoustics on the overall sound, as the speakers’ direct sound is far louder than the reflections coming off the room surfaces. They also do not have to produce a lot of power because of their proximity to your ears, which also relaxes the requirements for the amps feeding them.
However, placement in the room is still an issue. If placed too close to the walls, there will be a bass build-up. Although you can compensate with EQ (or possibly controls on the speakers themselves), the build-up will be different at different frequencies. High frequencies are not as affected because they are more directional. If the speakers are free-standing and placed away from the wall, back reflections from the speakers bouncing off the wall could affect the sound. You’re pretty safe if the speakers are more than 6 ft. away from the wall in a fairly large listening space (this places the first frequency null point below the normally audible range), but not everyone has that much room. My crude solution is to mount the speakers a bit away from the wall on the same table holding the mixer, and pad the walls behind the speakers with as much sound-deadening material as possible.

Nor are room reflections the only problem; with speakers placed on top of a console, reflections from the console itself can cause inaccuracies. To get around this problem, I use a relatively small main mixer, so the near-fields fit to the side of the mixer and are slightly elevated. This makes as direct a path as possible from speaker to eardrum.

ANATOMY OF A NEAR-FIELD MONITOR

Near-field monitors are available in a variety of sizes and at numerous price points. Most are two-way designs, with (typically) a 6” or 8” woofer and a smaller tweeter. While a 3-way design that adds a separate midrange driver might seem like a good idea, adding another crossover and speaker can complicate matters. A well-designed two-way system is better than a so-so 3-way system. Although larger speaker sizes may be harder to fit in a small studio, the increase in low-frequency accuracy can be substantial. If you can afford (and your speaker can accommodate) an 8” woofer, it’s worth the stretch.

There are two main monitor types, active and passive. Passive monitors consist of only the speakers and crossovers, and require outboard amplifiers. Active monitors incorporate any amps needed to drive the speakers from a line-level signal. With powered monitors, the power amp and speaker have hopefully been tweaked into a smooth, efficient team. Issues such as speaker cable resistance become moot, and protection can be built into the amp to prevent blowouts. Powered monitors are often bi-amped (e.g., a separate amp for the woofer and tweeter), which minimizes intermodulation distortion and allows for tailoring the crossover points and frequency response for the speakers being used. If you hook up passive monitors to your own amps, make sure the amps have adequate headroom. Any clipping generates gobs of high-frequency harmonics, and sustained clipping can burn out tweeters.

SO WHICH MONITOR IS BEST?

You’ll see endless discussions on the net as to which near-fields are best. In truth, the answer may rest more on which near-field works best with your listening space and imperfect hearing response. How many times have you seen a review of a speaker where the person notes with amazement that some new speaker “revealed sounds not heard before with other speakers”? This is to be expected. The frequency response of even the best speakers differs sufficiently that some speakers will indeed emphasize different frequencies compared to other speakers, essentially creating a different mix.
Although it’s a cliché that you should audition several speakers and choose the model you like best, you can’t choose the perfect speaker, because such an animal doesn’t exist. Instead, you choose the one that colors the sound in the way you prefer. Choosing a speaker is an art. I’ve been fortunate enough to hear my music over some hugely expensive systems in mastering labs and high-end studios, so my criterion for choosing a speaker is simple: whatever makes my “test” CD sound the most like it did over the big-bucks speakers wins. If you haven’t had the same kind of listening experiences, book 30 minutes or so at some really good studio (you can probably get a price break since you’re not asking to use a lot of the facilities) and bring along one of your favorite CDs. Listen to the CD and get to know what it should sound like, then compare any speakers you audition to that standard.

One caution: if you’re comparing two sets of speakers and one set is even slightly louder than the other, you’ll likely choose the louder one as sounding better. To make a valid comparison, match the speaker levels as closely as possible (see the sketch following the headphones section below).

A final point worth mentioning is that speakers have magnets which, if placed close to CRT screens, can distort the display. Magnetically shielded speakers solve this problem, although this has become much less of an issue as LCD screens have pretty much taken over from CRTs.

LEARNING YOUR SPEAKER AND ROOM

Ultimately, because your own listening situation is imperfect, you need to “learn” your system’s response. For example, suppose you mix something in your studio that sounds fine, but sounds bass-heavy in a high-end studio with accurate monitoring. That means your monitoring environment is shy on the bass, so you boosted the bass to compensate (this is a common problem in project studios with small rooms). With future mixes, you’ll know to mix the bass lighter than normal. Compare midrange and treble as well. If vocals jump out of your system but lay back in others, then your speakers might be “midrangey.” Again, compensate by mixing midrange-heavy parts back a little bit.

You also need to decide on a standardized listening level to help combat the influence of the Fletcher-Munson curves. Many pros monitor at low levels when mixing, not just to save their ears, but also because if something sounds good at low volume, it will sound great when you really crank it up. However, this also means that the bass and treble might be mixed up a bit more than they should be to compensate for the Fletcher-Munson curves. So, before signing off on a mix, check the sound at a variety of levels. If at loud levels it sounds just a hair too bright and boomy, and at low levels it sounds just a bit bass- and treble-light, that’s probably about right.

WHAT ABOUT HEADPHONES?

Musicians on a budget often wonder about mixing over headphones, as $100 will buy a quality set of headphones, but not much in the way of speakers. Although mixing exclusively on headphones isn’t recommended by most pros, keep a good set of headphones around as a reality check (not the open-air type that sits on your ear, but the circumaural kind that totally surrounds your ear). Sometimes you can get a more accurate bass reading using headphones than you can with near-fields, and when “proofing” your tracks, phones will show up imperfections you might miss with speakers. Careful, though: it’s easy to blast your ears with headphones and not know it.
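Here’s the level-matching sketch mentioned above, assuming you’ve captured short mono recordings of the same material at the listening position through each pair of monitors; it returns the gain in dB to apply to the second system before comparing.

import numpy as np

def match_gain_db(ref, other):
    """dB gain to apply to `other` so its RMS loudness matches `ref`."""
    rms = lambda s: np.sqrt(np.mean(np.square(s)))
    return 20 * np.log10(rms(ref) / rms(other))

# Example: if system B measures 2 dB hotter, trim it by -2 dB before auditioning.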
SATELLITE SYSTEMS

“Satellite” systems combine tiny monitors that can’t really produce adequate bass on their own with a subwoofer — a fairly large speaker, fed from a frequency crossover, that reproduces only the bass region. The subwoofer usually mounts on the floor against a wall; placement isn’t overly critical because bass frequencies are relatively non-directional. Although satellite-based systems can make your computer audio sound great, or allow a less intrusive hi-fi setup in a tight living space, I wouldn’t mix a major label project over them. Perhaps you could learn these systems over time as well, but I personally have difficulty with the disembodied bass for critical mixes.

However, using subwoofers with monitors that have decent bass response is another matter (Fig. 7).

Fig. 7: The PreSonus Temblor T10 active subwoofer has a crossover that’s adjustable from 50 to 300Hz.

The response of near-field monitors often starts to roll off around 50-100Hz, which diminishes the strength of sub-bass sounds. Sounds in this region are a big part of a lot of dance music, and it’s important to know what’s going on down there. In this case, the subwoofer simply gives a more accurate indication of what’s happening in the bass region.

STRENGTH IN NUMBERS

Before signing off on a mix, listen through a variety of systems — car stereo speakers, hi-fi bookshelf speakers, big-bucks studio speakers, boom boxes, headphones, etc. This gives an idea of how well the mix will translate over a variety of systems. If the mix works, great — mission accomplished. But if it sounds overly bright on 5 out of 8 systems, pull back the brightness just a bit. The mastering process can compensate for some of this, but mastering works best with mixes that are already good.

Many “pro” studios will have big, expensive speakers, a pair of near-fields for reality testing, and some “junk” speakers sitting around to check what a mix will sound like over, say, a cheap TV. Switching back and forth among the various systems can help “zero in” on the ultimate mix that translates well over any system.

The more you monitor, the more educated your ears will become. Also, the more dependent they will become on the speakers you use (some producers carry their favorite monitor speakers to sessions so they can compare the studio’s speakers to speakers they already know well). But even if you can’t afford the ultimate monitoring setup, with a bit of practice you can learn your system well enough to produce a good-sounding mix that translates well over a variety of systems – which is what the process is all about.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  9. Just because you're faking it doesn't mean you have to sound fake...

by Craig Anderton

I travel, so I stay in a lot of hotels. This means that in the last decade, I’ve seen 9,562 musicians singing/playing to a drum machine, and 3,885 synth duos where a couple of musicians play along with a sequencer or sampler. I’ve even been in that position myself a few times.

Audiences have come to accept drum machines, but one person on stage being backed up by strings, horns, pianos, and ethereal choirs rings false, and the crowd knows it. Yet you don’t want to lose the audience due to monotony. Unless you’re a spellbinding performer, hearing the same voice and guitar or keyboard for an entire evening can wear out your welcome. In the process of playing live, I’ve learned a bit about what does — and doesn’t — work when doing a MIDI-based act. Hopefully some of the following ideas will apply to your situation too.

SEQUENCERS: NOT JUST NOTES

One way to avoid resorting to “fake” sounds is to maximize the “real” sounds you already have. For me as a guitar player, that means processing my guitar sound. Switching among a variety of timbres helps keep interest up without having to introduce new instruments. However, this creates a problem: using footswitches and pedals to change sounds diverts your attention from your playing, since you now have to worry about hitting the right button at the right time.

For me, the solution is using amp sims that can accept MIDI continuous controllers to change several parameters independently. This is where a sequencer really shines — in addition to driving instrument parts, it can generate MIDI messages that change your sound automatically, with no pedal-pushing required (for a rough idea of what this kind of controller data looks like, see the sketch at the end of this article). Amp sims running on a laptop are often ideal for this application because they tend to have very complete MIDI implementations, but many processors (Fig. 1) also accept continuous controller commands. If not, they will likely be able to handle program changes, which can still be useful.

Fig. 1: Line 6’s POD HD500 can accept MIDI continuous controller commands that change selected parameters in real time.

For example, on one of my tunes the sequencer sends continuous controller data to a single program to vary delay feedback, delay mix, distortion drive, distortion output, and upper midrange EQ. As the song progresses, the various settings “morph” from one setting to another — from rhythm guitar with no delay, low distortion drive, and flat EQ all the way to lead guitar with delay, lots of distortion, and a slight upper midrange boost. Within the main guitar solo itself, the delay feedback increases until the solo’s last note, at which point it goes to maximum so the echo “spills” over into the following rhythm part. Not only does this sound cool, it adds an interactive element. It’s not the same as playing with other human beings, but I can still play off the changes. What’s more, it doesn’t seem fake to the audience, because all the sounds have a direct correlation to what’s being played.

It’s true that using a sequencer ties you to a set arrangement, with very few exceptions. However, although sections of the song are limited to a certain number of measures, you can nonetheless play whatever you want within those measures, so solos can still be different each time you play them.

THE VOCAL ANGLE

I really like the DigiTech and TC-Helicon series of processors for live vocals. Being able to generate harmonies is cool enough, but there’s a lot of MIDI power in some of these boxes (Fig. 2), and you can do the same type of MIDI program change or continuous controller tricks as those mentioned above for guitar.

Fig. 2: DigiTech’s Vocalist Live Pro can use MIDI continuous controller and program changes to alter a wide range of parameters.

Once again, even though you’re generating a big sound, it’s all derived from your voice, so the audience can correlate what it hears to what’s seen on stage.

THE SAMPLER CONNECTION

A decent sampler (or workstation with sampling capabilities; see Fig. 3) that includes a built-in MIDI sequencer is ideal as a live music backup companion. It can hold any kind of drum sounds, hook up to external storage for fast loading and saving of sounds and songs, and generate the continuous controller data needed to control signal processors with its sequencer.

Fig. 3: Yamaha’s Motif XF isn’t just a fine synthesizer/workstation, but includes flash memory for storing and playing back custom samples.

Samplers are also great because you can toss in some crowd-pleasing samples when the natives get restless. A few notes from a TV theme song, a politician making a fool of himself, a bit from a ’50s movie — they’re all fun. And to conserve memory, you can usually get away with sampling them at a pretty low sampling frequency.

When sampling bass parts for live use, it’s often best to avoid tones that draw a lot of attention to themselves, like highly resonant synth bass or slap bass. A round, full line humming along in the background fills the space just fine.

PLAYING WITH MYSELF

When I switch over to playing a lead after playing rhythm guitar, it leaves a pretty big hole. To fill the space without resorting to sequencing other instruments, I sample some power chords and rhythm licks from my guitar, and sequence them behind solos. This doesn’t sound too fake because the audience has already heard these sounds, so they just blend right in. Furthermore, the background sounds don’t have to be mixed very high; adding just a bit creates a texture that fills out the sound nicely.

MULTI-INSTRUMENTALISTS

One of my favorite solo acts is a multi-instrumentalist in Vancouver named Tim Brecht who plays guitar, keyboards, drums, flute, and several percussion instruments during the course of his act (he also does some interesting things with hand puppets, but that’s another story). So when the sequenced drums play, people can accept it because they know he can play drums. Similarly, on some songs I’ll play a keyboard part instead of guitar. This not only provides a welcome break, but when I sequence the same keyboard sound as a background part later on, it’s no big deal because the audience has already been exposed to it and seen me play it.

FOR BETTER DRUMS, USE A DRUMMER

Okay, maybe you can’t convince your favorite drummer friend to come along to the gig. But if you can have a real drummer program your drum sequences, it really does make a difference.

MIDI GUITAR?

I’m seeing more people using MIDI guitar live (Fig. 4), but not in heavy-metal or techno bands: these are typically solo acts in places like restaurants.

Fig. 4: Fishman’s TriplePlay retrofits existing guitars for MIDI, and transmits the signals wirelessly to a computer.

They use MIDI guitar because again, it reduces the fake factor. Even if you’re playing other instrument sounds, people can see that what you’re playing is creating the sound.
Some changes can be more subtle, like triggering a sampler loaded with a variety of guitar samples so you can go from acoustic, to electric, to 12-string, just by calling up different patches. Being able to layer straight guitar and synthesized sounds is a real bonus, as it reinforces the fact that the synth sounds relate to the guitar.

IT’S THE MUSIC THAT MATTERS

All of these tips have one goal: to make it easier to play live (in spite of the technology!), and to avoid sounding overly fake. People want to see you jumping around and having a good time, not loading sequences and fiddling with buttons. And the less equipment you have to lug around, the better — both for reliability and minimal setup hassles.

When MIDI came out, it changed my performance habits forever. If nothing else, I haven’t done a footswitch tap dance while balancing on a volume pedal in years — and I hope never to do one again!

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
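Here is the controller-morphing sketch promised above: roughly what a sequencer track is doing when it ramps one amp-sim parameter over time. A minimal sketch assuming the third-party mido library; the port name and CC number are hypothetical and depend entirely on how your particular sim is mapped:

```python
# A minimal sketch (assuming the "mido" library, a hypothetical port name, and
# a hypothetical CC assignment) of ramping a delay-feedback parameter from
# 0 to 127 over eight beats, the way a sequencer track would.

import time
import mido  # pip install mido python-rtmidi

PORT_NAME = "Amp Sim MIDI In"   # hypothetical; check mido.get_output_names()
CC_DELAY_FEEDBACK = 21          # hypothetical CC number; map it in your sim

with mido.open_output(PORT_NAME) as port:
    beats, bpm = 8, 120
    step = (beats * 60.0 / bpm) / 128       # seconds between CC increments
    for value in range(128):                # ramp the parameter 0 -> 127
        port.send(mido.Message("control_change", channel=0,
                               control=CC_DELAY_FEEDBACK, value=value))
        time.sleep(step)
```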
  10. Create your own drum loops, and you don't have to settle for what other people think would work well with your music

by Craig Anderton

Sure, there are some great sample libraries and virtual instruments available with fabulous drum loops. But it always seems that I want to customize them, or do something like add effects to everything but the kick drum. Fortunately, many libraries also include samples of the individual drums used to create the loops, so you can always stick ’em in your sampler and overdub some variations. But frankly, I find that process a little tedious, and decided that it would be easier (and more fun) in the long run just to make my own customizable drum loops. However, there’s more than one way to accomplish that task; I tried several approaches, and here are some that worked for me.

ASSEMBLING FROM SAMPLES

For loops you can edit easily, you can import samples into a multitrack hard disk recording program, arrange them as desired at whatever tempo you’d like, bounce them together to create loops, then save the bounced tracks as WAV or AIFF files for use in tunes. Although you can create loops at any tempo, if you plan to turn them into “stretchable” loops with Acidization or REX techniques, I recommend a tempo of 100BPM (see the article “How to Create Your Own Loops from an Audio File”). Let’s go through the process, step by step.

1. Collect the drum samples for your loop (Fig. 1) and create the tracks to hold them. Before bringing in any samples, consider saving this project as a template to make life easier if you want to make more loops in the future. (In addition to sample libraries, there’s an ancient, free Windows program called Stomper that can generate some very cool analog drum sounds.)

Fig. 1: A template set up in Cakewalk Sonar for one-measure loops. Samples (from the Discrete Drums Series 1 library) can be dragged from the browser (right pane) into the track view.

2. In your DAW, set the desired tempo and “snap” value (typically 16th notes, but your mileage may vary). Even if you plan to “humanize” drum hits instead of having them snap to a grid, I find it’s easier to start with them snapped, and then add variations later.

3. Import and place samples (Fig. 2).

Fig. 2: The samples have all been placed to create the desired loop. Volume and pan settings have also been set appropriately.

I prefer to place each sound on its own track, although sometimes it’s helpful to spread the same sound across different tracks if specific sounds need to be processed together. For example, if you have a techno-type loop with a 16th-note high-hat part and want to accent the hats that fall on each quarter note, place the accented hats on their own track while the other hats go on a separate track. That way it’s easy to lower the level of the non-accented hats without affecting the ones on the quarter notes.

4. Bounce and save. This is the final part of the process. One option is to simply bounce all parts together into a mono or stereo track that you can save as a WAV or AIFF file. But I also make a stereo mix of all sounds except the kick, in case I want to replace the kick in some applications, or add reverb to all drums except the kick. I’ll often save a separate file for each drum sound as well, and all these variations go into a dedicated folder for that loop.

THE VALUE OF VARIATIONS

The advantage of giving each sound its own file is that it allows lots of flexibility when creating variation loops.
Here are a few examples:

Slide a track back and forth a bit in time for “feel factor” applications. For example, move the snare ahead in time for a more “nervous” feel, or behind the beat for a more “laid back” effect.

Change pitch in a digital audio editor (this assumes you can maintain the original duration) to create timbral variations.

Copy and paste to create new parts. For example, a common electronica fill is to have a snare drum on every 16th note that increases linearly in level over one or two measures. If a snare is on 2 and 4, you can copy and offset the track until you have a snare on every 16th note. Premix the tracks together, fade in the level over the required number of measures, and there’s your fill (see the sketch at the end of this article).

Drop out individual tracks to create remix variations. Having each sound on its own track makes it easy to drop out and add in parts during the remix process.

Create “virtual aux busses” by bouncing together only the sounds you want to process. Suppose you want to add ring modulation to the toms and snare, but nothing else. Mute all tracks except the toms and snare, premix them together, import the file into a digital audio editing program capable of doing ring modulation, save the processed file, then import it in place of the existing tom and snare tracks.

TRICKS WITH COMPLETE LOOPS

After you have a collection of loops, it’s time to string them together and create the rhythm track. Here are some suggested variations.

Copy a loop, then transpose it down an octave while preserving duration. This really fattens up the sound if you mix the transposed loop behind the main loop.

When trying to match loops that aren’t at exactly the same tempo, I generally prefer to shift pitch to change the overall length rather than use time compression/expansion, which usually messes with the sound more (especially with program material). This only works if the tempo variation isn’t too huge.

Take a percussion loop (e.g., tambourines, shakers, etc.) that’s more accent-oriented than rhythmic, then truncate an eighth note or quarter note from the beginning. Because the loop duration will be shorter than the main loop, it repeats a little sooner each time the main loop goes around, thus adding variations. If you can’t loop individual tracks differently, then copy and paste the truncated loop, placing the beginning of each new copy up against the end of the previous one.

Copy, offset, and change the levels of loops to create echo effects. Eighth-note and sixteenth-note echoes work well, but sometimes triplets are the right tool for the job.

APPLIED LOOPOLOGY

Of course, using drum loops can get a lot more involved than this, such as mixing and matching loops from different sample libraries. However, one problem is that loops from different sources are often equalized differently. Now’s a good time to use your digital audio editor or DAW’s spectrum analysis option to check the overall spectral content of each loop, so you can quickly compensate with equalization. Sure, you can do it by ear too, but spectrum analysis can sometimes save you time by pointing out where the biggest differences lie.

Well, those are enough tips for now. The more creative you get with your loops, the more fun you (and your listeners) will have. Happy looping! looping! looping! looping! looping! looping!

Craig Anderton is Editor Emeritus of Harmony Central.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
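Following up the loop-building article above (and the fill sketch it promises): here is a minimal script-based version of the same process, assembling a one-measure 100BPM loop from one-shot samples and adding the linearly rising 16th-note snare fill. My own illustration, not the article’s method; it assumes numpy and soundfile, mono WAVs at a common sample rate, and hypothetical file names:

```python
# A minimal sketch of assembling a one-measure loop at 100 BPM from one-shots,
# including a "snare on every 16th, rising linearly in level" fill.

import numpy as np
import soundfile as sf  # pip install soundfile

SR, BPM = 44100, 100
bar = int(round(SR * 4 * 60 / BPM))   # samples in one 4/4 measure at 100BPM
sixteenth = bar // 16

def place(buf, sample, pos, gain=1.0):
    """Mix a one-shot into buf starting at sample offset pos."""
    end = min(len(buf), pos + len(sample))
    buf[pos:end] += gain * sample[:end - pos]

kick, _ = sf.read("kick.wav")         # hypothetical mono files at 44.1kHz
snare, _ = sf.read("snare.wav")

loop = np.zeros(bar)
for beat in range(4):                 # four-on-the-floor kick
    place(loop, kick, beat * 4 * sixteenth)

fill = np.zeros(bar)
for i in range(16):                   # snare on every 16th, ramping up in level
    place(fill, snare, i * sixteenth, gain=(i + 1) / 16.0)

mix = loop + fill
sf.write("loop_100bpm.wav", mix / max(1.0, np.max(np.abs(mix))), SR)
```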
  11. Keep your tracking session on an even keel with these tips for smoother sessions

By Craig Anderton

As the all-important first step of the recording process, laying down tracks is crucial. No matter how well you can mix and master, you’re hosed if the tracks aren’t good. But tracking is an elusive art. Some feel it’s pretty much a variation on performing; others step-enter tracks via MIDI, one note at a time. Yet regardless of how you approach tracking, you want to create a recording environment where inspiration can flourish — troubleshooting your setup in the middle of the creative process can crush your muse. There are valid psychological reasons why this is so, based on the way our brain processes information; suffice it to say you don’t want to mix creative flights of fancy with down-to-earth analytical thinking. So, let’s investigate a bunch of tips on how to track as efficiently — and creatively — as possible.

1 HAVE EVERYTHING READY TO GO

I’m a fanatic when it comes to miking an acoustic instrument: I need one person to adjust the mics and another to play the instrument, while I listen in the control room. But I also want all this setup to be done before the session begins, so the artist can be as fresh as possible. True, sometimes it’s necessary to make some compensations due to differences in “touch,” but those compensations don’t take very long.

2 CREATE A SCREEN LAYOUT THAT’S OPTIMIZED FOR TRACKING

Most sequencers let you save specific “views” or window sets (Fig. 1). For example, you certainly don’t need to do waveform editing when you’re tracking (and if you do, we need to talk!).

Fig. 1: Logic was one of the first DAWs to really exploit screen presets.

As you’ll likely not be sitting right next to your computer as you play an instrument, go for large fonts, big readouts, and wide instead of narrow channel strips—anything that makes the recording and track assignment process more obvious.

3 ZERO THE CONSOLE

If you’re using a hardware mixer, center all the EQ controls, turn all the sends to zero, make sure anything that can be bypassed is in bypass mode, and so on. Many mixer modules have some kind of reset option; take advantage of it. You want to make sure that any changes you make start from a consistent point, as well as insure that there aren’t any spurious noise contributions (like an open mic preamp).

4 LEARN SOFTWARE SHORTCUTS

Anytime you can hit a keyboard key instead of moving a mouse, you save time and effort, and stay in the right-brain (creative) frame of mind. For example, if you don’t use the top octave of an 88-note keyboard much, your software might allow you to assign these keys to the record buttons on the first 12 channels of your tracking setup—or at the very least, use the top few notes for transport control.

5 CONTROLLERS CAN BE A BEAUTIFUL THING

Once upon a time in a galaxy far, far away, DigiTech made a guitar processor called the GNX4. One of its features was “hands-free recording” when used with Cakewalk hosts like Sonar, where you could initiate playback, record, arm tracks, create new tracks, and perform other operations simply by pushing footswitches. While it was intended for guitar players, I found it very helpful for general recording applications, and I never abandoned the quest for footswitches.

Fig. 2: The three jacks toward the right are for two footswitches and an expression pedal. The footswitches default to transport functions, but can be reassigned.
If you have a MIDI keyboard, chances are you can use a sustain pedal to do something useful, like initiate recording (see the sketch at the end of this article). The Mackie Control Universal Pro (Fig. 2) has two footswitch jacks, which default to start/stop and record, and you can take this to the max with X-Tempo Designs’ wireless POK footswitch bank.

6 KNOW WHEN TO TAKE A BREAK

If someone cutting a track starts running into a wall, it’s seldom worth continuing. It’s better to take a break and let the player (that means you, too!) come back refreshed and with a slightly different perspective.

7 TAKE ADVANTAGE OF LOOP RECORDING

Loop recording, also called composite recording (Fig. 3), can help put together the perfect performance.

Fig. 3: Sonar X3’s “speed comping” merges loop recording with keyboard navigation.

But loop recording is best done in one sitting. If you record a bunch of takes, edit the best parts together, then try to add more parts, the newer takes seldom match up well with the older ones. If you need to add more parts, consider starting over — or better yet, make sure you record enough takes in the first place.

8 DON’T EDIT WHILE YOU TRACK

Because you read all the way to the end, your reward is the most important tip here. With loop recording, it might be tempting to edit the parts together right after recording them. But don’t — that can really disrupt the session’s flow if more tracking is on the agenda. As long as you know that you have enough good takes to put together a part, move on. The same applies to any editing. Even with MIDI, I’ll usually leave a track “as is,” and use real-time MIDI plug-ins (which don’t alter the file) to do any quantization if a part has some rough spots. Tracking is tracking; editing is editing. Do just enough editing (if needed) so that other players have something decent to follow, and worry about polishing later.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
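Here is the sustain-pedal sketch promised under tip 5: a minimal example, assuming the third-party mido library, hypothetical port names, and a DAW that is set up to respond to MIDI Machine Control (many can be; check your DAW's MIDI preferences). It watches the pedal (CC 64) and translates it into MMC record-strobe and stop commands:

```python
# A minimal sketch: pedal down (CC 64 high) -> MMC record strobe;
# pedal up -> MMC stop. Port names are hypothetical.

import mido  # pip install mido python-rtmidi

MMC_RECORD_STROBE = [0x7F, 0x7F, 0x06, 0x06]  # sysex data: all-device MMC record
MMC_STOP          = [0x7F, 0x7F, 0x06, 0x01]  # sysex data: all-device MMC stop

with mido.open_input("Pedal In") as inport, \
     mido.open_output("DAW In") as outport:   # hypothetical port names
    for msg in inport:                        # blocks, yielding incoming messages
        if msg.type == "control_change" and msg.control == 64:
            data = MMC_RECORD_STROBE if msg.value >= 64 else MMC_STOP
            outport.send(mido.Message("sysex", data=data))
```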
  12. Sometimes the "right" way to do things is nowhere near as much fun as the "wrong" way

By Craig Anderton

Whether giving seminars or receiving emails, I’m constantly asked about the “right” way to record, as if there were some committee on standards and practices dedicated to the recording industry (“for acoustic guitar, you must use a small-diaphragm condenser mic, or your guitar will melt”). Although I certainly don’t want to demean the art of doing things right, some of the greatest moments in recording history have come about because of ignorance, unbridled curiosity, luck, trying to impress the opposite sex, or just plain making a mistake that became a happy accident.

When Led Zeppelin decided to buck the then-current trend of close-miking drums, the result was the Olympian drum sound in “When the Levee Breaks.” Prince decided that sometimes a bass simply wasn’t necessary in a rock tune, and the success of “When Doves Cry” proved he was right. Distortion used to be considered “wrong,” but try imagining rock guitar without it.

A lot of today’s gear locks out the chance to make mistakes. Feedback can’t go above 99, while “normalized” patching reduces the odds of getting out of control. And virtual plug-ins typically lack access points, like insert and loop jacks, that provide a “back door” for creative weirdness. But let’s not let that stop us—it’s time to reclaim some of our heritage as sonic explorers, and screw up some of the recording process. Here are a few suggestions to get you started.

UNINTENDED FUNCTIONS

One of my favorite applications is using a vocoder “wrong.” Sure, we’re supposed to feed an instrument into the synthesis input, and a mic into the analysis input. But using drums, percussion, or even program material for analysis can “chop” the instrument signal in rhythmically interesting ways.

Got a synth, virtual or real, with an external input (Fig. 1)? Turn the filter up so that it self-oscillates (if it lets you), and mix the external signal in with it.

Fig. 1: Arturia’s miniV has an external input. Insert it into a track as an effect, and you can process a signal with the synth’s various modules.

The sound will be dirty, rude, and somewhat like FM meets ring modulation. To take this further, set up the VCA so you can do gated/stuttering techniques by pressing a keyboard key to turn it on and off.

And we all know headphones are for outputting sound, right? Well, DJs know you can hook one up in reverse, like a mic. Sure, the sound is kinda bassy, because the diaphragm is designed to push air, not react to tiny vibrational changes. But no problem! Kick the living daylights out of the preamp gain, add a ton o’ distortion, and you’ll generate enough harmonics to add plenty of high frequencies.

I was reluctant to include the following tip, as it relies on the ancient Lexicon Pantheon reverb (a DirectX format plug-in included in Sonar, Lexicon Omega, and other products back in the day). I really tried to find a more contemporary reverb that can do the same thing, but I couldn’t. However, this does give a fine example of unintended functionality: having a reverb provide some really cool resonator effects. If you have a Pantheon, try these settings (Fig. 2):

Reverb type: custom
Pre-delay, Room Size, RT60, Damping: minimum settings
Mix: 100% (wet only)
Level: as desired
Density Regen: +90%
Density Delay: between 0 and 20ms
Echo Level (Left and Right): off
Spread, Diffusion: 0
Bass boost: 1.0X

Fig. 2: The plug-in says it’s a reverb, but here Pantheon is set up as a resonator.
Vary the Regen and Delay controls, but feel free to experiment with the others. You can even put two Pantheons in series, set for highly resonant, totally spooky sounds.

PARAMETER PUSHING

The outer edges of parameter values are meant for exploration. For example, digital audio pitch transposition can provide all kinds of interesting effects. Tune a low tom down to turn it into a thuddy kick drum, or transpose slap bass up two octaves to transform it into a funky clav.

Or consider the “acidization” process in Acid and Sonar. Normally, you set slice points at every significant transient. But if you set slice points at 32nd or 64th notes, and transpose pitch up an octave or two, you’ll hear an entirely different type of sound.

I also like to use Propellerhead’s ReCycle as a “tremolo of the gods” (Fig. 3).

Fig. 3: ReCycle can do more than simply convert WAV or AIFF files into stretchable audio—it can also create novel tremolo effects.

Load in a sustained sound like a guitar power chord, set slice points and decay time to chop it into a cool rhythm, then send it back to the project from which it came.

GUITAR WEIRDNESS

For a different type of distortion, plug your guitar directly into your mixer (no preamp or DI box), crank the mic pre, then use EQ to cut the highs and boost the mids to taste. Is this the best distortion sound in the world? No. Will it sound different enough to grab someone’s attention? Yes.

When you play compressed or highly distorted guitar through an amp (or even studio monitors, if you like to live dangerously), press the headstock up against the speaker cabinet and you’ll get feedback if the levels are high enough. Now work that whammy bar...

Miking guitar amps is also a fertile field for weirdness. Try a “mechanical bandpass filter” with small amps—set up the mic next to the speaker, then surround both with a cardboard box. One of the weirdest guitar sounds I ever found came from re-amping the guitar through a small amp pointed at a hard wall, setting up two mics between the amp and the wall, then letting them swing back and forth between the amp and the wall. It created a weird stereo phasey effect that sounded marvelous (or at least strange) on headphones.

DISTORT-O-DRUM

Distortion on drums is one of those weird techniques that can actually sound not weird. You can put a lot of distortion on a kick and not have it sound “wrong”—it just gains massive amounts of punch and presence. One of my favorite techniques is copying a drum track, putting the copy in parallel with the original, then running the copy through a guitar amp plug-in set for a boxy-sounding cabinet. It gives the feeling of being in a really funky room.

Replacing drum sounds can also yield audio dividends. My musical compatriot Dr. Walker, a true connoisseur of radical production techniques, once replaced the high-hat in his drum machine with sampled vinyl noise. That was a high-hat with character, to say the least.

If you want a sampled drum sound to have an attack that cuts through a track like a machete, load the sample into a digital audio editor that has a pencil tool. Then, within the first 2 or 3ms of the signal, add a spike (shown in red in the diagram for clarity; see Fig. 4).

Fig. 4: Messing up a drum sample’s initial attack adds a whole new kind of flavor.

When you play back the sound, the attack will be larger than life, loaded with harmonics, and ready to jump out of the speaker. However, it all happens so fast that you don’t really perceive it as distortion. (You can even add more spikes if you dare.)
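If you'd rather script the pencil-tool trick than draw it by hand, a minimal sketch (my own, not from the article) that plants a spike a couple of milliseconds into a drum hit might look like this; it assumes numpy and soundfile, a mono WAV, and a hypothetical file name:

```python
# A minimal sketch: force one sample near the start of the attack toward
# full scale, which loads the transient with harmonics on playback.

import numpy as np
import soundfile as sf  # pip install soundfile

drum, sr = sf.read("snare.wav")            # hypothetical mono file
spike_at = int(0.002 * sr)                 # about 2ms into the attack
drum[spike_at] = 0.95 if drum[spike_at] >= 0 else -0.95  # jump toward full scale
sf.write("snare_spiked.wav", np.clip(drum, -1.0, 1.0), sr)
```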
Another drum trick that produces a ton of harmonics at the attack is to normalize a drum sample, then increase the gain by a few dB — just enough to clip the first few milliseconds of the signal. Again, the drum sound will slam out of the speakers.

FUN WITH FEEDBACK

A small hardware mixer is a valuable tool in the quest for feedback-based craziness. Referring to Fig. 5: if you have a hardware graphic equalizer, patch it after the mixer output, split the EQ’s output so that one split returns back into a mixer input, monitor the EQ’s other split, and feed in a signal (or not — you can get this to self-oscillate).

Fig. 5: Here’s a generalized setup for adding feedback to a main effect. The additional effect in the feedback loop isn’t essential, but changing the feedback loop signal can create more radical results.

With the EQ’s sliders at 0, set the mixer to just below unity gain. As you increase the sliders, you’ll start creating tones. This requires some fairly precise fader motion, so turn down your monitors if the distortion runs away—or add a limiter to clamp the output.

If you have a hardware pitch shifter, feed some of the output back to the input (again, the mixer will come in handy) through a delay line at close to unity gain. Each echo will shift further downward or upward, depending on your pitch transposer’s setting. With some sounds, this can produce beautiful, almost bell tree-like effects.

Feedback can also add unusual effects with reverb, as the resonant peaks tend to shift. At some settings, the reverb crosses over into a sort of tonality. You may need to tweak controls in real time and ride everything very carefully, but experiment. Hey, that’s the whole message of this article anyway!

PREFAB NASTINESS?

Lately there’s been a trend to “formalize” weird sounds, like bit reducers, vinyl emulators, and magnetic tape modelers. While these are well-intentioned attempts to screw things up, there’s a big difference between a plug-in that reduces your audio to 8 bits, and playing back a sample on a Mirage sampler, which is also 8 bits. The Mirage added all kinds of other oddities — noises, aliasing, artifacts — that the plug-in can’t match. Playing a tune through a filter, or broadcasting it to a transistor radio in front of a mic (try it sometime!), produces very different results.

Bottom line: Try to go to the source for weirdness, or create your own. Once weirdness is turned into a plug-in with 24/96 resolution, I’m not sure it’s really weirdness anymore.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. As we close out MIDI’s 30th anniversary, it’s instructive to reflect on why it has endured and remains relevant

By Craig Anderton

The MIDI specification first saw the light of day at the 1981 AES convention, when Dave Smith of Sequential Circuits presented a paper on the “Universal Synthesizer Interface.” It was co-developed with other companies (an effort driven principally by Roland’s Ikutaro Kakehashi, a true visionary of this industry), and made its prime-time debut at the 1983 Los Angeles NAMM show, where a Sequential Circuits Prophet-600 talked to a Roland keyboard over a small, 5-pin cable. I saw Dave Smith walking around the show and asked him about it. “It worked!” he said, clearly elated—but I think I detected some surprise in there as well.

Sequential Circuits’ Prophet-600 talking to a Roland keyboard at the 1983 NAMM show (photo courtesy the MIDI Manufacturers Association and used with permission)

“It” was the Musical Instrument Digital Interface, known as MIDI. Back in those days, polyphonic synthesizers cost thousands of dollars (and “polyphonic” meant 8 voices, if you were lucky and, of course, wealthy). The hot computer was a Commodore 64, with a whopping 64 kilobytes of memory—unheard of in a consumer machine (although a few years before, an upstart recording engineer named Roger Nichols was stuffing 1MB memory boards into a CompuPro S-100 computer to sample drum sounds). The cute little Macintosh hadn’t made its debut, and as impossible as it may seem today, the PC was a second-class citizen, licking its wounds after the disastrous introduction of IBM’s PCjr.

Tom Oberheim had introduced his brilliant System, which allowed a drum machine, sequencer, and synthesizer to talk together over a fast parallel bus; Tom feared that MIDI would be too slow. And I remember talking about MIDI at a Chinese restaurant with Dave Rossum of E-mu Systems, who said “Why not just use Ethernet? It’s fast, it exists, and it’s only about $10 to implement.” But Dave Smith had something else in mind: an interface so simple, inexpensive, and foolproof to implement that no manufacturer could refuse. Its virtues would be low cost, adequate performance, and ubiquity in not just the pro market, but the consumer one as well. Bingo.

But it didn’t look like success was assured at the time; MIDI was derided by many pros who felt it was too slow, too limited, and just a passing fancy. 30 years later, though, MIDI has gone far beyond what anyone envisioned, particularly with respect to the studio. No one foresaw MIDI being part of just about every computer (e.g., the General MIDI instrument sets). This trend actually originated on the Atari ST—the first computer with built-in MIDI ports as a standard item (see “Background: When Amy Met MIDI” toward the end of this article).

EVOLUTION OF A SPEC

Oddly, the MIDI spec officially remains at version 1.0, despite significant enhancements over the years: the Standard MIDI File format, MIDI Show Control (which runs the lights and other effects at Broadway shows like Miss Saigon and Tommy), MIDI Time Code to allow MIDI data to be time-stamped with SMPTE timing information, MIDI Machine Control for integration with studio gear, microtonal tuning standards, and a lot more. And the activity continues, as issues arise such as how best to transfer MIDI over USB, with smart phones, and over wireless.
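For the curious, that "simple, inexpensive, and foolproof" design goal shows in the data format itself: most channel messages are just two or three bytes, sent at 31,250 bits per second over that 5-pin cable. A minimal illustration (mine, not from the article):

```python
# Three bytes for a note-on: a status byte (0x90 + channel), then the note
# number and velocity; a note-off uses status 0x80.

channel, note, velocity = 0, 60, 100                  # channel 1, middle C
note_on  = bytes([0x90 | channel, note, velocity])
note_off = bytes([0x80 | channel, note, 0])
print(note_on.hex(" "), "->", note_off.hex(" "))      # 90 3c 64 -> 80 3c 00
```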
The guardian of the spec, the MIDI Manufacturers Association (MMA), has stayed a steady course over the past several decades, holding together a coalition of mostly competing manufacturers with a degree of success that most organizations would find impossible to pull off. The early days of MIDI were a miracle: in an industry where trade secrets are jealously guarded, manufacturers who were intense rivals came together because they realized that if MIDI was successful, it would drive the industry to greater success. And they were right. The MMA has also helped educate users about MIDI, through books and online materials such as “An Introduction to MIDI.”

I had an assignment at the time from a computer magazine to write a story about MIDI. After turning it in, I received a call from the editor. He said the article was okay, but it seemed awfully partial to MIDI, and was unfair because it didn’t give equal time to competing protocols. I tried to explain that there were no competing protocols; even companies that had other systems, like Oberheim and Roland, dropped them in favor of MIDI. The poor editor had a really hard time wrapping his head around the concept of an entire industry willingly adopting a single specification. “But surely there must be alternatives.” All I could do was keep replying, “No, MIDI is it.” Even when we got off the phone, I’m convinced he was sure I was holding back information on MIDI’s competition.

MIDI HERE, MIDI THERE, MIDI EVERYWHERE

Now MIDI is everywhere. It’s on the least expensive home keyboards, and the most sophisticated studio gear. It’s a part of signal processors, guitars, keyboards, lighting rigs, smoke machines, audio interfaces…you name it. It has gone way beyond its original idea of allowing a separation of controller and sound generator, so people didn’t have to buy a keyboard every time they wanted a different sound.

SO WHERE’S IT GOING?

“Always in motion, the future…” Well, Yoda does have a point. But the key point about MIDI is that it’s a hardware/software protocol, not just one or the other. Already, the two occasionally take separate vacations: the MIDI data in your DAW that drives a soft synth doesn’t go through opto-isolators or cables, but flies around inside your computer.

One reason why MIDI has lasted so long is that it’s a language that expresses musical parameters, and these haven’t changed much in several centuries. Notes are still notes, tempo is still tempo, and music continues to have dynamics. Songs start and end, and instruments use vibrato. As long as music is made the way it’s being made, the MIDI “language” will remain relevant, regardless of the “container” used to carry that data. However, MIDI is not resting on its laurels, and neither is the MMA—you can find out what they’re working on for the future at the MMA’s web site.

Happy birthday, MIDI. You have served us well, and we all wish you many happy returns. For a wealth of information about MIDI, check out The MIDI Association web site.

Background: When Amy Met MIDI

After MIDI took off, many people credited Atari with amazing foresight for making MIDI ports standard on their ST series of computers. But the inclusion of MIDI was actually a matter of practicality. Commodore was riding high with the C-64, in large part because of the SID (Sound Interface Device) custom IC, a very advanced audio chip for its time.
(Incidentally, Bob Yannes, one of Ensoniq’s founders and also the driving force behind the Mirage sampler, played the dominant role in SID’s development.) Atari knew that if it wanted to encroach on Commodore’s turf, it needed something better than SID. Atari designed an extremely ambitious sound chip, code-named Amy, that was supposed to be a “Commodore killer.” But Amy was a temperamental girl, and Atari was never able to get good enough yields to manufacture the chips economically. An engineer suggested putting a MIDI port on the machine so it could drive an external sound generator; then they wouldn’t have to worry about an onboard sound chip.

Although this solved the immediate Amy problem, it also turned out to be a fortuitous decision: Atari dominated the European music-making market for years, and a significant chunk of the US market as well. To this day, a hardy band of musicians still use their aging ST and TT series Atari computers because of the exceptionally tight MIDI timing – a result of integrating MIDI into the core of the operating system.

Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  14. It's a whole new world for DJs - and there's a whole new world of DJing options

by Craig Anderton

If the term “DJ” makes you think “someone playing Barbra Streisand songs at my cousin’s wedding,” then you might also think gas is $1.20 a gallon, and wonder how Ronald Reagan will turn out as president. DJing has changed radically over the past two decades, fueled by accelerating worldwide popularity, technological advances, and splits into different styles.

While some musicians dismiss DJs because they “just play other peoples’ music, not make their own,” DJing demands a serious skill set that’s more like that of a conductor or arranger. Sets are long, and there are no breaks—you not only have to pace the music perfectly to match the audience’s mood, but create seamless transitions between cuts that are probably not at the same tempo or key. On top of that, DJs require an encyclopedic knowledge of the music they play so they can always choose the right music at the right time, and build the dynamics of the music into an ongoing series of peaks and valleys—with each peak taking the audience higher than the previous one.

What’s more, the bar is always being raised. DJs are no longer expected just to play music, but to use tempo-synched effects, and sometimes even trade off with other DJs on the same stage, integrate live musicians—or play instrumental parts themselves on top of what they’re spinning. Quite a few DJs have gotten into not just using other tracks, but creating their own with sophisticated software DAWs.

Let’s take a look at some of the variant strains of DJing. These apply both to mobile DJs — the closest to the popular (mis)conception of the DJ, as they typically bring their own sound systems and music, and play events ranging from art openings to weddings — and to club DJs, who are attractions at dance clubs and renowned for sophisticated DJing techniques (like effects and scratching).

VINYL AND TURNTABLES

This is where it all started, and where DJs have to beat-match by listening carefully to one turntable while the other is spinning, line up the music, release the second turntable at the precise moment to sync up properly with the current one, and crossfade between the two. Vinyl is where scratching originated, by moving the record back and forth under the needle. Vinyl is still popular among traditionalists, but there are many more alternatives now.

The Stanton STR8-150 is a high-torque turntable with a “skip-proof” straight tone arm, key correction, reverse, up to 50% pitch adjustment, and S/PDIF digital outputs.

DJING WITH CDS

As CDs replaced vinyl, DJs started looking for DJing solutions involving CDs. Through digital technology, it became possible to DJ with CDs, as well as use vinyl record-like controllers to simulate the vinyl DJ experience (scratching and beat-matching) with CDs. Although CD-based DJs were originally frowned on by traditionalists, they developed their own skill set and figured out how to create an end result with equal validity to vinyl.

THE OTHER MP3 REVOLUTION

As MP3s replaced CDs, DJs again followed suit. But this time, the full power of the computer started being brought into play. Many MP3-based DJing packages now combine hardware controllers with computer programs that not only play back music, but include effects and display visual representations of waveforms to facilitate beat-matching. What’s more, effects often sync to tempo and map to controls, so the DJ can add these effects in creative ways that become part of the performance.
Native Instruments’ Traktor Kontrol is designed specifically as a match for their Traktor DJing software.

MP3-based DJing also meant that DJs were freed forever from carrying around records or CDs, as they could store gigabytes of music on the same laptop running the DJ program itself.

ABLETON LIVE: THE DAW FOR DJS

This article isn’t really about mentioning products, but in this case, there’s no other option: Live occupies a unique position as a program that straddles the line between DAW and DJ tool. It’s hard to generalize about how people use Live, because different DJs have very different approaches. Some bring in complete songs and use Live’s “warping” capabilities to beat-match, then crossfade between them on the fly while bringing in other music; others construct entire compositions out of loops, which they trigger, solo, mute, and arrange in real time.

Live’s “Session View” is the main aspect of the program used to create DJ sets out of loops and other digital audio files.

Although a runaway favorite of DJs, Live isn’t the only program used by DJs—Propellerhead Reason, Sony Acid, and Apple Logic are three other mainstream programs that are sometimes pressed into service as DJ tools.

NONE OF THE ABOVE: OTHER DJ TOOLS

A variety of musical instruments are also used for DJing. Although the best-known are probably Akai’s MPC-series beatboxes, people use everything from sampling keyboards to M-Audio’s Venom synth in multi-timbral mode to do, if not traditional DJing, beats-oriented music that is closer to DJing than anything else.

Akai’s MPC5000 is a recent entry in the MPC series, invented by Roger Linn, which popularized the trend of DJs using “beatbox”-type instruments.

I’ve even used the Venom to do a DJ-type set by calling up Multis, soloing/muting/mixing drum, bass, and arpeggiator patterns, and playing lead lines on top of all that.

If you haven’t done any DJing, it’s fun—and if you haven’t heard good DJ sets, internet radio is a great place to find them being played out of Berlin, Paris, Holland, Bangkok, and other musical hotbeds. But be forewarned: You may find a brand new musical addiction.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  15. There’s much more to mixing than just levels

By Craig Anderton

When mixing, the usual way to make an instrument stand out is to raise its level. But there are other ways to make an instrument leap out at you, or settle demurely into the background, that don’t involve level in the usual sense. These options give you additional control over a mix that can be very helpful.

CHANGING START TIMES CHANGES PERCEIVED LOUDNESS

The ear is most interested in the first few hundred milliseconds of a sound, then moves on to the next sound. This may have roots that go way back into our history, when it was important to know whether a new sound was leaves rustling in the wind – or a sabre-tooth tiger about to pounce. What happens during those first few hundred milliseconds greatly affects the perception of how “loud” that signal is, as well as its relationship to other sounds happening at the same time. Given two sounds that play at almost the same time, the one that started first will appear to be more prominent.

For example, suppose you have kick drum and bass playing together. If you want the bass to be a little more prominent than the kick drum, move it slightly ahead of the kick. To push the bass behind the kick, move it slightly later than the kick. The way to move sounds depends on your recording medium. With MIDI sequencers, a track-shift function will do the job. With hard disk recorders, you can simply grab a part on-screen and shift it, or use a “nudge” function (if available). Even a few milliseconds of shift can make a big difference.

CREATIVE USE OF DISTORTION

If you want to bring just a couple of instruments out from a mix, patch an exciter or “tube distortion” device set for very little distortion (depending on whether you’re looking for a cleaner or grittier sound, respectively) into an aux bus during mixdown. Now you can turn up the aux send for individual channels to make them jump out from the mix to a greater or lesser degree.

TUBES AS PROCESSORS

Many members of the “anti-digital” club talk about how tube circuitry creates a mellower, warmer sound compared to solid-state devices. Whether you agree or not, one thing is clear: the sound is at the very least different. Fortunately, you can use this to your advantage if you have a digital recorder. As just one example of how to change the mix with tubes, try recording background vocals through a tube preamp, and the lead vocal through a solid-state preamp (or vice versa). Assuming quality circuitry, the “tubed” vocals will likely sound a little more “in the background” than the solid-state ones. Percussion seems to work well through tubes too, especially when you want the sound to feel less prominent compared to trap drums.

PITCH CHANGES IN SYNTH ENVELOPES

This involves doing a little programming at your synth, but the effect can be worth it. As one example, take a choir patch that has two layered chorus sounds (the dual layering is essential). If you want this sound to draw more attention to itself, use a pitch envelope to add a slight downward pitch bend to concert pitch on one layer, and a slight upward pitch bend to concert pitch on the other layer. The pitch difference doesn’t have to be very much to create a more animated sound. Now remove the pitch change, and notice how the choir sits further back in the track.
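To put numbers on the start-time tip above: shifting a track by a few milliseconds is just an offset of a few hundred samples. A minimal sketch (my own illustration), assuming numpy, soundfile, a mono track, and hypothetical file names:

```python
# A minimal sketch of nudging a track in time. Positive ms pushes the part
# later (more laid back); negative ms pulls it ahead (more prominent).

import numpy as np
import soundfile as sf  # pip install soundfile

def nudge(audio, sr, ms):
    """Shift a mono track by ms milliseconds, preserving its length."""
    offset = int(abs(ms) / 1000 * sr)      # e.g., 5 ms at 44.1kHz ~ 220 samples
    if ms >= 0:
        return np.concatenate([np.zeros(offset), audio])[:len(audio)]
    return np.concatenate([audio[offset:], np.zeros(offset)])

bass, sr = sf.read("bass.wav")             # hypothetical mono file
sf.write("bass_laidback.wav", nudge(bass, sr, +5), sr)   # 5 ms behind the kick
```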
MINI FADE-INS

With a hard disk recorder, you can do little fade-ins to make an attack less prominent, thus putting a sound more in the background. However, if you start the fade right at the beginning of a sound, you’ll lose the attack altogether. Instead, extend the start of the fade to before the sound begins (Fig. 1).

Fig. 1: Starting a fade before a sound begins softens the attack without eliminating it.

After applying the fade-in operation, the audio doesn’t come up from zero, and the attack is merely reduced.

VOCAL PANNING

One common technique used to strengthen voices is doubling, where a singer sings a part and then tries to duplicate it as closely as possible. The slight timing variations add a fuller effect than doubling the sound electronically. However, how you pan these two tracks makes a big difference during mixing. When both are centered, the vocal lays back more in the track, and can sound less full; some of this is because one voice may mask the other a bit. When the tracks are panned out to left and right (this needn’t be an extreme amount), the masking goes away, and the sound seems bigger and more prominent.

CHORUSING AS KRYPTONITE

If you want to weaken a signal, a chorus/flanger can help a lot if it has the option to throw the delayed signal out of phase with the dry signal. Set the chorus/flanger for a short delay (under 10 ms or so), no modulation depth, and an out-of-phase output mix (e.g., the output control that blends straight and delayed sounds says -50 instead of +50, or there’s an option to invert the signal – see Fig. 2).

Fig. 2: A chorus/flanger, when adjusted properly, can “weaken” a sound by applying comb filtering.

Alter the mix by starting with the straight sound, then slowly adding in the delayed sound. As the delayed sound’s level approaches the straight sound’s level, a comb-filtering effect comes into play that essentially knocks a bunch of holes in the signal’s frequency spectrum (for a numerical illustration, see the sketch at the end of this article). If you’re trying to make a piano or guitar take up less space in a track, this technique works well.

MIXING VIA EQ

EQ is a very underutilized resource for mixing. Turning the treble down instead of the volume can bring a track more into the background without having it get “smaller,” just less “present.” A lot of engineers go for really bright sounds for instruments like acoustic guitars, then turn down the volume when the vocals come in (or some other solo happens). Try turning the brightness down a tad instead. And of course, being able to automate EQ changes makes the process go a lot more easily.

Overall, when it comes to mixing you have a lot of options other than just changing levels – and implementing changes in this way can make a big difference to the “character” of a mix. Have fun adding some of the above tips to your repertoire.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
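Here is the comb-filtering sketch promised above: mixing a signal with an inverted copy of itself, delayed by D samples, creates response nulls at multiples of fs/D. A minimal numpy illustration (mine, not from the article):

```python
# A minimal sketch of the inverted-mix chorus trick as a comb filter:
# y[n] = 0.5*x[n] - 0.5*x[n-D] has nulls at 0, fs/D, 2*fs/D, ... Hz.

import numpy as np

fs, delay_ms = 44100, 5.0
D = int(fs * delay_ms / 1000)        # 5 ms of delay is 220 samples at 44.1kHz

def comb(x):
    """Straight sound minus delayed sound, both at half level."""
    y = 0.5 * x.copy()
    y[D:] -= 0.5 * x[:-D]
    return y

h = np.zeros(fs)
h[0] = 1.0                           # unit impulse
H = np.abs(np.fft.rfft(comb(h)))     # frequency response, 1Hz bins
freqs = np.fft.rfftfreq(fs, 1 / fs)
for k in range(1, 4):                # the first few "holes," near k * fs / D
    idx = np.argmin(np.abs(freqs - k * fs / D))
    print(f"null {k}: ~{freqs[idx]:.0f} Hz, response = {H[idx]:.4f}")
```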
  16. It's not the same as a double-neck, but it does let you do some of the same tricks

by Craig Anderton

I’ll admit it: I’ve always lusted after a double-neck 6-string/12-string guitar. I love the big, rich, “chorused” sound of a 12-string, but I also like to bend notes and hit those six-string power chords. However, I don’t like the weight or the cost of a double-neck, and there’s a certain inconvenience—there are more strings to change, and let’s not even talk about carrying a suitable case around.

So my workaround is to “undouble” the top two strings, turning the 12-string into a 10-string: remove the E string closest to the B strings, and the B string closest to the G strings. This allows bending notes on the top two strings, but you’ll still have a plenty rich sound when hitting chords. Besides, it’s easy enough to add a chorus pedal afterwards for additional richness—producing on the top two strings the same kind of effect you get from doubling them.

Sure, it’s not a real double-neck—but it gets you much of the way there, and best of all, wearing it for a couple of hours during a performance won’t turn you into the hunchback of Notre Dame.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  17. Reason’s Combinator is a great way to create a “building block” that consists of multiple modules and controls By Craig Anderton Reason’s Combinator device (Combi for short), introduced in Reason 3, provides a way to build workstation-style combi programs with splits, velocity-switched layers, integral processing, and more—then save the combination for later recall. However, note that Combis aren’t limited to creating keyboard instruments (one Combi factory patch combines Reason’s “MClass” mastering processors into a mastering suite). Basically, anything you create in Reason can be “combinated.” Furthermore, four knobs and four buttons on the Combi front panel are assignable to multiple parameters. For example, if you have a stack with five synthesizers, one of the knobs could be a “master filter cutoff” control for all the synths. The knobs and buttons can be recorded as automation in the sequencer, or tied to an external controller. CREATING A COMBINATOR PATCH Let’s look at a real-world application that uses Reason’s Vocoder 512. A Vocoder has two inputs: Modulator and Carrier. Both go through filter banks; the modulator filters generate control signals that control the amplitude of the equivalent filter bands that process the carrier. Thus, the modulator impresses its frequency spectrum onto the carrier. The more filters (bands) in the filter banks, the greater the resolution. Typically, vocoders have a mic plugged into the modulator, so speaking into it impresses speech-like characteristics onto the carrier, and thus creates “talking instrument” sounds. However, no law says you have to use a mic, and my fave vocoder setup uses a big, sustained synth sound as the carrier, and a drum machine (rather than voice) as the modulator. The Combi is ideal for creating this setup. Rather than include the synth within the Combi, we’ll design the “DrumCoder Combi” as a signal processor that accepts any Reason sound generator. The Combi includes a Vocoder, ReDrum drum machine, and Spider Audio Merger (Fig. 1). Remember to load the ReDrum with a drum kit, and create some Patterns for modulating the vocoder. To hear only the patterns, set the Vocoder Dry/Wet control to dry. Fig. 1: “DrumCoder” Combi patching. ReDrum has a stereo out but the vocoder’s input is mono, so a Spider merger combines the drum outs. The Combi out goes to the hardware interface, while the input is available for plugging in a sound source. Let’s program the Combi knobs. Open the Combinator’s programmer section, then click on the Vocoder label in the Combi Programmer. Using Rotary 1’s drop-down menu, assign it to Vocoder Decay. Assign Rotary 2 to Vocoder Shift, and Rotary 3 to HF Emphasis. Rotary 4 works well for Wet/Dry, but if you want to use it to select ReDrum patterns instead, click on ReDrum in the programmer and assign Rotary 4 to Pattern Select. I’ve programmed the buttons to mute particular ReDrum drums. Now let’s create a big synth stack Combi (Fig. 2) to provide a signal to the DrumCoder. Layer two SubTractors, then add a third transposed down an octave. Assign the Combi knobs to control the synth parameters of your choice; Amp Env Decay for all three is useful. Fig. 2: Two SubTractors each feed a CF-101 Chorus. The “Bass” SubTractor feeds a UN-16 Unison. All three effect outs feed a 6:2 line mixer, which patches to the “Big SubTractor” Combi out. TESTING, TESTING Patch the Big SubTractor Combi out to the Vocoder Combi in, and the Vocoder Combi out to the appropriate audio interface output. 
Start the sequencer to get ReDrum going, then play your keyboard (which should be feeding MIDI data to the Big SubTractor Combi). You’ll hear the keyboard modulated by the drum beat – cool! Now diddle with some of the Vocoder Combi front panel controls, and you’ll find out why Combis rule. RESOURCES These files are useful for checking out the Combinator examples described in this article. DrumCoder.mp3 is an audio example of drumcoding. BigSubTractor.cmb and DrumCoder.cmb are Combis for Reason, as described in the article. DrumCoder.rns is a Reason song file that contains both Combis and sends the output to Reason’s mixed output. If you don’t have a keyboard handy, you can audition this patch by going to the sequencer and unmuting the Big SubTractor track, which plays a single note into the Big SubTractor instrument. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
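If it helps to see the Combinator's "one rotary, many parameters" idea in code form, here's a minimal Python sketch. The device names, parameter names, and ranges are hypothetical illustrations; this is not Reason's actual mapping format or API.

```python
# Hypothetical macro mapping in the spirit of the Combi programmer:
# one 0-127 rotary fans out to several devices, each scaled to its own range.

def scale(value, lo, hi):
    """Map a 0-127 rotary position into a parameter's own range."""
    return lo + (hi - lo) * value / 127.0

# A "master filter cutoff" rotary for a three-synth stack (made-up targets)
MACRO_TARGETS = [
    ("synth1.filter_cutoff_hz", 200.0, 8000.0),
    ("synth2.filter_cutoff_hz", 100.0, 5000.0),
    ("bass.filter_cutoff_hz",    80.0, 2000.0),
]

def turn_rotary(value):
    for name, lo, hi in MACRO_TARGETS:
        print(f"{name} -> {scale(value, lo, hi):7.0f}")

turn_rotary(64)   # the mid position moves every target proportionally
```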
  18. An analog tool from yesteryear transitions to digital—and learns a few new tricks in the process By Craig Anderton Step sequencing has aged gracefully. Once a mainstay of analog synths, step sequencing has stepped into a virtual phone booth, donned its Super Sequencer duds, and is now equally at home in the most cutting-edge dance music. In a way, it’s like a little sequencer that runs inside of a bigger host sequencer, or within a musical instrument. But just because it’s little doesn’t mean it isn’t powerful, and several DAWs include built-in step sequencers. Early analog step sequencers were synth modules with 8 or 16 steps, driven by a low-frequency clock. Each step produced a control voltage and trigger, and could therefore trigger a note just as if you’d played a key on a keyboard. The clock determined the rate at which each successive step occurred. As a result, you could set up a short melodic sequence, or feed the control voltage to a different parameter, such as filter cutoff. Step sequencing in a more sophisticated form was the basis of drum machines and boxes like the Roland TB-303 BassLine; it’s also built into today’s virtual instruments, such as Cakewalk’s Rapture, and even appears as a module in processors like Native Instruments’ Guitar Rig (Fig. 1). Fig. 1: Guitar Rig’s 16-step “Analog Sequencer” module is controlling the Pro Filter’s cutoff frequency. Reason’s patch-cord oriented paradigm makes it easy to visualize what’s happening with a typical step sequencer (Fig. 2). Fig. 2: This screen shot, cut and pasted for clarity, shows Reason’s step sequencer graphic interface, as well as how it’s “patched” into the SubTractor synthesizer. The upper Matrix view (second “rack” up from the bottom) shows the page generating a stepped control voltage that’s quantized to a standard musical scale as well as a gate signal; these create notes in the SubTractor and trigger their envelopes, as shown by the patch connections on the rear. The lower Matrix view is generating a control voltage curve from the Curve page, and sending this to the SubTractor synth filter. The short, red vertical strips on the bottom of either Matrix front panel view indicate where triggers occur. THIS YEAR’S MODEL Analog step sequencers typically had little more than a control for the control voltage level, and maybe a pushbutton to advance through the steps manually. Modern step sequencers add a lot of other capabilities, such as . . . Pattern storage. Once you tweaked an analog step sequencer, there was nothing you could do to save its settings other than write them down. Today’s sequencers usually do better. For example, the Matrix module in Reason stores four banks of 8 patterns, which can be programmed into the sequencer to play back as desired. Controller sequencing. Step sequencers aren’t just for notes anymore, and it’s usually possible to generate sequences of controllers along with notes (Fig. 3). Fig. 3: A row in Sonar’s Step Sequencer triggers notes, but you can expand the row to show other controller options. This example shows velocity editing. Variable number of steps. Freed from the restrictions of hardware, software step sequencers can provide any number of steps, although you’ll seldom find more than 128—if you need more, use the host’s sequencing capabilities. Step resolution. Typically, with a 16-step sequencer, each step is a 16th note. Variable step resolution allows each step to represent a different value, like a quarter note, eighth note, 32nd note, etc. Step quantization. 
With analog sequencers, it seemed almost impossible to “dial in” particular pitches; and when you did, they’d eventually drift off pitch anyway. With today’s digital versions, you can quantize the steps to particular pitches, making it easy to create melodic lines. The step sequencers in Rapture even allow for MIDI note entry, so you can play your line and the steps will conform to what you entered. Smoothing. This “rounds off” the sharp edges of the step sequence, producing a more rounded control characteristic. WHAT ARE THEY GOOD FOR? Although step sequencers are traditionally used to sequence melody lines, they have many other uses. Complex LFO. Why settle for the usual triangle/sawtooth/random LFO waveforms? Control a parameter with a step sequencer instead, and you can create pretty whacked waveforms by drawing them in the step sequencer. Apply smoothing, and the resulting waveform will sound more continuous rather than stepped. Create rhythmic patterns with filters. Feeding the filter cutoff parameter with a step sequencer can provide serious motion to the filter sound. This is the heart of Roger Linn’s AdrenaLinn processor, which imparts rhythmic effects to whatever you send into the input. If the step level is all the way down, the cutoff is all the way down and no sound comes out. Higher-level steps kick the filter open more, thus letting the sound “pulse” through. Polyrhythms. Assuming your step sequencer has a variable number of steps, you can create some great polyrhythmic effects. For example, consider setting up a 4-step sequence (1 measure of 4/4) in one step sequencer, and a 7-step sequence (1 measure of 7/4) in a second step sequencer, each driving different parameters (e.g., filter sweeps in opposite channels, or two different oscillator pitches). They play against each other, but “meet up” every seven measures of 4/4 (28 beats); the sketch after this article shows the arithmetic. Double-time and half-time sequences. By changing step resolution in the middle of a sequence, such as switching from 8th notes to 16th notes or vice-versa, it’s possible to change the sequence to double-time or half-time respectively. Complex panning. Imagine a step sequencer generating a percussive sequence by triggering a sound with a very quick decay. Now imagine a step sequencer altering the pan position for each hit – this can add an incredible amount of animation to a percussion mix. Live performance options. The original step sequencers were “set-and-forget” type devices. But nowadays, playing with a step sequencer in real time can turn it into a bona fide instrument (ask the TB-303 virtuosos). Change pitch, alter rhythms, edit triggers . . . the results can be not only hypnotic, but inspiring. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
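As promised in the polyrhythms tip, here's the arithmetic as a minimal Python sketch (the trigger patterns themselves are arbitrary examples): two sequences of different lengths drift against each other and realign after the least common multiple of their lengths. Requires Python 3.9+ for math.lcm.

```python
import math

seq_a = [1, 0, 0, 1]              # 4-step pattern (arbitrary triggers)
seq_b = [1, 0, 1, 0, 0, 1, 0]     # 7-step pattern

period = math.lcm(len(seq_a), len(seq_b))
print("the patterns realign every", period, "steps")   # lcm(4, 7) = 28

for step in range(period):
    a = "A" if seq_a[step % len(seq_a)] else "."
    b = "B" if seq_b[step % len(seq_b)] else "."
    print(f"{step:2d} {a} {b}")
```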
  19. When you're about to lay down a vocal, one of these tips just might help save a take By Craig Anderton These 16 tips can be helpful while recording, but many are also suitable for live performance...check ‘em out. TO POP FILTER OR NOT TO POP FILTER? Some engineers feel pop filters detract from a vocal, but pops detract from a vocal even more. If the singer doesn’t need a pop filter, fine. Otherwise, use one (Fig. 1). Fig. 1: Don’t automatically assume you need a pop filter, but have one ready in case you do. NATURAL DYNAMICS PROCESSING The most natural dynamics control is great mic technique—moving closer for more intimate sections, and further away when singing more forcefully. This can go a long way toward reducing the need for drastic electronic compression. COMPRESSOR GAIN REDUCTION When compressing vocals, pay close attention to the compressor’s gain reduction meter, as this shows the amount by which the input signal level is being reduced. For a natural sound, you generally don’t want more than 6dB of reduction (Fig. 2), although of course, sometimes you want a more “squashed” effect. Fig. 2: The less gain reduction, as illustrated here with Cakewalk’s PC2A Leveler, the less obvious the compression effect. To lower the amount of gain reduction, either raise the threshold parameter, or reduce the compression ratio. NATURAL COMPRESSION EFFECTS Lower compression ratios (1.2:1 to 3:1) give a more natural sound than higher ones. USE COMPRESSION TO TAME PEAKS WHILE RETAINING DYNAMICS To clamp down on peaks while leaving the rest of the vocal dynamics intact, choose a high ratio (10:1 or greater) and a relatively high threshold (around –1 to –6dB; see Fig. 3). Fig. 3: A high compression ratio, coupled with a high threshold, provides an action that’s more like limiting than compression. This example shows Native Instruments’ VC160. To compress a wider range of the vocal, use a lower ratio (e.g., 1.5 or 2:1) and a lower threshold, like –15dB. COMPRESSOR ATTACK AND DECAY TIMES An attack time of 0 clamps peaks instantly, producing the most drastic compression action; use this if it’s crucial that the signal not hit 0dB, yet you want high average levels. But consider using an attack time of 5 - 20ms to let through some peaks. The decay (release) setting is not as critical as attack; 100 - 250ms works well. Note: Some compressors can automatically adjust attack and decay times according to the signal passing through the system. This often gives the optimum effect, so try it first. SOFT KNEE OR HARD KNEE? A compressor’s knee parameter, if present, controls how rapidly the compression kicks in. With soft knee, when the input exceeds the threshold, the compression ratio is less at first, then increases up to the specified ratio as the input increases. With hard knee, once the input signal crosses the threshold, it’s subject to the full amount of compression. Use hard knee when controlling peaks is a priority, and soft knee for a less colored sound (the sketch at the end of this article shows how threshold, ratio, and knee interact). TOO MUCH OF A GOOD THING Compression has other uses, like giving a vocal a more intimate feel by bringing up lower level sounds. However, be careful not to use too much compression, as excessive squeezing of dynamics can also squeeze the life out of the vocals. NOISE GATING VOCALS Because mics are sensitive and preamps are high-gain devices, there may be hiss or other noises when the singer isn’t singing. A noise gate can help tame this, but if the action is too abrupt the voice will sound unnatural. 
Use a fast attack and moderate decay (around 200ms). Also, instead of having the audio totally off when the gate is closed, try attenuating the gain by around 10dB or so instead. This will still cut most of the noise, but may sound more natural. SHIFT PITCHES FOR RICHER VOCALS One technique for creating thicker vocals is to double the vocal line by singing along with the original take, then mixing the doubled take at anywhere from 0 to –12dB behind the original. However, it isn’t always possible to cut a doubled line—like when you’re mixing, and the vocalist isn’t around. One workaround is to copy the original vocal, then apply a pitch shift plug-in (try a shift setting of –15 to –30 cents, with processed sound only—see Fig. 4). Fig. 4: Studio One Pro’s Inspector allows for easy “de-tuning.” Mix the doubled track so it doesn’t compete with, but instead complements, the lead vocal. FIXING A DOUBLED VOCAL Sometimes an occasional doubled word or phrase won’t gel properly with the original take. Rather than punch a section, copy the same section from the original (non-doubled) vocal. Paste it into the doubled track about 20 - 30ms late compared to the original. As long as the segment is short, it will sound fine (longer segments may sound echoed; this can work, but destroys the sense of two individual parts being played). REVERB AND VOCALS Low reverb diffusion settings work well with vocals, as the sparser reflections prevent the voice from being overwhelmed by a “lush” reverb sound. 50 - 100ms of pre-delay works well with voice, as the first part of the vocal can punch through without reverb. INCREASING INTELLIGIBILITY A slight upper midrange EQ boost (around 3 - 4kHz) adds intelligibility and “snap” (Fig. 5). Fig. 5: Sonar’s ProChannel EQ set for a slight upper midrange boost (circled in yellow). Note the extreme low frequency rolloff (circled in red) to get rid of sounds below the range of the vocal, like handling noise. Be very sparing; the ear is highly sensitive in this frequency range. Sometimes a slight treble boost, using shelving EQ, will give equal or better results. NUKE THE LOWS A really steep, low-frequency rolloff (Fig. 5) that starts below the vocal range can help reduce hum, handling noise, pops, plosives, and other sounds you usually don’t want as part of the vocal. “MOTION” FILTERING For more “animation” than a static EQ boost, copy the vocal track and run it through an envelope follower plug-in (processed sound only, bandpass mode, little resonance). Sweep this over 2.5 to 4kHz; adjust the envelope to follow the voice. Mix the envelope-followed signal way behind the main vocal track; the shifting EQ frequency highlights the upper midrange in a dynamic, changing way. Note: If the effect is obvious, it’s mixed in too high. RE-CUT, DON’T EDIT Remember, the title was “16 Quick Vocal Fixes.” Many times, having a singer punch a problematic part will solve the issue a whole lot faster than spending time trying to edit it using a DAW’s editing tools. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
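As referenced in the soft knee tip above, here's a minimal Python sketch of a compressor's static input/output curve. This is generic textbook math with levels in dB, not any particular plug-in's algorithm; the default settings mimic the article's "limiter-like" high-ratio/high-threshold recipe.

```python
# Static compressor curve: how threshold, ratio, and knee interact.
def compressed_level(in_db, threshold=-6.0, ratio=10.0, knee_db=0.0):
    over = in_db - threshold
    if knee_db > 0 and abs(over) <= knee_db / 2:
        # soft knee: ease gradually into the full ratio around the threshold
        return in_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)
    if over > 0:
        return threshold + over / ratio   # hard knee: full ratio above threshold
    return in_db                          # below threshold: unchanged

for level_db in (-20, -10, -6, -3, 0):
    out_db = compressed_level(level_db)
    print(f"in {level_db:4d} dB  out {out_db:6.1f} dB  "
          f"gain reduction {level_db - out_db:4.1f} dB")
```

Raising the threshold or lowering the ratio shrinks the gain reduction readings, which is exactly the advice in the gain reduction tip.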
  20. Here are some secrets behind getting those wide, spacious, pro-sounding mixes that translate well over any system By Craig Anderton We know them when we hear them: wide, spacious mixes that sound larger than life and higher than fi. A great mix translates well over different systems, and lets you hear each instrument clearly and distinctly. Yet judging by a lot of project studio demos that pass across my desk, achieving the perfect mix is not easy…in fact, it's very hard. So, here are some tips on how to get that wide open sound whenever you mix. The Gear: Keep It Clean Eliminate as many active stages as possible between source and recorder. Many times, devices set to "bypass" may not be adding any effect but are still in the signal path, which can add some slight degradation. How many times do line level signals go through preamps due to lazy engineering? If possible, send sounds directly into the recorder—bypass the mixer altogether. For mic signals, use an ultra-high quality outboard preamp and patch that directly into the recorder rather than use a mixer with its onboard preamps. Although you may not hear much of a difference when monitoring a single instrument if you go directly into the recorder, with multiple tracks the cumulative effect of stripping the signal path to its essentials can make a significant difference in the sound's clarity. But what if you're after a funky, dirty sound? Just remember that if you record with the highest possible fidelity, you can always mess with the signal later on during mixdown. The Arrangement Before you even think about turning any knobs, scrutinize the arrangement. Solo project arrangements are particularly prone to "clutter" because as you lay down the early tracks, there's a tendency to overplay to fill up all that empty space. As the arrangement progresses, there's not a lot of space left for overdubs. Here are some suggestions when tracking: Once the arrangement is fleshed out, go back and recut tracks that you cut earlier on. Try to play these tracks as sparsely as possible to leave room for the overdubs you've added. Like many others, I write in the studio, and often the song will have a slightly tentative feel because it wasn't totally solid prior to recording it. Recutting a few judicious tracks always seems to both simplify and improve the music. Try building a song around the vocalist or other lead instrument instead of completing the rhythm section and then laying down the vocals. I often find it better to record simple "placemarkers" for the drums, bass, and rhythm guitar (or piano, or whatever), then immediately get to work cutting the best possible vocal. When you re-record the rhythm section for real, you'll be a lot more sensitive to the vocal nuances. As Sun Ra once said, "Space is the place." The less music you play, the more weight each note has, and the more spaciousness this creates in the overall sound. Proofing the Tracks Before mixing, listen to each track in isolation and check for switch clicks, glitches, pops, and the like, then kill them. These low-level glitches may not seem that important, but multiply them by a couple dozen tracks, and they can definitely muddy things up. If you don't want to get too heavily into editing, you can do simple fixes by punching in and out over the part to be erased. 
DAWs may or may not have sophisticated enough editing options to solve particular problems; for example, they'll probably let you cut and paste, but if something like sophisticated noise reduction is not available in a plug-in, this may require opening the track in a digital audio editing program, applying the appropriate processing, then bringing the track back into the DAW. Also note that some recording programs can "link" to a particular digital audio editor. In this case, all you may need to do is, for example, double-click on a track, and you're ready to edit. Equalization The audio spectrum has only so much space, and you need to make sure that each sound occupies its own turf without fighting with other parts. This is one of the jobs of EQ. For example, if a rhythm instrument interferes with a lead instrument, reduce the rhythm instrument's response in the part of the spectrum that overlaps the lead. One common mistake I hear with recordings done by singer/songwriters is that they (naturally) feature themselves in the mix, and worry about "details" like the drums later. However, as drums cover so much of the audio spectrum (from the low-frequency thud of the kick to the high-frequency sheen of the cymbals), and because drums tend to be so upfront in today's mixes, it's usually best to mix the drums first, then find "holes" in which you can place the other instruments. For example, if the kick drum is very prominent, it may not leave enough room for the bass. So, boost the bass at around 800 to 1,000 Hz to bring up some of the pick noise and brightness. This is mostly out of the range of the kick drum, so the two won't interfere as much. Try to think of the song as a spectrum, and decide where you want the various parts to sit, and their prominence (see Fig. 1). I often use a spectrum analyzer when mixing, not because your ears don't work well enough for the task, but because it provides invaluable ear training and shows exactly which instruments take up which parts of the audio spectrum. This can often alert you to a buildup of excessive level in a particular region. Fig. 1: Different instruments sit in different portions of the spectrum (of course, this depends on lots of factors, and this illustration is only a rough approximation). Use EQ to distribute the energy from various instruments so that they use the full spectrum rather than bunch up in one specific range. If you really need a sound to "break through" a mix, try a little bit of boost in the 1 to 3 kHz region. Just don't do this with all the instruments; the idea is to use boosts and cuts to differentiate one instrument from another. To place a sound further back in the mix, sometimes switching in a high-cut filter will do the job by "dulling" the sound somewhat—you may not even need to switch in the main EQ. Also, using a low-cut (high-pass) filter on instruments that veer toward the bass range, like guitar and piano, can help trim their low end to open up more space for the all-important bass and kick drum. Compression When looking for the biggest mix, compression can actually make things sound "smaller" (but louder) by squeezing the dynamic range. If you're going to use compression, try applying compression on a per-channel basis rather than on the entire mix. Compression is a whole other subject (check out the article Compressors Demystified), but suffice it to say that many people have a tendency to compress until they can "hear the effect." 
You want to avoid this; use the minimum amount of compression needed to tame unruly dynamic range. If you do end up compressing the stereo two-track, here's a tip to avoid getting an overly squeezed sound: Mix in some of the straight, non-compressed signal. This helps restore a bit of the dynamics, yet you still have the thick, compressed sound taking up most of the available dynamic range (the sketch following this article shows the idea in miniature). Mastering Mastering is the Supreme Court of audio—if you can't get a ruling in your favor there, you have nowhere else to go. A pro mastering engineer can often turn muddy, tubby-sounding recordings into something much clearer and more defined. Just don't expect miracles, because no one can squeeze blood from a stone. But a good mastering job might be just the thing to take your mix to the next level, or at least turn a marginal mix into a solid one. The main point of this article is that there is no button you can click on that says "press here for wide open mixes." A good mix is the cumulative result of taking lots of little steps, such as the ones detailed above, until they add up to something that really works. Paying attention to detail does indeed help. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
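Since the "mix in some of the straight signal" tip is really just a weighted sum, here's a minimal NumPy sketch of the idea. The hard clip standing in for a heavy compressor and the 60/40 blend are arbitrary illustrations, not recommended settings.

```python
import numpy as np

# Parallel blend: sum the dry signal with a heavily squashed copy.
def blend(dry, squashed, dry_amount=0.6, wet_amount=0.4):
    return dry_amount * dry + wet_amount * squashed

sr = 44100
t = np.arange(sr) / sr
# Amplitude-varying test tone standing in for a mix with wide dynamics
dry = np.sin(2 * np.pi * 3 * t) * np.sin(2 * np.pi * 220 * t)

# Crude stand-in for heavy compression: hard clip, then make-up gain
squashed = np.clip(dry, -0.3, 0.3) / 0.3

out = blend(dry, squashed)
print("dry peak:", round(float(np.abs(dry).max()), 2),
      " blended peak:", round(float(np.abs(out).max()), 2))
```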
  21. Don't Miss Out on the Next Big Thing in Guitar Distortion By Craig Anderton If you're a guitarist and you're not into multiband distortion...well, you should be. Just as multiband compression delivers a smoother, more transparent form of dynamics control, multiband distortion delivers a "dirty" sound like no other. Not only does it give a smoother effect with guitar, it's a useful tool for drums, bass, and believe it or not, program material – some people (you know who you are!) have even used it with mastering to add a distinctive "edge." As far as I know, the first example of multiband distortion was a do-it-yourself project, the Quadrafuzz, that I wrote up in the mid-'80s for Guitar Player magazine. It remains available from PAiA Electronics (www.paia.com), and is described in the book "Do It Yourself Projects for Guitarists" (BackBeat Books, ISBN #0-87930-359-X). I came up with the idea because I had heard hex fuzz effects with MIDI guitar, where each string was distorted individually, and liked the sound. But it was almost too clean, and I wasn't a fan of all the intermodulation problems with conventional distortion, either. Multiband distortion was the answer. However, we've come a long way since the mid-'80s, and now there are a number of ways to achieve this effect with software. HOW IT WORKS Like multiband compression, the first step is to split the incoming signal into multiple frequency bands (typically three or four). These usually have variable crossover points, so each band can cover a variable frequency range. This is particularly important with drums, as it's common to have the low band zero in on the kick and distort it a bit, while leaving higher frequencies (cymbals etc.) untouched. Then, each band is distorted individually (incidentally, this is where major differences show up among units). Finally, each band will usually have a volume control so you can adjust the relative levels among bands. For example, it's common to pull back on the highs a bit to avoid "screech," or boost the upper midrange so the guitar "speaks" a little better. With guitar, you can hit a power chord and the low strings will have minimal intermodulation with the high strings, or bend a chord's higher strings without causing beating with the lower ones. SOFTWARE PLUG-INS The first multiband distortion plug-in was a virtual version of the Quadrafuzz, coded as a VST/DX plug-in by Spectral Design for Steinberg. Although I was highly skeptical that software could truly emulate the sound of the hardware design, fortunately a guitarist was on the design team, and he nailed the sound. The Quadrafuzz was included with Cubase SX, and is currently available from Steinberg as a "legacy" plug-in. But they took it further than the hardware version, offering variable frequency bands (the hardware version is "tuned" specifically for guitar), as well as five different distortion curves for each band, from heavy clipping to a sort of "soft knee" distortion. As a result, it's far more versatile than the original version. A free plug-in, mda's Bandisto, is basic but a fine way to get started. It offers three bands, with two variable crossover points, and distortion as well as level controls for each of the three bands. There are two distortion modes, unipolar (a harsh sound) and bipolar, which clips both sides of the waveform and gives a smoother overall effect. While the least sophisticated of these plug-ins, you can't beat the price. Bandisto is as good a way as any to get familiar with multiband distortion. 
Ohm Force's Predatohm provides up to four bands, each of which includes four controls to change the distortion's tonality as well as the channel's overall tone and character. Unique to Predatohm is a feedback option that can add an extremely aggressive edge (it's all over my "Turbulent Filth Monsters" sample CD of hardcore drum loops), as well as a master tone section. Wild, wacky, and wonderful, this plug-in has some serious attitude. Under its spell, even nylon-string guitars can become hardcore dirt machines. iZotope's Trash uses multiband distortion as just one element of a comprehensive plug-in that also incorporates pre- and post-distortion filtering, amp cabinet modeling, multi-band compression, and delay. The number of bands is variable from one to four, but each band can have any one of 47 different algorithms. Also, there are two distortion stages, so you can emulate (for example) a fuzzbox going into an overdriven amp (however, the bands are identical for each of the two stages). The pre- and post-distortion filter options are particularly useful for shaping the distortion's tonal quality. This doesn't just make trashy sounds, it revels in them. Sophisticated trash may be an oxymoron, but in this case, it's appropriate due to the complement of highly capable modules. ROLLING YOUR OWN You're not constrained to dedicated plug-ins. For example, Native Instruments' Guitar Rig has enough options to let you create your own multiband distortion. A Crossover module allows splitting a signal into two bands; placing a Split module before two Crossover modules gives the required four bands. Of course, you can go nuts with more splits and create more bands. You can then apply a variety of amp and/or distortion modules to each frequency split. Yet another option is to copy a track in your DAW as many times as you want bands of distortion. For each track, insert the filter and distortion plug-ins of your choice. One advantage of this approach is that each band can have its own aux send controls, as well as panning. Spreading the various bands from left to right (or all around you, for surround fans!) adds yet another level of satisfying mayhem. In terms of filtering, the simplest way to split a signal into multiple bands is to use a multiband compressor, but set to no compression and with individual bands soloed (most multiband compressors will let you solo or bypass individual bands). For example, with three cloned tracks you could have the high, middle, and low bands each feeding their own distortion plug-in. Here a guitar track has been "cloned" three times in Cakewalk Sonar, with each instance feeding a multiband crossover followed by an amp sim plug-in (Native Instruments' Guitar Rig). The multiband compressors have been edited to act as crossovers, thus feeding different frequency ranges to the amp sims. AND BEST OF ALL... Thanks to today's fast computers, sound cards, and drivers, you can play guitar through plug-ins in near-real time, so you can tweak away while playing crunchy power chords that rattle the walls. Happy distorting! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
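To make the split/distort/sum recipe concrete, here's a minimal Python/SciPy sketch of a three-band distortion. The crossover frequencies, drive amounts, and band levels are arbitrary starting points, and plain Butterworth filters stand in for a proper crossover; it illustrates the technique, not any of the plug-ins above.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_sos(kind, freqs, sr):
    return butter(4, freqs, btype=kind, fs=sr, output="sos")

def multiband_distort(x, sr, xover=(200.0, 2000.0),
                      drives=(2.0, 6.0, 4.0), levels=(1.0, 0.8, 0.5)):
    # 1. Split the signal into low, mid, and high bands
    lo = sosfilt(band_sos("lowpass", xover[0], sr), x)
    mid = sosfilt(band_sos("bandpass", xover, sr), x)
    hi = sosfilt(band_sos("highpass", xover[1], sr), x)
    # 2. Distort each band individually, then 3. sum with level controls
    out = np.zeros_like(x)
    for band, drive, level in zip((lo, mid, hi), drives, levels):
        out += level * np.tanh(drive * band)   # soft clip per band
    return out

sr = 44100
t = np.arange(sr) / sr
guitar = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)
print("output peak:", float(np.abs(multiband_distort(guitar, sr)).max()))
```

Because each band is clipped on its own, the 110 Hz and 1320 Hz components intermodulate far less than they would through a single distortion stage, which is the whole point of the technique.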
  22. Before you play with your new gear, make sure you keep a record of its vital stats by Craig Anderton When you buy a piece of gear, of course the first thing you want to do is have fun with it! But think about the future: At some point, it's going to need repairs, or you might want to sell it, or it might (and I sure hope it doesn't) get stolen. As a result, it's a good idea to plan ahead and do the following. 1. Buy some kind of storage system for saving all the various things that come packed with the gear. This includes rack ears you might use someday if you rack mount it, the owner's manual or a CD-ROM containing any documentation, any supplementary "read me" pieces of paper, that audio or MIDI adapter you don't think you'll use but you'll need someday, and the like. For storage, I use stackable sets of plastic drawers you can buy inexpensively just about anywhere; for gear that comes only with paper and no bulky accessories, I have files in a filing cabinet packed with manuals and such. A more modern solution for downloadable files is to have a “manual bookshelf” on your iPad. 2. Register your purchase. Sometimes it's a hassle to do this, but it's important to establish a record for warranty work. For software, it can mean the difference between paying for an upgrade and getting one for free if a new version comes out shortly after you purchased the program. I always check the "Keep me notified of updates" box if available; sure, you'll get some commercial offers and such, but you'll also be among the first to find out that an update is available. 3. Record any serial numbers, authorization codes, etc. Also record your user name and password for the company's web site; with software, that's often what you need to access downloads and upgrades. Also record when and where you purchased the gear, and how much you paid. I keep all this information on my computer, and copy it to a USB stick periodically as backup. 4. For software, retain all firmware and software updates. If you ever have to re-install a program, it may not be possible to upgrade from, say, Version 1 to Version 3—you may need to go through Version 2 first. I keep all upgrades on a data drive in my computer, and backed up to an external hard drive. With all this info at your fingertips, if you ever go to sell the gear, you'll be very glad you had these records. What's more, if any problems crop up with your gear, you'll be well-prepared to deal with them. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  23. Exploring the Art of Filthy Signal Mutation by Craig Anderton I like music with a distinctly electronic edge, but also want a human "feel." Trying to resolve these seemingly contradictory ideals has led to some fun experimentation, but one of the more recent "happy accidents" was finding out what happens when you apply heavy signal processing to multitracked drums played by a human drummer. I ended up with a sound that slid into electronic tracks as easily as a debit card slides into an ATM machine, yet with a totally human feel. This came about because Discrete Drums, who make rock-oriented sample libraries of multitracked drums (tracks are kick, snare, stereo toms, stereo room mic tracks, and stereo room ambience), received requests for a more extreme library for hip-hop/dance music. I had already started using their CDs for this purpose, and when I played some examples of loops I had done, they asked whether I'd like to do a remixed sample CD with stereo loops. Thus, the "Turbulent Filth Monsters" project was born, which eventually became a sample library (originally distributed by M-Audio, and now by Sonoma Wire Works). Although I used the Discrete Drums sample library CDs and computer-based plug-ins, the following techniques also apply to hardware processors used in conjunction with drum machines that have individual outs, or multitracked drums recorded on a multitrack recorder (or sample CD tracks bounced over to a multitrack). Try some of these techniques, and you'll create drum sounds that are as unique as a fingerprint - even if they came from a sample CD. EFFECTS AUTOMATION AND REAL TIME CONTROL Editing parameters in real time lets you "play" an effect along with the beat. This is a good thing. However, it's unlikely that you'll be able to vary several parameters at once while mixing the track down to a loop, so you'll want to record these changes as automation. Hardware signal processors can often accept MIDI controllers for automation. If so, you can sync a sequencer up to whatever is playing the tracks. Then, deploy a MIDI control surface (like the Mackie Control, Novation Nocturn, etc.) to record control data into the sequencer. Once in the sequencer, edit the controller data if needed. If the processor cannot accept control signals, then you'll need to make these changes in real time. If you can do this as you mix, fine. Otherwise, bounce the processed signal to another track so it contains the changes you want. Software plug-ins for DAWs are a whole other matter, as there are several possible automation scenarios: Use a MIDI control surface to alter parameters, while recording the data to a MIDI track (hopefully this will drive the effect on playback). Twiddle the plug-in's virtual knobs in real time, and record those changes within the host program. Use non-real time automation envelopes. Record data that takes the form of envelopes, which you can then edit. Use no automation at all. In this case, you can send the output through a mixer and bounce it to another track while varying the parameter. This can require a little after-the-fact trimming to compensate for latency issues (i.e., delay caused by going through the mixer then returning back into the computer). For example, with VST Automation (Fig. 1), a plug-in will have Read and Write Automation buttons. Fig. 1: Click on the Write Automation button with a VST plug-in, and when you play or record, tweaking controls will write automation into your project. 
If you click on the Write Automation button, any changes you make to automatable parameters will be written into your project. This happens regardless of whether the DAW is in record or playback mode. PARALLEL EFFECTS In many cases, you want any effects to be in parallel with the main drum sound. For example, if you put ring modulation or wah-wah on a kick drum, you'll lose the essential "thud" that fills out the bottom. With a hard disk recorder, parallel effects are easy to do: Copy the track and add the effects to the copy (Fig. 2). Fig. 2: Ring Thing, a free download from DLM, is processing a copy of the drum track. The processed track is mixed in with the original drum track at a lower level. With a hardware mixer, it's also not hard to do parallel processing because you can split the channel to be processed into two mixer inputs, and insert the effect into one of the input channel strips. THESE ARE A FEW OF MY FAVORITE FX Okay, we're set up for real time control and are playing back some drum tracks. Here are some of my favorite nasty drum processors. Ring Modulator. A ring modulator has two inputs, for a carrier and modulator. The output provides the sum and difference of the two signals while suppressing the originals. For example, if you feed in a 400 Hz carrier and 1 kHz modulator, the output will consist of a 600 Hz and 1.4 kHz tone mixed together (the sketch at the end of this article confirms the arithmetic). Most plug-in ring modulators dedicate the carrier input to an oscillator that's part of the plug-in, with the track providing the modulator input. A hardware ring modulator - if you can find one - may include a built-in carrier waveform, or have two "open" inputs where you can plug in anything you want. The ring modulator produces a "clangorous," metallic, inharmonic sound (sounds good already, eh?). I like to use it mostly as a parallel effect on toms and kick; a snare signal or room sounds are complex enough that adding further complexity usually doesn't help. Having a steady carrier tone can get pretty annoying (although it has its uses for electro-type music), so I like to vary the frequency in real time. Envelope followers and LFOs - particularly tempo-synched LFOs - are good choices, although you can always tweak the frequency manually. With higher frequencies, the sound becomes kind of toy-like; lower frequencies can give more power if you zero in on the right frequency range. Envelope-Controlled Filter. This is another favorite for individual drum sounds. Again, you'll probably want to run this in parallel unless you seek a thinner sound. High resonance settings make the sound more "dinky," whereas low resonance can give more "thud" and depth. For hardware, you'll likely need a stomp box, where envelope-controlled filters are plentiful (the Boss stomp boxes remain a favorite, although if you can find an old Mutron III or Funk Machine, those work too). For plug-ins, many guitar amp sims have something suitable (e.g., the Wah Wah module in Waves GTR Solo; see Fig. 3). Fig. 3: This preset for Waves GTR Solo adds funkified wah effects to drum tracks. The Delay adds synched echoes, the Amp module adds some grit, and the Compressor at the output keeps levels under control. I also like using the wah effect in IK Multimedia's AmpliTube 2 guitar amp plug-in, which is also great for... Distortion. Adding a little bit of grit to a kick drum can make it punch through a track, but I've also added heavy distortion to the room mic sound while keeping the rest of the drums clean. 
This "muddies up" the sound in an extremely rude way, yet the clean sounds running in parallel keep it from becoming a hopeless mess. Distortion doesn't do much for snares, which are already pretty dirty anyway. But it can increase the snare's apparent decay by bringing up the low-level decay at the end. Guitar amp distortion seems particularly useful because of the reduced high end, which keeps the sound from getting too "buzzy," and low end rolloff, which avoids muddiness. Guitar amp plug-ins really shine here as well; I particularly like iZotope's Trash (Fig. 4), as it's a multiband (up to four bands) distortion unit. Fig. 4: In this preset, iZotope's Trash is set up to deliver three bands of distortion. This means you can go heavy on, say, lower midrange distortion, while sprinkling only a tiny bit of dirt on the high end. It's also good for mixed loops because multiband operation prevents excessive intermodulation distortion. Feedback. And you thought this technique was just for guitarists...actually, there are a couple ways to make drums feed back. For hardware, one technique is to send an aux bus out to a graphic equalizer, then bring the graphic EQ back into the channel, and turn up the channel's aux send so some signal goes back into the EQ. Playing with individual sliders can cause feedback in the selected frequency range, but this requires a really light touch - it's easy to get speaker-busting runaway feedback. Adding a limiter in series with the EQ is a good idea. My favorite feedback technique uses the Ohm Force Predatohm plug-in, which was already shown in Fig. 1. This is a multiband distortion/compression plug-in with feedback frequency and amount controls. But the killer feature is that all parameters are automatable. You can tweak the amount control rhythmically to give a taste of feedback before it retreats. Similarly, you can alter the frequency with amount set fairly high. As the frequency sweeps through a range where there's lots of audio energy, feedback will kick in - but as it sweeps past this point, the feedback disappears. LET'S NOT FORGET THE TRULY WEIRD A vocoder (Fig. 5) is a great processor for drums, as there are several possible ways to use it. Fig. 5: The Vocoder in Ableton Live. In this example, drums are modulating a guitar's power chord. You have several choices of carriers for the vocoder (circled in green), including internal noise, the modulator (so the modulator signal feeds both the modulator and carrier ins), or pitch tracking, where the carrier is a monophonic oscillator that tracks the modulator signal's pitch. One is to use the room ambience as the carrier, and a submix of the kick, snare, and toms as the modulator. As the drums hit, they bring in sections of the ambience, which if you've been paying attention so far, is probably being run through some weird effect of its own. Another trick I did was bring in an ambience track from a different drum part and modulate that instead. You can also use the drums to "drumcode" something like a bunch of sawtooth waves, a guitar power chord, whatever. These sounds then lose their identities and become an extension of the drums. Both hardware and software vocoders are fairly common. 
Generally the most whacked-out processors come in plug-in form, such as the GRM Tools series, the entire Ohm Force line (their Hematohm frequency shifter is awesome with drums), Waves' tasty modulation effects like the Enigma and MondoMod, PSP's Vintage Warmer (a superb general-purpose distortion device), and too many others to mention here - go online, and download some demos. Also, let's not forget some of those old friends that can learn new tricks, like flanger, chorus, pitch shifters, and delay - extreme amounts of modulation or swept delays can go beyond their stereotyped functions. Emagic's Logic is also rich in plug-ins, many of which can be subverted into creating filthy effects. The possibilities they open up are so mind-boggling I get tingly all over just thinking about it. SO WHAT'S THE PAYOFF? Drum loops played by a superb human drummer, with all those wonderful little timing nuances that are the reason drum machines have not taken over the world, will give your tracks a "feel" that you just can't get with drum machines. But if you add on really creative processing, the sounds will be so electronified that they'll fit in perfectly with more radical instruments: synths, highly processed vocals, and technoid guitar effects. So, get creative - you'll have a good time doing it, and your recordings won't sound like a million others. What good are all these great new toys if you don't exploit them? Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
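To close the loop on the ring modulator section earlier in this article, here's a minimal NumPy sketch confirming the sum-and-difference arithmetic with the same example frequencies (400 Hz carrier, 1 kHz modulator).

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
carrier = np.sin(2 * np.pi * 400 * t)
modulator = np.sin(2 * np.pi * 1000 * t)

ring = carrier * modulator        # ring modulation is just multiplication

# Inspect the spectrum: only the difference (600 Hz) and sum (1400 Hz)
# components remain; the 400 Hz and 1000 Hz originals are suppressed.
spectrum = np.abs(np.fft.rfft(ring))
freqs = np.fft.rfftfreq(len(ring), 1 / sr)
peaks = freqs[spectrum > spectrum.max() * 0.5]
print("strong components near:", peaks.round(), "Hz")   # ~600 and ~1400
```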
  24. Optimize Your Reverberant Space for the Best Possible Sound By Craig Anderton There's nothing like the sound of real reverb, such as what you hear in a cathedral or symphonic hall. That's because reverb is made up of a virtually infinite number of waves bouncing around within a space, with ever-changing decay times and frequency responses. For a digital reverb to synthesize this level of complexity is a daunting task, but the quality and realism of digital reverb continues to improve. Today's reverbs come in two flavors: convolution and synthetic (also called algorithmic). A convolution reverb is sort of like the reverb equivalent of a sampling keyboard, as it's based on capturing a sonic "fingerprint" of a space (called an impulse), and applying that fingerprint to a sound. Convolution reverbs are excellent at re-creating the sound of a specific acoustical space. Synthetic reverbs model a space via reverberation algorithms. These algorithms basically set up "what if" situations: what would a reverb tail sound like if it was in a certain type of room of a certain size, with a certain percentage of reflective surfaces, and so on. You can change the reverb sound merely by plugging in some different numbers—for example, by deciding the room is 50 feet square instead of 200 feet square. Even though digital synthetic reverbs don't sound exactly like an acoustic space, they do offer some powerful advantages. First, an acoustic space has one "preset"; a digital reverb offers several. Second, digital reverb is highly customizable. Not only can you use this ability to create a more realistic ambience, you can create some unrealistic—but provocative—ambiences as well. However, the only way to unlock the true power of digital reverb is to understand how its parameters affect the sound. Sure, you can just call up a preset and hope for the best. But if you want world-class reverb, you need to tweak it for the best possible match to the source material. By the way, although we'll concentrate on the parameters found in synthetic reverbs, many convolution reverbs have similar parameters. REVERB PARAMETERS The reverb effect has two main elements: The early reflections (also called initial reflections) consist of the first group of echoes that occur when sound waves hit walls, ceilings, etc. (The time before these sound waves actually hit anything is called pre-delay.) These reflections tend to be more defined and sound more like "echo" than "reverb." The decay, which is the sound created by these waves as they continue to bounce around a space. This "wash" of sound is what most people associate with reverb. Following are the types of parameters you'll find on higher-end reverbs. Lower-cost models will likely have a subset of these. Room size. This affects whether the paths the waves take while bouncing around in the virtual room are long or short. If the reverb sound has flutter (a periodic warbling effect that sounds very unrealistic), vary this parameter in conjunction with decay time (described next) for a smoother sound. Decay time. This determines how long it takes for the reflections to run out of energy. Remember that long reverb times may sound impressive on instruments when soloed, but rarely work in an ensemble context (unless the arrangement is very sparse). Decay time and room size tend to have certain "magic" settings that work well together. Preset reverbs lock in these settings so you can't make a mistake. 
For example, it can sound "wrong" to have a large room size and short decay time, or vice-versa. Having said that, though, sometimes those "wrong" settings can produce some cool effects, particularly with synthetic music where the goal isn't necessarily to create the most realistic sound. Damping. If sounds bounce around in a hall with hard surfaces, the reverb's decay tails will be bright and more defined. With softer surfaces (e.g., wood instead of concrete, or a hall packed with people), the reverb tails will lose high frequencies as they bounce around, producing a warmer sound with less "edge." A processor has a tougher time making accurate calculations for high frequency sounds, so if your reverb produces an artificial-sounding high end, just concede that fact and introduce some damping to create a warmer sound. High and low frequency attenuation. These parameters restrict the frequencies going into the reverb. If your reverb sounds metallic, try reducing the highs starting at 4—8kHz. Remember, many of the great-sounding plate reverbs didn't have much response over 5kHz, so don't fret too much about a reverb that can't do great high frequency sizzle. Having too many lows going through the reverb can produce a muddy, indistinct sound that takes focus away from the kick and bass. Try attenuating from 100—200Hz on down for a tighter low end. Early reflections diffusion (sometimes just called diffusion). This is one of the most critical reverb controls for creating an effect that properly matches the source material. Increasing diffusion pushes the early reflections closer together, which thickens the sound. Reducing diffusion produces a sound that tends more toward individual echoes. For percussive instruments, you generally want lots of diffusion to avoid the "marbles bouncing on a steel plate" effect caused by too many discrete echoes. However, for vocals and other sustained sounds, reduced diffusion can give a beautiful reverberant effect that doesn't overpower the source. With too much diffusion, the voice may lose clarity. Note that there may be a second diffusion control for the reverb decay. With less versatile reverbs, both diffusion parameters may be combined into a single control. Early reflections pre-delay. It takes a few milliseconds before sounds hit the room surfaces and start to produce reflections. This parameter, usually variable from 0 to 100ms or so, simulates this effect. Increase the parameter's duration to give the feeling of a bigger space; for example, if you've dialed in a large room size, you'll probably want to employ a reasonable amount of pre-delay. Reverb density. Lower densities give more space between the reverb's first reflections and subsequent reflections. Higher densities place these closer together. Generally, as with diffusion, I prefer higher densities on percussive content, and lower densities for vocals and sustained sounds. Early reflections level. This sets the early reflections level compared to the overall reverb decay. The object here is to balance them so that the early reflections are neither obvious, discrete echoes, nor masked by the decay. Lowering the early reflections level also places the listener further back in the room, and more toward the middle. High frequency decay and low frequency decay. Some reverbs have separate decay times for high and low frequencies. These frequencies may be fixed, or there may be an additional crossover parameter that sets the dividing line between the lows and highs. 
These controls have a huge effect on the overall reverb character. Increasing the low frequency decay creates a bigger, more "massive" sound. Increasing high frequency decay gives a more "ethereal" type of effect. An extended high frequency decay, which is generally not found in nature, can sound great on vocals as it adds more reverb to sibilants and fricatives, while minimizing reverb on plosives and lower vocal ranges. This avoids a "muddy" reverberant effect, and doesn't compete with the vocals. ONE REVERB OR MANY? I tend not to use a lot of reverb, and when I do, it's to simulate an acoustic space. Although some producers like putting different reverbs on different tracks, I prefer to insert reverb in an aux bus, and use different send amounts to place the sound source in the reverberant space (more send places the sound further back; less send places it more up front). For this type of "program material" application, I'll use fairly high diffusion coupled with a decent amount of high frequency damping. The only exceptions to this are when I want an "effect" on drums, like gated reverb, or need a separate reverb for the voice. Voices often benefit from a bright, plate-like effect with less diffusion and damping. In general I'll send some vocal into the room reverb and some into the "plate," then balance the two so that the vocal reverb blends well with the room sound. REALITY CHECK The most difficult task for a digital reverb is to create realistic first reflections. If you have a nearby space with hard surfaces like a tile bathroom, basement with hard concrete surfaces, or even just a room with a tiled floor, place a speaker in the room and feed it with an aux bus output. Then add a mic in the space to pick up the reflections. Blend in the real first reflections with the decay from a digital reverb, and the result often sounds a lot more like a real reverb chamber. DOUBLE YOUR (REVERB) PLEASURE I've yet to find a way to make a bad reverb plug-in sound good, but you can make a good reverb plug-in sound even better: "Double up" two instances of reverb (each on their own aux bus), set the parameters slightly differently to create a more "surrounding" stereo image instead of a point source, then pan one reverb somewhat more to the left and the other more to the right. You can even do this with two different reverbs. The difference is very subtle (it's best to listen with headphones), but as with most tweaks involving audio, these differences add up over the course of many tracks in a multitracked production. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
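For the algorithmically inclined, here's a minimal Python sketch of a single damped feedback comb filter, one of the simplest building blocks behind the decay time and damping parameters discussed above. It's an illustration of the underlying math, not any product's algorithm, and the delay, decay, and damping values are arbitrary.

```python
import numpy as np

# One feedback comb: feedback gain sets decay time; a one-pole
# low-pass inside the loop plays the role of damping.
def comb(x, sr, delay_ms=29.7, rt60=2.0, damping=0.3):
    d = int(sr * delay_ms / 1000)
    g = 10 ** (-3 * (delay_ms / 1000) / rt60)   # decay 60 dB in rt60 seconds
    buf = np.zeros(d)
    lp = 0.0                                    # low-pass state (damping)
    out = np.zeros_like(x)
    for n in range(len(x)):
        y = buf[n % d]
        lp = (1 - damping) * y + damping * lp   # each pass through gets duller
        buf[n % d] = x[n] + g * lp
        out[n] = y
    return out

sr = 44100
impulse = np.zeros(3 * sr)
impulse[0] = 1.0
tail = comb(impulse, sr)

d = int(sr * 29.7 / 1000)
for k in (1, 11, 22, 33):   # echoes at roughly 0.03 s, 0.33 s, 0.65 s, 0.98 s
    level_db = 20 * np.log10(abs(tail[k * d]) + 1e-12)
    print(f"echo at {k * d / sr:4.2f} s: {level_db:6.1f} dB")
```

Raise damping and the later echoes fall off faster (and duller); raise rt60 and the whole tail stretches out. A real algorithmic reverb runs many such combs (plus allpass filters) in parallel, with mutually prime delay times to avoid flutter.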
25. It's Like Viagra for Live Performance

by Craig Anderton

Jennifer Hudson did it while singing the national anthem at the Super Bowl. Kiss does it. Even classical musicians playing at the President's inaugural do it. Sometimes it seems everyone uses backing tracks to augment their live sound. So why not you?

Yes, it's sorta cheating. But somewhere between something innocuous like playing to a drum machine, and lip-synching to a pre-recorded vocal rather than singing yourself, there's a "sweet spot" where you can enhance what is essentially a live performance. A trio might sequence bass lines, for example, or a drummer might add pre-recorded ethnic percussion. However, you want something bullet-proof, easy to change on the fly if the audience's mood changes, and simple.

I SYNC, THEREFORE I AM

If a drummer's playing acoustic drums and a sequencer's doing bass parts, the drummer will have to follow the sequencer. But what happens if there's no bass to follow at the beginning of a song, or it drops out? The solution is in-ear monitors (besides, monitor wedges are so 20th century!). Assuming whatever's playing the backing part(s) has more than one output available, one channel can carry an accented metronome that feeds only the in-ear monitors, while the other channel contains the backing track (a sketch of how to render such a file appears later in this article). If there are only two outputs the backing track will have to be mono, but that doesn't matter too much for live performance.

BACKING TRACK OPTIONS

The simplest backup is something that plays in the background (e.g., drum machine, pre-recorded backing track on CD, iPod, MP3 player, etc.), and you play to it. RAM-based MP3 players are super-reliable: they don't care about vibration, don't need maintenance, and have no start-up time. However, you can get CD players with enough anti-skip memory to handle tough club environments (just don't forget to clean your CD player's lens if you play smoky clubs). Another advantage of a simple stereo playback device is potential redundancy: bringing a second CD/MP3 player as a backup is cheap, and swapping it in is easy.

The biggest drawback is musical rigidity. Want to take another eight bars in the solo? Forget it. A few drum machines give you some latitude (even the venerable Alesis SR-16 can switch between patterns and extend them), but with most players, what you put in is what you get out. To change song orders, just use track forward/backward to find the desired track. But the backing track player will always have to start off the song, or you'll need to hit Play at just the right time to bring it in.

These days, it's also possible to use machines designed specifically to play backing tracks, like the Boss JS-10 eBand (Fig. 1). This can play back WAV or MP3 files from an SD card (32GB will give you around 50 hours of playing time - perfect for Grateful Dead tribute bands). You can also create song files specific to the JS-10.

THE LAPTOP FACTOR

As many of the parts you'll use for backing tracks probably started life in a computer sequencer, it makes sense to use the computer for playback too. This is also the most flexible option; for example, if you sequence your backing track in Ableton Live (or most other hosts), you can change loop points on the fly and have a section repeat if you want to extend a solo (Fig. 2). Cool. It's also easy to mute or solo tracks for additional changes.

Fig. 2: Move Live's loop locators (the looped portion is shown in red for clarity) on the fly to repeat a portion of music.
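Here's the promised sketch of rendering a two-channel "gig file": an accented click on the left channel for the in-ear mix, and the mono backing track on the right channel for the house. It's a minimal Python illustration using only numpy and the standard wave module; the tempo, bar count, file name, and noise-burst "backing mix" are assumptions for the example - in practice you'd bounce the real mono mix out of your DAW.

import wave
import numpy as np

SR = 44100          # sample rate in Hz
BPM = 120           # assumed tempo
BARS = 8
BEATS_PER_BAR = 4

def click_track(bpm, bars, beats_per_bar):
    """Accented metronome: a higher-pitched click on every downbeat."""
    spb = int(SR * 60 / bpm)                 # samples per beat
    out = np.zeros(spb * bars * beats_per_bar)
    t = np.arange(int(0.01 * SR)) / SR       # 10ms click burst
    for beat in range(bars * beats_per_bar):
        freq = 1500 if beat % beats_per_bar == 0 else 1000
        out[beat * spb : beat * spb + len(t)] = 0.8 * np.sin(2 * np.pi * freq * t)
    return out

click = click_track(BPM, BARS, BEATS_PER_BAR)
backing = np.random.randn(len(click)) * 0.1  # stand-in for your mono mix

# Left channel = click (in-ears only), right channel = backing (to the PA):
stereo = np.stack([click, backing], axis=1)
pcm = (np.clip(stereo, -1.0, 1.0) * 32767).astype(np.int16)

with wave.open("gig_file.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)        # 16-bit
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())

At the gig, pan the player's left output to the drummer's in-ear feed and the right output to the front-of-house mix.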
As to reliability, though, computers can be scary. Few laptops are built to rock and roll specs, although there are exceptions. Connectors are flimsy, too; at the very least, build a breakout box with connectors that patch into your computer, then plug the cables that go to the outside world into the breakout box. Secure your laptop (and the breakout box) to your work surface, and tape down any cables so no one can snag them. On the plus side, the onboard battery will carry you through if the power is iffy, or if someone trips over the AC cord while passing out drunk. Not, of course, that something like that could ever happen at a live performance...

THE iPAD OPTION

For less rigorous needs, an iPad will take care of you. In fact, the SyncInside app ($8.99 from the App Store; see Fig. 3) lets you hook up a USB interface using the camera connector kit, and can output stereo tracks as well as a click through headphones (assuming your interface is up to the task).

Fig. 3: The SyncInside iPad app was designed specifically for playing backing tracks in live performance situations.

OneTrack is another iOS app for playing backing tracks, but it works with iPhone and iPod touch as well as the iPad. iOS solutions can also be convenient because nothing's better for live performance than redundancy: if you have an iPhone and an iPad, an app like OneTrack can live in both places, so if one device dies, you're still good to go.

THE SEQUENCER SOLUTION

A reliable, and very flexible, solution is the built-in sequencer in keyboard workstations (e.g., Roland Fantom, Yamaha Motif, Korg Kronos, etc.). If you're already playing keyboard, hitting a Play button is no big deal. You may also be able to break a song into smaller sequences, creating a "playlist" you can trigger on the fly to adapt to changes in the audience's mood; and with a multitrack sequence, you have the flexibility to mute and mix the various tracks if you want to get fancy (Fig. 4). What's more, as most workstation keyboards have separate outs, sending a separate click to headphones will probably be pretty simple.

Fig. 4: Yamaha's workstations have sophisticated sequencing options, as evidenced in this screen from the Motif XS.

Another option is arranger keyboards. Casio's WK-6500 isn't an arranger keyboard in the strictest sense, as it's also a pretty complete synthesizer workstation (Fig. 5).

Fig. 5: If you're looking for a keyboard-based backing track solution, arranger keyboards, and keyboards with auto-accompaniment like the Casio WK-6500, will often give you what you want.

However, it does include auto-accompaniment features and drum patterns with fills, endings, and so on. And with a 76-key keyboard, you can enhance your backing tracks with real playing. How's that for a concept? (The price is right, too - typically under $300.)

THE IMPORTANCE OF AN EXIT STRATEGY

With live backing tracks, always have an exit strategy. I once had a live act based around some, uh, unreliable gear, so I patched an MP3 player with several funny pieces of audio recorded on it into my mixer. (One piece was a "language lesson," set to music, that involved a word we can't mention here; another had a segment from the "How to Speak Hip" comedy album.) If something needed reloading, rebooting, or troubleshooting, I'd hit Play on the player. Believe me, anything beats dead air!

Craig Anderton is Editor Emeritus of Harmony Central.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.