Everything posted by Anderton

  1. Those two screws on the side of your pickup aren’t just there for decoration by Craig Anderton Spoiler alert: The correct answer is “it depends.” Pickup height trades off level, sustain, and attack transient, so you need to decide which characteristics you want to prioritize. I think we all have a sense that changing pickup height changes the sound, but I’d never taken the time to actually quantify these changes. So, I tested the neck and bridge humbucker pickups in a Gibson Les Paul Traditional Pro II 50s guitar, and tried two different pickup height settings. For the “close” position, the strings were 2mm away from the top of the pole pieces. In the “far” position, the distance was 4mm. I then recorded similar strums into Steinberg’s WaveLab digital audio editor; although it’s impossible to get every strum exactly the same, I did enough of them to see a pattern. The illustrations show the neck pickup results, because the bridge pickup results were similar. Fig. 1: This shows the raw signal output from three strums with the rhythm pickup close to the strings, then three strums with the pickup further away. It’s clear from Fig. 1 that the “close” position peak level is considerably higher than the “far” position—about 8 dB. So if what matters most is level and being able to hit an amp hard, then you want the pickups close to the strings. Fig. 2: The last three strums, with the pickups further from the strings, have a higher average level compared to the initial transient. Fig. 2 tells a different story. This screen shot shows what happens when you raise the peaks of the “far” strums (again, the second set of three) to the same peak level as the close strums, which is what would happen if you used a preamp to raise the signal level. The “far” strum initial transients aren’t as pronounced, so the waveform reaches the sustained part of the sound sooner. The waveform in the last three is “fatter” in the sense that there’s a higher average level; with the “close” waveforms, the average level drops off rapidly after the transient. Based on how the pickups react, if you want a higher average level that’s less percussive while keeping transients as much out of the picture as possible (for example, to avoid overloading the input of a digital effect), this would be your preferred option. Fig. 3 shows two chords ringing out, with the waveforms normalized to the same peak value and amplified equally in WaveLab so you can see the sustain more clearly. Fig. 3: The second waveform (pickups further from strings) maintains a higher average level during its sustain. With the “tail” of the second, “far” waveform, the sustain stays louder for longer. So, you do indeed get more sustain—not just a higher average level and less pronounced transients—if the pickup is further away from the strings. However, remember that the overall level is lower, so to benefit from the increased sustain, you’ll need to turn up your amp’s input control to compensate, or use a preamp. ADDITIONAL CONCLUSIONS The reduced transient response caused by the pickups being further away from the strings is helpful when feeding compressors, as large transients tend to “grab” the gain control mechanism to turn the signal down, which can create a “pop” as the compression kicks in. With the pickups further away, the compressor action is smoother, although again, you’ll need to increase the input level to compensate for the lower pickup output. 
Furthermore, amp sims generally don’t like transients as they consist more of “noise” than “tone,” so they don’t distort very elegantly. Reducing transients can give a less “harsh” sound at the beginning of a note or strum. So the end result is that if you’ve set your pickups close to the strings, try increasing the distance. You might find this gives you an overall more consistent sound, as well as better sustain. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
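If you want to run the same comparison on your own guitar, the rough sketch below (my own illustration, not part of the original test) reports peak level, average (RMS) level, and the difference between them for any strum you export as a WAV file. The file names are hypothetical, and it assumes the numpy and soundfile Python libraries.

```python
# Minimal sketch for quantifying the peak-vs-average tradeoff described above.
# File names are hypothetical; use any two mono WAV exports of a "close" and
# a "far" strum.
import numpy as np
import soundfile as sf

def level_stats(path):
    x, sr = sf.read(path)              # samples as floats in -1..+1
    if x.ndim > 1:
        x = x.mean(axis=1)             # fold stereo to mono
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    to_db = lambda v: 20 * np.log10(max(v, 1e-12))
    return to_db(peak), to_db(rms), to_db(peak) - to_db(rms)

for name in ("strum_close.wav", "strum_far.wav"):   # hypothetical files
    peak_db, rms_db, crest_db = level_stats(name)
    print(f"{name}: peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS, "
          f"peak-to-RMS {crest_db:.1f} dB")
```

A larger peak-to-RMS difference corresponds to the more percussive “close” character; a smaller difference corresponds to the fatter, more sustained “far” character.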
  2. Adding dirt doesn’t have to destroy dynamics by Craig Anderton Saturation can help beef up sounds, but I find that with percussive audio like drums, the “flattening” of the waveform and general muddiness can reduce dynamics dramatically. You can always try the “mix in some straight signal” option as you would with parallel compression, but then you start to compromise the saturated character. The following technique uses phase cancellation to help retain dynamics (this is something you'll just have to try to hear why it's cool). This works because saturation affects the highest level signals the most, which, of course, are the percussive peaks. The lower-level saturated signals are much more like the dry sound, so combining the saturated signal with the phase-flipped dry signal tends to cancel the lower-level signal while leaving the percussive peaks intact. Here’s the procedure; to get a feel for how this works, load a drum loop or other drum part. 1. Clone the audio from your primary track to create a secondary, identical track. Turn its level down for now. 2. Insert the saturation effect in your primary track. The saturation options within hosts vary considerably; if you’re using Sonar, try the Pro Channel’s Tube Distortion for this application (Fig. 1), because (depending on the source audio) being able to choose between the Type I and Type II saturation options can make a big difference in the overall effectiveness. However, this technique also works with the Softube Saturation knob, a variety of amp sims, and other signal “warmers” and tape saturation plug-ins when cranked up. Fig. 1: The original drum track (left) and copied drum track (right) are identical except that track 1 has saturation added, and track 2 is flipped out of phase (phase switch circled in red for clarity), with its fader providing the desired degree of dry signal cancellation. 3. Start playback, and adjust the saturation controls for the desired saturation character. Don’t worry about piling on the distortion—we’ll tighten it up. 4. Now flip the secondary channel’s phase, and turn up its fader. As the level gets closer to matching the first audio channel’s level, the individual drums will become more distinct. Note how the channel meter indicates a more dynamic signal. The greater the cancellation, the more the level will tend to drop. As it takes some tweaking to get just the right balance of phase-flipped to processed audio, it’s helpful to group the level controls for the two audio tracks so they track each other if you want to change the level. How to group faders varies from program to program, but it’s a common enough function it shouldn’t be too hard to figure out. You might want to increase the bass a bit to compensate for any thinness that occurs from the partial cancellation; distortion affects the high frequency content more, which means low frequencies will have more of a tendency to cancel. You might also find the highs excessive with some settings. The remedy to any of these issues is to send the track outs to a bus, then insert EQ in the bus to do any final tone shaping. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
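For readers who want to see the principle outside of a DAW, here is a minimal sketch of the idea. It assumes numpy and uses a simple tanh curve as a stand-in for whatever saturation plug-in you actually use; the drive and cancellation amounts are arbitrary illustration values, not settings from the article.

```python
import numpy as np

def saturate(x, drive=5.0):
    # tanh curve as a stand-in for the saturation plug-in; dividing by drive
    # keeps small-signal gain near unity, so quiet material stays close to dry
    return np.tanh(drive * x) / drive

def parallel_cancel(dry, drive=5.0, dry_level=0.9):
    wet = saturate(dry, drive)       # track 1: saturated copy
    return wet - dry_level * dry     # track 2: polarity-flipped dry, summed in

def crest_db(x):
    # peak-to-RMS ratio in dB: higher = more dynamic
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

# demo on a synthetic "drum hit": a sharply decaying burst of noise
sr = 44100
t = np.linspace(0, 0.5, int(sr * 0.5), endpoint=False)
hit = np.random.randn(t.size) * np.exp(-18 * t) * 0.9

print("saturated only:", round(crest_db(saturate(hit)), 1), "dB crest factor")
print("with cancel   :", round(crest_db(parallel_cancel(hit)), 1), "dB crest factor")
```

Because the quiet, barely saturated material nearly matches the dry signal, the flipped dry copy cancels it; the heavily clipped peaks don’t cancel, so the combined output is noticeably more dynamic than the saturated track alone.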
  3. Eliminate phase issues when recording acoustic guitar by Craig Anderton Using two mics on acoustic guitar is a common recording technique. Although it can give a good stereo image, there may be phase issues due to interaction between the two mics. Stereo miking also requires more setup time. Recording in mono with a single, high-quality condenser or ribbon mic eliminates phase problems but loses the stereo image. Fortunately, it’s possible to use equalization to create a stereo image from a mono signal. If done correctly, this can result in a spacious, big sound that’s particularly well-suited to solo guitar (especially nylon-string guitar). Furthermore, this lets you dedicate your budget to a single, high-quality mic—you don’t have to compromise on two lesser mics. (With condenser mics, a small-diaphragm type gives the best transient response for a tight, “present” sound; a large-diaphragm mic offers a somewhat “warmer” tone—given only one option, I’d choose the small-diaphragm type.) CREATING THE VIRTUAL MICS Start by copying the mono guitar track to two additional tracks. One track will provide the “finger noises/high frequencies” track. Solo it, and set its EQ for a high-pass filter response with a 24dB/octave slope and frequency around 1kHz. Pan this track right; after all, if you’re facing a guitarist the finger and fretting noises will be to the listener’s right (Fig. 1). Fig. 1: The three audio tracks are to the left, with the original at the top. The EQ to the immediate right shows the 24dB/octave lowpass filter response, while the EQ to the far right shows the 24dB/octave highpass filter response (unused EQ sections are grayed out for clarity). The Gloss button is a Sonar X-series feature that adds a little extra “sheen” to the highs. The second copied track is the “guitar body” track. Solo it and set its EQ response to lowpass, again with the slope to 24dB/octave, and frequency to about 400Hz. Pan this track left, as it emulates the guitar body’s “boom.” While monitoring all three tracks, pan the original track to center and bring up its level. The result should be a big, stereo guitar sound—but we’re not done yet. THE VIRTUAL COMBINATION LOCK Think of this technique as a combination lock, where everything has to line up just right for the lock to open. The level balance of the three tracks is crucial, as are the EQ frequencies. Experiment with the EQ settings, and consider trimming the ranges covered by the high and low tracks in the original track. For example, if the “body” track consists mostly of frequencies below 400Hz, trim frequencies below 400Hz from the original track to increase the separation. You might also want to trim the original track’s highs in the same range as the finger noises track. Then again if the image is too wide, pan the two copied tracks more to center. You may be pleasantly surprised to hear a stereo guitar with no phase issues—the sound is stronger, more consistent, and the stereo image is rock-solid. Give it a try next time you need to mic an acoustic guitar. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
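If you’d rather experiment offline before committing to a mixer setup, the following sketch approximates the three virtual tracks with fourth-order (24dB/octave) Butterworth filters. It assumes the scipy and soundfile Python libraries, a mono input file, and placeholder file names; the crossover frequencies and side level are starting points you’d tune by ear, just like the “combination lock” described above.

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

def virtual_stereo(mono, sr, hp_hz=1000, lp_hz=400, side_gain=0.7):
    # 4th-order Butterworth ~ 24 dB/octave, matching the slopes in the article
    hp = butter(4, hp_hz, btype="highpass", fs=sr, output="sos")
    lp = butter(4, lp_hz, btype="lowpass", fs=sr, output="sos")
    fingers = sosfilt(hp, mono)    # "finger noise" track, panned right
    body = sosfilt(lp, mono)       # "guitar body" track, panned left
    left = mono + side_gain * body
    right = mono + side_gain * fingers
    peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1e-9)
    return np.stack([left, right], axis=1) / peak   # normalize to avoid clipping

guitar, sr = sf.read("acoustic_mono.wav")           # hypothetical mono recording
sf.write("acoustic_virtual_stereo.wav", virtual_stereo(guitar, sr), sr)
```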
  4. Make quantization work for you, not against you by Craig Anderton Quantization was controversial enough when it was limited to MIDI, but now that you can quantize audio, it’s even more of an issue. Although some genres of music work well with quantization, excessive quantization can suck the human feel out of music. Some people take a “holier than thou” approach to quantization by saying it’s for musical morons who lack the chops to get something right in the first place. These people, of course, never use quantization...well, at least while no one’s looking. I feel quantization has its place; it’s the ticket to ultra-tight grooves, and a way to let you keep a first and inspired take, instead of having to play a part over and over again to get it right—and lose the human feel by beating a part to death. But like any tool, if misused, quantization can cause more harm than good by giving an overly rigid, non-musical quality to your work. TRUST YOUR FEELINGS, LUKE The first thing to remember is that computers make terrible music critics. Forcing music to fit the rhythmic criteria established by a machine is silly—it’s real people, with real emotions, who make and listen to music. To a computer, having every note hit exactly on the beat may be desirable, but that’s not the way humans work. There’s a fine line between “making a mistake” and “bending the rhythm to your will.” Quantization removes that fine line. Yes, it gets rid of the mistakes, but it also gets rid of the nuances. When sequencers first appeared, musicians would often compare the quantized and non-quantized versions of their playing. Invariably, after hearing the quantized version, the reaction would be a crestfallen “gee, I didn’t realize my timing was that bad.” But in many cases, the human was right, not the machine. I’ve played some solo lines where notes were off as much as 50 milliseconds from the beat, yet they sounded right. Rule #1: You dance; a computer doesn’t. You are therefore much more qualified than a computer to determine what rhythm sounds right. WHY QUANTIZATION SHOULD BE THE LAST THING YOU DO Some people quantize a track as soon as they’ve finished playing it. Don’t! In analyzing unquantized music, you’ll often find that every instrument of every track will tend to rush or lag the beat together. In other words, suppose you either consciously or unconsciously rush the tempo by playing the snare a bit ahead of the beat. As you record subsequent overdubs, these will be referenced to the offset snare, creating a unified feeling of rushing the tempo. If you quantize the snare part immediately after playing, then you will play to the quantized part, which will change the feel. Another possible trap occurs if you play a number of unquantized parts and find that some sound “off.” The expected solution would be to quantize the parts to the beat, yet the “wrong” parts may not be off compared to the absolute beat, but to a part that was purposely rushed or lagged. In the example given above of a slightly rushed snare part, you’d want to quantize your parts in relation to the snare, not a fixed beat. If you quantize to the beat, the rhythm will sound even more off, because some parts will be off with respect to absolute timing, while other parts will be off with respect to the relative timing of the snare hit. At this point, most musicians mistakenly quantize everything to the beat, destroying the feel of the piece. 
Rule #2: Don’t quantize until lots of parts are down and the relative—not absolute—rhythm of the piece has been established. SELECTIVE QUANTIZATION Often only a few parts of a track will need quantization, yet for convenience musicians tend to quantize an entire track, reasoning that it will fix the parts that sound wrong and not affect the parts that sound right. However, the parts that sound right may be consistent to a relative rhythm, not an absolute one. The best approach is to go through a piece, a few measures at a time, and quantize only those parts that are clearly in need of quantization. Very often, what’s needed is not quantization per se but merely shifting an offending note’s start time. Look at the other tracks and see if notes in that particular part of the tune tend to lead or lag the beat, and shift the start time accordingly. Rule #3: If it ain’t broke, don’t fix it. Quantize only the notes that are off enough to sound wrong. BELLS AND WHISTLES Modern-day quantization tools, whether for MIDI or audio, offer many options that make quantization more effective. One of the most useful is quantization strength, which moves a note closer to the absolute beat by a particular percentage. For example, if a note falls 10 milliseconds ahead of the beat, quantizing to 50% strength would place it 5 milliseconds ahead of the beat. This smooths out gross timing errors while retaining some of the original part’s feel (Fig. 1). Fig. 1: The upper window (from Cakewalk Sonar) shows standard Quantization options; note that Strength is set to 80%, and there's a bit of Swing. The lower window handles Groove Quantization, which can apply different feels by choosing a "groove" from a menu. Some programs offer “groove templates” (where you can set up a relative rhythm to which parts are quantized), or the option to quantize notes in one track to the notes in another track (which is great for locking bass and drum parts together). Rule #4: Study your recording software’s manual and learn how to use the more esoteric quantization options. EXPERIMENTS IN QUANTIZATION STRENGTH Here’s an experiment I like to conduct during sequencing seminars to get the point across about quantization strength. First, record an unquantized and somewhat sloppy drum part on one track. It should be obvious that the timing is off. Then copy it to another track, quantize it, and play just that track back; it should be obvious that the timing has been corrected. Then copy the original track again but quantize it to a certain strength—say, 50%. It will probably still sound unquantized. Now try increasing the strength percentage; at some point (typically in the 70% to 90% range), you’ll perceive it as quantized because it sounds right. Finally, play back that track along with the one quantized to 100% strength and check out the timing differences, as evidenced by lots of slapback echoes. If you now play the 100% strength track by itself, it will sound dull and artificial compared to the one quantized at a lesser strength. Rule #5: Correct rhythm is in the ear of the beholder, and a totally quantized track never seems to win out over a track quantized to a percentage of total quantization. REMEMBER, MIDI IS NOT AUDIO Quantizing a MIDI part will not affect fidelity, but quantizing audio will usually need to shift audio around and stretch it. Although digital audio stretching has made tremendous progress over the years in terms of not butchering digital audio, the process is not flawless. 
If significant amounts of quantization are involved, you’ll likely notice some degree of audio degradation; with lesser amounts of correction, you can usually get away with it. Rule #6: Like any type of correction, rhythmic correction is most transparent with signals that don’t need a lot of correction. Yes, quantization is a useful tool. But don’t use it indiscriminately, or your music may end up sounding mechanical—which is not a good thing unless, of course, you want it to sound mechanical! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
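As a footnote to the strength discussion, the math behind a strength setting is simple: move each note toward the nearest grid line by the strength percentage. This illustrative sketch (my own, in seconds, though ticks work the same way) reproduces the 10ms-to-5ms example above.

```python
def quantize_time(t, grid, strength=0.8):
    """Move time t toward the nearest grid line by `strength`
    (0 = untouched, 1 = hard-quantized)."""
    nearest = round(t / grid) * grid
    return t + strength * (nearest - t)

# example: a 16th-note grid at 120 BPM is 0.125 s; a note 10 ms ahead of the
# grid line at 0.5 s, quantized at 50% strength, lands 5 ms ahead of the beat
print(quantize_time(0.490, 0.125, strength=0.5))   # -> 0.495
```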
  5. Yes, this sounds insane...but try it By Craig Anderton Do you want better mixes? Of course you do—the mix, along with mastering, is what makes or breaks your music. Even the best tracks won’t come across if they’re not mixed correctly. Different people approach mixing differently, but I don’t think anyone has described something as whacked-out as what we’re going to cover in this article. Some people will read this and just shake their heads, but others will actually try the suggested technique, and craft tighter, punchier mixes without any kind of compression or other processing. THE MIXING PROBLEM What makes mixing so difficult is, unfortunately, a limitation of the human ear/brain combination. Our hearing can discern very small changes in pitch, but not level. You’ll easily hear a 3% pitch change as being distinctly out of tune, but a 3% level change is nowhere near as dramatic. Also, our ears have an incredibly wide dynamic range—much more than a CD, for example. So when we mix and use only the top 20-40 dB of average available dynamic range, even extreme musical dynamics don’t represent that much of a change for the ear’s total dynamic range. Another problem with mixing is that the ear’s frequency response changes at different levels. This is why small changes in volume are often perceived as tonal differences, and why it is so important to balance levels exactly when doing A-B comparisons. Because our ears hear low and high end signals better at higher levels, just a slight volume boost might produce a subjective feeling of greater “warmth” (from the additional low end) and “sparkle” (from the increased perception of treble). The reason why top mixing engineers are in such demand is because through years of practice, they’ve trained their ears to discriminate among tiny level and frequency response differences (and hopefully, taken care of their ears so they don’t suffer from their own frequency response problems). They are basically “juggling” the levels of multiple tracks, making sure that each one occupies its proper level with respect to the other tracks. Remember, a mix doesn’t compare levels to an absolute standard; all the tracks are interrelated. As an obvious example, the lead instruments usually have higher levels than the rhythm instruments. But there are much smaller hierarchies. Suppose you have a string pad part, and the same part delayed a bit to produce chorusing. To avoid having excessive peaking when the signals reach maximum amplitude at the same time, as well as better preserve any rhythmic “groove,” you’ll probably mix the delayed track around 6 dB behind the non-delayed track. The more tracks, the more intricate this juggling act becomes. However, there are certain essential elements of any mix—some instruments that just have to be there, and mixed fairly closely in level to one another because of their importance. Ensuring that these elements are clearly audible and perfectly balanced is, I believe, one of the most important qualities in creating a “transportable” mix (i.e., one that sounds good over a variety of systems). Perhaps the lovely high end of some bell won’t translate on a $29.95 boombox, but if the average listener can make out the vocals, leads, beat, and bass, you have the high points covered. Ironically, though, our ears are less sensitive to changes in relatively loud levels than to relatively soft ones. 
This is why some veteran mixers start work on a mix at low levels, not just to protect their hearing but because it makes it easier to tell if the important instruments are out of balance with respect to each other. At higher levels, differences in balance are harder to detect. ANOTHER ONE OF THOSE ACCIDENTS The following mixing technique is a way to check whether a song’s crucial elements are mixed with equal emphasis. Like many other techniques that ultimately turn out to be useful, this one was discovered by accident. At one point I had a home studio in Florida that didn’t have central air conditioning, and the in-wall air conditioner made a fair amount of background noise. One day, I noticed that the mixes I did when the air conditioner was on often sounded better than the ones I did when it was off. This seemed odd at first, until I made the connection with how many musicians use the “play the music in the car” test as the final arbiter of whether a mix is going to work or not. In both cases the background noise masks low-level signals, making it easier to tell which signals make it above the noise. Curious whether this phenomenon could be quantified further, I started injecting pink noise (Fig. 1) into the console while mixing. Fig. 1: Sound Forge can generate a variety of noise types, including pink noise. This just about forces you to listen at relatively low levels, because the noise is really obnoxious! But more importantly, the noise adds a sort of “cloud cover” over the music, and as mountain peaks poke out of a cloud cover, so do sonic peaks poke out of the noise. APPLYING THE TECHNIQUE You’ll want to add in the pink noise very sporadically during a mix, because the noise covers up high frequency sounds like hi-hat. You cannot get an accurate idea of the complete mix while you’re mixing with noise injected into the bus, but what you can do is make sure that all the important instruments are being heard properly. (Similarly, when listening in a car system, road noise will often mask lower frequencies.) Typically, I’ll take the mix to the point where I’m fairly satisfied with the sound. Then I’ll add in lots of noise—no less than 10 dB below 0 with dance mixes, for example, which typically have restricted dynamics anyway—and start analyzing. While listening through the song, I pay special attention to vocals, snare, kick, bass, and leads (with this much noise, you’re not going to hear much else in the song anyway). It’s very easy to adjust their relative levels, because there’s a limited range between overload on the high end, and dropping below the noise on the low end. If all the crucial sounds make it into that window and can be heard clearly above the noise without distorting, you have a head start toward an equal balance. Also note that the “noise test” can uncover problems. If you can hear a hi-hat or other minor part fairly high above the noise, it’s probably too loud. I’ll generally run through the song a few more times, carefully tweaking each track for the right relative balance. Then it’s time to take out the noise. First, it’s an incredible relief not to hear that annoying hiss! Second, you can now get to work balancing the supporting instruments so that they work well with the lead sounds you’ve tweaked. 
Although so far I’ve only mentioned instruments being above the noise floor, there are actually three distinct zones created by the noise: totally masked by the noise (inaudible), above the noise (clearly audible), and “melded,” where an instrument isn’t loud enough to stand out or soft enough to be masked, so it blends in with the noise. I find that mixing rhythm parts so that they sound melded can work if the noise is adjusted to a level suitable for the rhythm parts. FADING OUT Overall, I estimate spending only about 3% of my mixing time using the injected noise, and I don't use it at all for some mixes. But sometimes, especially with dense mixes, it’s the factor responsible for making the mix sound good over multiple systems. Mixing with noise may sound crazy, but give it a try. With a little practice, there are ways to make noise work for you. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
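If your editor can’t generate pink noise the way Sound Forge does, here is a rough sketch that renders a pink noise file you can import onto a spare track. It assumes the numpy and soundfile Python libraries, uses spectral shaping of white noise as the generation method (my choice, not the article’s), and the -10 dB level is just a starting point in the spirit of the dance-mix example above.

```python
import numpy as np
import soundfile as sf

def pink_noise(n, sr=44100):
    # shape white noise to a 1/f power spectrum (amplitude ~ 1/sqrt(f))
    white = np.random.randn(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    freqs[0] = freqs[1]                     # avoid divide-by-zero at DC
    spectrum /= np.sqrt(freqs)
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))      # normalize to full scale

level_db = -10.0                            # roughly the level suggested above
noise = pink_noise(30 * 44100) * (10 ** (level_db / 20))
sf.write("pink_noise_check.wav", noise, 44100)
```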
  6. It's not just a signal processor, but an audio interface you can aggregate with Mac and Windows By Craig Anderton DigiTech’s iPB-10 is best known as a live-performance multieffects pedal that you program with an iPad, but it’s also an excellent 44.1kHz/24-bit, USB 2.0 stereo audio interface for guitar. USING IT WITH THE MAC Core Audio is plug-and-play. Patch the iPB-10 USB output into an available Mac USB port. Now, select “DigiTech iPB-10 In/Out” as the input and output under Audio MIDI Setup. With my quad core Mac, the system played reliably with a buffer size of 64 samples in Digital Performer, and even at 45 samples with simple Ableton Live projects (Fig. 1)—that’s excellent performance. Fig. 1: Setting up the iPB-10 for Ableton Live with a 45-sample buffer size. USING IT WITH WINDOWS The driver isn’t ASIO, so in your host select WDM or one of its variants as the preferred driver mode (MME or DirectX drivers work, too, but latency is objectionable). With Sonar using WDM, the lowest obtainable latency was 441 samples. With WASAPI, it was 220 samples. Mixcraft 6 listed the lowest latency as 5ms (see Fig. 2; Mixcraft doesn’t indicate sample buffers). Fig. 2: Working as a Windows WaveRT (WASAPI) interface with Acoustica’s Mixcraft 6. I was surprised the iPB-10 drivers were compatible with multiple protocols but in any event, the performance equalled many dedicated audio interfaces. ZERO-LATENCY MONITORING A really cool feature is that under the iPB-10’s Settings, you can adjust the ratio of what you’re hearing from the DAW’s output via USB, and what’s coming from the iPB-10. If you monitor from the iPB-10, you essentially get zero-latency monitoring with effects, because you’re listening to the iPB-10 output—not monitoring through the computer. Typically, for this mode, you’d turn off the DAW track’s input echo (also called input monitor), and set the iPB-10 XLR Mix slider for 50% USB and 50% iPB-10. (If you’re monitoring from the 1/4” outs, choose the 1/4” Mix slider). Then, you’ll hear your DAW tracks from the USB side, and your guitar—with zero latency and any iPB-10 processing—from the iPB-10 side. If your computer is fast enough that latency isn’t an issue, then you can monitor solely via USB, and turn on your DAW’s input monitoring/input echo to monitor your guitar through the computer. This lets you hear the guitar through any plug-ins inserted into your guitar’s DAW track. THERE’S MORE! As the audio interfacing is class-compliant and doesn’t require installing drivers, with Core Audio or WDM/WASAPI/WaveRT drivers you can use more than one audio interface (called “aggregation”). So keep your go-to standard audio interface connected, but also use the iPB-10 for recording guitar. As long as your host supports one of the faster Windows audio protocols—or you’re using a recent Mac—I think you’ll be pleasantly surprised by the performance. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
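For reference, the buffer sizes quoted above translate to latency with simple arithmetic (time per buffer = buffer size ÷ sample rate). The figures below are one-way buffer times only; actual round-trip latency through the driver and converters will be somewhat higher.

```python
# buffer-size-to-latency arithmetic behind the figures quoted above
sample_rate = 44100
for buffer in (45, 64, 220, 441):
    print(f"{buffer:4d} samples -> {1000 * buffer / sample_rate:.1f} ms per buffer")
```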
  7. Take control of your sound and add expressiveness By Craig Anderton In ancient times, patch cords connected synthesis modules together to create sounds. But many people didn’t like patch cord clutter, nor various reliability issues (oxidized jacks, loose solder connections, etc.). To eliminate these problems, the EMS VCS3 synthesizer (made in the late ’60s) employed a mechanical patch bay consisting of a small, square matrix of tiny pin jacks. Each audio or modulation source (output) fed a row of these jacks, while each column fed an audio or modulation destination (input). Inserting pins at the junction of rows and columns physically connected ins and outs (Fig. 1). ARP synthesizers used a similar concept, but with a slider instead of pins. Software-based matrix modulation typically includes a list of modulation sources, a list of modulation destinations, and a certain number of “slots” (i.e., a software patch cord). Each slot specifies a modulation source and destination (Fig. 2). The more available destination parameters, the better; if you want to be able to modulate a parameter, hopefully it will be available. In any event, modulation is the key to expressiveness, and matrix modulation is your key chain. When you need to spice up a patch, jack into the matrix. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
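To make the “slot” idea concrete, here is a small illustrative sketch (my own, not any particular synth’s implementation) where each slot is simply a source, a destination, and a depth—the software equivalent of one pin in the VCS3’s matrix.

```python
import math

# modulation sources: functions of time (seconds) returning roughly -1..+1
sources = {
    "lfo1": lambda t: math.sin(2 * math.pi * 5.0 * t),   # 5 Hz sine LFO
    "env1": lambda t: math.exp(-3.0 * t),                 # simple decay envelope
}

# base values of the destination parameters
params = {"filter_cutoff": 1000.0, "osc2_pitch": 0.0}

# the "slots": (source, destination, depth) -- like software patch cords
slots = [
    ("lfo1", "filter_cutoff", 400.0),   # +/-400 Hz cutoff wobble
    ("env1", "osc2_pitch", 12.0),       # one-octave pitch sweep that decays
]

def modulated(t):
    out = dict(params)
    for src, dest, depth in slots:
        out[dest] += depth * sources[src](t)
    return out

print(modulated(0.0))
print(modulated(0.1))
```

Adding another routing is just appending another slot, which is exactly why the matrix approach scales so gracefully compared to hard-wired modulation paths.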
  8. It’s easier to carry a laptop than an arsenal of keyboards—but you’ll need to optimize your computer for the task By Craig Anderton There are two important truths when using virtual instruments live: Your entire system could die at any moment, but your system will probably give you years of reliable operation. So feel free to go ahead and file this under “hope for the best, but plan for the worst”—but in this article, we'll plan for the worst. MAC VS. PC For desktop computing, I use both; with laptops, for almost a decade I used only Macs, but now I use only Windows. Computers aren't a religion to me—and for live performance, they're simply appliances. I'd switch back to Mac tomorrow if I thought it would serve my needs better, but here's why I use Windows live. Less expensive. If the laptop dies, I'll cope better. Often easier to fix. With my current Windows laptop, replacing the system drive takes about 90 seconds. Laptop drives are smaller and more fragile, so this matters. Easier to replace. Although it's getting much easier to find Macs, if it's two hours before the gig in East Blurfle and an errant bear ate your laptop, you'll have an easier time finding a Windows machine. Optimization options. This is a double-edged sword, because if you buy a laptop from your local big box office supply store, it will likely be anti-optimized for live performance with virtual instruments. We'll cover tweaks that address this, but you’ll have to enter geek mode. If you just want a Windows machine that works . . . There are companies that integrate laptops for music. I've used laptops from PC Audio Labs and ADK, and they've all outperformed stock laptops. I’ve even used a PC Audio Labs x64 machine that was optimized for video editing, but the same qualities that make it rock for video make it rock and roll for music. Of course, if you're into using a Mac laptop (e.g., MainStage is your act's centerpiece, or you use Logic to host virtual instruments), be my guest—I have a Mac laptop that runs Mavericks as well as a Windows machine that’s currently on Windows 7, and they’re both excellent machines. Apple makes great computers, and even a MacBook Air has enough power to do the job. But if you're starting with a blank slate, or want to dedicate a computer to live performance, Windows is currently a pretty compelling choice. PREPARING FOR DISASTER There are two main ways disaster can strike. The computer can fail entirely. One solution—although pricey—is a redundant, duplicate system. Consider this an insurance policy, because it will seem inexpensive if your main machine dies an hour before the gig. Another solution is to use a master keyboard controller with internal sounds. If your computer blows up, at least you'll have enough sounds to limp through the gig. If you must use a controller-only keyboard, then carry an external tone module you can use in emergencies. If you have enough warning, you can buy a new computer before the gig. In that case, though, you'll need to carry everything needed to re-install the software you use. One reason I use Ableton Live for live performance and hosting virtual instruments is that the demo version is fully functional except for the ability to save—it won't time out in the middle of a set, or emit white noise periodically. I carry a DVD-ROM and USB memory stick (redundancy!) with everything needed to load into Live to do my performance; if all else fails I can buy a new computer, install Live, and be ready to go after making the tweaks we'll cover shortly. 
Software can become corrupted. If you use a Mac, bring along a Time Machine hard drive. With Windows, enable system restore—the performance hit is very minor. Returning to a previous configuration that’s known to be good may be all you need to fix a system problem. For extra security, carry a portable hard drive with a disk image of your system drive. Macs make it easy to boot from an external drive, as do Windows machines if you're not afraid to go into the BIOS and change the boot order. WINDOWS 7 TWEAKS Neither Windows nor the Mac OS are real-time operating systems. Music is a real-time activity. Do you sense trouble ahead? A computer juggles multiple tasks simultaneously, so it gets around to musical tasks when it can. Although computers are pretty good at juggling, occasional heavy CPU loading (“spikes”) can cause audio dropouts. Although one option is increasing latency, this produces a much less satisfying feel. A better option is to seek out and destroy the source of the spikes. Your ally in this quest is DPC Latency Checker, a free program available at www.thesycon.de/eng/latency_check.shtml. LatencyMon (www.resplendence.com/latencymon) is another useful program, but a little more advanced. DPC Latency Checker monitors your system and shows when spikes occur (Fig. 1); you can then turn various processes on and off to see what's causing the problems. Fig. 1: The left screen shows a Windows laptop with its wireless card enabled, and system power plan set to balanced. The one on the right shows what happens when you disable wireless and change the system power plan to high performance. From the Start menu, choose Control Panel then open Device Manager. Disable (don't uninstall) any hardware devices you're not using, starting with any internal wireless card—it’s a major spike culprit. Even if your laptop has a physical switch to turn this on and off, that's not the same as actually disabling it (Fig. 2). Also disable any other hardware you're not using: internal USB camera, ethernet port, internal audio (which you should do anyway), fingerprint sensor, and the like. Fig. 2: In Device Manager, disable any hardware you’re not using. Onboard wireless is particularly problematic. By now you should see a lot less spiking. Next, right-click on the Taskbar, and open Task Manager. You'll see a variety of running tasks, many of which may be unnecessary. Click on a process, then click on End Process to see if it makes a difference. If you stop something that interferes with the computer's operation, no worries—you can always restart, and the service will restart as well. Finally, click on Start. Type msconfig into the Search box, then click on the Startup tab. Uncheck any unneeded programs that load automatically on startup. If all of this seems too daunting, don't worry; simply disabling the onboard wireless in Device Manager will often solve most spiking issues. BUT WAIT—THERE'S MORE! Laptops try hard to maximize battery life. For example if you're just composing an email, the CPU can loaf along at a reduced speed, thus saving power. But for real-time performance situations, you want as much CPU power as possible. Always use an AC adapter, as relying on the battery alone will almost invariably shift into a lower-power mode. With Windows machines, the most important adjustment is to create a power plan with maximum CPU power. With Windows 7, choose Control Panel > Power Options and create a new power plan. Choose the highest performance power plan as a starting point. 
After creating the plan, click on Change Plan Settings, then click on Change Advanced Power Settings. Open up Processor Power Management, and set the Maximum and Minimum processor states to 100% (Fig. 3). If there's a system cooling policy, set it to Active to discourage overheating. Fig. 3: Create a power plan that runs the processor at 100% for both minimum and maximum power states. Laptops will have an option to specify different CPU power states for battery operation; set those to 100% as well. If overheating becomes an issue (it shouldn't), you can probably throttle back a bit on the CPU power, like to 80%. Just make sure the minimum and maximum states are the same; I've experienced audio clicks when the CPU switched states. (And in the immortal words of Herman Cain, “I don't have the facts to back me up” but it seems this is more problematic with FireWire interfaces than USB.) A HAPPIER LAPTOP A laptop's connectors are not built to rock and roll specs. If damaged, the result may be an expensive motherboard replacement. Ideally, every computer connection should be a break-away connection; Macs with MagSafe power connectors are outstanding in this respect. With standard power connectors, use an extension cable that plugs between the power supply plug and your computer's jack. Secure this extension cable (duct tape, tie it around a stand leg, or whatever) so that if there's a tug on the power supply, it will pull the power supply plug out of the extension cable jack—not the extension cable plug out of the computer. Similarly, with USB memory sticks or dongles, use a USB extender (Fig. 4) between the USB port and external device. Fig. 4: A USB extension cable can help keep a USB stick from breaking off at its base (and possibly damaging your motherboard) if pressure is applied to it. It’s also important to invest in a serious laptop travel bag. I prefer hardshell cases, which usually means getting one from a photo store and customizing it for a computer instead of cameras. Finally, remember when going through airport scanners to put your laptop last on the conveyer belt, after other personal effects. People on the incoming side of security can’t run off with your laptop, but those who’ve gone through the scanner can if they get to your laptop before you do. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  9. There’s more to filter life than peaks and dips By Craig Anderton Just about everyone knows what a parametric equalizer does: Boost or cut a specific range of frequencies. But today’s EQs are usually multimode, and have other responses as well. Are you taking full advantage of what your EQ can offer? If not, you will after reading this. SHELVING FILTERS The low shelf response is one of the preferred methods for adding “bottom” when boosted, and removing “mud” when cut. Similarly, the high shelf can add “sparkle” when boosting, and reduce “shrillness” when cut. The shelving filter’s distinguishing characteristic is that after reaching the maximum amount of boost or cut, that amount remains constant. For example, if you use a high shelf to boost a signal by +3dB starting at 1kHz, and it reaches the full boost of +3dB around 2kHz, the boost will remain at +3dB up to the filter’s high frequency response limit. Using a combination of shelving and parametric can solve numerous problems. Suppose there’s a drum loop with too much kick drum, but not enough low end in general. Use a low shelf to bring up the low end, then use a parametric to “punch” a dip just in the vicinity of the kick (Fig. 1). Fig. 1: Combining a low-frequency shelf with a parametric stage. Shelving filters work with individual tracks, but because of their gentle and relatively benign effect, can be applied to program material as well. HIGHPASS AND LOWPASS FILTERS The difference between a highpass filter (which as the name suggests, passes high frequencies) and a shelf set to cut low frequencies is that the response continues to attenuate with frequency, at a rate specified in dB/octave. For example, with a 12dB/octave response, for each octave you go below the cutoff frequency, the response will be down roughly another 12dB compared to the previous octave. The highpass filter is a fine way to get rid of subsonics, low frequency mud, room rumble, and excessive plosive sounds from vocals. One pole of rolloff (6dB per octave) isn’t really enough; if possible, dial in a sharper cutoff to solve these types of problems (Fig. 2). Fig. 2: A highpass filter to the left rolls off lows at 48dB/octave; a lowpass filter (right) rolls off at a similarly steep rate for high frequencies. Also note that with multiband EQs, you may be able to set each band to responses other than the traditional parametric. If you have to deal with subsonics, you can “gang” multiple highpass filters in series for a sharper overall cutoff. High pass filters rarely control a signal’s tone; they exist mostly to solve problems. If you want to control the low end in a more general way, shelving is usually the better option. As to lowpass filters, the traditional use is to attenuate hiss. Few signals have significant amounts of energy above 10kHz, so if necessary you can usually remove the very highest frequencies without degrading a signal’s integrity. But there are other uses. Removing high frequencies can help put a signal further back in the mix without resorting to a change in volume, and a lowpass filter can also take away some of the “brightness” of digital signals (like synthesizers) if they clash with primarily analog tracks. Lowpass filters are seldom, if ever, used with program material because even a slight amount of high frequency reduction can create a “muffled” sound. This isn’t as noticeable on individual tracks. THE COMB FILTER Some sounds demand more attention than they should. 
Conventional EQ may fix the problem, but an easy way to “dilute” a sound without altering its fundamental character is to apply comb filtering. The comb filter gets its name because its frequency response curve looks like a comb — instead of being a straight line, it has a huge number of dips and peaks. Just as you can thin out MIDI controller streams by removing pieces of data, you can thin out sound by inserting lots of narrow notches in portions of the frequency spectrum. This doesn’t work with everything, though. It’s best for overbearing pads, non-pitched sound sources, and instruments designed to sit in the back, like rhythm guitar. Otherwise, if the notches fall at, say, the resonant frequency of a tom, the sound may get too thin. Also be aware that comb filtering adds a subtle sense of pitch to the sound, although you can adjust this to some degree. It’s unlikely that your host’s EQ offers a comb response, so you’ll probably have to construct one yourself (Fig. 3); it’s not difficult. Fig. 3: Sony’s Simple Delay has been pressed into service as a comb filter. Insert a simple delay plug-in — the simpler the better. Make sure that it can do short delays (1–20ms). A flanger or chorus may work if it’s not a multi-voice type and you can turn off the modulation. Set the dry out and delay out to the same level (if there’s a blend or mix control, set it to 50%). Adjust the delay time in the 100μs to 20ms range until you hear the desired amount of “thinness.” Times under 10ms have a major effect on the sound; longer delays get into the echo range. If the sound is too pitched or “phasey,” reduce the delay level slightly. This reduces the notch depths. DON’T FORGET ABOUT AUTOMATION Most EQ plug-ins, as well as EQ integrated into a host, are automatable. This greatly extends the usefulness of “alternative” EQ responses; for example, you can cause the low pass to sweep down for just a fraction of a second to eliminate an annoying high frequency transient or reduce an overly friendly high hat, then return immediately to full frequency response. Or, if there’s a signal with some hiss, bring the lowpass frequency down in quiet sections, then sneak it back up again when no one will notice. In the example of setting up a comb filter effect, you could change the ratio of straight to delayed sound to adjust the sound’s “thinness.” As long as you’re adjusting filter boost/cut, you probably won’t hear any “stairstepping” due to quantization of the parameter into multiple steps. You may or may not hear any when changing the frequency. All of this comes down to one thing: Be creative with EQ. In the days of physical consoles, EQ tended to be set-and-forget devices because there just weren’t enough arms to run them and also ride the faders. But today, we have no such limitations . . . which means more options if you’re into creative recording. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
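If you want to prototype the comb response outside your host, a short sketch like the following (assuming numpy; the delay time and delay level are the same two controls described above) mixes a signal with a delayed copy of itself.

```python
import numpy as np

def comb_filter(x, sr, delay_ms=5.0, delay_level=1.0):
    # mix the signal with a delayed copy of itself; equal levels give the
    # deepest notches, spaced 1/delay-time apart (100 Hz apart for 10 ms)
    d = int(sr * delay_ms / 1000)
    delayed = np.concatenate([np.zeros(d), x[:-d]]) if d else x.copy()
    y = x + delay_level * delayed
    return y / np.max(np.abs(y))

sr = 44100
x = np.random.randn(sr)          # one second of white noise as a stand-in
y = comb_filter(x, sr, delay_ms=10.0, delay_level=0.8)
```

Lowering delay_level makes the notches shallower, exactly like pulling back the delay output level on the plug-in.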
  10. Let there be light—if you have a USB port by Craig Anderton When I saw my first light powered by a USB port, I was smitten. Whether trying to get work done on a plane without disturbing the grumpy person sitting next to me or running a live laptop set in a dark club, I had found the answer. Or more realistically, almost the answer...it used an incandescent bulb, drew a lot of current, weighed a lot, and burned out at an early age. I guess it was sort of the Elvis Presley of laptop accessories. Mighty Bright introduced a single-LED USB light a few years ago that fulfilled the same functions, but much more elegantly. And now, unlike Scotty in Star Trek, they actually can give you more power—with their new 2-LED USB Light. What You Need to Know The two white LEDs are controlled by a push switch so you can light one or both LEDs. Compared to the single LED version, having the extra LED available makes a big difference in terms of throwing more light on a subject. The gooseneck is very flexible but holds its position, and the weight is reasonable. The size is about the same as the single-LED version, and it fits in the average laptop bag without problems. Limitations My only concern is the weight—not because it weighs a lot, but because USB ports aren’t exactly industrial-strength. However, if you plug into a laptop’s side USB port and bend the light in a U so the top is over where the USB connector plugs into the port (Fig. 1), then it becomes balanced and places little weight on the port itself. Fig. 1: Optimum laptop positioning for the 2-LED USB Light. Conclusions Once you have one of these things sitting around, you’ll find other uses. Given how many computers have USB ports on the back, plug this in and you’ll be able to see where all your mystery wires are routed. I take the 2-LED USB Light when I’m on the road, and combined with a general-purpose charger for USB devices, the combo makes a dandy night light—helpful in strange hotel rooms when the fire alarm goes off in the middle of the night, and you don’t want to trip on your way out the door. Also, lots of keyboards have USB ports, and assuming it’s not occupied with a thumb drive or similar, the 2-LED USB Light can help illuminate the keyboard’s top panel. Considering the low cost and long LED life (100,000 hours, which equals three hours a day for 90 years), I’d definitely recommend having one of these babies around. You never know when you’re going to need a quick light source, and these days, it’s not too hard to find a suitable USB connector to provide the power. Resources Musician’s Friend Mighty Bright 2-LED USB Light online catalog page ($14.00 MSRP, $11.99 “street”) Mighty Bright’s 2-LED USB Light product web page Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  11. The ideal—and easiest—answer might be a “MIDI LFO” By Craig Anderton Lots of effects parameters have tempo sync—like delay time, tremolo LFO rate, envelope times, etc. But what if you want to sync, say, filter cutoff or resonance variations to tempo? Thanks to MIDI, it’s easy. The key is to use a MIDI controller to control the parameter you want to sync, and fit the controller data to tempo. For example, both Cakewalk Sonar and Steinberg Cubase let you “draw” periodic controller data whose period is quantized to tempo. If your DAW of choice doesn’t offer this option, then create a library of MIDI sequences with rhythmic controller values that you can paste into MIDI tracks. Remember—because it’s MIDI, any controller shapes you create will work at any tempo. Here’s how to create tempo-synched modulation in Cubase and Sonar. Cubase: With the Key Editor open, in a controller lane choose your controller (this example shows controller #7—main volume) and click on the Line tool’s drop-down menu. You’ll see options for Line, Parabola, Sine, Triangle, Square, and Paint. Suppose you want the volume controller to sync to tempo with a triangular wave, with one period of the waveform equal to a sixteenth note (Fig. 1). Fig. 1: Steinberg Cubase allows drawing LFO shapes with MIDI controllers. Select Triangle from the Line menu, then choose the period with the Quantize drop-down menu—in this example, 1/16 is selected (make sure Snap is selected as well). The Length parameter sets the amount of space between controllers, with 1/16th or 1/32nd note being a good compromise between resolution and data density. However, choosing coarser values can give cool “step-sequenced” effects, so don’t ignore that possibility. If you choose Quantize Link, then the control signal's resolution depends on the Zoom resolution. Once everything is set up, draw as if you were drawing a line; drag the “line” up and down to set the waveform amplitude. The selected waveform will appear as the “line.” To change the duty cycle with triangle and square waves, hold down Shift-Ctrl (or with the Mac, Shift-Command) as you drag. While still holding down the mouse button and Shift-Ctrl/Cmd, after defining the waveform’s length drag right or left to change the duty cycle. There are other keyboard shortcut options; refer to the help for details. Sonar: Sonar allows for automated envelope drawing, including MIDI controller shapes. In Track View, open up an automation lane, and choose the envelope you want to create (Fig. 2). Fig. 2: A triangle wave is being selected as a drawing tool in Sonar. Right-click on the toolbar's Draw tool, and from the context menu, choose the desired waveform (your choices are Freehand, Line, Sine, Triangle, Square, Saw, and Random—my personal favorite). As with Cubase, the quantization value sets the waveform period. Click in the automation lane where you want the envelope to start, and also, to set its midpoint value. Drag up or down from this point to set the amplitude, then drag left or right to set the waveform’s length. You don’t have to drag on a straight line; if you vary the height, you’ll vary the waveform amplitude, as shown in the screen shot. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). 
He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
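If your DAW lacks these drawing tools, you can generate the same kind of clip to paste in from the library approach mentioned above. This sketch assumes the mido Python library (any MIDI-file library would do) and writes one bar of a triangle-wave CC#7 LFO with one cycle per sixteenth note at 120 BPM; the controller number, resolution, and file name are all placeholders.

```python
import mido

PPQ = 480                                   # ticks per quarter note
mid = mido.MidiFile(ticks_per_beat=PPQ)
track = mido.MidiTrack()
mid.tracks.append(track)
track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(120)))

period = PPQ // 4                           # one LFO cycle per sixteenth note
step = PPQ // 32                            # one CC event per 32nd note

for tick in range(0, 4 * PPQ, step):        # one bar of 4/4
    phase = (tick % period) / period        # 0..1 within the LFO cycle
    tri = 1 - abs(2 * phase - 1)            # triangle wave, 0..1
    track.append(mido.Message('control_change', control=7,
                              value=int(round(tri * 127)),
                              time=step if tick else 0))

mid.save('cc7_triangle_16th.mid')
```

Because the shape is expressed in ticks rather than seconds, the clip stays locked to the grid at any tempo, just as the article notes.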
  12. Fix those little “gotchas” before they make it into the final mix by Craig Anderton MIDI sequencing is wonderful, but it’s not perfect—and sometimes, you’ll be sandbagged by problems like false triggers (e.g., what happens when you brush against a key accidentally), having two different notes land on the same beat when quantized, voice-stealing that cuts off notes abruptly, and the like. These glitches may not be obvious when other instruments are playing, but they nonetheless can muddy up a piece or even mess up the rhythm. Just as you’d “proof” your writing, it’s a good idea to “proof” sequenced tracks. Begin by listening to each track in isolation; this reveals flaws more readily than listening to several tracks simultaneously. Headphones can also help, as they may reveal details you’d miss over speakers. As you listen, also check for voice-stealing problems caused by multi-timbral soft synths running out of voices. Sometimes if notes are cut off, merely changing note durations to prevent overlap—or deleting one note from a chord—will solve the problem. But you may also need to dig deeper into some other issues, such as . . . NOTES WITH ABNORMALLY LOW VELOCITIES OR DURATIONS Even if you can’t hear these notes, they still use up voices. They’re easy to find in an event list editor, but if you’re in a hurry, do a global “remove every note with a velocity of less than X” (or for duration, “with a note length less than X ticks”) using a function like Cakewalk Sonar’s DeGlitch option (Fig. 1). Fig. 1: Sonar’s DeGlitch function is deleting all notes with velocities under 10 and durations under 10 milliseconds. Note that most MIDI guitar parts benefit greatly from a quick cleanup of notes with low velocities or durations. UNWANTED AFTERTOUCH (CHANNEL PRESSURE) DATA If your master controller generates aftertouch (pressure) but a patch isn’t programmed to use it, you’ll be recording lots of data that serves no useful purpose. When driving hardware synths, this can create timing issues and there may even be negative effects with soft synths if you switch from a sound that doesn’t recognize aftertouch to one that does. Note that there are two types of aftertouch—channel aftertouch, which generates one message that correlates to all notes being pressed, and polyphonic aftertouch, which generates individual messages for each note being pressed. The latter sends a lot of data down the MIDI stream, but as there are few keyboard controllers with polyphonic aftertouch, it’s unlikely you’ll encounter this problem. Steinberg Cubase’s Logical Editor (Fig. 2) is designed for removing specific types of data, and one useful application is removing unneeded aftertouch data. Fig. 2: In this basic application of Cubase's Logical Editor, all aftertouch data is being removed. Note that many recording programs disable aftertouch recording as the default, but if you enable it at some point, it may stay enabled until you disable it again. OVERLY WIDE DYNAMIC VARIATIONS This can be a particular problem with drum parts played from a keyboard—for example, some all-important kick drum hits may be much lower than others. There are two fixes: Edit individual notes (accurate, but time-consuming), or use a MIDI edit command that sets a minimum or maximum velocity level, like the one from Sony Acid Pro (Fig. 3). With pop music drum parts, I often limit the minimum velocity to around 60 or 70. Fig. 3: Sony's Acid Pro makes it easy to restrict MIDI dynamics to a particular range of velocity values. 
DOUBLED NOTES

If you “bounce” a key (or drum pad, for that matter) when playing a note, two triggers for the same note can end up close to each other. This is also very common with MIDI guitar. Quantization forces these notes to hit on the same beat, using up an extra voice and producing a flanged/delayed sound. Listening to a track in isolation usually reveals these flanged notes; erase one (if two notes hit on the same beat, I generally erase the one with the lower velocity value). Some programs offer an edit function that deletes duplicates automatically, such as Avid Pro Tools’ Delete Duplicate Notes function (Fig. 4).

Fig. 4: Pro Tools has a menu item dedicated specifically to eliminating duplicate MIDI notes.

NOTES OVERLAP WITH SINGLE-NOTE LINES

This applies mostly to bass and wind instruments. In theory, with single-note lines you want one note to end before another begins. Even slight overlaps make the part sound mushier (bass in particular loses “crispness”), but what’s worse, two voices will briefly play where only one is needed, causing voice-stealing problems. Some programs let you fix overlaps as a Note Duration editing option.

However, note that with legato mode, you do want notes to overlap. With this mode, a note transitions smoothly into the next note, without re-triggering an envelope when the next note occurs. Thus, in a series of legato notes, the envelope attack occurs only for the first note of the series. If the notes overlap without legato mode selected, then you’ll hear separate articulations for each note. With an instrument like bass, legato mode can simulate sliding from one fret to another to change pitch without re-picking the note.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. The more interesting the synth sound, the better By Craig Anderton

One of the great advantages of virtual instruments is that many parameters are automatable, so you can introduce dynamic changes as you do with other elements of a mix. But which parameters are worth automating? Just about everyone appreciates twisting filter cutoff frequency and envelope attack or decay, but today’s virtual synths have seemingly progressed faster than the ability of musicians to assimilate all the cool new tricks they can do. Case in point: oscillators. If you think that oscillators are simply static tone generators that are good only for providing a signal so that filters and envelopes can do something interesting to them, think again—thanks to capabilities like hard sync, frequency modulation, and ring modulation. These are powerful options that can make a synth leap out of a track by adding a more dynamic, edgy timbre. Let’s take a look at hard sync, the most popular of these effects, and a trademark sound of old-school analog synths.

HARD TIMES IN SYNC-LAND

One of the best-known examples of the hard sync sound appears in the synth figure on that old 80s hit by the Cars, “Let’s Go.” Hard sync changes a tone’s harmonic structure, so it’s kind of like filtering; but the sound is more pronounced, with an almost vocal-type resonance. Hard sync requires two oscillators. One oscillator tracks the keyboard and sets the pitch. However, we never hear this oscillator’s audio. Instead, it provides a sync reference for a slave oscillator, whose period (the length of one cycle) is always forced to be the same as the pitch oscillator’s. This is why it’s considered “synched” to the pitch oscillator.

Confused? Here’s a more obvious example of “hard sync,” but at a much lower frequency. Suppose you have a slow LFO hooked up to control filter cutoff. There will probably be a mode that re-triggers the LFO when you hit a key. This “hard syncs” the LFO to your playing: no matter where the LFO is in its cycle, when you hit a key, it re-starts. Similarly, regardless of what the slave waveform is doing, when the pitch oscillator starts a new cycle, so does the slave. If you change the slave’s pitch, the waveform will still have the same period—thus the same perceived pitch—because it’s slaved to the pitch oscillator (Fig. 1). However, the harmonic structure will change radically as the slave’s frequency changes.

Fig. 1: These three waveforms represent three different slave oscillator frequencies, but the pitch oscillator frequency is the same in each case. Note that the period (the length of each waveform’s cycle) is the same, but the waveform’s shape—and therefore, harmonic structure—differs.

There are a few “rules” about hard sync settings: You generally don’t want any audio from the pitch oscillator. Listen to the output from the slave oscillator. If the slave oscillator pitch goes lower than the pitch oscillator, then the hard sync effect disappears. The slave oscillator should always be higher-pitched than the pitch oscillator. The pitch oscillator waveform isn’t really significant, as it’s used only as a timing reference. The slave oscillator waveform is less important than usual because the hard synching action has such a strong effect on the sound. However, a sawtooth or square wave will give more “bite” than a sine or triangle wave.
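If you want to see the period-forcing idea in action, here is a minimal Python/NumPy sketch of a hard-synced sawtooth, assuming a naive, non-band-limited oscillator; it is purely illustrative, not how any particular synth implements sync.

```python
import numpy as np

def hard_sync_saw(master_hz, slave_hz, sr=44100, seconds=0.02):
    """Naive hard-synced sawtooth: the slave's phase resets whenever the
    master (pitch) oscillator starts a new cycle."""
    n = int(sr * seconds)
    t = np.arange(n) / sr
    master_phase = (t * master_hz) % 1.0            # sets the perceived pitch
    out = np.zeros(n)
    phase = 0.0
    for i in range(n):
        if i > 0 and master_phase[i] < master_phase[i - 1]:
            phase = 0.0                              # master wrapped: reset the slave
        out[i] = 2.0 * phase - 1.0                   # sawtooth in the range -1..+1
        phase = (phase + slave_hz / sr) % 1.0
    return out

# Same master pitch, three different slave frequencies
waves = [hard_sync_saw(110, f) for f in (220, 330, 485)]
```

All three waveforms share the 110 Hz master period, so they play the same pitch; only the harmonic content changes, which is the effect described above.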
ANATOMY OF A PATCH

In addition to setting up the oscillators, you also need a modulation source to vary the slave oscillator frequency. Common choices are an envelope generator, mod wheel, or LFO (particularly an LFO that’s reset when you play new notes). However, you need a really wide-range pitch sweep—like four or five octaves—to get dramatic hard sync effects, and many modulation sources aren’t really designed to deliver that kind of output. The simplest solution is to “double up” the number of envelope destinations. For example, if there are two possible output destinations, send both to the slave oscillator’s pitch, and turn up both output levels as high as possible. I’ve even set two envelope generators with the same settings, each with two outputs, to the slave pitch, thus quadrupling the range. Also, if the oscillator has a control input, make sure that it too is set for as wide a range as possible.

Fig. 2 shows the setup for a typical hard sync patch using Cakewalk’s PSYN II synthesizer. Consult your particular soft synth’s documentation, because you want to make sure you know which oscillator is modulating which so you can listen to the slave oscillator.

Fig. 2: In Cakewalk’s PSYN II synth, Osc I sets pitch; Osc II is the slave. The section outlined in red sets the two oscillators to sync mode. The yellow line outlines the slave’s pitch parameters, whose initial setting is higher than Osc I. The modulation source is an envelope generator decay (outlined in orange); it has two modulation outputs (blue outline), both set to control Pitch, and both set to maximum to sweep over the widest possible range. The slave’s envelope generator modulation sensitivity control (green outline) is also set to maximum.

There are oscillator cross-modulation options other than hard sync; Frequency Modulation (FM) is also popular. In a nutshell, this causes one oscillator to modulate the other (without re-synching) so that the resulting signal has more complex sidebands. Increasing the level of the modulating signal increases the complexity of the signal.

In any event, next time you need to do a cutting synth solo, don’t reach for the EQ or exciter processor: set up hard sync, link it to your mod wheel, play expressively, and record the mod wheel movements as automation. You’ll be treated to a far more animated, biting sound that, if you let it, can take over a track.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  14. It's time to play "stompbox reloaded" by Craig Anderton

The studio world is not experiencing a compressor shortage. Between hardware compressors, software compressors, rack compressors, and whatever other compressors I’ve forgotten, you’re pretty much covered. But there may be a useful compressor that you haven’t used recently: one of the stompbox persuasion. With most DAWs including inserts so you can integrate external effects easily, interfacing stompboxes isn’t very difficult. Yes, you’ll need to match levels (likely attenuating on the way into the compressor and amplifying on the way out), but that’s not really a big deal.

But why bother? Unlike studio compressors, which are a variation on limiters and whose main purpose is to control peaks, guitar compressors were generally designed to increase sustain by raising the level as a string decayed (Fig. 1).

Fig. 1: The upper waveform is an uncompressed guitar signal, while the lower one adds compression to increase the sustain. Both waveforms have the same peak level, but the compressed guitar’s decay has a much higher level.

In fact, some compressors were called “sustainers,” and used designs based on the Automatic Level Control (ALC) circuitry used to keep mic signals at a constant level for CB and ham radio. The gain control elements were typically field-effect transistors (FET) or photoresistors, and had minimal controls—usually sustain, which was either a threshold control or input level that “slammed” the compressor input harder—and output level. Some guitar players felt that compressors made the sound “duller,” so a few designs tuned the compressor feedback to compress lower-frequency signals more than higher-frequency signals—the opposite of a de-esser. Many guitarists patched a preamp between the guitar and compressor to give even more sustain because higher input levels increased the amount of compression. Putting compressors before octave dividers often caused them to work more reliably, and adding a little compression before an envelope-controlled filter (like the Mutron III) gave less variation between the low and high filter frequencies. Some legendary compressors include the Dan Armstrong Orange Squeezer (Fig. 2), MXR Dyna-Comp, and BOSS CS-1. But many companies produced compressors, and continue to do so.

Fig. 2: Several years ago the classic Dan Armstrong Orange Squeezer was re-issued. Although it has since been discontinued, schematics for Dan’s original design exist on the web.

APPLICATIONS RE-LOADED

Bass. Not all compressors designed for guitar could handle bass frequencies, especially not a synthesizer set for sub-bass. So, it’s usually best to patch the compressor in parallel with your bass signal. With a hardware synthesizer or bass, split the output and feed two interface (or amp) inputs, one with the compressor inserted. With a virtual synthesizer or recorded track, send a bus output to a spare audio interface output, patch that to the compressor input, then patch the compressor output to a spare audio interface input. Use the bass channel’s send control to send signal into the bus that feeds the compressor. Synthesizers are particularly good with vintage compressors because you can edit the amplitude envelope for a fast attack and quick decay before the sustain. Turn the bass output way up to hit the compressor hard, and you’ll get the aggressive kind of attack you hear with guitar.

Drums. Guitar compressors can give a punchy, “trashy” sound that’s good for punk and some metal.
As with synth bass, parallel compression is usually best to keep the kick drum sound intact (Fig. 3). Adding midrange filtering before or after the compression can give an even funkier sound.

Fig. 3: This setup provides parallel compression. The channel on the left is the drum track; the one on the right is a bus with an “external insert” plug-in. This plug-in routes the insert effect to your audio interface, which allows patching in a hardware compressor as if it were a plug-in. The drum channel has a send control to feed some drum signal to the compressor bus, whose output goes to the master bus.

Bus compression. You wouldn’t want to compress a master bus with a stompbox compressor (well, maybe you would!), but try sending bass and drums to an additional bus, then compressing that bus and patching it in parallel with the unprocessed bass and drums sound. This makes for a fatter sound, and “glues” the two instruments together. What’s more, many older compressors had some degree of distortion, which adds even more character to any processing. Vintage compressors with relatively short decay times (most stompbox compressors had fixed attack or decay times) give a “pumping” sound to rhythm sections.

EMULATING STOMPBOX COMPRESSION WITH MODERN GEAR

Don’t have an old compressor around? There are ways to come close with modern gear. If your compressor has a lookahead option, turn it off. Set the attack to the absolute minimum time possible. Decay time varied depending on the designer; a shorter release (around 100 ms) gives a “rougher” sound with chords, but some compressors had quite long release times—over 250 ms—to smooth out the decaying string sound. Set a high compression ratio, like 20:1, and a low threshold, as older compressors had low thresholds to pick up weak string vibrations. Finally, try overloading the compressor input to create distortion, which also gives a harder attack.
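Here is a rough Python/NumPy sketch of those settings in action, assuming a simple peak-follower design with instant attack, a single release time, a 20:1 ratio, a low threshold, and some input overdrive. It is not a model of any particular pedal, just the general behavior described above.

```python
import numpy as np

def stomp_style_compress(x, sr=44100, threshold_db=-40.0, ratio=20.0,
                         release_ms=100.0, drive=4.0):
    """Very rough stompbox-style compressor: instant attack, one fixed release,
    high ratio, low threshold, plus input overdrive for a harder attack."""
    x = np.tanh(drive * x)                          # overload the "input stage" a little
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    e = 0.0
    for i, s in enumerate(np.abs(x)):
        e = s if s > e else e * release             # instant attack, exponential release
        env[i] = e
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)        # 20:1 downward compression
    return x * (10.0 ** (gain_db / 20.0))
```

Feeding it a decaying pluck should show the tail being pulled up relative to the attack, which is the "sustainer" behavior described earlier.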
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

  15. Whether you're quantizing sequences, programming drum machines, creating beats, or synching to tempo, it helps to know rhythmic notation by Craig Anderton

As we all know, lots of great musicians have been able to create an impressive body of work without knowing how to read music. But regardless of whether you expect to be able to read lead sheets on the fly—or will even need to do so—there are some real advantages to “knowing the language.” In particular, it’s hard not to run into references to rhythmic notation. Today’s DAWs quantize to particular rhythmic values, and effects often sync to particular rhythms as well. And if you want to program your own beats, it also helps to know how rhythm works. So let’s forget the tough stuff and take some baby steps into the world of rhythmic notation. This brief overview of rhythmic notation provides the basics; but if you’re new to all this, you’ll probably need to read this section over several times and fool around a bit with something like a drum machine before it all falls into place.

Measures. A piece of music is divided into smaller units called measures (also called bars), and each measure is divided into beats. The number of beats per measure, and the rhythmic value of the beats, depends on both the composition and the time signature.

Time Signatures. A time signature (also called metric signature) defines a piece of music’s rhythmic nature by describing a measure’s rhythmic framework. The time signature is notated at the beginning of the music (and whenever there’s a change) with two numbers, one on top of the other. The top number indicates the number of beats in each measure, while the bottom number indicates the rhythmic value of the beat (e.g., 4 is a quarter note, 8 is an eighth note, etc.). If that doesn’t make sense yet, let’s move on to some examples.

Rhythmic Values for Notes. With a measure written in 4/4, there are four beats per measure, and each beat represents a quarter note. Thus, there are four quarter notes per measure of 4/4 music.

Quarter note symbol

With a 3/4 time signature, the numerator (upper number) indicates that there are three beats per measure, while the denominator indicates that each of these beats is a quarter note. There are two eighth notes per quarter note, so there are eight eighth notes per measure of 4/4 music.

Eighth note symbol

There are four 16th notes per quarter note, which means there are 16 16th notes per measure of 4/4 music.

16th note symbol

There are eight 32nd notes per quarter note. If you’ve been following along, you’ve probably already guessed there are 32 32nd notes per measure of 4/4 music.

32nd note symbol

There are also notes that span a greater number of beats than quarter notes. A half note equals two quarter notes. Therefore, there are two half notes per measure of 4/4 music.

Half note symbol

A whole note equals four quarter notes, so there is one whole note per measure of 4/4 music. (We keep relating these notes to 4/4 music because that’s the most commonly used time signature in contemporary western music.)

Whole note symbol

Triplets. The notes we’ve covered so far divide measures by factors of two. However, there are some cases where you want to divide a beat into thirds, giving three notes per beat. Dividing a quarter note by three results in eighth-note triplets. The reason we use the term “eighth-note triplets” is because the eighth note is closest to the actual rhythmic value. Dividing an eighth note by three results in 16th-note triplets. Dividing a 16th note by three results in 32nd-note triplets.

Eighth-note triplet symbol

Note the numeral 3 above the notes, which indicates triplets.

Rests. You can also specify where notes should not be played; this is indicated by a rest, which can be the same length as any of the rhythmic values used for notes.

Rest symbols (from left to right): whole note, half note, quarter note, eighth note, and 16th note

Dotted Notes and Rests. Adding a dot next to a note or rest means that it should play one and a half times as long as the indicated value. For example, a dotted eighth would last as long as three 16th notes (since an eighth note is the same length as two 16th notes).

A dotted eighth note lasts as long as three 16th notes
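If you like seeing the arithmetic spelled out, here is a small Python snippet that converts these rhythmic values into sequencer ticks and milliseconds. The 480 PPQ resolution and 120 BPM tempo are just example values, not tied to any particular DAW.

```python
PPQ = 480                          # example resolution: ticks per quarter note
BPM = 120.0                        # example tempo
ms_per_quarter = 60000.0 / BPM     # 500 ms per quarter note at 120 BPM

note_values = {                    # length expressed in quarter notes
    "whole": 4.0, "half": 2.0, "quarter": 1.0, "eighth": 0.5,
    "16th": 0.25, "32nd": 0.125,
    "eighth triplet": 1.0 / 3.0,   # three per quarter note
    "dotted eighth": 0.75,         # 1.5 x an eighth = three 16ths
}

for name, quarters in note_values.items():
    ticks = quarters * PPQ
    ms = quarters * ms_per_quarter
    print(f"{name:>15}: {ticks:6.1f} ticks, {ms:6.1f} ms")
```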
Uncommon Time Signatures. 4/4 (and to a lesser extent 3/4) are the most common time signatures in our culture, but they are by no means the only ones. In jazz, both 5/4 (where each measure consists of five quarter notes) and 7/4 (where each measure consists of seven quarter notes) are somewhat common. In practice, complex time signatures are often played like a combination of simpler time signatures; for example, some 7/4 compositions would have you count each measure not as 1, 2, 3, 4, 5, 6, 7 but as 1, 2, 3, 4, 1, 2, 3. It’s often easier to think of 7/4 as a bar of 4/4 followed by a bar of 3/4 (or a bar of 3/4 followed by a bar of 4/4, depending upon the phrasing), since as we mentioned, 4/4 and 3/4 are extremely common time signatures.

Other Symbols. There are many, many other symbols used in music notation. > indicates an accent; beams connect multiple consecutive notes to simplify sight reading; and so on. Any good book on music notation can fill you in on the details.

Two 16th notes beamed together

Drawing beams on notes makes them easier to sight-read compared to seeing each note drawn individually.

FOR MORE INFORMATION

These books can help acquaint you with the basics of music theory and music notation. Alfred’s Pocket Dictionary of Music is a concise but thorough explanation of music theory and terms for music students or teachers alike. Practical Theory Complete, by Sandy Feldstein, is a self-instruction music theory course that begins with the basics—explanations of the staff and musical notes—and ends with lesson 84: “Composing a Melody in Minor.”

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  16. MIDI can be your best friend if you want to have fun with techno leads by Craig Anderton Is it time for a rave revival already? Gee, it seemed like it was only...well, it was over two decades ago when L.A. Style released “James Brown is Dead,” so we’re due. Techno music doesn’t have “leads” in the traditional sense of screaming guitars and vocals; many times they’re sampled sound snippets whose source can be anything from political speeches to old movies. Trying to find appropriate samples is only one task—the other is laying them into the tune’s rhythmic bed. Although you can always use a digital audio editor to lay samples on a timeline and then cut and paste them to create cool effects, I much prefer putting these samples in a playable format and doing these tricks in real time—the process is more spontaneous, and more fun. Even if you’re not into samplers, many virtual drums let you load samples into the drum’s “pads” and play them from a keyboard. However, note that samples rarely have inflections that match the music’s rhythms perfectly, which can be distracting. Some musicians who don’t take a digital audio editing approach attack this problem at the sampler itself, by breaking phrases down into individual samples and triggering different words from different keys at the desired rhythms. However, there are a lot of sequencer tricks that can produce similar effects with less effort. FIRST, A SAMPLING SAFARI Finding appropriate samples is the first task (getting clearance for them is another one, but that’s a whole other issue). Ideally, the samples not only stand on their own, but can work with each other to create composite effects where the whole is greater than the sum of the parts. A simple example is finding phrases which can be combined in ways to make different sentences. A friend recently turned me on to a grade-Z sci-fi movie, “Invisible Invaders,” which you can not only catch on Netflix but is a gold mine of samples. The premise is that earth’s invaders can only be destroyed with sound waves. One sample, “Sound is the answer,” became the song’s title. Other samples that made up the collection were: “I asked you a question” “The answer is in sound” “The device must have used sonic rays” “If you think sound is the answer” “Sound vibrations” “Only two theories seem to make any sense” Some of these are fairly long, and at 135 BPM, I wanted to have the words line up with the rhythms as much as possible, and also mutate the samples for other and perhaps more nefarious purposes. Here are some tricks that worked for me. SAMPLE TRUNCATION WITHIN THE SEQUENCE You can shorten samples within a sampler or digital audio editor, but the easiest approach with MIDI sequencing is just to shorten the note’s duration. For example, I wanted to follow “The device must have used sonic rays” with “The device must have used sound vibrations.” Rather than cut and paste to replace “sonic rays” with “sound vibrations” and create another sample, I simply shortened the note for the first sample so that it ended after “...must have used,” then added a note for the “sound vibrations” sample immediately after to create the composite sentence (Fig. 1). Fig. 1: The E note triggers the first sample up to “...must have used,” while the G# triggers the sample for “sound vibrations.” The pitch bend messages speed up the first sample slightly so that it can end before the next sample, which I wanted to have start on the beat. 
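Represented as data, the trick is nothing more than editing note lengths and start times. Here is a hypothetical Python sketch of the composite sentence above; the E and G# pitches come from the figure caption, but the tick values are made up purely for illustration.

```python
PPQ = 480  # example sequencer resolution, ticks per quarter note

# (pitch, start_tick, duration_ticks) -- each pitch triggers one sample
composite = [
    # E triggers "The device must have used sonic rays," but the note is cut
    # short so playback stops after "...must have used"
    ("E3",  0,          int(3.5 * PPQ)),
    # G# triggers "sound vibrations" immediately afterward, starting on the beat
    ("G#3", 4 * PPQ,    2 * PPQ),
]

for pitch, start, dur in composite:
    print(f"{pitch:>3} starts at tick {start:5d} and lasts {dur} ticks")
```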
Truncating notes to extremely short times gives nifty percussive effects that sound very primitive and guttural. Generally, I map a bunch of samples across the keyboard as a multisample so that each sample covers at least a fifth, making different pitches available. Playing several notes at the desired rhythm, and setting their durations to 30-50 ms, gives the desired effect. This works best with sounds that have fairly abrupt beginnings; a word such as “whether” has an attack time that lasts longer than 30-50 ms. As one example, I wanted a series of eighth-note “ohs.” Triggering “only two theories seem to make any sense” with a note just long enough to play the “o” from “only” did the job. SETTING SAMPLE START TIME WITHIN THE SEQUENCE What if you want to play back the last part of a sample rather than the beginning? This is a little trickier. Put a controller 7 = 127 (maximum volume) message where you want the phrase to start in the sequence, and a controller 7 = 0 message somewhere before that. Jog the note start time so that the controller 7 = 127 message occurs right before the section of the phrase you want to hear (Fig. 2). Fig. 2: Adding a message for controller 7 = 0 (circled in red for clarity) mutes the phrase “The device must have used,” but the second message for controller 7 = 127 unmutes just before the sample says “sonic rays.” Note that in a multisampled keyboard setup, this will affect any other samples that are sounding at the same time. To fix this, set up the different samples multitimbrally. THE EARLY BIRD CATCHES THE EAR It seems that many samples work best if they’re nudged forward in the track so that they start just a bit ahead of the beat. This is probably because some sounds take a while to get up to speed (like the “w” sounds mentioned earlier). Another factor might be that the ear processes data on a “first come, first served” basis. Placing the sample very slightly before the beat gives it more importance than the sounds that follow it right on the beat. CREATING WEIRD DOUBLING EFFECTS If a sample covers a range of the keyboard rather than just one key, you can play two samples at the same time for groovacious effects. For example, copy a note that triggers a sample and transpose it down a half-step. The lower-pitched sample takes longer to play, so move it slightly ahead of the higher-pitched sample. Depending on the start times of the two notes, you’ll hear echo, flanging, and/or chorusing effects. If they start and end with about the same amount of delay, you’ll hear a way cool flanging effect in the middle. USING PITCH BEND TO CHANGE RHYTHM If a sample works perfectly except that you need to shorten or lengthen a single word, no problem--apply pitch bend to just one portion of the phrase. Bend pitch down to lengthen, bend up to shorten. This can also add some fun, goofy effects if taken to an extreme. Fig. 3 shows this technique applied to several notes. Fig. 3: The first note rises in pitch (thus shortening the sample); the fourth and fifth notes bend downward to lengthen the sample. The right-most note shortens the beginning, lengthens the middle for emphasis, and shortens the end. Combining all these tricks means you can lay samples into the track that sound as if they were cut specifically for your tune. Yes, they do take a little work—but the tight phrasing can preserve the rhythmic integrity of the song, and really make a difference. Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  17. Get more emotion out of your ’boards by putting on the pressure By Craig Anderton

Synthesizer keyboards are basically a series of on-off switches, so wresting expressiveness from them is hard. There’s velocity, which produces dynamics based on how fast the key goes from key up to key down; you also have mod wheel, footpedal, pitch bend, and usually one or two sustain switches, all of which can help with expressiveness. But some keyboards have an additional, and powerful, way to increase expressiveness: Aftertouch, also called Pressure.

THE TWO KINDS OF AFTERTOUCH

Aftertouch is a type of MIDI control signal. Like pitch bend, it’s not grouped in with MIDI Continuous Controller signals but is deemed important enough to be its own dedicated control signal. It produces an output based on how hard you press on the keys after they’re down. There are two types of aftertouch:

Channel aftertouch (or pressure). This is the most common form of aftertouch, where the average pressure being applied to the keys produces a MIDI control signal. More pressure increases the value of the control signal. From a technical standpoint, the usual implementation places a force-sensing resistor under the keyboard keys. Pressing on this changes the resistance, which produces a voltage. Converting this voltage to a digital value produces MIDI aftertouch data.

Key (or polyphonic) aftertouch (or pressure). Each key generates its own control signal, and the output value for each key corresponds to the pressure being applied to that key.

AFTERTOUCH ISSUES

Key aftertouch is extremely expressive, but with a few exceptions—notably Keith McMillen Instruments QuNexus (Fig. 1) and CME Xkey USB Mobile MIDI Keyboard—it’s not common in today’s keyboards.

Fig. 1: Keith McMillen Instruments QuNexus is a compact keyboard with polyphonic aftertouch.

The late, great synthesizer manufacturer Ensoniq made several keyboards with key aftertouch, but the company is no more. Another concern is that key aftertouch is data-intensive, because every key produces data. In the early days of MIDI, this much data often “choked” MIDI sequencers running on old computers that couldn’t keep up. Although many virtual synthesizers (and even hardware ones) can accept key aftertouch data, most likely you’ll be using a keyboard with channel aftertouch. Back then even channel aftertouch could produce too much data, so most MIDI sequencers included MIDI data filters that let you filter out aftertouch and prevent it from being recorded. Most DAWs that support MIDI still include filtering, and for aftertouch, this usually defaults to off. If you want to use aftertouch, make sure it’s not being filtered out (Fig. 2).

Fig. 2: Apple Logic (left) and Cakewalk Sonar (right) are two examples of programs that let you filter out particular types of data, including aftertouch, from an incoming MIDI data stream.

Depending on the keyboard, the smoothness of how the aftertouch data responds to your pressure can vary considerably. Some people refer to a keyboard as having “afterswitch” if it’s difficult to apply levels of pressure between full off and full on. However, most recent keyboards implement aftertouch reasonably well, and some allow for a very smooth response.

A final issue is that many patches don’t incorporate aftertouch as an integral element because the sound designers have no idea whether the controller someone will be using has aftertouch. So, most sounds are designed to respond to mod wheel, velocity, and pitch bend because those are standard. If you want a patch to respond to aftertouch, you’ll need to decide which parameter(s) you want to control, do your own programming to assign aftertouch to these parameters, and then save the edited patch.
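To see why polyphonic aftertouch is so much more data-intensive, it helps to look at the raw bytes. This small Python sketch builds the two message types as defined in the MIDI specification (status 0xD0 for channel pressure, 0xA0 for polyphonic key pressure); it is not tied to any particular keyboard or DAW.

```python
def channel_pressure(pressure, channel=0):
    """Channel aftertouch: one 2-byte message describes the whole keyboard."""
    return bytes([0xD0 | channel, pressure & 0x7F])

def poly_pressure(note, pressure, channel=0):
    """Polyphonic aftertouch: a 3-byte message for EVERY key being pressed."""
    return bytes([0xA0 | channel, note & 0x7F, pressure & 0x7F])

held_chord = [60, 64, 67]  # a C major triad

# One message total vs. one message per held note -- and both streams repeat
# every time the pressure changes, which is where the data pile-up comes from.
print(channel_pressure(90).hex())
print(b"".join(poly_pressure(n, 90) for n in held_chord).hex())
```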
AFTERTOUCH APPLICATIONS

Now that you know what aftertouch is and how it works, let’s consider some useful applications.

Add “swells” to brass patches. Assign aftertouch to a lowpass filter cutoff, then press harder on the keys to make the sound brighter. You may need to lower the initial filter cutoff frequency slightly so the swell can be sufficiently dramatic. You could even assign aftertouch to both filter and to a lesser extent to level, so that the level increases as well as the brightness.

Guitar string bends. Assign aftertouch to pitch so that pressing on the key raises pitch—just like bending a string on a guitar. However, there are two cautions: Don’t make the response too sensitive, or the pitch may vary when you don’t want it to; and this works best when applied to single-note melodies, unless you want more of a pedal steel-type effect.

Introduce vibrato. This is a very popular aftertouch application. Assign aftertouch to pitch LFO depth, and you can bring vibrato in and out convincingly on string, guitar, and wind patches. The same concept applies to introducing tremolo to a signal.

“Bend” percussion. Some percussion instruments become slightly sharp when first struck. Assign aftertouch to pitch; if you play the keys percussively and hit them hard, you’re bound to apply at least some pressure after the key is down, and bend the pitch up for a fraction of a second. This can add a degree of realism, even if the effect is mostly subliminal.

Morph between waveforms. This may take more effort to program if you need to control multiple parameters to do morphing. For example, I use this technique with overdriven guitar sounds to create “feedback.” I’ll program a sine wave an octave or octave and fifth above the guitar note, and tie its level and the guitar note’s level to aftertouch so that pressing on a key fades out the guitar while fading in the “feedback.” This can create surprisingly effective lead guitar sounds.

Control signal processors. Although not all synths expose signal processing parameters to MIDI control, if they do, pressure can be very useful—mix in echoed sounds, increase delay feedback, change the rate of chorusing for a more randomized effect, increase feedback in a flanger patch, and the like.

I’d venture a guess that few synthesists use aftertouch to its fullest—so do a little parameter tweaking, and find out what it can do for you.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  18. Time for a quick trip down the disinformation superhighway by Craig Anderton

Maybe it’s just the contentious nature of the human race, but as soon as digital audio appeared, the battle lines were drawn between proponents of analog and those who embraced digital. A lot of claims about the pros and cons of both technologies have been thrown back and forth; let’s look at what’s true and what isn’t.

A device that uses 16-bit linear encoding with a 44.1 kHz sampling rate gives “CD quality” sound. All 16-bit/44.1 kHz systems do not exhibit the same audio quality. The problem is not with the digital audio per se, but interfacing to the analog world. The main variables are the A/D converter and output smoothing filter, and to a lesser extent, the D/A converter. Simply replacing a device’s internal A/D converter with an audiophile-quality outboard model that feeds an available AES/EBU or S/PDIF input can produce a noticeable (and sometimes dramatic) change. What’s more, one of digital audio’s dirty little secrets is that when the CD was introduced, some less expensive players used 12-bit D/A converters—so even though the CD provided 16 bits of resolution, it never made it past the output. I can’t help but think that some of the early negative reaction to the CD’s fidelity was about limitations in the playback systems rather than an inherent problem with CDs.

16 bits gives 96 dB of dynamic range, and 24 bits gives 144 dB of dynamic range. There are two things wrong with this statement. First, it’s not really true that each bit gives 6 dB of dynamic range; for reasons way too complex to go into here, the actual number is (6.02 × N) + 1.76, where “N” is the number of bits. Based on this equation, an ideal 16-bit system has a dynamic range of 98.08 dB. As a rule of thumb, though, 6 dB per bit is a close enough approximation for real-world applications. Going from theory to practice, though, many factors prevent a 16-bit system from reaching its full potential. Noise, calibration errors within the A/D converter, improper grounding techniques, and other factors can raise the noise floor and lower the available dynamic range. Many real-world 16-bit devices offer (at best) the performance of an ideal 14-bit device, and if you find a 24-bit converter that really delivers 24 bits of resolution...I want to buy one! Also note that for digital devices, dynamic range is not the same as signal-to-noise ratio. The AES has a recommended test procedure for testing noise performance of a digital converter; real-world devices spec out in the 87 to 92 dB range, not the 96 dB that’s usually assumed. (By the way, purists should note that all the above refers to undithered converters.)
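As a quick sanity check on that formula, here is the arithmetic in Python; nothing vendor-specific, just the equation quoted above.

```python
def ideal_dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal, undithered converter."""
    return 6.02 * bits + 1.76

for bits in (12, 14, 16, 20, 24):
    print(f"{bits} bits: {ideal_dynamic_range_db(bits):.2f} dB")
# 16 bits -> 98.08 dB (not 96), 24 bits -> 146.24 dB (not 144)
```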
Digital has better dynamic range than analog. With quality components and engineering, analog circuits can give a dynamic range in excess of 120 dB — roughly equivalent to theoretically perfect 20-bit operation. Recording and playing back audio with that kind of dynamic range is problematic for either digital or analog technology, but when 16-bit linear digital recording was introduced and claimed to provide “perfect sound forever,” the reality was that quality analog tape running Dolby SR had better specs.

With digital data compression like MP3 encoding, even though the sound quality is degraded, you can re-save it at a higher bit rate to improve quality. Data compression programs for computers (as applied to graphics, text, samples, etc.) use an encoding/decoding process that restores a file to its original state upon decompression. However, the data compression used with MP3, Windows Media, AAC, etc. is very different; as engineer Laurie Spiegel says, it should be called “data omission” instead of “data compression.” This is because parts of the audio are judged as not important (usually because stronger sounds are masking weaker sounds), so the masked parts are simply omitted and are not available for playback. Once discarded, that data cannot be retrieved, so a copy of a compressed file can never exhibit higher quality than the source.

Don’t ever go over 0 VU when recording digitally. The reason for this rule is that digital distortion is extremely ugly, and when you go over 0 VU, you’ve run out of headroom. And frankly, I do everything I can to avoid going over 0. However, as any guitarist can tell you, a little clipping can do wonders for increasing a signal’s “punch.” Sometimes when mixing, engineers will let a sound clip just a tiny bit—not enough to be audible, but enough to cut some extremely sharp, short transients down to size. It seems that as long as clipping doesn’t occur for more than about 10 ms or so, there is no subjective perception of distortion, but there can be a perception of punch (especially with drum sounds). Now, please note I am by no means advocating the use of digital distortion! But if a mix is perfect except for a couple clipped transients, you needn’t lose sleep over it unless you can hear that there’s distortion. And here’s one final hint: If something contains unintentional distortion that’s judged as not being a deal-breaker, it’s a good idea to include a note to let “downstream” engineers (e.g., those doing mastering) know it’s there, and supposed to stay there. You might also consider normalizing a track with distortion to -0.1 dB, as some CD manufacturers will reject anything that hits 0 because they will assume it was unintentional.

Digital recording sounds worse than vinyl or tape because it’s unnatural to convert sound waves into numbers. The answer to this depends a lot on what you consider “natural,” but consider tape. Magnetic particles are strewn about in plastic, and there’s inherent (and severe) distortion unless you add a bias in the form of an ultrasonic AC frequency to push the audio into the tape’s linear range. What’s more, there’s no truly ideal bias setting: you can raise the bias level to reduce distortion, or lower it to improve frequency response, but you can’t have both, so any setting is by definition a compromise. There are also issues with the physics of the head that can produce response anomalies. Overall, the concept of using an ultrasonic signal to make magnetic particles line up in a way that represents the incoming audio doesn’t seem all that natural.

Fig. 1: This is the equalization curve your vinyl record goes through before it reaches your ears.

Vinyl doesn’t get along with low frequencies, so there’s a huge amount of pre-emphasis added during the cutting process, and equally huge de-emphasis on playback—the RIAA curve (Fig. 1) boosts the response by up to 20 dB at low frequencies and cuts by up to 20 dB at high frequencies, which hardly seems natural. We’re also talking about a playback medium that depends on dragging a rock through yards and yards of plastic. Which of these options is “most natural” is a matter of debate, but it doesn’t seem that any of them can make too strong a claim about being “natural”!

Craig Anderton is Editor Emeritus of Harmony Central.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  19. Use effects to bend, fold, spindle, and mutilate the sound of your bass by Craig Anderton

Want to make your bass sound fatter, fuller, thinner, thunkier, nicer, or nastier? Signal processors can help deliver the sound you want—providing you know what device to choose, and how to use it. Most signal processors fall into one of four basic categories: level control, synthesis of new signals (e.g., an octave divider), timbre alteration, or time shift (delay). Let’s look at the most common devices in each category, and how they relate to bass.

LEVEL-ALTERING EFFECTS

Level-altering effects change your signal’s dynamics. Depending on the application, you may want to decrease or increase dynamic range, or subject the full dynamic range to overall level control (e.g., a volume pedal).

Compressor
A compressor smooths out the bass’s dynamic range by automatically attenuating signal peaks and boosting signal valleys. This creates a more consistent and “rounder” signal with more sustain. Originally invented to “squeeze” signals with a wide dynamic range into a medium with a narrower dynamic range (such as tape or AM radio), compression is just as likely to be used as an effect rather than to overcome dynamic range limitations.

Limiter
A limiter also restricts dynamic range, but “clamps” signals to a preset maximum threshold level. While popular with vocalists who lack good mic technique (and whose level therefore fluctuates wildly), limiters work very well with bass when you want to retain a fairly wide dynamic range but also need to tame the occasional loud note or slap.

Volume Pedal
Aside from standard level control, you can use a volume pedal to produce “bowed” effects—go from full off at the beginning of a note to full on over a period of about half a second (this eliminates the initial pluck sound). If you split your signal into two paths (for example, a dry and processed path), inserting a volume pedal in the processed section of the split lets you mix in the desired degree of processed sound.

Noise Gate
A noise gate doesn’t let any signal through from input to output unless the input signal exceeds a certain threshold. So, if you set the threshold just above any hiss and hum that occurs when you don’t play, the hiss and hum won’t make it to the output; but as soon as you play a note, the input signal exceeds the threshold, and the gate lets your signal through. The note will usually be loud enough to mask the noise, so as long as you play, you won’t hear noise. If you stop playing, the noise gate will keep things quiet.

SYNTHESIZING EFFECTS

These effects are not commonly used with bass, but that may be because manufacturers usually optimize synthesizing-based effects for guitar. However, sometimes all that’s required to get a good sound with these effects and bass is to split your signal into dry and processed signal paths (as described earlier) so you can set a balance between the dry and processed sounds.

Distortion
Distortion adds harmonics and a lot of high-frequency content. This abundance of high frequencies tends to make the bottom less prominent, so many bass players don’t use distortion. However, following distortion with a filter that can reduce high frequencies (for example, a graphic equalizer; see later) can restore the proper bass/treble balance, yet leave enough of the distortion effect intact to give a rougher, ruder, more “growling” sound. Even better, some distortion devices are designed specifically for bass (Fig. 1).

Fig. 1: Electro-Harmonix’s Deluxe Bass Big Muff Pi is a distortion stompbox that’s optimized for bass.
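The "distortion followed by high-frequency reduction" idea is easy to sketch in code. Here is a minimal Python/NumPy illustration, assuming a generic soft clipper into a simple one-pole lowpass; it is not a model of any particular pedal, just the signal chain described above.

```python
import numpy as np

def soft_clip(x, drive=8.0):
    """Generic soft clipping: adds harmonics and lots of high-frequency content."""
    return np.tanh(drive * x)

def one_pole_lowpass(x, cutoff_hz, sr=44100):
    """Simple one-pole lowpass to pull the added highs back down after distortion."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.zeros_like(x)
    prev = 0.0
    for i, s in enumerate(x):
        prev = (1.0 - a) * s + a * prev
        y[i] = prev
    return y

sr = 44100
t = np.arange(sr) / sr
bass = 0.8 * np.sin(2 * np.pi * 55 * t)              # a low A as a stand-in for a bass note
growl = one_pole_lowpass(soft_clip(bass), 1200, sr)  # distortion, then tame the top end
```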
Octave Divider
Although most octave dividers are designed for guitar, playing high up on the bass neck, with an octave divider providing a rich, octave-lower subharmonic, can deliver a huge “8-string” bass sound. Make sure you test drive any octave divider before you buy it, because some have a hard time handling the low bass strings (although you’ll probably find the octave divider most useful when playing higher notes anyway).

TIMBRE-ALTERING EFFECTS (EQUALIZERS)

These effects are the “bread and butter” of a good bass sound—just about any recorded bass sound you hear probably used some kind of timbre-altering effect, whether to increase midrange for more pick “snap,” decrease low frequencies for a less muddy bottom, or increase low frequencies to shake the rafters when you turn up the volume.

Graphic Equalizer
This is sort of like a fancy tone control. Instead of having just bass and treble controls, though, a typical graphic equalizer (EQ for short) will have anywhere from five to 30 or more “bands,” each of which can boost or cut the response at a specific frequency. If you want more low end, you could increase the lower bands; if there’s too much high end, you can cut the upper bands. Increasing the midrange gives more punch. The graphic equalizer is best suited to general tone shaping since you can’t alter a band’s frequency, nor can you vary the boost or cut’s bandwidth (i.e., the range of frequencies that’s boosted or cut).

Parametric Equalizer
A parametric equalizer has fewer bands than a graphic EQ, but each band is more sophisticated and generally includes variable frequency and bandwidth. As an example of why this is useful, suppose there’s a “dead” spot on your bass neck where some notes don’t come through as strongly as the others. With a parametric, you can dial in that specific range of frequencies, then adjust the bandwidth to boost only that note while leaving the others pretty much undisturbed.

Wah Pedal
Generally shunned by bassists because it thins out the bottom end, a wah is nonetheless useful if you split your bass into two amp channels, with one split going through the wah and the other straight to your amp. The straight feed maintains the bottom; mix in as much wah as you want on top of that without losing the low end. The same holds true for wah-based effects, like auto-wah (Fig. 2).

Fig. 2: MXR’s M82 provides filtering effects that respond to the dynamics of your playing.

TIME-ALTERING EFFECTS

Time-altering effects are not particularly popular with bass, probably because they tend to make the sound “spacier” and more diffuse—which is at odds with the tight, rhythmically accurate sound of a good bass part. Still, for thickening effects, or to make the bass blend in better with a background track, time-altering effects can be very useful (particularly with synthesized bass). Delay is also popular with bass tracks intended for dance music, as it can add an extra rhythmic element.

Flanger
A flanger imparts a whooshing, “jet airplane” effect. However, flangers are most effective with signals rich in harmonics, which is not the case with electric bass. Although useful for thickening up the sound somewhat, flanging isn’t one of the bass’s A-list effects.
Chorusing
A chorus unit splits your signal into two paths and adds a bit of varying time-shift to one of them—not enough to create an echo, but enough to provide some differentiation between the delayed and your original signal. This simulates the effect of two instruments playing in ensemble, giving a thicker, more animated sound. Although a chorused sound becomes more diffused and less “focused,” chorusing is sometimes used with bass.

Delay
Like a chorus unit, a delay splits your signal into two paths; but in this case, there’s enough delay to cause an obvious echo effect. Usually there’s also a feedback control that, when turned up, produces a series of echoes, each quieter than the previous echo. Echoes work best when synchronized to the music’s tempo—you can even play rounds and harmonies with yourself, or turn eighth-note runs into sixteenth-note runs.

Reverb
Reverb electronically simulates the sound of playing in a large hall or other acoustic environment. However, when playing in these types of spaces the bass often becomes muddy and indistinct, so electronic reverbs must be used sparingly, if at all. Very few bassists use reverb.

MULTIPLE EFFECTS

Many devices include multiple effects (such as compression, equalization, delay, and reverb) in a single package (Fig. 3).

Fig. 3: The Zoom B3 allows up to three effects at once, along with amp simulation to re-create the sound of a bass amp.

Although it’s sometimes tedious to customize the available sounds to your own needs, the flexibility, size, and cost-effectiveness of this type of multieffects are powerful incentives to add one to your setup. There are also preamps designed specifically for bass that include limiting/compression, equalization, and perhaps some effects loops for adding in some of your favorite existing effects. Either of these options can be more convenient and reliable than using a bunch of battery-powered stomp boxes interconnected with patch cords.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  20. Create ethereal, unusual reverb effects with phase reversal by Craig Anderton

Reverb hasn’t changed a lot over the years: you emulate a gazillion sound waves bouncing off surfaces. But if you’re a gigging musician, one thing that may have changed is how you hear reverb. Back when you were in the audience, you heard reverb coming at you from all sides of a room. Then when you graduated to the stage, reverb started sounding different: you initially heard the sound of your amp or monitors, and then you heard the reverb as it reflected off the walls, ceiling, and other surfaces. The effect is a little like pre-delay at first, but as the reverb waves continue to bounce, you hear a sort of “bloom” where the reverb level increases before decaying.

Controlling reverb to give this kind of effect can produce some lovely, ethereal results. It also has the added bonus of not “stepping on” the original signal being reverberated, because the reverb doesn’t reach its full level until after the original signal has occurred. It’s not hard to set up this effect; here’s how (Fig. 1).

CREATING ETHEREAL REVERB

You’ll need two sends from the track to which you want to add reverb that go to two reverb effects buses. These should have the same settings for send level, pan, and pre-post. Insert your reverb of choice into one of the effects bus returns, and set the reverb parameters for the desired reverb sound. For starters, set a decay time of around 2 seconds. Next, insert the same reverb into the other effects bus return, with the same settings. If you can’t do something like drag/copy the existing reverb into another track, save the first reverb’s settings as a preset so you can call it up in the other reverb. The returns should have identical settings as well. Assuming the sends are pre-fader, turn down the original signal’s track fader so you hear only the reverb returns (Fig. 1).

Fig. 1: The yellow lines represent sends from a guitar track to two send returns, each with a reverb inserted. One return also has a plug-in that reverses the phase.

Now it’s time for the “secret sauce”: reverse the phase (also called polarity) of one of the reverb returns. Different DAWs handle this in different ways. Some may have a phase button, while others might have a phase button only for tracks but not for send returns. For situations like this, you can usually insert some kind of phase-switching plug-in like Cakewalk Sonar’s Channel Tools, PreSonus Studio One Pro’s Mixtool, or Ableton Live’s Phase. Reversing the phase should cause the reverb to disappear. If not, then there’s a mismatch somewhere with your settings—check the send control levels, reverb parameters, reverb return controls, etc. Another possibility is that the reverb has some kind of randomizing option to give more “motion.” For example, with Overloud’s Breverb 2, you’ll need to go into the Mod page and turn down the Depth control. In any event, find the cause of the problem and fix it before proceeding.

Finally, decrease the reverb decay time on one of the reverbs (e.g., to around 1 second), and start playback. When a signal first hits the reverbs, they’ll be identical or at least very similar and cancel; as the reverb decays, the two reverbs will diverge more, so there will be less cancellation and the reverb tail will “bloom.” Because the cancellation reduces the overall level of the reverbs, you’ll likely need to compensate for this by increasing the reverb return levels. However, note that the two reverb returns need to remain identical with respect to each other. I find the easiest way to deal with this is to group the two faders so that adjusting one fader automatically adjusts the other one. If you’re using long reverb times and there’s not much difference between the two decay times, the volume will be considerably softer. In that case, you may need to send the bus outputs to another bus so you can raise the overall level of the combined reverb sound.
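Here is a small Python/NumPy sketch of why the tail "blooms." It stands in for the two reverb returns with synthetic noise tails built from the same noise but different decay times (real reverbs are obviously far more complex), inverts one, and sums them: the early portion largely cancels, while the later portion survives and then decays.

```python
import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr
noise = np.random.default_rng(0).standard_normal(t.size)

# Two reverb returns for one impulsive note: identical diffuse content,
# but one decays over ~2 seconds and the other over ~1 second
return_a = noise * np.exp(-t / 2.0)
return_b = noise * np.exp(-t / 1.0)

combined = return_a - return_b      # the minus sign is the polarity-reversed return

def rms(x, start_ms, win_ms=50):
    a = int(sr * start_ms / 1000)
    return float(np.sqrt(np.mean(x[a:a + int(sr * win_ms / 1000)] ** 2)))

# Early on the two returns nearly cancel; later they diverge, the tail "blooms,"
# and then the whole thing decays
for ms in (10, 400, 1300, 1900):
    print(f"{ms:5d} ms   combined RMS: {rms(combined, ms):.3f}")
```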
APPLICATIONS

Because it takes a while for the reverb to develop, this technique probably isn’t something you’ll want to use on uptempo songs. It’s particularly evocative with vocals, especially ones where the phrasing has some “space,” as well as with languid, David Gilmour-type solo guitar lines. But I’ve also tried this ethereal reverb effect on individual snare hits and a variety of other signals, so feel free to experiment—maybe you’ll discover additional applications. Happy ambience!

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  21. Replace your pickup selector switch with a panpot by Craig Anderton

I've tried several designs over the years to be able to do a continuous pan between the bridge and neck pickups, like how a mixer panpot sweeps between the left and right channels. This isn’t as easy as it sounds, but if you’re in an experimental mood, this mod gives you a wider range of colors from your axe without needing outboard boxes like equalizers. However, there are some tradeoffs. A pickup selector switch has no ambiguous positions: it’s either neck, bridge, or both—end of story. A panpot control has two unambiguous positions at the extremes of rotation, but there's a whole range of possible sounds in between. These variations are subtle, but while it's more difficult to dial in an exact setting than with a standard pickup selector switch, in return there are more possibilities.

ABOUT THE SCHEMATIC

This circuit uses a standard potentiometer for volume, a dual-ganged potentiometer to do the panning, and an SPDT (single-pole, double-throw) switch with a third, center-off position. Although you won’t need to drill any extra holes if your guitar has a selector switch/volume/tone control combination, the dual gang pot is thicker than standard pots; this could be a problem with thinner-body guitars. Due to all the variables in this circuit, I recommend running a pair of wires (hot and ground) from each pickup to a test jig so you can experiment with different parts values. To avoid hum problems, make sure the metal cases of any pots or switches are grounded. If you end up deciding this mod’s for you, build the circuitry inside the guitar.

The dual-ganged panpot (R3) provides the panning. Ideally, this would have a log taper for one element and an antilog taper for the other element, but these kinds of pots are very difficult to find. A suitable workaround is to use a standard dual-ganged linear taper pot and add "tapering" resistors R1 and R2. If these are 20% of the pot's total resistance, they’ll change the pot taper to a log/antilog curve (see the sketch below). The panpot value can range between 100k and 1 Meg, which would require 22k and 220k tapering resistors respectively. Higher resistance values will provide a crisper, more accurate high end while lower values will reduce the highs and output somewhat. A 100k panpot with 22k tapering resistors will cause noticeable dulling and a loss of volume unless you use active pickups, in which case lower values are preferred to higher values; however, some people might prefer the reduced high end when playing through distortion, because this can warm up the sound.
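To get a feel for what a tapering resistor does to a linear pot, here is an illustrative Python calculation. It assumes the common arrangement where the resistor loads the wiper toward the pot's low end, which is the classic pseudo-log approximation; the actual schematic may wire R1 and R2 differently, so treat this strictly as a sketch. The 100k/22k values come from the text above.

```python
R_POT = 100e3    # one gang of the dual 100k linear panpot
R_TAPER = 22e3   # tapering resistor, roughly 20% of the pot value

def loaded_divider(x, r_pot=R_POT, r_load=R_TAPER):
    """Output fraction of a linear pot at rotation x (0..1) when a resistor
    loads the wiper to the low end -- the classic pseudo-log taper trick."""
    lower = (x * r_pot * r_load) / (x * r_pot + r_load)  # loaded wiper-to-low leg
    upper = (1.0 - x) * r_pot                            # wiper-to-high leg
    return lower / (lower + upper)

for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"rotation {x:.1f}: linear {x:.2f} -> loaded {loaded_divider(x):.2f}")
```

The output ratio stays well below the rotation fraction through most of the travel and then rises quickly near the top, which is the log-like curve the tapering resistors are there to approximate.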
Two final notes. First, adjust the two pickups for the same relative output by adjusting their distance from the strings; if one pickup predominates, it will shift the panpot's apparent center off to one side. Second, switching one pickup out of phase provides yet another bunch of sounds; also note that removing the tapering resistors may produce a feel that you prefer, particularly if one of the pickups is out of phase.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
22. Signal processing and cool effects aren't just for electric guitars

by Craig Anderton

Although the goal with acoustic guitar is often to create the most realistic, organic sound possible, a little electric-type processing can enhance an acoustic's sound in many ways that open up new creative avenues. We'll assume your acoustic has been electrified (presumably with a piezo pickup) and can produce a signal of sufficient level, and of the proper impedance, to drive contemporary effects units. If you're not sure about this, contact the manufacturer of the pickup assembly, or whoever did the installation.

There are quite a few processors dedicated to acoustic guitar, like Zoom's A3 (Fig. 1).

Fig. 1: Zoom's A3 packages acoustic guitar emulations and effects in a floor pedal format.

While such units are convenient and cost-effective, this article takes more of an à la carte approach with conventional, individual effects.

IMPROVING TONE

Most electrified acoustics have frequency response anomalies—peaky midrange, boomy bass, and so on—caused primarily by the interaction among the guitar body, pickup, and strings. While some of these anomalies are desirable (classical guitars wouldn't sound as full without the bass resonance most instruments exhibit), some are unwanted. Smoothing out the response is a task for equalization.

There are two main types of equalizers (EQ for short) used with acoustic guitar: graphic and parametric. A graphic EQ splits the audio spectrum into numerous frequency bands (Fig. 2).

Fig. 2: Source Audio's Programmable EQ is a graphic EQ that can save and recall custom settings.

Depending on the model, the range of frequencies (bandwidth) covered by each band can be as wide as an octave or as narrow as 1/3 octave. The latter types are more expensive because of the extra resolution. The response of each band can be boosted to accent the frequency range covered by that band, or attenuated to make a frequency range less prominent. Graphic equalizers are excellent for general tone-shaping applications such as making the sound "brighter" (more treble), "warmer" (more lower midrange), "fuller" (more bass), etc.

A parametric equalizer has fewer bands—typically two to four—but offers more precision, since you can dial in a specific frequency and bandwidth for each band, as well as boost or cut the response. So, if your guitar is boomy at a particular frequency, you can reduce the response at that specific frequency only and set a narrow bandwidth to avoid altering the rest of the sound, or set a wider bandwidth if you want to affect more of the sound (see the short sketch at the end of this section).

Either type of equalization can help balance your guitar with the rest of the instruments in a band. For example, both the guitar and the male voice tend to fall into the midrange area, which means they compete to a certain extent; reducing the guitar's midrange response will leave more "space" for your voice. Another example: if your band has a bass player, you might want to trim back on the bass to avoid a cluttered low end. However, if your band is bassless, try boosting the low end to help fill out the bottom a bit.

Note that piezo pickups have response anomalies of their own, and equalization is very helpful for evening these out. For more information, check out the article "Make Acoustic Guitar Piezo Pickups Sound Great" at Gibson.com.
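To make the parametric-EQ idea concrete, here is a minimal Python sketch (using NumPy and SciPy) of a single peaking band that cuts a boomy resonance. The 180 Hz center frequency, 6 dB cut, and Q of 4 are purely illustrative values rather than measurements from any particular guitar, and the coefficient formulas are the standard Audio EQ Cookbook ones.

import math
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    # Peaking-EQ biquad coefficients (Audio EQ Cookbook form), returned as (b, a).
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * math.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * math.cos(w0), 1 - alpha / a_gain])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(fs, f0=180.0, gain_db=-6.0, q=4.0)   # narrow 6 dB cut at 180 Hz
guitar = np.random.randn(fs)        # stand-in for a recorded acoustic guitar track
deboomed = lfilter(b, a, guitar)

A second call to peaking_eq with a positive gain_db and a lower Q (wider bandwidth) would give the gentler, broader boost described above.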
BRIGHTNESS OR FULLNESS WITHOUT EQUALIZATION

Many multieffects offer pitch transposition. I've found that transposing an acoustic guitar sound up an octave (for a brighter sound) or down an octave (for a fuller sound) can sound pretty good, provided you mix the transposed signal way in the background of the straight sound—you don't want to overwhelm the straight sound, particularly since the processed sound will generally sound somewhat artificial anyway.

BIGGER SOUNDS

A delay line can simulate having another guitarist mimicking your part to create a bigger-than-life, ensemble sound. Run your guitar through a delay set for a short delay (30 to 50 milliseconds), and turn the feedback (or regeneration) and modulation controls to minimum; this produces a slapback echo that gives a tight doubling effect. Another option is chorusing, which creates more of a swirling, animated sound as opposed to a straight doubling. The settings are similar to slapback, except you use a shorter delay (around 10 to 30 milliseconds) and add a little modulation to vary the delay time and produce the "swirling" effect. Note: with most delay effects, it's best to set the balance (mix) control so that the delayed sound is less prominent than the dry sound. (A bare-bones sketch of the slapback settings appears at the end of this article.)

INCREASED SUSTAIN

Guitars are percussive instruments: plucking a string produces a huge burst of energy that then rapidly decays to a much lower level. Often this is what you want, but in some cases the decay occurs too quickly and you might prefer more sustain. A limiter is just the ticket. This device decreases the guitar's dynamic range by holding the peaks to a preset level called the threshold, then optionally amplifying the limited signal to bring the peaks back up to their original level (Fig. 3).

Fig. 3: The signal with 4 dB of limiting (blue) has a higher average level than the original recording.

Don't set the threshold too low, or the guitar will sound "squeezed" and unnatural. Also, although many people confuse limiters and compressors, they are not identical devices. A compressor tries to maintain a constant output in the face of varying input signals, which means that not only are high-level signals attenuated, but low-level signals may be subject to a lot of amplification. The above explanation of limiting is fairly basic, and there are several variations on this particular theme. Early limiters would simply clamp the signal to the threshold; newer models can do that, but may also allow a gentler limiting action for a more natural sound.

PEDALLING YOUR WAY TO BIGGER SOUNDS

If you have a two-channel amp or mixer, one trick that's applicable to all of the above options is to split your guitar signal into two paths, with one split carrying the straight guitar sound while the other goes through a volume pedal before feeding the desired signal processor. Use the volume pedal to go from a normal to a processed acoustic guitar sound, and bring in as much of the processed sound as you want.

The possibilities for processing acoustic guitar are just as exciting as for processing electric guitars. The best way to learn, though, is not just by reading this article—my intention is to get you inspired enough to experiment. You never know what sounds you'll discover as you plug your guitar output into various device inputs.
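Here is the slapback sketch promised above: a minimal Python illustration of the doubling settings, with a single delayed copy mixed behind the dry signal. The 40 ms delay, zero feedback, and 35% mix are just starting points taken from the ranges suggested in the article.

import numpy as np

def slapback(dry, fs, delay_ms=40.0, feedback=0.0, mix=0.35):
    # Mix a delayed copy of the signal behind the original for a tight doubling effect.
    d = int(fs * delay_ms / 1000)
    buf = np.zeros(d)                       # circular delay buffer
    wet = np.empty_like(dry)
    idx = 0
    for n, x in enumerate(dry):
        delayed = buf[idx]
        buf[idx] = x + feedback * delayed   # feedback = 0 gives a single repeat
        wet[n] = delayed
        idx = (idx + 1) % d
    return dry + mix * wet

fs = 44100
guitar = np.random.randn(fs) * 0.2          # stand-in for an acoustic guitar track
doubled = slapback(guitar, fs)

For the chorus variation, drop delay_ms into the 10 to 30 ms range and modulate it slightly with a slow LFO; the structure stays the same.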
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
23. Dynamics control isn't just for vocals and physical instruments

by Craig Anderton

Adding dynamics control after a synthesizer can be extremely helpful. This isn't just about controlling levels; it's also about adding an effect, or compensating for limitations that are inherent in synthesizers. Limiters make great post-synth processors, as they can keep peaks under control while preserving the synth's character. Compression, on the other hand, can "thicken" the sound somewhat while controlling peaks. Experiment with a DAW's dynamics options, and you'll find certain processors are better suited to particular sounds; for example, you'll probably find that light compression works better on string patches than hard limiting. So much for background; here are five tips on using dynamics with synthesizers.

1. When using a compressor, set the controls for fast attack and moderate decay. If the main goal is to trap short peaks and transients, set the threshold fairly high and use a very high compression ratio. This will leave most of the signal relatively unaffected, but peaks won't exceed a safe, non-distorting level. Alternatively, use a limiter to shave off the peaks (Fig. 1), and you won't have to concern yourself with attack times. (A bare-bones sketch of this kind of peak-taming appears after the tips.)

Fig. 1: SONAR's Concrete Limiter being used to tame the peaks from the Z3TA+ synthesizer's Techno Chords preset.

2. Detuned oscillators, though they can sound animated and fat, create strong peaks when the chorused waveform peaks occur at the same time. Although you can solve this by dropping one oscillator's level about 30%-50% below the other, using compression (Fig. 2) or limiting will allow the sound to remain animated—yet the peaks won't be as drastic.

Fig. 2: Universal Audio's 1176LN Limiting Amplifier is keeping the chorusing in a fat Arturia mini V patch under control.

3. Electric bass parts are often compressed to maintain a more consistent low-end level, and the same trick works with synth bass parts.

4. Drum machine sounds work well with compression, but processing an entire kit can cause undesirable side effects such as pumping and breathing. To avoid this, split the drums into two submixes, with the kick, snare, and toms feeding a compressor and the assorted percussion and cymbals feeding a non-processed bus. The main drum sounds will be compressed, but the lighter, more accent-oriented sounds will retain their original dynamic range and not be subject to the side effects of compression.

5. High-resonance filter settings are troublesome; hitting a note at the filter's resonant frequency creates a radical peak. To keep this under control, one option is to use no more resonance than necessary—but what fun is that? This is another situation where a limiter can keep levels under control without robbing the sound of its essential character.

Dynamic control is a beautiful thing—and that's true of virtual instruments as well as other signal sources.
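Here is the sketch mentioned in tip 1: a minimal peak-taming compressor in Python, set up along the lines described there (fast attack, moderate release, high threshold, high ratio). The specific numbers are illustrative defaults, not magic values, and a real plug-in will be far more refined.

import numpy as np

def tame_peaks(x, fs, threshold_db=-6.0, ratio=10.0, attack_ms=1.0, release_ms=80.0):
    # Peak-sensing compressor: fast attack, moderate release, high threshold/ratio,
    # so only short peaks get pulled down and the rest of the signal passes untouched.
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, sample in enumerate(x):
        level = abs(sample)
        coeff = atk if level > env else rel          # fast when rising, slower when falling
        env = coeff * env + (1.0 - coeff) * level    # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0 else 0.0
        out[n] = sample * 10.0 ** (gain_db / 20.0)
    return out

fs = 44100
synth = np.random.randn(fs) * 0.2
synth[10000:10100] = 1.0                 # a short peak, like a resonant filter hit
controlled = tame_peaks(synth, fs)

Pushing the ratio toward infinity turns this into the limiter-style clamp from tip 1, while lowering the threshold and ratio moves it toward the gentler "thickening" compression described for basses and string patches.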
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

24. Get the most out of today's digital wonderboxes

by Craig Anderton

Everyone's always looking for a better guitar sound, and while the current infatuation with vintage boutique effects has stolen a bit of the spotlight from digital multieffects, don't sell these processors short. When properly programmed, they can emulate a great many "vintage" timbres, as well as create sounds that are extremely difficult to achieve with analog technology. As with many other aspects of audio, there is no one "secret" that gives the ultimate sound; great sounds are often assembled, piece by piece. Following are ten tips to help you put together a better guitar sound using multieffects.

Line 6's POD HD500 is one of today's most popular digital multieffects for guitar.

1. DON'T BELIEVE THE INPUT LEVEL METERS

Unintentional digital distortion can be nasty, so minimize any distortion other than what's created intentionally within the multieffects. The input level meters help you avoid input overload, but they may not tell you about the output. For example, a highly resonant filter sound (e.g., wah) can increase the signal level internally, so even if the original signal doesn't exceed the unit's input headroom, it can nonetheless exceed the available headroom elsewhere. Some multieffects meters can monitor the post-processed signal, but this isn't a given. If the distortion starts to "splatter" yet the meters don't indicate overload, try reducing the input level.

2. USE PROPER GAIN-STAGING

If a patch uses many effects, then there are several level-altering parameters, and these should interact properly—just like gain-staging with a mixer. Suppose an equalizer follows distortion. The distortion will probably include input and output levels, and the filter will have level boost/cut controls for the selected frequency. As one illustration of gain-staging, suppose the output filter boosts the signal at a certain frequency by 6 dB. If the signal coming into the filter already uses up the available headroom, asking it to increase by 6 dB means crunch time. Reducing the distortion output level so that the signal hitting the filter is at least 6 dB below the maximum available headroom lets the filter do its work without distortion.

3. ADD EQ PEAKS AND DIPS FOR REALISM

Speakers, pickups, and guitar bodies have anything but a flat response. Much of the characteristic difference between devices is due to frequency response variations—peaks and dips that form a particular "sonic signature." For example, I analyzed some patches David Torn programmed for a multieffects and found that he likes to add 1 kHz boosts. On the other hand, I often add a slight boost around 3.5 kHz so guitars can cut through a mix even at lower volume levels. With 12-strings, I usually cut the low end to get more of a Rickenbacker sound. Parametric EQ is ideal for this type of processing.

4. CUT DELAY FEEDBACK LOOP HIGH FREQUENCIES

Each successive repeat with tape echo and analog delay units has progressively fewer high frequencies, due to analog tape's limited bandwidth. If your multieffects can reduce high frequencies in the delay line's feedback path, the sound will resemble tape echo rather than straight digital delay.
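As a rough illustration of tip 4, here is a short Python sketch of a delay with a simple one-pole low-pass filter inside the feedback path, so each successive repeat comes back a little duller. The delay time, feedback amount, and damping value are arbitrary starting points, not settings from any particular unit.

import numpy as np

def tape_style_echo(x, fs, delay_ms=350.0, feedback=0.4, damping=0.6, mix=0.4):
    d = int(fs * delay_ms / 1000)
    buf = np.zeros(d)              # circular delay buffer
    lp_state = 0.0                 # one-pole low-pass memory in the feedback loop
    out = np.empty_like(x)
    idx = 0
    for n, sample in enumerate(x):
        delayed = buf[idx]
        # darken the repeat before it goes back around the loop
        lp_state = damping * lp_state + (1.0 - damping) * delayed
        buf[idx] = sample + feedback * lp_state
        out[n] = sample + mix * delayed
        idx = (idx + 1) % d
    return out

fs = 44100
guitar = np.random.randn(fs * 2) * 0.2     # stand-in for a guitar track
echoed = tape_style_echo(guitar, fs)

With damping at 0, every repeat keeps its full bandwidth and you are back to a plain digital delay; raising it rolls off more highs with each pass, which is what makes the tail sound more like tape echo.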
5. A SOLUTION FOR THE TREMOLO-IMPAIRED

If your pre-retro-craze multieffects doesn't have a tremolo, check for a stereo autopanner function. This shuttles the signal between the left and right channels at a variable rate (and sometimes with a choice of waveforms, such as square to switch the sound back and forth, or triangle for a smoother sweeping effect). To use the autopanner for tremolo, simply monitor one channel and turn down the other one. The signal in the remaining channel will fade in and out cyclically, just like a tremolo.

6. CABINET SIMULATORS ARE COOL, BUT…

Many multieffects have speaker simulators, which supposedly recreate the frequency response of a typical guitar speaker in a cabinet. If you're feeding the multieffects output directly into a mixer or PA instead of a guitar amp and this effect is not active, the timbre will often be objectionably buzzy; inserting the speaker emulator in the signal chain should give a more realistic sound. However, if you go through a guitar amp and the emulator is on, the sound will probably be much duller, and possibly have a thin low end as well—so bypass it. You might be surprised how many people have thought a processor sounded bad because they plugged an emulated cabinet output designed for direct feeds to mixers into a guitar amp.

7. USE A MIDI PEDAL FOR MORE EXPRESSION

A multieffects will generally let you assign at least one parameter per patch to a MIDI continuous controller number. For example, if you set echo feedback to receive continuous controller message 04, and set a MIDI pedal to transmit message 04, then moving the pedal will vary the amount of echo feedback. You can usually scale the response as well, so that moving the pedal from full off to full on creates a change that's less than the maximum amount. This allows greater precision because the pedal covers a narrower range. Scaling can sometimes invert the "sense" of the pedal, so that pressing down creates less of an effect rather than more.

8. MAKE SURE STEREO OUTPUTS DON'T CANCEL

Some cheapo effects, and a large number of "vintage" effects, create stereo with time delay effects by sending the processed signal to one channel, and an out-of-phase version of the processed signal to the other channel. While this can sound pretty dramatic with near-field monitoring, should the two outputs ever collapse to mono, the effect will cancel and leave only the dry sound. To test for this, plug the stereo outs into a two-channel mono amp or mixer (set the channel pans to center). Start with one channel at normal listening volume and the second channel down full, then gradually turn up the second channel. If the effect level decreases, the processed outputs are out of phase; if the effect level increases, all is well. (A quick numeric illustration of this follows tip 9.)

9. PARALLELING MULTIEFFECTS WITH GUITAR AMPS

One way to enrich a sound is to double a multieffects with an amp and mix the sounds together. Although you could simply split the guitar through a Y-cord and feed both, here's a way that can work better. To supplement the multieffects sound with an amp sound, send the multieffects "loop send" (if available) to the amp input; this preserves the way the multieffects input stage alters your guitar. If you'd rather supplement the basic amp sound with a multieffects, feed the amp's loop send to the multieffects signal input to preserve the amp's preamp characteristics.
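Here is the quick numeric illustration promised in tip 8. It fakes a "stereo" effect by inverting the wet signal on one side, then sums the channels to mono; the inverted wet portion vanishes, while an in-phase version survives. The signals are just noise stand-ins.

import numpy as np

rng = np.random.default_rng(0)
dry = rng.standard_normal(44100)
wet = rng.standard_normal(44100) * 0.5        # stand-in for the processed (effect) signal

left = dry + wet
right_inverted = dry - wet                    # out-of-phase wet on the right channel
right_in_phase = dry + wet

mono_bad = left + right_inverted              # wet cancels; only the dry sound remains
mono_good = left + right_in_phase             # wet reinforces

print("wet remaining after out-of-phase mono sum:", np.max(np.abs(mono_bad - 2 * dry)))   # ~0
print("wet remaining after in-phase mono sum:    ", np.max(np.abs(mono_good - 2 * dry)))  # clearly nonzero

In practice you run the same test by ear with the two-channel setup described above; the code just shows why the effect disappears.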
10. BE AWARE OF THE PROBLEMS WITH PRESETS

Many musicians evaluate a multieffects by stepping through the presets, but you need to be aware of two very important issues. First, whoever designed the presets wasn't you—it's very doubtful they were using the same guitar, pickups, string gauge, pick, touch, etc. If a preset works with your playing style, it's due to luck more than anything else. Second, presets are usually designed to sound impressive during demos, and will be loaded up with effects. Sometimes creating your own cool presets simply involves taking a factory preset, removing selected effects, and adjusting an emulated amp's drive control to match your playing style.

Well, that covers the 10 tips. Have fun strumming those wires—and remember that the magic word for all guitar multieffects is "equalization."

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
25. This song, "Maladie du Coeur," is inspired by zouk and groups like Kassav. What's unusual about this one is that I post song mixes with an unlisted link in my Sound, Studio, and Stage forum, and ask people to make comments. The comments are invariably intelligent and spot-on, so I incorporate them into my "final draft," and that's the version that goes public on my YouTube channel.

[Video: https://www.youtube.com/watch?v=fsp5hOxRX1M]

All my recent songs have taken advantage of what I call the "SSS Production Squad," and I truly believe the final versions have benefited greatly as a result...yet another HC coolness. BTW, you'll also find several covers of Mark's songs on my YouTube channel. I think he's a gifted songwriter and singer, and I like giving his songs a different spin...to me, one mark (heh heh) of a great song is that you can do it several different ways, and they're all valid.