Everything posted by Anderton

  1. If you're new to the world of audio and recording, it pays to know how frequency response affects what you do by Craig Anderton Sound is essentially a rhythmic variation in air pressure. This phenomenon resembles ocean waves, except that instead of having crests and troughs of water, we have crests and troughs of air pressure. But not all sounds are alike. Some are bassy, some are shrill; some are loud and some are soft. We can classify these sounds by their level (commonly called volume), and frequency. Frequency measures how rapidly the air pressure changes. If these pressure changes occur many thousands of times in a single second, then we're dealing with a high frequency sound. If the air pressure changes occur at a slower rate—say, only 40 or 50 times in a second—then we have a comparatively low frequency sound. Because of the wave-like motion of sound, each "wave" (the crest and trough) is called a cycle. We measure frequency by counting how many cycles occur in a single second; this gives us a figure in cycles per second. In 1960, the term "cycles per second" was replaced with the single word Hertz (abbreviated Hz), to commemorate Heinrich Rudolf Hertz (1857 - 1894), a scientist who contributed much to the subject we're discussing. For higher frequencies, the term kilohertz (abbreviated kHz) stands for 1000 Hz. Thus, a 1000 Hz tone has the same frequency as a 1 kHz tone. Level defines the sound's loudness or softness; so if we know a sound's frequency and level, we have at least a vague idea of the type of sound we're talking about (in practice, though, sounds are complex and comprise numerous frequencies at numerous levels). Now that we've defined our terms, let's move on to frequency response. FREQUENCY RESPONSE Frequency response is a characteristic associated with audio equipment. Since everyone has a set of ears, that's a pretty universal piece of audio equipment to examine first. 
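As a quick illustration of the cycles-per-second idea, here's a minimal Python sketch (the function names and example values are just for illustration):

```python
# Frequency in Hz is simply cycles per second: f = 1 / period.
def period_to_hz(period_seconds):
    """Convert the duration of one cycle to a frequency in Hz."""
    return 1.0 / period_seconds

def hz_to_khz(hz):
    """1 kHz = 1000 Hz."""
    return hz / 1000.0

# A cycle that repeats every millisecond is a 1000 Hz (i.e., 1 kHz) tone.
print(period_to_hz(0.001))   # 1000.0
print(hz_to_khz(1000.0))     # 1.0
```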
Our ears respond to frequencies over about a 10 octave total range, from approximately 20 Hz to 20 kHz; but unfortunately, these figures only hold true for the ears of a relatively healthy youngster. As we get older, our ears lose their ability to respond to high frequency sounds. So, at a very advanced age we could have a response that tops out at 5 or 6 kHz. Because ears respond differently to different frequencies, they're an example of a device with an uneven frequency response. Now let's take this concept a step further. If the ear were a perfect listening machine, and if a sound source (loudspeaker or whatever) produced tones from 20 Hz to 20 kHz at exactly the same level, then our ears would respond equally to these tones; the high frequency ones would sound just as loud as the low frequency ones. This would be an example of flat response—i.e., the response would be even throughout the audible frequency range. But as we've already seen, ears are imperfect, which means we have to deal with a deviation from flat response. The ear also exhibits a different frequency response at different sound levels. At fairly low listening levels, the ear responds less to very high, and very low, frequencies. On the other hand, at high listening levels the ear's response is much flatter, although it's still not ideal. So much for the problems inherent in our hearing...it would be great if these were the only problems we had to deal with, but unfortunately, there are also other trouble spots in the audio signal chain. A speaker never has a flat frequency response; no matter how much you spend, every speaker will deviate to some degree from an ideal response. For example, at very high frequencies a loudspeaker has to create very fast variations in air pressure—but the mass of the speaker's cone, friction problems, and other error sources make very accurate high frequency reproduction difficult. 
At the other end of the audio range, you have low notes that require the movement of large amounts of air. Even a 15" speaker can have trouble moving enough air to generate massive air pressure changes, thus reducing the low frequency response. A typical loudspeaker's frequency response rolls off towards both the extreme high and low ends, but that's not all: resonances (response anomalies) in the speaker and speaker enclosure itself can cause deviations in the midrange response. To complicate matters even further, the room in which you are listening to the speaker will also change the response. A room with many hard surfaces (concrete, glass, etc.) will bounce high frequencies around and make them appear more prominent, while a thickly carpeted room will absorb many of the high frequencies. And we're not done yet...headphones, microphones, and other transducers that convert mechanical energy to electrical energy also introduce their own deviations. Amplifiers don't have perfect frequency responses either, but compared to our ears (or loudspeakers), they're excellent. Many amplifiers can reproduce tones from 20 Hz to 20 kHz, or even 100 kHz, with ruler-flat response. Generally, the amp will not be the weak link in an audio system. WHY FLAT RESPONSE IS GOOD We're reaching the moral of the story: with so many variables between the sound source and the listener, we have to do something to keep the chaos to a minimum. Hence, whenever possible, we try for audio systems that have the flattest possible frequency response. Then, the only variables left are the listener's ears and acoustic environment. Professional recording studios count on accurate monitor speakers and acoustically treated rooms to provide as flat a frequency response as possible. If a mix plays through a listening system with flat frequency response, then the listener will hear what the recording engineer heard while mixing. 
But if the studio loudspeaker exaggerates the high frequencies, then any recordings made at that studio will probably sound deficient in high frequency response when played over a system with a truly flat response. For this whole process to work smoothly, both the recording and playback systems need to have a flat frequency response. But it's impossible for all systems to have a flat frequency response. As a result, when recording it's important to create a mix that sounds good on a variety of systems. Recording studios will often have small, imperfect "real-world" speakers right next to their standard, high quality studio monitors, thus making it easier to create a recording which sounds acceptable over both types of speaker. This may require a compromise—for example, the sound might be a shade too bright on the good speakers and not quite bright enough on the real-world speakers; but this is better than having just the right amount of brightness on the good speakers but an overly dull sound on the “real-world” speakers. Now let's relate what we've learned to the real world. For example, if you want to check out a speaker's frequency response, you'll see a graph with squiggly lines all over it and strange markings given in "Hz" and "kHz" (which we already know about), and decibels (which we'll cover next; see Fig. 1). Interpreting this type of information is important when trying to compare audio equipment. Fig. 1: This typical speaker response graph shows level on the vertical (Y) axis, and frequency on the horizontal (X) axis. Note the dropoff at the highest and lowest frequencies, and a bit of a midrange emphasis around 2 – 4 kHz. THE DECIBEL Let's examine another important technical term: the decibel (or dB). Actually, there are several different kinds of dB, and a complete treatment of the subject could take up a book. So, for now let's deal with the dB in general terms. 
Simply stated, the dB is a unit of ratio between two audio signals; probably the best way to become familiar with the dB is through some examples. Suppose we're listening to an amplifier/speaker combination, and have a sound level meter calibrated in dB that registers changes in the system's acoustic output. Furthermore, suppose the input to the amplifier is not a complex musical source (such as a recording), but instead is a very pure audio test tone that can vary in frequency from 20 Hz to 25 kHz. Remember, because the dB expresses a ratio, we're going to need some kind of standard signal to which we can compare other signals in order to derive this ratio. Under ideal circumstances, you would adjust the level of the tone for a comfortable listening level, and adjust the sound level meter so that it reads "0 dB" at this reference level. Notice that already there's a big advantage to working with the dB: the absolute sound level coming out of the speakers is not important, so we can listen at any volume level. What we're looking for are changes in volume level compared to the standard reference signal. The amount of change is a ratio, which is then expressed in dB. A signal that is stronger than the reference creates a ratio that is + so many dB, while a signal that is weaker than the reference creates a ratio that is - so many dB. 1 kHz is a common reference frequency because as mentioned earlier, the greatest response anomalies occur at the limits of the audio spectrum; 1 kHz lies in the nominal "middle" of the audio range. So, we have our reference frequency (1 kHz), and a reference level (0 dB). Now, let's vary the test tone frequency as we monitor the output of the amplifier/speaker combination with the sound level meter. Because no speaker is perfect, it's pretty safe to assume that the output will vary somewhat at different frequencies. Typically, in the lower regions (below around 125 Hz) the response starts dropping off and becomes relatively uneven. 
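To make the ratio idea concrete, here's a small Python sketch of the amplitude-ratio form of the dB (20 × log10). Sound level meters work with related pressure/power formulas, so treat this as an illustration of the math rather than a meter simulation:

```python
import math

def db(amplitude_ratio):
    """Express an amplitude ratio (signal / reference) in decibels."""
    return 20.0 * math.log10(amplitude_ratio)

# A signal at the reference level is 0 dB; twice the reference amplitude
# is about +6 dB; half the reference amplitude is about -6 dB.
print(round(db(1.0), 1))   # 0.0
print(round(db(2.0), 1))   # 6.0
print(round(db(0.5), 1))   # -6.0
```

Note that the absolute signal level never appears in the result, which matches the point above: the dB only expresses a change relative to the reference.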
A typical speaker's response might be summarized as varying no more than 6 dB from 60 Hz up to about 18 kHz. A spec sheet would thus indicate the response as "plus or minus 3 dB, 60 Hz - 18 kHz." This response would be typical of a medium size bookshelf speaker. Knowing the speaker's response is important if we want to obtain the most accurate sound from our monitoring system. For example, if we know where the speaker is not flat, we can flatten out the speaker's response to produce a more accurate monitoring system by adding an equalizer set to compensate for any frequency response aberrations. However, specs tend to present products in the best possible light. Two speakers could have identical printed specs (such as plus or minus 3 dB, 50 Hz to 18 kHz), but one could have a much smoother response with just a dropoff at the extreme high and low frequencies, while the other looks like a relief map of the Alps and has all kinds of midrange peaks and dips that affect the sound. The point of all this is that we often take devices such as loudspeakers for granted. However, suppose you're doing a dance mix and listening to it over headphones or speakers that “hype” the bass. The mix will sound somewhat bassier than it should due to the emphasized bass, so there might be a tendency to trim the bass back a bit. So far, so good—but if you then play the recording over a speaker with flat response, the sound will have less low end than what you were used to hearing, because you had trimmed the bass back not to compensate for a defect in the recording, but for a defect in the monitoring speaker. As a result, professional recording engineers often "learn" the speakers they are using. For example, if you know that your speakers are somewhat light in terms of bass response and you boost the bass to where it sounds right over your system, the sound will be bass-heavy on speakers which have a better low end response. 
So, you'll know to be conservative with the bass, knowing it will sound right on flatter systems, and not sound too boomy on systems that hype the bass somewhat. All these potential variations explain why a major goal of mixing and mastering is to make recordings that “translate” over any system, from cheap earbuds to an audiophile's dream system. It's not an easy task, but if you make sure your own listening environment has a flat and predictable frequency response, you're off to a good start. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  2. Prevent "tone suckage" with this simple test procedure by Craig Anderton Is your guitar sounding run down, tired, dull, and anemic? It may not have the flu; it may just be feeding the wrong kind of input. A guitar pickup puts out relatively weak signals, and the input it feeds can either coddle those signals or stomp on them. It’s all a question of the input’s impedance, so let’s look at a simple test for determining whether that amp or signal processor you’re feeding is a signal coddler or a signal stomper. This article will outline how to test input impedance so you can achieve the best sound possible. You might think that testing for input impedance is pretty esoteric, and that you need an expensive impedance tester, or at least have to find one of those matchbooks that says, “Learn Electronics at Home in Your Spare Time.” But in this case, testing for input impedance is pretty simple. You’ll need a standard-issue analog or digital volt-ohmmeter (VOM), as sold by electronics stores and online retailers. You may be able to find a good digital model that costs less than $40. This is one piece of test equipment no guitarist should be without anyway, as you can test anything from whether your stage outlets are really putting out 117V to whether your cable is shorted. You’ll also need a steady test tone generator, which can be anything from an FM tuner emitting a stream of white noise to a synthesizer set for a constant tone (or even a genuine test oscillator). What Is Input Impedance? If theory scares you, skip ahead to the next subhead. If you can, though, stay tuned since input impedance crops up a lot if you work with electronic devices. Input impedance is a pretty complex subject, but we can just hit the highlights for the purposes of this article. An amp or effect’s input impedance essentially drapes a resistance from the input to ground, thus shunting some of your signal to ground. 
The lower the resistance to ground, the greater the amount of signal that gets shunted. The guitar’s output impedance, which is equivalent to putting a resistance in series with your guitar and the amp input, works in conjunction with the input impedance to impede the signal. If you draw an equivalent circuit for these two resistances, it looks suspiciously like the schematic for a volume control (Fig. 1). Fig. 1: The rough equivalent of impedance, expressed as resistance. If the guitar’s output impedance is low and the amp input impedance is high, there’s very little loss. Conversely, a high guitar output impedance and low amp input impedance creates a lot of loss. The reason why a low input impedance "dulls" the sound is that a guitar pickup’s output impedance changes with frequency—at higher frequencies, the pickup exhibits a higher output impedance. Thus, low frequency signals may not be attenuated that much, but high frequencies could get clobbered. Buffer boards and on-board preamps can turn the guitar output into a low impedance output for all frequencies, but many devices are already designed to handle guitars, so adding anything else would be redundant. The trick is finding out which devices are guitar-friendly, and which aren’t; you have to be particularly careful with processors designed for the studio, as there may be enough gain to kick the meters into the red but not a high enough input impedance to preserve your tone. Hence, the following test. Impedance Testing This test takes advantage of the fact that input impedance and resistance are, at least for this application, roughly equivalent. So, if we can determine the effect’s input resistance to ground, we’re covered. (Just clipping an ohmmeter across a dummy plug inserted in the input jack isn’t good enough; the input will usually be capacitor-coupled, making it impossible to measure resistance without taking the device’s cover off.) Wire up the test jig in Fig. 
2, which consists of a 1 Meg linear taper pot and two 1/4" phone jacks. Plug in the signal generator and amplifier (or other device being tested), then perform the following steps. Fig. 2: The test jig for measuring impedance. Test points are marked in blue.

1. Set the VOM to the 10V AC range so it can measure audio signals. You may later need to switch to a more sensitive range (e.g., 2.5V or so) if the test oscillator signal isn’t strong enough for the meter to give a reliable reading.
2. Set R1 to zero ohms (no resistance).
3. Measure the signal generator level by clipping the VOM leads to test points 1 and 2. The polarity doesn’t matter since we’re measuring AC signals. Try for a signal generator level between 1 and 2 volts AC, but be careful not to overload the effect and cause clipping.
4. Rotate R1 until the meter reads exactly 50% of what it did in step 3.
5. Be very careful not to disturb R1’s setting as you unplug the signal generator and amplifier input from the test jig.
6. Set the VOM to measure ohms, then clip the leads to test points 1 and 3.
7. Measure R1’s resistance. This will essentially equal the input impedance of the device being tested.

Interpreting The Results If the input impedance is under 100k, I’d highly recommend adding a preamp or buffer board between your guitar and amp or effect to eliminate dulling and signal loss. The range of 100k to 200k is acceptable, although you may hear some dulling. An input impedance over 200k means the designer either knows what guitarists want or got lucky. Note, however, that more is not always better. Guitar input impedances above approximately 1 megohm are often more prone to picking up radio frequency interference and noise, without offering much of a sonic advantage. So, there you have it: amaze your friends, impress your main squeeze (well, on second thought maybe not), and strike fear into the forces of evil with your new-found knowledge of guitar input impedance. 
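The 50%-reading trick in step 4 works because R1 and the device's input impedance form a voltage divider. Here's a rough Python sketch of that math (the resistor values are arbitrary examples, and the input is modeled as a pure resistance, as the article assumes):

```python
def divider_output(v_in, series_r, input_r):
    """Voltage reaching the device input when the signal passes through a
    series resistance (the test pot R1) into the input impedance,
    modeled here as a simple resistor to ground."""
    return v_in * input_r / (series_r + input_r)

# With R1 at zero ohms (step 2), essentially all of the signal arrives.
print(divider_output(1.0, 0.0, 500_000.0))        # 1.0

# When the meter reads exactly half the original level, R1 must equal
# the input impedance -- which is why measuring R1 in step 7 works.
print(divider_output(1.0, 500_000.0, 500_000.0))  # 0.5
```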
A guitar that feeds the right input impedance comes alive, with a crispness and fidelity that’s a joy to hear. Happy picking—and testing.
  3. Create dual-band distortion for more flexible sounds by Craig Anderton Splitting a guitar signal into high and low bands, then processing each one individually, can give a more defined sound by reducing intermodulation distortion. This is also a useful technique for interesting effects, such as adding tempo-synched delay to one band and a delay synched to a different tempo to the other (for what it’s worth, I generally prefer shorter delays on the low frequencies, and longer delays on the high frequencies). Another cool technique is choosing a chunky distortion for lower frequencies, and a more intense, sustained distortion for higher frequencies. The key to doing this is Guitar Rig’s Crossover Mix module, whose purpose is to split an input into two separate bands. Start by clicking the Components tab, then choose Categories > Tools. Drag the CrossOver module into the rack (Fig. 1). Fig. 1: Add the CrossOver module to the rack. Click on the Amps tab in the component section, and drag over the Amp that will process the low end (Gratifier is a good choice). Drop it between the CrossOver Low and High modules (Fig. 2). Fig. 2: Drag the Gratifier into the low crossover section. Minimizing the Matched Cabinet as shown saves space. Set the CrossOver crossfader to 100:0 (all lows) and Frequency to taste (try 400-800Hz). Now tweak the Gratifier controls for the desired low band sound; the controls shown in Fig. 3 give a punchy, raw bass timbre. Fig. 3: These settings are a good point of departure for the Gratifier low end sound. Next, drag the Amp you’ll use for the high band between the CrossOver High and mix modules. Lead800 (Distorted preset, with Boost enabled) works for me. Set the Crossover Mix Crossfade control to 0:100 (all highs) and tweak the Lead 800 amp for the desired high band sound. Now it’s time for the finishing touches. 
Adjust the Crossover Mix’s crossfade slider for the desired balance of the high and low sounds; also experiment with the Frequency parameter, and you might want to spread the two bands a bit in the stereo field using the Pan controls (Fig. 4). Fig. 4: The final preset. Don’t forget to save any presets you like—you may want to use them again. Note that patching the Tube Compressor between the CrossOver High module and the high band amp can give a smoother, more sustained high end. Finally, remember to enable High Resolution mode (the button just to the left of the NI logo in the upper right corner). This doubles the CPU hit, but improves the sound quality and is worth doing if your computer can handle it. Rock on!
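The band-splitting idea behind this patch can be sketched outside Guitar Rig, too. Here's a rough Python illustration using a simple one-pole crossover and tanh waveshaping as stand-ins for the CrossOver module and the two amps; all parameter values are arbitrary, and a real crossover would use steeper, phase-matched filters:

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100.0):
    """Crude one-pole lowpass: a stand-in for the crossover's low band."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def dual_band_distortion(signal, crossover_hz=600.0, low_drive=2.0, high_drive=8.0):
    """Split into low/high bands, distort each with its own drive, then sum."""
    low = one_pole_lowpass(signal, crossover_hz)
    high = [x - l for x, l in zip(signal, low)]   # complementary high band
    shape = lambda x, drive: math.tanh(drive * x)  # soft-clip "amp"
    return [shape(l, low_drive) + shape(h, high_drive) for l, h in zip(low, high)]

# Run a short 110 Hz test tone through the chunky-lows / intense-highs split.
tone = [0.5 * math.sin(2 * math.pi * 110 * n / 44100.0) for n in range(512)]
out = dual_band_distortion(tone)
print(len(out))  # 512
```

Because each band is clipped separately, the low notes don't intermodulate with the highs inside a single distortion stage, which is the "more defined sound" the article describes.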
  4. Today's virtual mixers in computer-based recording programs have functionality hardware mixers can only dream about by Craig Anderton Today’s digital audio recording software is marvelous. For a few hundred dollars and a computer, you can have a system which would have cost tens, if not hundreds, of thousands of dollars not that long ago—and record as many tracks as your computer can handle. While multiple tracks add flexibility, they also require mixing—the process of combining all these tracks, and doing any necessary processing, for the best possible blend of sound quality and “transportability” (the ability to sound equally good over any playback system, from earbuds on a smartphone to a sophisticated audiophile setup). THE VIRTUAL MIXER Most recording software has a mixer section based on the hardware mixers that existed prior to computer-based recording, and that are still used for live performance. A mixer is the "audio traffic director" that combines signals (each with its own level control), routes them to appropriate destinations, and provides the mixed (usually stereo) output signal. Mixers may appear intimidating, but they consist of many identical modules; learn one, and you know how 90% of the mixer works. To illustrate basic mixing principles, consider a basic song with lead vocal, harmony vocal, piano, and guitar. The four tracks feed four mixer inputs. These go through four controls that set each signal's level, then the combined signals feed a common stereo master bus, which goes to your monitoring system. Think of the input signal path as a vertical, downward flow into the mixer, and the bus (output signal path) as a horizontal flow from left to right from each channel to a master channel (Fig. 1). This master channel feeds your monitoring system. Fig. 1: A mixer setup in Cakewalk SONAR X3. The four inputs are to the left. 
The left-most input has additional signal processors which can optionally “fly out” from the main channel for editing, then be folded back in to save space. The master bus is the second channel in from the right; the right-most channel is a bus dedicated to reverb, as explained later. At the junction of each input and the bus going to the master channel, you'll find a fader to set the level. Because the output is usually stereo, rotating the panpot for each channel places the signal anywhere in the stereo field (left, right, or center). With software mixers, you don’t have hardware limitations, so you can construct mixers with as many channels as you have tracks, assuming your computer has enough processing power to handle all those tracks. GETTING ON THE BUS In the days of hardware, only the simplest mixers restricted their buses to a single master output bus. Typically, there would be several buses (8-bus mixers were common), and you could set up different mixes on these different buses. The extra buses could be used to create separate headphone mixes for the musicians doing overdubs (for example, the bass player might want to hear more drums in the mix, while the singer might want to hear more vocals), surround-sound compatible recordings, or processing such as reverb (more on this later). With hardware, the number of buses was limited—you can fit only so many knobs on a mixer’s front panel. Virtual mixers let you have a virtually unlimited number of buses. You can even send buses to buses. Most virtual mixers send signal to buses through a drop-down menu selector or switch that chooses the destination bus, and a send level control that controls the signal level going to the bus. You’ll also find a pre-post switch. This chooses whether the signal going to the bus comes from before a channel’s fader (pre), or after (post). 
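A quick sketch of the pre/post distinction in Python (the gain values are arbitrary linear factors, not dB):

```python
def channel_outputs(input_level, fader_gain, send_level, pre_fader=False):
    """Return (main_out, bus_send) for one simplified mixer channel.

    A pre-fader send taps the signal before the channel fader, so it
    ignores fader moves; a post-fader send scales with the fader."""
    main_out = input_level * fader_gain
    tap = input_level if pre_fader else main_out
    return main_out, tap * send_level

# Post-fader: pulling the fader down also reduces the reverb send.
print(channel_outputs(1.0, 0.5, 0.25))                  # (0.5, 0.125)

# Pre-fader: the send stays constant regardless of the fader position.
print(channel_outputs(1.0, 0.5, 0.25, pre_fader=True))  # (0.5, 0.25)
```

This is why pre-fader sends suit headphone cue mixes (the musician's mix shouldn't jump around while you work), while post-fader sends suit reverb (the effect level tracks the channel level).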
In the pre position, the signal going to the bus remains constant regardless of what’s happening with the main mix. In the more common post position, the signal going to the bus depends on the main channel fader as well as the send level control. In Fig. 1, there’s a separate bus for sending signal from each channel to a reverb. The vocals send the most signal and therefore have the most reverb, while the piano and guitar send less, so there’s less reverb on those signals. The reverb bus appears as another mixer input channel, and as with the other inputs, feeds the master channel. INPUT MODULES Each mixer channel has its own input module for processing or routing a signal before sending it to the output bus. Typical features include a gain trim, a clipping indicator that lights if the signal exceeds the mixer's available dynamic range, the send control(s) that route the audio to the various buses mentioned previously, a panpot to control the stereo image placement, a fader to control overall level, meters, etc. Most hardware mixers also included equalization (EQ) on each input module; think of EQ as a fancy, flexible tone control. A basic EQ might offer boost and cut for treble and bass. A more sophisticated version could have separate controls for frequency, boost/cut, and width so you could dial in a specific frequency and boost or cut the frequency response. For example, if the sound was muddy, you could trim the bass; but if it wasn't bright enough, you could increase the level of higher frequencies. However, with computer-based recording, we now have plug-ins that allow inserting a huge variety of processors, not just EQ, into an input channel. Some virtual mixers have a built-in set of processors with the option to add more, while others simply include a place to insert effects so you can customize the roster of processors as desired for every channel. In Fig. 1, each input has an “FX bin” to insert effects; reverb is inserted in the reverb bus’s FX bin. 
However, like many other virtual mixers, SONAR includes its own processors, which you can “fly out” from a channel in the console view, or view in a separate inspector. Fig. 2 shows the processing included in each channel strip for the mixer in Propellerhead Software’s Reason. Fig. 2: Part of the mixer in Reason. Each channel has a compressor (at the top), with EQ below it, then inserts and sends. You can also see the wider master bus. The graphic to the right lets you focus on specific parts of the mixer; you can also show or hide specific modules to reduce visual clutter. As to how you would use EQ: while mixing, various sounds sometimes occupy the same part of the audio spectrum and "mask" each other. EQ can separate instruments by shifting the emphasis from one part of the frequency spectrum to another. For example, if background ambience interferes with narration, reducing the response of the ambience at speech frequencies creates more audio "space" for the narration. Other effects you might want to insert on an “as-needed” basis include compression to even out variations in dynamic range for a "smoother" sound, a noise gate to remove hiss, reverb to create ambience effects such as concert hall simulations, and the like. Other common input module features include a solo button, which mutes non-soloed input modules. This is handy for making subtle changes in one track that would normally be overwhelmed by the other tracks. A mute switch does the opposite: it automatically cuts out (or mutes) its associated channel. Mute switches can also be used to “de-clutter” arrangements by taking certain tracks out of the mix at selected times. MASTER FADER CONTROLS The master bus will have a set of controls that’s similar to individual channels, including the option to add processors that affect the entire mix. AUTOMATION Automation lets the computer “remember” your mixing moves—like moving a fader up or down, changing the panning, varying EQ, and the like. 
This lets you create mixes, listen to them over a period of time and/or on different systems, and edit the automation for a perfect mix. For most mixers, the process is as simple as enabling automation record while you’re mixing, then enabling automation read during playback. Editing the automation usually involves just clicking on the fader or other parameter and writing the new moves. CUSTOMIZING THE LOOK Hardware mixers tended to be pretty big. It can be challenging to fit all the controls found on analog mixers, along with extras from the digital world, on a computer screen. So, most programs let you show and hide particular elements, or change the sizes of various elements—like making channel strips narrower (at the expense of showing fewer controls) so you can see more channels simultaneously onscreen. BEYOND THE MIXER There's more to mixing than the mixer, like your monitor speakers and the acoustics of the room where you’re doing your mix. Ideally the mix would sound acceptable over all systems, rather than sound great on one set of speakers but terrible on everything else. In smaller studios, near-field monitoring is popular; this technique uses small speakers at close range (a few feet from the ears) to minimize influences from room acoustics. As with mixers, dozens of companies make monitors; for near-field monitoring, speakers from KRK, ADAM, Yamaha, and several others are popular in mid-line studios. Mixing is a science and an art. Being able to produce a mix that sounds clear, distinct, and well-balanced over any system is a real challenge, but fortunately, today’s virtual mixers make the process easier than ever.
  5. Exploit the external input found on some synthesizers by Craig Anderton Several software synthesizers offer external inputs for processing audio signals. If you have a bunch of plug-ins, you might wonder how a synth could provide anything particularly useful you don’t already have. However, not only are there synth modules like lowpass filters (as well as possibly some other processors, like ring modulation or overdrive), but you can “play” them with a keyboard and envelopes for effects like gating, as well as do LFO modulation. We’ll show how to use Arturia’s Moog Modular V to create an envelope-followed filter which you can then gate with a MIDI keyboard, but this only hints at the many possibilities. When your usual collection of plug-ins doesn’t seem to stretch quite far enough, a synth might be just the answer—and modular soft synths are particularly well-suited to signal processing. Begin by selecting the audio input to be processed. In stand-alone mode, you typically select the audio interface input and output from some kind of preferences (Fig. 1); with a DAW host, the usual procedure is to insert audio you want to process into the instrument track hosting the soft synth. Fig. 1: The VS-700’s Aux input, which offers a high-impedance option for guitar, is providing the audio input to the Moog Modular. The Envelope Follower module doesn’t default to being in the synth, so right-click on the label for an Envelope module (e.g., Envelope 1) and select Env. Follow. 1 from the pop-up menu (Fig. 2). Fig. 2: Replacing modules with the Moog Modular involves right-clicking on an existing module’s label. With lower-level external ins (e.g., guitar), you’ll need gain. Patch the external audio signal to two mixer module inputs (these should not be linked—i.e., the red button between them should not be lit; see Fig. 3). Fig. 3: The mixer modules can provide gain for lower-level audio inputs. 
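If you're curious what an envelope follower does internally, the classic approach is to rectify the input and smooth the result; this is a generic illustration in Python, not Arturia's actual implementation, and the coefficient values are arbitrary:

```python
import math

def envelope_follower(signal, smoothing=0.99):
    """Track a signal's amplitude envelope by rectifying each sample and
    smoothing the result with a one-pole filter (the smoothing factor
    plays the role of the module's time/lag control)."""
    env, y = [], 0.0
    for x in signal:
        rectified = abs(x)                        # full-wave rectification
        y = smoothing * y + (1.0 - smoothing) * rectified
        env.append(y)
    return env

# A sine burst followed by silence: the envelope rises during the burst,
# then decays smoothly after it ends -- the control signal that would
# sweep the filter cutoff in the patch described here.
burst = [math.sin(2 * math.pi * 220 * n / 44100.0) for n in range(2000)] + [0.0] * 2000
env = envelope_follower(burst)
print(env[1999] > env[3999])  # True
```

Less smoothing gives tighter tracking but more "ripple" at the signal's own frequency, which is exactly the trade-off the Time Control adjustment below is managing.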
Patch one mixer output to a Low Pass Filter audio input; this will filter the input. Patch the other mixer output to the Envelope Follower’s audio input. Next, patch the Envelope Follower Cont Out to the Filter Mod In, then click on the mod in jack and drag up to turn up the modulation amount (Fig. 4). Fig. 4: Patching between the filter and envelope follower. The blue cables come from the mixer outputs. Patch the Filter Output to a main Envelope input. If you want to just turn on the envelope and let it run while you play through the envelope follower, set the envelope controls to Attack = 0, and Decay, Slope, and Release to maximum (full clockwise). To gate with a MIDI keyboard, set Attack and Release to 0, and Decay and Slope to maximum (Fig. 5). Of course, you can modify these further so that playing the keyboard adds an attack time, a decay after you release the key, etc. Fig. 5: The complete envelope follower + gate patch. Play a key on your MIDI controller, or the Moog Modular’s virtual controller; this opens the main Envelope, which in turn opens the associated VCA and lets you hear the audio input. Tweak the filter, mixer, and envelope follower controls as desired. Note that for the tightest envelope tracking, set the Envelope Follower’s Short/Long control to Short and use the minimum Time Control setting that doesn’t give “ripple” (usually about 10-15 ms). The Time Control is the little “trimpot” below the Envelope Follower’s Threshold control. Also, it’s generally best to turn off keyboard tracking to the filter (choose No instead of K1, K2, K3, or K4 in the filter’s lower right corner). Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). 
He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  6. Not quite sure how digital audio works? Here's your refresher course by Craig Anderton Digital technology—which brought us home computers, $5 calculators, cars you can't repair yourself, Netflix, and other modern miracles—has fundamentally re-shaped the way we record and listen to music. Yet there's still controversy over whether digital audio represents an improvement over analog audio. Is there some inherent aspect of digital audio that justifies this skepticism? Let's take a look at the basics of digital audio: why it’s different from analog sound, its benefits, and its potential drawbacks. Although digital audio continues to improve, the more you know about it, the more you can optimize your gear to take full advantage of what digital audio can offer. BASICS OF SOUND What we call “sound” is actually variations in air pressure (at least that’s the accepted explanation) that interact with our hearing mechanism. The information received by our ears is passed along to the brain, which processes this information. However, while acoustic instruments automatically generate changes in air pressure which we hear as sound, electronic instruments create their sound in the form of voltage variations. Hearing these voltage variations requires converting them into moving air. A transducer is a device that converts one form of energy into another; for example, a loudspeaker can convert voltage variations into changes in air pressure, while a microphone can change air pressure changes into voltage variations. Other transducers include guitar pickups (which convert mechanical energy to electrical energy), and tape recorder heads (which convert magnetic energy into electrical energy). If you look at audio on a piece of test equipment, it looks like a squiggly line, which graphically represents sound (Fig. 1). Fig. 1: An audio waveform. This could stand for air pressure changes, voltage changes, string motion, or whatever. 
A straight horizontal line represents a condition of no change (i.e., zero air pressure change, zero voltage, etc.), and the squiggly line is referenced to this base line. For example, if the line is showing a speaker cone’s motion, excursions above the base line might indicate that the speaker cone is moving outward, while excursions below the base line might indicate that the speaker cone is moving inward. These excursions could just as easily represent a fluctuating voltage (such as what comes out of a synthesizer) that alternates between positive and negative, or even the air pressure changes that occur if you strike a piano key. The squiggly line is called a “waveform.” Let’s assume that striking a single piano note produces the waveform shown in Fig. 1. If we take that waveform and press an exact analog of the waveform into a vinyl record, that record will contain the sound of a piano note. Now, suppose we play that record. As the stylus traces this waveform, the phono cartridge will send out voltage variations which are analogous to the original air pressure changes caused by the piano note. This low-level signal then passes through an amplifier, which augments the voltage enough to drive a speaker cone back and forth. The final result is that the speaker cone follows the waveform motion, thus producing the same air pressure variations originally pressed into the vinyl record. Notice that each stage transfers a signal in its own medium (vinyl, wire, air, etc.) that is analogous to the input signal; hence the term analog recording. Unfortunately, analog recording is not without its faults. First of all, if the record has pops, clicks, or other problems, these will be added on to the original sound and show up as undesirable “artifacts” in the output. Second, the cartridge will add its own coloration; if it can’t follow rapid changes due to mechanical inertia, distortion will result. 
Phono cartridge preamps also require massive equalization (changes in frequency response) to accommodate cartridge limitations. Amplifiers add noise and hum, and speakers are subject to all kinds of distortion and other problems. So, while the signal appearing at the speaker output may be very similar to what was originally recorded, it will not duplicate the original sound due to these types of errors. When you duplicate a master tape or press it into vinyl, other problems will occur due to the flawed nature of the transfer process. In fact, every time you dub an analog sound, or pass it through a transducer, the sound quality deteriorates. THE CONSISTENCY OF DIGITAL Digital audio removes some of the variables from the recording and playback process by converting audio into a string of numbers, and then passing these numbers through the audio chain (in a bit, we’ll see exactly why this improves the sound). Fig. 2 illustrates the conversion process from an analog signal into a number. Fig. 2: The digital conversion process. Fig. 2a represents a typical waveform which we want to record. A computer takes a “snapshot” of the signal every few microseconds (a microsecond is 1/1,000,000th of a second) and notes the analog signal's level, then translates this “snapshot” into a number representing the signal's level. Taking additional samples creates the “digitized” signal shown in Fig. 2b. Note that the original signal has been converted into a series of samples, each of which has its own unique value. Let’s relate what we’ve discussed so far to a typical audio system. A traditional microphone picks up the audio signal, and sends it to an Analog-to-Digital Converter, or ADC for short. The computer takes this numerical information and optionally processes it—for example, delays it in the case of a digital delay or, in the case of a sampling keyboard, stores the information in memory. So far so good, but listening to a bunch of numbers does not exactly make for a wonderful audio experience. 
After all, this is an analog world, and our ears hear analog sound, so we need to convert this string of numbers back into an analog signal that can do something useful such as drive a loudspeaker. This is where the Digital-to-Analog Converter (DAC) comes into the picture; it takes each of the numerical samples and re-converts it to a voltage level, as shown in Fig. 2c. A lowpass filter works in conjunction with the DAC to filter the stair-step signal, thus “smoothing” the series of discrete voltages into a continuous waveform (Fig. 2d). We may then take this newly converted analog signal and do all of our familiar analog tricks like putting it through an amplifier/speaker combination. But what’s the point of going through all these elaborate transformations? And doesn’t it all affect the sound? Let’s examine each question individually. The main advantage of this approach is that a digitally-encoded signal is not subject to the deterioration an analog signal experiences. Consider the compact disc, the first example of mass-market digital audio; it stores digital information on a disc which is then read by a laser and converted back into analog. By taking this approach, if a scratch appears on the disc it doesn’t really matter—the laser recognizes only numbers, and will tend to ignore extraneous information. Even more importantly, using digital audio preserves quality as this audio goes through the signal chain. For example, a conventional analog multi-track tape gets mixed down to an analog two-track tape, which introduces some sound degradation due to limits of the two-track machine. It then gets mastered (another chance for error), converted into a metal stamper (where even more errors can occur), and finally gets pressed into a record (and we all know what kinds of problems that can cause, from pops to warpage). At each audio transfer stage, signal quality goes down. 
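To make the "snapshot" process of Fig. 2 concrete, here's a minimal sketch of an ADC and DAC in code. The CD-style sample rate and bit depth are just example values, and a sine wave stands in for the incoming analog signal:

```python
import math

SAMPLE_RATE = 44100   # "snapshots" per second
BIT_DEPTH = 16        # resolution of each snapshot

def analog_signal(t):
    """Stand-in for the incoming analog waveform: a 440 Hz sine."""
    return math.sin(2.0 * math.pi * 440.0 * t)

def adc(duration_s):
    """Sample the analog signal and store each level as an integer."""
    max_code = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16 bits
    n = int(duration_s * SAMPLE_RATE)
    return [round(analog_signal(i / SAMPLE_RATE) * max_code) for i in range(n)]

def dac(codes):
    """Convert each stored number back to a voltage-like value."""
    max_code = 2 ** (BIT_DEPTH - 1) - 1
    return [c / max_code for c in codes]
```

The ADC output is literally just a list of integers; the DAC turns them back into a stair-step series of levels, which the reconstruction lowpass filter then smooths into a continuous waveform.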
With digital recording, suppose you record a piece of music into a computer-based recording system that stores sounds as numbers. When it’s time to mix down, the numbers—not the actual signal—get mixed down to the final stereo or surround master (of course, the numbers are monitored in analog so you can tell what’s going on). Now, we can transfer that digitally-mixed signal directly to the compact disc; this is an exact duplicate (not just an analogy) of the mix, so there's no deterioration in the transfer process. Essentially, the Analog-to-Digital Converter at the beginning of the signal chain “freeze dries” the sound, which is not reconstituted until it hits the Digital-to-Analog Converter in the listener’s audio system. This is why digital audio can sound so clean; it hasn’t been subjected to the petty humiliations endured by an analog signal as it works its way from studio to home stereo speaker. LIMITATIONS OF DIGITAL AUDIO So is digital audio perfect? Unfortunately, digital audio introduces its own problems which are very different from those associated with analog sound. Let’s consider these one at a time. Insufficient sampling rate. Consider Fig. 3, which shows two different waveforms being sampled at the same sampling rate. Fig. 3: Sampling rate applied to two different waveforms. The original waveforms are the light lines, samples are taken at the times indicated by the vertical dashed lines, and the heavy black line indicates what the waveform looks like after sampling. Fig. 3a is a reasonably good approximation of the waveform, but Fig. 3b just happens to have each sample land on a peak of the waveform, so there is no amplitude difference between samples, and the resulting waveform looks nothing at all like the original. Thus, what comes out of the DAC can, in extreme cases, be transformed into an entirely different waveform from what went into the ADC. 
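The failure in Fig. 3b, where every sample lands on the same point of the wave, is easy to reproduce numerically. In this small sketch (the frequencies and sample rate are arbitrary illustration values), a tone whose frequency equals the sample rate produces identical samples—a flat line—while a tone well below the Nyquist frequency produces samples that clearly vary:

```python
import math

def sample_tone(freq_hz, sample_rate, n=8, phase=math.pi / 2):
    """Return n samples of a sine at the given frequency and rate."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate + phase)
            for i in range(n)]

low  = sample_tone(1000, 8000)  # well below Nyquist (4 kHz): samples vary
high = sample_tone(8000, 8000)  # frequency equals the sample rate: every
                                # sample lands on the same point of the wave
```

Every value in `high` comes out 1.0, so the sampled "waveform" is a flat line that looks nothing like the original sine—exactly the Fig. 3b scenario.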
The solution to the above problems is to make sure that enough samples are taken to adequately represent the signal being sampled. According to the Nyquist theorem, the sampling frequency should be at least twice as high as the highest frequency being sampled. There is some controversy as to whether this really is enough, but that’s a controversy we won’t get into here. Filter coloration. As mentioned earlier, we need a filter after the DAC to convert the stair-step samples into something smooth and continuous. The only problem is that filters can add their own coloration, although over the years digital filtering has become much more sophisticated and transparent. Quantization. Another sampling problem relates to resolution. Suppose a digital audio system can resolve levels to 10 mV (1/100th of a volt). In that case, a level of 10 mV would be assigned one number, a level of 20 mV another number, a level of 30 mV yet another number, and so on. Now suppose the computer is trying to sample a 15 mV signal—does it consider this a 10 mV or 20 mV signal? In either case, the sample does not correspond exactly to the original input level, thus producing a quantization error. Interestingly, note that digital audio has a harder time resolving lower levels (where each quantized level represents a large portion of the overall signal level) than higher levels (where each quantized level represents a small portion of the overall signal level). Thus, unlike analog gear where distortion increases at high amplitudes, digital systems tend to exhibit the greatest amount of distortion at lower levels. Dynamic range errors. A computer cannot resolve an infinite number of quantized levels; therefore, the number of levels it can resolve represents the system's dynamic range. Computers express numbers in terms of binary digits (also called “bits”), and the greater the number of bits, the greater the number of voltage levels it can quantize. 
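The 10 mV quantization example above can be put directly into code; the step size is illustrative, not taken from any real converter:

```python
def quantize(level_mv, step_mv=10.0):
    """Round an input level to the nearest available quantized level."""
    return round(level_mv / step_mv) * step_mv

# A 15 mV input must be stored as either 10 mV or 20 mV; either way the
# stored value is off by 5 mV -- a quantization error.
low_error = abs(quantize(15.0) - 15.0) / 15.0      # large relative error
high_error = abs(quantize(995.0) - 995.0) / 995.0  # small relative error
```

The same 5 mV error that's about a third of a 15 mV signal is only about half a percent of a 995 mV signal—which is why quantization distortion is worst at low levels.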
For example, a four-bit system can quantize 16 different levels, an eight-bit system 256 different levels, and a 16-bit system can resolve 65,536 different levels. Clearly, a 16-bit system offers far greater dynamic range and less quantization error than four or eight-bit systems, and 20 or 24 bits is even better. Incidentally, there’s a simple formula to determine the approximate dynamic range in dB based on the bits used in a digital audio system: dynamic range = 6 x the number of bits. Thus, a 16-bit system offers 96 dB of dynamic range—excellent by any standards. However, this is a theoretical spec. In reality, factors like noise, circuit board layouts, and component limitations reduce the maximum potential dynamic range. THE DIGITAL AUDIO DIFFERENCE When the CD was introduced, most consumers voted with their dollars and seemed to feel that despite any limitations, the CD's audio quality sure beat ticks, pops, and noise. Unfortunately, the first generation of CD players didn't always realize the full potential of the medium; the less expensive ones sometimes used 12-bit converters, which didn't do the sound quality any favors. Also, engineers re-mastering audio for the CD had to learn a new skill set, as what worked with tape and vinyl didn't always translate to digital media. While digital audio may not be perfect, it’s pretty close and besides, the whole field is still relatively young compared to the decades over which analog audio matured. An alternate digital technology, Direct Stream Digital, was introduced several years ago to a less-than-enthusiastic response from consumers, yet many believe it sounds better than standard digital audio based on PCM technology; furthermore, as of this writing the industry is considering transitioning to 24-bit systems with a 96kHz sampling rate. 
While controversial (many feel any advantage is theoretical, not practical), this does indicate that efforts are being made to further digital audio's evolution. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  7. Lock your bass to the kick (or other drums) for a super-tight groove By Craig Anderton One of life’s better moments is when the bass/drum combination plays like a single person with four hands—and really tight hands at that. When the rhythm section is totally locked, everything else seems just that much tighter. When you’re in the studio, several tools can help lock the rhythm section together. While they’re no replacement for “the real thing” (i.e., humans that play well together!), they can provide some pretty cool effects. Following is one of my favorites: a technique that locks bass to the kick drum so that they hit at exactly the same time. BASS GATE This technique relies on a noise gate, a signal processor originally designed to remove hiss from a signal. It typically has two inputs and one output. One of the inputs is for the audio signal you want to clean up, while the output is where the processed signal exits. In between, there’s a “gate” that either is open and lets the signal through, or is closed and blocks the signal. The second input is a “control” input that senses an incoming signal level and converts it into a control signal. If the signal level is above a user-settable threshold, then the gate opens and lets the signal through. If the signal level is below the threshold, then the gate closes, and there’s no output. Noise gates were very popular in the days of analog tape, which had a consistent level of background hiss. You’d set the gate threshold just above the hiss, so that (at least in theory) any “real” signal, which presumably was higher in level than the hiss, would open the gate. If the signal consisted of just noise, then the gate would close, blocking the hiss. Most noise gates can do more than simply turn the signal on and off. Other controls include: Decay: Determines how long it takes the gate to fade out after the control signal goes below the threshold. 
Attack: Sets how long it takes for the gate to fade in after the control signal goes above the threshold (good for attack delay effects). Gating amount: This determines whether the "gate closed" condition blocks the signal completely, or only to a certain extent (e.g., 10 or 20dB below normal). Typically, the control input senses the signal present at the main audio input. However, some hardware noise gates bring this input to its own jack, called a “key” input. This allows some signal other than the main audio input (like a kick drum) to turn the gate on and off. In today’s computer-based recording systems, noise gates typically have a “sidechain” input which acts like a key input. A send from a different audio track can feed the sidechain input as a destination, and thus control that gate independently of the signal going through it. CONNECTIONS Fig. 1 shows the basic setup. The kick track has a send bus, with one of the available destinations being the bass track’s gate sidechain input. Whenever the kick hits, the bass passes through the gate; if there’s no kick signal, the bass track’s gate closes and the bass signal becomes inaudible or is reduced in level. Fig. 1: The kick track’s Bus 1 feeds the PC4K Expander/Gate module’s sidechain input, which is shown as part of the Sonar ProChannel that's “flown out” from the Gated Bass track (track 3). Track 2 carries the unprocessed bass sound. However, note there are two copies of the bass track, although you don’t necessarily need this. You may want to vary the blend between the gated and “continuous” tracks, or process the gated track—for example, send the bass through some distortion, then gate it and mix this track in behind the main bass track. Every time the kick hits, it lets through the distorted bass burst (which can be kind of cool). Another example involves adding a significant treble or upper midrange boost to the gated track. 
Whenever the kick and bass hit simultaneously, the bass will sound a little brighter, thus better differentiating the two sounds. Also note that the kick track’s post-fader send button is turned off, so the send signal is pre-fader. This means the send level is constant regardless of the channel’s fader setting. Having the bass gated on/off can be very dramatic, but don’t forget about using gating to bring in variations on the core sound. Also remember this technique isn’t exclusive to the studio—you can gate live as well. Sure, gating is a “trick”—but it can add some really rhythmic, usable effects. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  8. Tune your EQ's frequency response so it sits better in a mix by Craig Anderton Equalization is crucial to creating a balanced bass sound that plays back faithfully over a variety of listening systems. But unfortunately, there are no “canned,” universally applicable EQ settings. Different basses and amps have different response anomalies that cause a build-up (or lack) of sonic energy at certain frequencies, and each instrument has its own sonic “fingerprint.” Room acoustics and miking also contribute to creating an unbalanced sound with respect to frequency response. Equalizing bass generally requires addressing two broad problems: frequency ranges where the sonic energy is weak and needs boosting, and ranges where the sound is too strong and needs cutting. Recording through an amp will add more of these anomalies than recording direct, but even when recording direct, you may want to boost or cut certain frequency ranges for aesthetic (rather than “problem-solving”) reasons. If you’ve recorded bass for years, after a while you can recognize where any problems lie, and instinctively know which frequencies need massaging to create the desired sound. But what if you don’t have years of experience? Fortunately, there are ways to analyze a sound’s character so you can identify the sweet spots and equalize them accordingly. WHAT ABOUT COMPRESSION? Because frequency response anomalies alter level at certain frequencies, and compression reduces the differences between amplitude peaks and valleys, compression may seem like a good way to even out the overall response. However, compression can color the sound in possibly undesirable ways. For example, only the peaks in a specific frequency range might be loud enough to trigger compression. This would yield a “squeezed” sound at those frequencies, while other frequency ranges sound more natural. 
It’s preferable to get the best possible sound with EQ first, then add compression for further “smoothing.” One exception is if you’re using compression not to affect frequency response, but to smooth out variations in dynamics. In that case, it usually makes sense to compress first, then add EQ to change the tone. FINDING THE SWEET SPOTS To find the bass’s “sweet spots” for EQ, you’ll need a parametric equalizer with three controls: frequency, boost/cut, and bandwidth. A quasi-parametric EQ, which typically has a fixed bandwidth, might work—although in accordance with Murphy’s Law (“Anything that can go wrong, will”), the bandwidth will invariably be too narrow or too broad for the task at hand. If possible, loop a “busy” portion of the bass track (e.g., notes that cover a wide range of pitches instead of just a single, sustained note). Looping is usually easy with hard disk recording systems; solo just the bass track (or mute all the other tracks). Start by finding where the bass is most “aggressive.” Turn down the monitors, as we’ll temporarily be using significant EQ boosts to help find peaks. Then follow these steps:

1. Turn the parametric’s boost/cut control to lots of boost (e.g., 10 to 12dB).

2. Set the bandwidth to about an octave.

3. Slowly sweep the parametric frequency from high to low. Observe any meter that’s monitoring the channel, and listen carefully for any major sonic boosts.

4. Note the frequency range that drives the meter highest, or sounds the most distorted. There may be several such ranges; look for the most prominent one.

5. Try cutting the signal slightly in that range. This may create a more balanced sound. On the other hand, this frequency may be essential to the instrument’s timbre. Either way, you’ve at least identified the frequency or frequencies where the bass’s response peaks.

6. Adjust the bandwidth control for the best sound. If the frequency range is sharp, narrow the EQ’s bandwidth. If the range is broad, widen the bandwidth. 
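The three controls used in the steps above map directly onto a standard peaking (parametric) filter design. The sketch below uses the widely circulated Audio EQ Cookbook biquad formulas—not the algorithm of any particular hardware or plug-in EQ:

```python
import math

def peaking_eq(fs, f0, gain_db, bw_octaves):
    """Biquad coefficients for one parametric band: center frequency f0,
    boost/cut in dB, and bandwidth in octaves (Audio EQ Cookbook)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) * math.sinh(
        math.log(2.0) / 2.0 * bw_octaves * w0 / math.sin(w0))
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    # Normalize so a[0] == 1
    return [v / a[0] for v in b], [v / a[0] for v in a]

def process(x, b, a):
    """Run samples through the biquad (direct form I)."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(yn)
        x1, x2, y1, y2 = xn, x1, yn, y1
    return y
```

A +12dB boost at 100 Hz with a one-octave bandwidth raises a 100 Hz tone by roughly 4x in amplitude while leaving frequencies a few octaves away essentially untouched—which is exactly the localized bump you're listening for while sweeping.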
Go back and forth between steps 5 and 6 until the signal is balanced to whatever extent sounds “right.” As a reality check, occasionally use the bypass switch to compare the equalized and non-equalized sounds (Fig. 1). Fig. 1: The top spectrum (from Sonar X3’s ProChannel EQ) shows the bass before EQ. This bass didn’t sit well in a mix because it was too “muddy” in the lows from a bass bump around 100Hz, had an annoying midrange peak in the 500Hz range, and lacked highs that were needed to emphasize higher overtones and pick noises. The lower spectrum shows how the EQ was adjusted to compensate for these issues. Now let’s find the frequencies that are most important in determining an instrument’s intelligibility and “signature.” Follow the same general procedure as above, but in step 1, set the boost/cut control to cut instead of boost. Now as you sweep the frequency control, note what happens to the signal when you hit certain frequency ranges. Taking out frequencies around 60-100 Hz will affect the “bottom.” Frequencies around 700 Hz - 1 kHz determine much of the bass’s intelligibility; a lot of the bass “snap” hits at 2-3 kHz, and “air” kicks in at around 5 kHz and above. Reducing these frequencies will reduce important components of the sound. This data, coupled with what you learned earlier while boosting, is invaluable when doing a mix. For example, if cutting at 1.2 kHz reduced intelligibility, you know to try boosting at that frequency if the bass doesn’t “speak” well in the mix. On the other hand, if you found there was a major resonance at 130 Hz that caused the bass to sound “muddy,” cut the response a bit at that frequency. ADDITIONAL TIPS Generally, if cutting or boosting will accomplish the same result, I prefer to cut. For example, suppose that the high and low ends seem deficient. Rather than boost them, try cutting the midrange and raising the overall level somewhat. 
It’s a judgement call, but to my ears, sometimes this results in a more natural sound. I’m not a big fan of EQ presets, because choosing EQ settings depends so much on musical context. However, I still think it’s worth taking the time to store some of your favorite EQ curves. You probably won’t use the same curves each time, but what they will do is provide a point of departure that may shorten your “tweaking time” compared to starting from scratch. Finally, remember that response anomalies can also be part of an instrument’s character, so don’t get too extreme. Be especially careful about adding large amounts of boost or cut—even 1 dB can make a significant difference, and you want to avoid a situation where solving one problem introduces another. For example, you turn up the treble a bit, which then makes the bass less prominent . . . so you turn up the lows, and the combination of increased bass and treble makes the midrange comparatively weak, so you increase that, then the treble seems low and you start all over again . . . you get the idea. As with so many other audio processes, think scalpel rather than machete when doing sonic surgery. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  9. It's not as simple as just placing a mic up against a speaker by Craig Anderton Miking guitar cabinets may seem like a simple process, because all you really need to do is to pick up moving air with a mic. But there are many variables: the mic, its placement, the room environment, the cabinet itself, and the amp settings. So, let’s look at some of the most important considerations when miking amp cabinets. MIC SELECTION Many guitarists record with the amp cranked to get “that” sound, so under these circumstances it’s important to choose a mic that can handle high sound pressure levels (SPL). Dynamic mics are ideal for these situations, and the inexpensive Shure SM57 is the classic guitar cabinet mic—many engineers choose it even when cost is no object (Fig. 1). Although dynamic mics sometimes seem deficient in terms of brightness, this doesn’t matter much with amp cabinets, which typically start losing response around 5kHz or so. A couple of dB less at 17kHz isn’t going to make a lot of difference. That said, there are also more upscale dynamic mics, like the Electro-Voice RE20 and Sennheiser MD421, which give excellent results. Fig. 1: Shure’s SM57 is the go-to cab mic in many pro and project studios. Condenser mics are often too sensitive for close miking of loud amps, but they can give a more “open” response. They also make good “auxiliary” mics—placing one further back from the amp adds definition to the dynamic primary mic, and picks up room ambience that can add character to the basic amp sound. For condenser mics, AKG’s C414B-ULS is a great, but pricey, choice; their C214 gives similar performance but at a much lower cost. Neumann’s U87 is beyond most budgets, but the more affordable Audio-Technica AT 4051 has a similar character and it’s also great for vocals. Then there’s the ribbon mic. Although ribbon mics used to be fragile, newer models use more modern construction techniques and are much more rugged. 
Ribbon mics have an inherently “warm” personality, and a polar pattern that picks up sounds from the front and back—but not the sides. This characteristic is very useful with multi-cab guitar setups; by choosing which sounds to accept or reject based on mic placement, ribbon mics let you do some pretty cool tricks. Royer’s R-121 and R-101 are popular for miking cabs, and Beyer’s M160 is a classic ribbon mic that’s been used quite a bit with them. Regardless of what mic you use, check to see whether the mic has a switchable attenuator (called a “pad”) to reduce the mic’s sensitivity. For example, a -10dB pad will make the mic 10dB less sensitive. With loud amps, engage this to avoid distortion. MIC PLACEMENT First, remember that while each speaker in a cab should sound the same, in practice that’s not always the case. Try miking each speaker in exactly the same place, and listen for any significant differences. Start off with the mic an inch or two back from the cone, perpendicular to the speaker, and about half to two-thirds of the way toward the speaker’s edge. To capture more of the cabinet’s influence on the sound (as well as some room sound), try moving the mic a few inches further back from the speaker. Moving the mic closer to the speaker’s center tends to give a brighter sound, while angling the mic toward the speaker or moving it further away provides a tighter, warmer sound. Also, the amp interacts with the room: Placing the amp in a corner or against a wall increases bass. Raising it off the floor also changes the sound. The room’s ambience makes a difference as well. If the room is small and has hard surfaces, the odds are there will be quite a bit of ambient sound making its way into the mic, even if it’s close to the speaker. This isn’t necessarily a bad thing; I’m a fan of ambience, because I find it often adds a more lively feel to the overall sound. DIRECT VS. 
MIKED Some amps offer direct feeds (sometimes with cabinet simulation); combining this with the miked sound can give a “big” sound. However, the miked sound will be delayed compared to the direct sound—about 1ms per foot away from the speaker. This can result in comb filtering, which you can think of as a kind of sonic kryptonite because it weakens the sound. To counteract this, nudge the miked sound earlier in your recording program until the miked and direct sounds line up, and are in-phase (Fig. 2). Fig. 2: In the top pair of waveforms, the top waveform is the direct sound and the next one down is the miked signal. Note how it’s delayed compared to the direct sound. In the bottom pair, the miked signal (bottom waveform) has been “nudged” forward so it lines up with the direct sound. THE MIC PLACEMENT “FLIGHT SIMULATOR” IK Multimedia’s AmpliTube 3 (Fig. 3) lets you move four “virtual mics” around in relation to the virtual amp. The results parallel what you’d hear in the “real world,” and you can learn a lot about how mic placement affects the overall sound by moving these virtual mics. While this doesn’t substitute for going into the studio, moving mics around various amps, and monitoring the results, it’s a great introduction. Nor is AmpliTube alone; Softube’s Metal Room offers two cabs and mics (Fig. 4), Overloud’s TH2 has two moveable mics for their cabinets (Fig. 5), and MOTU’s Live Room G plug-in for Digital Performer 8 (Fig. 6) also allows various mic positions for three different mics. Fig. 3: IK Multimedia’s AmpliTube offers four mics you can place in various positions. Fig. 4: Softube’s Metal Room has two cabs, each with two mics you can position as desired. Fig. 5: Overloud’s TH2 has two mics for covering their cabs. Fig. 6: MOTU’s Digital Performer 8 has two “live room” plug-ins, one for guitar and one for bass, that provide various miking options. Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
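The article’s rule of thumb for direct-vs.-miked alignment (the miked signal lags the direct feed by roughly 1 ms per foot) translates directly into a sample offset you can nudge by. Here’s a minimal Python sketch; the function names are illustrative, and it assumes the usual speed-of-sound figure of about 1,125 feet per second at room temperature.

```python
# Sketch: how far to nudge a miked track so it lines up with a direct feed.
# Assumes the ~1 ms/foot rule of thumb (speed of sound ~1,125 ft/sec at
# room temperature); function names are illustrative, not from any DAW.

SPEED_OF_SOUND_FT_PER_SEC = 1125.0

def mic_delay_ms(distance_ft):
    """Delay of the miked signal relative to the direct feed, in milliseconds."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_SEC * 1000.0

def nudge_samples(distance_ft, sample_rate=44100):
    """How many samples earlier to slide the miked track in the DAW."""
    return round(mic_delay_ms(distance_ft) / 1000.0 * sample_rate)

# A mic 3 feet from the speaker arrives a little under 3 ms late,
# which is about 118 samples at 44.1 kHz:
delay_ms = mic_delay_ms(3)
offset = nudge_samples(3)
```

Visually lining up the waveforms as in Fig. 2 is still the final arbiter, but this gets you in the ballpark before you start zooming in.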
  10. Love your DAW more by giving your tracks a group hug By Craig Anderton Grouping is the process of controlling some parameter of multiple tracks with a single control. The classic example dates back to when engineers started putting multiple mics on drum kits. To change the level of the entire kit, you could either a) change all the faders individually, which messed with the oh-so-delicate balance you had created, or b) group all the tracks so they were controlled by one fader. The standard way to do this was to assign all tracks to a bus, dump the bus into the main stereo output, and use the aux bus master level control to set the level of all drum tracks simultaneously. In the era of the DAW, grouping has become more sophisticated. Sometimes you can group entire tracks, so that changing any parameter on a grouped track changes the same parameter in other grouped tracks. And with almost all hosts, you can simply assign any parameter from any number of tracks to a group; changing the parameter of one track will change that same parameter on all tracks, either ratiometrically or linearly. As an example of what those two terms mean, suppose you have two grouped faders with fader one at 100% (full up), and fader two at 25% of the way up. With a linear relationship, if you bring fader one down by 25%, then fader one will be at 75% and fader two will be at 0% (i.e., all the way down), because each fader moved a linear amount—in this case, 25%. With a ratiometric change, fader one will be at 75% but fader two will be down 25% from its original setting (a ratio), or 18.75% of the way up. You’ll usually want the ratiometric option, so check whether your host defaults to that setting. Here are a few tips to get you started on your own personal group hug. PROPER GAIN-STAGING THROUGH GROUPING So you start a project, and keep adding tracks. The overall level creeps up with each new track, so you pull down the master volume to compensate. 
Eventually, all your tracks are hitting 0, but your master is at –20 or so. This was a real problem with older digital systems that had low resolution, but even in today’s world, many engineers maintain you’ll get a better sound if you keep the master at 0 and lower the individual track levels. To maintain the mix among tracks that you slaved over for the past few hours, group the trim controls for all tracks and bring them down (ratiometrically, of course) until your master can sit at 0 without clipping. Then, ungroup so you have independent control over each channel again. BREAK FREE FROM A GROUP One of the big advantages of sending your tracks into an aux bus and using it as a group level control was that you could still tweak the levels of individual tracks if the balance wasn’t quite right. However, most DAWs let you temporarily free a fader from a group, perhaps by holding down Ctrl or Alt while moving the fader. Once you release the key, the fader joins the group again. CUSTOM GROUPING, PART ONE One of the most powerful features of digital grouping is being able to set a start and end point for a grouped parameter’s travel (Fig. 1). Fig. 1: In Cakewalk Sonar, the volume controls (blue lines) in tracks 19, 20, and 21 are grouped together, as indicated by the small vertical red stripe to the left of the control; each group can have its own color. The Group Manager is open, where you can choose the color associated with a group, as well as specify whether the group faders travel in an absolute, relative, or custom mode. Choosing Custom mode lets you edit start and end values. An excellent example is “complementary panning.” Suppose when one track is panned right you want a grouped track to pan left, and vice-versa. With custom grouping, you can group the two pans together and set different start and end points. For one pan control, the start would be full left and the end, full right. 
For the other pan control, the start would be full right and the end, full left. Then, as you move one pan control, the other moves in the opposite direction. CUSTOM GROUPING, PART TWO Let’s say you have multiple tracks of drums, and several of them go through aux sends to a reverb bus. But you also want to add an occasional reverb “splash” on the snare in specific places. Only problem is, when you push up the snare’s aux send to get a bigger reverb sound, it’s too big. With a custom grouping, you can set the other drum track aux sends so that when the snare’s aux send increases, the other aux sends decrease . . . problem solved. GROUPING MUTES As you go through the process of building a song, you might appreciate the convenience of grouping the mutes on all vocal tracks, all percussion tracks, etc. so you can mute selectively and focus your attention on other tracks. CUSTOM LOOOONG FADEOUTS The usual fade protocol is to grab the master output level and pull it down. Another method is to group all your tracks together, and at the fade, fade out on one track: automation curves will be written for all grouped tracks. Now you can edit each curve so that some tracks fade out at slightly different rates, or some instruments become more or less prominent during the course of the fadeout. Fun stuff, eh? Go ahead, give your DAW a group hug . . . it can not only save you time, but also lead to more creative projects. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
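The linear-vs.-ratiometric fader math described in the grouping article above is easy to sketch in a few lines of Python. Fader positions run from 0.0 (all the way down) to 1.0 (full up); the function names are illustrative, not taken from any particular DAW.

```python
# Sketch of linear vs. ratiometric grouped-fader behavior.
# Positions: 0.0 = all the way down, 1.0 = full up.

def move_linear(faders, delta):
    """Every grouped fader moves by the same absolute amount, clamped to range."""
    return [max(0.0, min(1.0, f + delta)) for f in faders]

def move_ratiometric(faders, ratio):
    """Every grouped fader scales by the same ratio."""
    return [f * ratio for f in faders]

group = [1.00, 0.25]                    # fader one full up, fader two at 25%

linear = move_linear(group, -0.25)      # [0.75, 0.0]  -- fader two bottoms out
ratio = move_ratiometric(group, 0.75)   # [0.75, 0.1875] -- balance preserved
```

The ratiometric result matches the article’s example: fader two lands at 18.75% of the way up, preserving the mix balance, which is why ratiometric is usually what you want.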
  11. Amp sims aren't only about distortion... By Craig Anderton I’ve seen several comments online that amp sims are okay for distorted sounds, but not clean ones. However, it’s very easy to get good, clean guitar sounds, sometimes even with that “tube sparkle”...you just have to know these six secrets. 1. Record at an 88.2 or 96kHz sample rate. The lack of “cleanliness” you hear might not be due to excessive levels that cause clipping, but to aliasing or foldover distortion. Recording at a higher sample rate can minimize the odds of this happening (note that several guitar amp sims offer an “oversampling” option that accomplishes the same basic result, even if the project’s base sampling rate is 44.1 or 48kHz). 2. Choose the right amp model. This may seem obvious, but not all clean models behave as expected. For example, many “clean” emulations have a little bit of crunch, just like the original. Some sim manufacturers create clean amps that aren’t designed to emulate classic amps (Fig. 1); try these first. Fig. 1: AmpliTube 3’s Custom Solid State Clean model doesn’t have to emulate anything, so it’s designed to be as clean as possible. 3. Turn down the drive, turn up the master. It’s possible to get cleaner sounds with some amp models by dialing back dramatically on the input drive control, and boosting the output level to compensate (Fig. 2). Fig. 2: POD Farm 2’s Blackface Lux model can give clean sounds that ooze character. Here’s how: Turn down the amp Drive and input gain, turn the amp Volume all the way up, and set the output gain high enough to give a suitable output level. 4. Compress or limit on the way into the amp. Building on the previous tip, if you’re pulling down the level, then the guitar might sound wimpoid. Insert some compression or limiting between the guitar and amp model to keep peaks under control; this lets you send a higher average level to the amp without distortion. 5. Watch your headroom. 
Guitars have a huge dynamic range, so don’t let the peaks go much above -6 to -10dB if you want to stay clean. Yes, we’re used to making those little red overload LEDs wink at us, but that’s not a good strategy with digital audio—especially these days, when 24-bit resolution gives you plenty of dynamic range. 6. Beware of inter-sample clipping. With most DAWs, you can go well into the red on individual channels because their audio engines have virtually unlimited headroom (thanks to 32-bit floating-point math or better, in case your inner geek wondered). However, when those signals hit the output converters to become audio, headroom goes back to the real world of 16 or 24 bits, and any overloads may turn into distortion. So if the meters don’t show clipping you’re okay, right? Not so fast. Most meters measure the actual values of the digital waveform’s samples, prior to reconstruction into analog. But that reconstruction process might create signal peaks that are higher than the samples themselves, and which don’t register on your meters (Fig. 3). Fortunately, you can download SSL’s free X-ISM metering plug-in, which shows inter-sample clipping, from the Solid State Logic web site. Fig. 3: Waves’ G|T|R is set to a clean amp. The DAW’s master output meter (left) shows that the signal is just below clipping, but SSL’s X-ISM meter, which measures inter-sample distortion, shows that clipping has actually occurred. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
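The inter-sample clipping idea above can be demonstrated numerically: a sample-value meter can read well under full scale while the reconstructed waveform actually peaks higher. This Python sketch (assuming NumPy is available) oversamples a block by 4x with FFT interpolation to reveal the “hidden” peak. It’s a simplified illustration of the principle, not SSL’s actual X-ISM algorithm.

```python
import numpy as np

# Sketch: sample-peak meters can miss inter-sample peaks. Oversampling
# (here, 4x FFT-based interpolation) approximates the reconstructed
# waveform and reveals its true peak. Illustrative only.

N, L = 64, 4                       # block length, oversampling factor
n = np.arange(N)
# A sine at fs/4 sampled at a 45-degree phase offset: every sample
# lands at +/-0.707, so a sample meter reads ~3 dB below the real peak.
x = np.sin(2 * np.pi * 16 * n / N + np.pi / 4)

sample_peak = np.max(np.abs(x))    # ~0.707: what the meter shows

X = np.fft.rfft(x)
Xp = np.zeros(N * L // 2 + 1, dtype=complex)
Xp[:len(X)] = X                    # zero-pad the spectrum (no energy at Nyquist here)
y = np.fft.irfft(Xp * L, n=N * L)  # 4x-oversampled reconstruction

true_peak = np.max(np.abs(y))      # ~1.0: the peak the converter actually produces
```

Scale that example up to a signal whose samples sit just below 0 dBFS and the reconstructed peak lands above full scale, which is exactly the case the X-ISM meter catches in Fig. 3.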
  12. Send effects are good for so much more than reverb by Craig Anderton There are four main ways to add effects with digital audio workstation recording software. The most common method is to add software plug-ins as series insert effects—the track’s audio goes through the effect, then into the DAW’s mixer. Some DAWs also let you use spare audio interface I/O to route a track through external hardware effects. There are also master effects, which affect all buses and all tracks for an entire mix. However, one of the most useful options is adding effects to a send (or auxiliary) bus, as this lets multiple tracks feed a single effect, and simplifies techniques like parallel processing. HOW SEND EFFECTS WORK Audio tracks use send controls to “pick off” some of the track’s audio and send it to a bus. An effect inserted in that bus processes any audio it receives. The classic send effect application is reverb, where different tracks send different amounts of audio to the reverb effect—for example, if you want lots of reverb on voice and guitar but not bass, you’d turn up the voice and guitar track send controls that feed the reverb bus, while leaving the bass’s send control down. The send bus output re-enters the DAW’s mixer via a “return” channel or track (Fig. 1). Figure 1: This Ableton Live project has two returns for send effects—Delay and Reverb. The Guitar, Vocal, and Drums tracks all send some signal to the Reverb, but only the Guitar track sends audio to the Delay. The Delay bus is selected to show its two send effects: a Line 6 POD Farm stereo delay, preceded by EQ that reduces low and high frequencies in order to accent the delay’s midrange. Sends can typically be pre- or post-fader. With post-fader selected, reducing the track’s main fader simultaneously reduces the send level. With pre-fader, only the send level control determines the send level, independently of the track fader’s setting. 
Furthermore, Mute and Solo buttons typically affect a send only if it’s post-fader. KEEP IT WET When using send effects that have a wet/dry balance control (like reverb or delay), remember that the effect is in parallel with the track feeding it. As the original track is feeding the DAW’s mixer with a dry signal, you’ll usually set the send effect for wet sound only, then use the return track’s level control to balance the amount of wet signal with the original track’s dry signal. Another trick involves changing phase. If you reverse the phase (polarity) feeding the send bus, this will tend to cancel out any residual dry signal in the send bus effect. For example, if you’re using a phaser as a send effect, there’s still some dry signal in there even if the phaser’s wet/dry control is set to all wet. Canceling out some of the remaining dry signal produces a kind of “super-phaser” effect. SENDS AND SIDECHAINS Sidechaining with software plug-ins also makes good use of sends, because a track send usually provides the track’s signal to a sidechainable device. Typically, the sidechain input will show up as an available send destination (Fig. 2). Fig. 2: In Cakewalk Sonar, a send from the drums feeds a control signal to an Expander/Gate processor inserted in the bass track. There’s also a Compressor on the bass track; this is being gated by the Damage drums in Kontakt. Note that for sidechaining, you’ll almost always want the sidechain set for pre-fader, so that the amount of signal feeding the sidechain is constant. This also lets you solo the sound of the track with the sidechained effect without muting the track that provides the sidechain signal. Also remember that when sidechaining via a send bus, you can insert additional effects to process the control signal. For example, with a dance mix, you might want to filter out everything but the snare so you can “pimp” a compressor on another track whenever the snare hits. 
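The send-fed sidechain gate described above (drums keying a gate on the bass, as in the Fig. 2 Sonar example) boils down to a simple idea: the key signal’s level decides whether the gated track passes. Here’s a bare-bones Python sketch; real gates add attack/release smoothing and hysteresis, and the names here are illustrative.

```python
# Sketch of a send-fed sidechain gate: the key track's level (arriving
# via a pre-fader send) opens and closes a gate on another track.
# A real gate smooths the transitions; this is the core logic only.

def sidechain_gate(signal, key, threshold=0.5):
    """Pass each sample of `signal` only while `key` exceeds `threshold`."""
    return [s if abs(k) > threshold else 0.0 for s, k in zip(signal, key)]

bass = [0.4, 0.4, 0.4, 0.4]       # steady bass note
drums = [0.9, 0.1, 0.8, 0.0]      # kick hits on samples 0 and 2

gated = sidechain_gate(bass, drums)   # bass sounds only under the kick hits
```

This also shows why the article recommends a pre-fader send for the key: if the drum fader scaled the key signal, pulling the drums down in the mix would change where the gate opens.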
SENDS AND PARALLEL EFFECTS Sends are an easy way to create parallel effects, so any parallel effects applications work well. Here are a few examples. Use a send effect to parallel a wah or envelope-controlled filter with the track it’s processing. This prevents “thinning out” the main track. With bass, keep the full, round bass sound in the main track, and send some audio to processors like distortion, chorus, wah, etc. Parallel processing keeps the low end intact. Send effects can help provide stereo imaging. Use two send effects with short delays (like 13ms and 17ms), then pan one send right and the other left. This adds width and ambience. Then, these two sends could feed another send that feeds reverb, thus adding a room or hall sound. If you have a chain of hardware effects, most DAWs have “external effect” plug-ins that can feed a track or send’s audio to an interface output. This patches to the hardware effects, whose output returns to a spare audio interface input. Although this is usually implemented as an insert effect, it works just as well for sends. The only caution is that this introduces additional latency; you may not notice this with reverb, but the more robust your DAW’s path delay compensation, the better. The latency may be enough to cause problems while tracking, but won’t be an issue during mixdown. Send effects make it easy to maintain consistency with doubled parts. Rather than trying to set up insert effects identically for each part, simply create one send effect, and send audio from each doubled part to the send bus. Sends are great for multiband processing, because you can send particular frequency ranges from a track to different sends, and process those bands separately. To divide a track into different frequency bands, one option is to use a multiband compressor as a crossover (assuming you can solo individual compressor frequency bands, and set the compression ratio to 1:1 so there’s no compression). 
Insert the multiband compressor as an effect into each send, and use the same frequency ranges for each one. Then, solo individual bands to process only those frequencies for a given send. Amp sims can often add a delightfully trashy sound to drums, along with some ambience. However, a little goes a long way—put the amp sim in a send effect so you can dial in a subtle amount in parallel with the drums. Those are enough ideas to get you started, but be creative and you’ll surely come up with some more. And if you do, feel free to add a comment and tell the world about it! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
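The stereo-imaging trick from the article above (two send delays of around 13 ms and 17 ms, one panned left and one right, in parallel with the dry track) can be sketched as plain sample arithmetic. This Python sketch is illustrative; the delay times come from the article, while the send level and helper names are assumptions.

```python
# Sketch of the short-delay stereo widener: two send delays (13 ms and
# 17 ms), panned left and right, mixed in parallel with the dry signal.

def delay(signal, ms, sample_rate=44100):
    """Delay a mono signal by `ms` milliseconds (zero-padded at the start)."""
    d = round(ms / 1000.0 * sample_rate)
    return [0.0] * d + list(signal)

def widen(dry, send_level=0.5, sample_rate=44100):
    """Return (left, right): dry signal plus a differently delayed send per side."""
    left_fx = delay(dry, 13, sample_rate)
    right_fx = delay(dry, 17, sample_rate)
    n = len(right_fx)                       # longest path sets the length
    pad = lambda s: list(s) + [0.0] * (n - len(s))
    left = [d + send_level * fx for d, fx in zip(pad(dry), pad(left_fx))]
    right = [d + send_level * fx for d, fx in zip(pad(dry), pad(right_fx))]
    return left, right
```

Because each ear hears the same dry signal but a different echo, the result reads as width rather than as a distinct slapback; the article’s follow-on idea of feeding both sends into a shared reverb then adds the room around that widened image.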
  13. The virtual mics and room options included with amp sims can have a major effect on the sound By Craig Anderton A physical guitar amp is more than a box with a speaker—it’s a box with a speaker being picked up by a mic in a room, and both the mic and room make a major contribution to the overall sound. To better emulate the sound of a “real” guitar amp, many amp sims include ways to emulate mic position, mic type, and often, the amp’s position in the room or the ability to add “air.” The following images illustrate the options available in several popular amp sims; note that these are not in-depth reviews, but quick sketches of the types of features that are available. Because you can usually add or construct room sounds with other plug-ins, but simulating different mic types and positions is more complicated, most amp sims tend to put most of their effort into the miking options. Finally, note that amp cabinets based on impulse responses usually don’t let you play around with mics, because that’s typically part of the impulse response. Choosing Cab in Line 6 POD Farm’s "Amp" view brings up a way to place the selected cabinet in a virtual room and specify the amount of room sound, as well as choose from four mic types with different positions (57 on axis, 57 off axis, 421 dynamic, and 67 condenser). Waves’ G|T|R offers 12 mics with various positions in a single drop-down menu; with some amps, an additional delay control simulates space. For Scuffham’s S-Gear 2’s cabinets, there are two mic choices (ribbon and dynamic), each with four placements with respect to the speaker. While not option-laden, the sound quality is very satisfying. With Overloud’s TH2, there are four mics. 
Two of these offer 18 different mic types, with controls for horizontal and vertical positions with respect to the speaker cabinet, distance from the cone, and phase. The two additional mics don't have these various adjustments; instead, one is rotated 45 degrees and placed in front of the speaker, while the other aims toward the back of the cabinet. All four mics go to a dedicated mixer module (peeking out toward the top, below the navigation section). There are also five different kinds of ambience. Native Instruments’ Guitar Rig includes Control Room Pro, a module with eight independent sections. You can load any of 29 different cabinets as well as DI and no cabinet (23 cabs offer three different mic placements on the speaker cone), and five different mics. There are also controls for level, pan, phase, and room amount for each section; a master mixer duplicates the section pan and level controls, but also includes master bass, treble, and air controls. IK Multimedia’s AmpliTube 3 has two cab positions in a stereo routing, where you can load a huge number of different cabinets. Mic-wise, you can place two mics (chosen from a wide variety) on each cab, move their positions around with respect to the speaker cone, flip phase on either one, and crossfade between the two. For ambience, two additional room mics have variable width (physical separation), and the pair can be panned—handy if you’re running two cabs in stereo. (I didn’t give an exact number of cabs and mics because I’ve customized the roster via IK’s “Custom Shop.”) Peavey’s ReValver offers 5 different mics but with 20 different total options (e.g., some represent different polar patterns, others include low frequency cuts). But in terms of creating ambience, the main aspect of interest is that you can specify the distance from the speaker, distance from the cone, and the angle between two virtual mics when running in stereo. 
Softube’s Vintage Amp Room has a mic for each of its three amps that you can move closer to or further from the amp, as well as move off-axis in the closest position. Note that only one amp is available at a time. Their Metal Room has two amps, but the miking is more sophisticated—two mics on each, where you can vary distance, angle at the closest position, crossfade, and stereo width. CHOOSING THE MIC TYPE So which mic and position is optimum? Let your ears be the judge, but here are some guidelines to get you started. Let’s look at mics first. Dynamic mic: This is a common miking choice for amps, with a relatively balanced response and solid lows. Dynamic mics work well for powerful sounds where you want to avoid “brittleness.” Condenser mic: These are generally brighter (like the difference between a single-coil and humbucker pickup). Use this when you want definition, or more clarity from clean sounds. Ribbon mic: The ribbon mic family typically gives the warmest (or darkest, depending on your viewpoint) sound. There’s often a lower midrange emphasis that gives a “creamy” tone. With programs that let you choose multiple mics, it’s common to use condenser mics for room sounds, as the highs tend to drop off naturally within a room. MIC POSITIONING More amp sims are including the option to have cab mics and room mics. The two main cab options are on- or off-axis. On-axis: The mic points at the virtual speakers, which gives the fullest sound and smoothest midrange. Off-axis: The mic points at the speakers from an angle; this introduces peaks and valleys in the midrange, while thinning out the low and high ends a bit. As to ambience or “air,” consider leaving it off for live performance, as the room you play in will impart its own sonic characteristics (whether you want it to or not!). For recording, avoid adding ambience until other tracks have been cut. 
More ambience can make the guitar sit “further back” in the mix at low levels, or take over the stereo field at higher levels. I tend to use minimal ambience with rhythm guitar so the sound is more defined, but leads often benefit from some “air.” Of course, stacking two or more amps, with their own ambience and miking, opens up a whole other bunch of tone tinkering options. And isn’t that what amp sims are all about? Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
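Several of the sims above (AmpliTube’s paired cab mics, Metal Room’s two-mic rigs) let you crossfade between two mics rather than just switch. A common way to implement such a blend is an equal-power (sine/cosine) crossfade, which keeps the combined level roughly steady across the sweep. This Python sketch is a generic illustration of that law, not the code inside any of these products.

```python
import math

# Sketch: blending two virtual mics with an equal-power crossfade.
# The constant-power (sine/cosine) law keeps gain_a^2 + gain_b^2 == 1,
# so the perceived level stays steady as you sweep the blend.

def crossfade_gains(position):
    """position 0.0 = all mic A, 1.0 = all mic B; returns (gain_a, gain_b)."""
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)

def blend(mic_a, mic_b, position):
    """Mix two equal-length mic signals at the given blend position."""
    ga, gb = crossfade_gains(position)
    return [ga * a + gb * b for a, b in zip(mic_a, mic_b)]
```

At the midpoint both mics sit about 3 dB down rather than 6 dB, which is why an equal-power blend avoids the level dip a straight linear crossfade would cause.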
  14. Quiet is good—and these tips will help get you there by Craig Anderton Noise comes in many guises: there’s hiss from preamps, clicks and pops from digital clock mismatches, hum from bad shielding, and (unfortunately) a whole lot more. As a result, there’s no one way to get rid of noise—different problems require different solutions. The secret to a quiet recording is to find, then minimize, each noise source. When you’re chasing down noise, wear headphones to hear more detail in the sound. Then, start from your final output and work backward. Turn up individual faders, enable/bypass EQ, vary the mic preamp gain, etc. to help isolate the main contributors of noise. Following are tips on reducing noise. REDUCING NOISE AT THE SOURCE Set your instrument’s level control up full. The more signal feeding an amp or preamp, the better. Transformers, computers, hard drives, etc. can leak interference into pickups. Angle your axe away from noise sources, but note that even the slightest movement could allow noise to re-enter. A wireless tablet controller or wireless keyboard may allow you to get far enough away from your computer to reduce noise considerably. While recording, turn off any digital gear that’s not in use if it contributes interference. Even LCD monitors can radiate noise; some players have learned how to record using keyboard equivalents so they can turn off the monitor during the actual recording process. The fewer fans, the better. Some graphics cards are fanless, like the ATI Radeon HD5450 (Fig. 1), which makes them well-suited to the studio. Fig. 1: Look ma, no fans: the ATI Radeon HD5450 is one of many graphics cards that doesn't have a fan, which helps reduce noise in the studio. Solid-state drives—while more expensive than conventional hard drives—make no noise and run cooler. If you have a hard drive running for backup, turn it off while recording. Back up to a USB stick, then copy over to the hard drive after the session is over. 
Fluorescent lights and dimmers can cause interference. Incandescent and LED lamps are quieter. Humbucking pickups are less sensitive to external interference than single-coil pickups, but they’re still somewhat susceptible to noise. HARDWARE SIGNAL PROCESSORS Any signal processor fed directly from your instrument should have a high input impedance (250K ohms or higher). This prevents loading down the pickups; loading reduces level as well as high frequencies. Note that this is not an issue with active pickups. If miking your amp is too noisy, try going direct into the console or recorder with a preamp that has “character,” or an amp emulator. This is usually quieter than miking a tube amp, although a high-gain hardware processor or even a software amp sim will amplify noise entering your axe as surely as any physical amp. If you’re feeding a processor designed for studio applications, unless it has an “instrument” input you may need to precede it with a compressor, preamp, or other effect designed specifically for guitar- or bass-level signals. This will “condition” the signal, allowing it to drive the studio-oriented effect while maintaining a good signal-to-noise ratio. Hit the processor inputs with as much signal level as possible, but be careful. Some effects monitor the level coming into the unit, so if the effect being used adds a lot of gain (e.g., a very resonant filter) and overloads the internal DSP, this won’t show up on the meters – yet the sound will be distorted. It’s best if your processor can monitor the DSP output as well as the input. With inexpensive “stomp boxes,” using batteries instead of AC adapters will sometimes give less hum. HARDWARE MIXERS These tips apply to playing live, as well as to using a hardware mixer in the studio. Turn down all unused channels, subgroup level controls, and effects returns. Even if there’s no input signal, turning up a fader can contribute noise. 
Some systems let you choose between –10 dB and +4 dB as a nominal signal level. Using +4 dB sends more level through the system, which can improve the signal-to-noise ratio. However, be consistent: if you choose +4, then everything should run at +4. You also need peripheral equipment that can run at this level. Check each of your effects returns. Reverbs in particular are notorious for dumping noise into the master output bus. If an effects unit has internal noise gating, sometimes adjusting the threshold up a little from the factory setting will cut out residual hiss. Use proper gain staging. Initially set the master output and channel faders to unity gain (usually indicated as 0 dB). Use the channel strip gain trim controls to bring any input signals up to the proper level. If the trim control can’t go high enough, it’s better to bring up the associated channel fader than the master. To boost mic gain, consider using pro-grade transformers as an alternative to active mic preamps. The pros and cons of each are controversial, but the short form is that transformers can color the sound, while preamps add noise. However, coloration is in the ear of the beholder, and is not necessarily a bad thing. In fact, a quality transformer can serve as a signal processor that “warms up” the signal, particularly with bass. REDUCING NOISE IN DAWS A steep (e.g., 48dB/octave) lowpass filter can take down the highs, and the hiss along with it. Not all noise is in the high frequencies—consider hum with guitar. Again, a steep filter—this time in highpass mode—can help get rid of low-frequency and subsonic noise (Fig. 2). Fig. 2: Trimming the extreme low and high frequencies from a guitar part using Sonar X3's lowpass and highpass filters. Deleting the spaces between notes, then adding a short fadeout from the note into the silent part, is time-consuming but can really clean up a part. 
Some programs include an option to “strip silence,” which automates the process, but check to make sure it’s not causing any unintended consequences (e.g., an overly abrupt decay). There are many “restoration” processors, like iZotope’s RX (Fig. 3), the noise reduction tool in Sony Sound Forge, and several tools in Adobe Audition. Some of the most effective versions work by taking a “noiseprint” of a part of the track that consists only of noise (or hum), and removing only that component from the overall sound. Fig. 3: iZotope's RX suite of noise reduction tools (the current version is RX3) is effective at de-noising, as well as removing crackles, clicks, and other artifacts. Okay . . . now your system should be a lot quieter. Granted, if your music is great, no one’s going to care too much about a dB or so of noise. But why not go for as pro a sound as possible? All it takes is a little effort. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
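The "strip silence" cleanup described in the noise article above, including the short fade-out into each silent region that avoids an overly abrupt decay, can be sketched in a few lines. This Python version is a toy illustration of the concept, not any DAW’s actual implementation; threshold and fade length are arbitrary example values.

```python
# Sketch of "strip silence" with a short fade: zero out sub-threshold
# samples, then ramp down the last few samples before each stripped
# region so the note doesn't cut off abruptly.

def strip_silence(samples, threshold=0.05, fade_len=4):
    """Return a copy with quiet samples zeroed and fades into each gap."""
    out = [0.0 if abs(s) < threshold else s for s in samples]
    for i in range(1, len(out)):
        if out[i] == 0.0 and out[i - 1] != 0.0:
            # linear ramp over the fade_len samples leading into the gap
            for j in range(fade_len):
                k = i - fade_len + j
                if k >= 0:
                    out[k] *= (fade_len - 1 - j) / fade_len
    return out

# A steady note followed by low-level hiss: the hiss is stripped, and the
# note's last four samples ramp down into the gap instead of cutting off.
cleaned = strip_silence([0.5] * 8 + [0.01] * 4)
```

A real implementation would work on envelope level rather than individual sample values and use longer, smoother fades, but the structure is the same: detect the quiet regions, then soften the transitions into them.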
  15. It's a stereo world, so it's time for your guitar to join in by Craig Anderton Since the dawn of time, electric guitar outputs have been mono, with very few exceptions. This made sense when the main purpose of guitar players (aside from picking up members of the opposite sex) was to take an amp to a gig and plug into it. But with more guitar players opting for stereo in the studio, and sometimes even for live use, it’s natural to want to turn that mono output into something with a wider soundstage. So, here are six tips (one for each string, of course) about how to obtain stereo from mono guitars. But first, our most important tip: Don’t automatically assume a guitar part needs to be stereo; sometimes a focused, mono guitar part will contribute more to a mix than stereo. On occasion, I even end up converting the output from a stereo effect back into mono because doing so makes a major improvement. 1 EFFECTS THAT SYNTHESIZE STEREO Reverb, chorusing, stereo delay, and other effects can often synthesize a stereo field from a mono input. This is particularly effective with reverb, as the dry guitar maintains its mono focus while reverb billows around it in stereo. Some delays offer choices for handling stereo—like ping-pong delay, where each delay bounces between the left and right channels, LCR (left/center/right, with three separate taps for left, center, and right delay times), and the ability to set different delay sounds for the two channels. 2 EQUALIZATION I wrote an article for Harmony Central regarding “virtual miking” for acoustic guitar parts (particularly nylon string guitar), which uses EQ to split a mono guitar part into highs on the right, lows on the left, and the rest in between. As this needs only one mic, there are no phase cancellation issues, yet you still hear a stereo image. Another EQ-based option uses a stereo graphic EQ plug-in.
In one channel, set every other band to full cut and the remaining bands to full boost; in the other channel, set the same bands oppositely (Fig. 1). For a less drastic effect, don’t cut/boost as much (e.g., try -6dB and +6dB respectively). Fig. 1: A graphic equalizer plug-in can provide pseudo-stereo effects. 3 DOUBLE DOWN ON THE CABS With hardware amps, split the guitar into two separate cabinets and mic them separately to create two channels. Doing so “live” will usually create leakage issues unless you have two isolated spaces, but re-amping takes care of that problem because you can create the other channel during mixdown. Remember to align the two tracks so that they don’t go out of phase with each other. 4 CREATE A VIRTUAL ROOM Many amp sims include “virtual rooms” (Fig. 2) with a choice of virtual mics and mic placements. These can produce a sophisticated stereo field, and are great for experimentation. Fig. 2: MOTU’s Digital Performer includes several guitar-oriented effects, as well as virtual rooms for both guitar and bass with multiple miking options and cabinets. 5 PARALLEL PROGRAM PATHS Amp sims often create stereo paths from a mono input. For example, IK’s AmpliTube has several stereo routing options, while Native Instruments’ Guitar Rig includes a “split mix” module that divides a mono path into stereo. You can then insert amps and effects as desired into each path, and at the splitter’s output, set the balance between them and pan them in the stereo field (Fig. 3). Fig. 3: Although you can use Guitar Rig to create mono effects, its signal path is inherently stereo. This makes it easy to convert mono sounds to stereo. 6 DELAY My favorite plug-in for this is the old standby Sonitus fx: Delay, because it has crossfeed as well as feedback parameters. Crossfeed can help create a more complex sound by sending some of one channel’s signal into the other (Fig. 4). Fig.
4: The ancient Sonitus fx: Delay is excellent for creating a stereo spread from a mono input. Here it’s used as part of a custom FX chain in Cakewalk Sonar to add width to guitar parts. However, there are plenty of other options. One is to duplicate a mono guitar track, then process the copy through about 15-40 ms of delay, wet sound only (no dry). Pan the two tracks oppositely for a wide stereo image. Make sure you check the mix in mono; if the guitar sounds thinner, re-adjust the delay setting until the sound regains its fullness.
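The duplicate-and-delay trick is simple enough to sketch in a few lines of pure Python (the 44.1 kHz rate and 20 ms delay are illustration values drawn from the 15-40 ms range suggested above):

```python
def widen_mono(samples, sample_rate=44100, delay_ms=20.0):
    """Duplicate a mono track; return (left, right) with the copy delayed.

    Panning the dry and delayed copies hard left/right creates a wide
    image; the delayed copy is 100% wet (no dry signal mixed in).
    """
    delay = int(sample_rate * delay_ms / 1000.0)
    left = list(samples) + [0.0] * delay      # dry track, panned left
    right = [0.0] * delay + list(samples)     # delayed copy, panned right
    return left, right

def mono_check(left, right):
    """Fold the stereo pair back to mono to audition for comb-filter thinness."""
    return [0.5 * (l + r) for l, r in zip(left, right)]
```

Listening to the output of mono_check is the programmatic version of the "check the mix in mono" advice: if the folded-down signal sounds hollow, change delay_ms and try again.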
  16. If you want analog sounds in a digital age, try these simple techniques by Craig Anderton Fancy signal processors aren’t always necessary to emulate some favorite guitar sounds and effects. In today’s digital world, a variety of programs and effects can be made to do your bidding. Want proof? Check out these five examples. VINTAGE WA-WA EFFECTS Many people try to obtain a vintage wa sound simply by sweeping a highly resonant parametric EQ set to bandpass response. That alone doesn’t nail the sound, because vintage analog wa pedals also have steep response rolloffs that reduce both high and low frequencies, but there is a way to use modern parametric EQs to re-create this effect (Fig. 1). Copy the guitar track so you have two “cloned” tracks set to the same level. In track 1, insert a parametric EQ set to bandpass (peak/dip) mode with about 6dB gain and Q (resonance) of around 8. Flip track 2 out of phase. Sweep the EQ over a range of about 200Hz – 2.2kHz. Fig. 1: The mixer channel on the left is going through a parametric stage of EQ. The channel on the right doesn’t go through an equalizer, but is flipped out of phase (the phase button is circled in red). Throwing one track out of phase causes the high and low frequencies to cancel, so all you hear is the filtered midrange sound—just like a real wa-wa. ADDING AMBIENT “AIR” Recording guitar direct can be simple and produces a clean sound, but sometimes it’s too clean because there aren’t any mics to pick up the room reflections that give a sense of realism. To model these reflections, feed your guitar track through a multi-tap delay plug-in, or send it to at least two stereo buses with stereo delays where you can set independent delay times for the two channels. Next, set the delay times for short, prime number delays (e.g., 3, 5, 7, 11, 13, 17, 19, and 23 milliseconds) to avoid resonant build-ups.
Four delays are often all you need; I generally use 7, 11, 13, and 17ms, or 13, 17, 19, and 23 ms, depending on the desired room size (Fig. 2). Fig. 2: Finding delay lines that can give short, precise delays isn’t that easy, but Native Instruments’ Guitar Rig—shown here using two splits, each with its own stereo delay—can do the job. More delays provide a more complex ambience, but sometimes a simple ambience effect actually works better. If you want more “air,” try adding some feedback within the delay, but make sure it’s not enough to hear individual echoes. Experiment with the delay levels and pans, then mix the delayed sound in at a low level. THE CLOSED-BACK TO OPEN-BACK CABINET TRANSFORMATION With open-back cabinets, low-frequency waveforms exiting through the cabinet back partially cancel the low-frequency waveforms coming out the front. Emulate this effect by reducing bass somewhat; a low-frequency shelving filter works well, as does a high-pass filter. OUT-OF-PHASE PICKUP EMULATION Don’t have an out-of-phase switch? You can come close with a studio-type EQ (Fig. 3). Select both pickups at the guitar itself, and feed its output into a mixer channel. For the EQ, dial in a notch filter around 1,200Hz with a fairly broad Q (0.6 or so) and severe cut—around -15 to -18dB. Use a high shelf to boost about 8dB starting at 2kHz, and a low shelf to cut by -18dB starting at 140Hz. Tweak as needed for your particular guitar and pickups. Boost the level to compensate—like a real out-of-phase switch, this thins out the sound. Fig. 3: The Sonitus EQ set to emulate the sound of out-of-phase guitar pickups. THE BIG BASS ROOM BUILD-UP When a cabinet’s close to a wall, bass waves bouncing off the wall reinforce the waves coming out the cab’s front. This can produce a “rumble” due to walls and objects resonating, which EQ can’t imitate.
For a killer rumble, split your guitar signal through an octave divider, then follow the octave divider with a lowpass EQ set to cut highs starting at 120Hz; this muddies the bass frequencies further. Then, mix the octave sound about -15dB below the main signal—just enough to give a “feel” of super-low bass.
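The prime-number multi-tap recipe from the “ambient air” tip is also easy to sketch in pure Python (the tap times and the roughly -20 dB mix level are illustration values taken from the ranges suggested above; a real implementation would use your DAW's delay plug-in):

```python
def add_air(samples, sample_rate=44100, taps_ms=(7, 11, 13, 17), mix=0.1):
    """Mix in short prime-number delay taps to fake early room reflections.

    mix=0.1 is roughly -20 dB; prime tap times avoid resonant build-ups
    because no tap is a multiple of another.
    """
    out = list(samples)
    for ms in taps_ms:
        delay = int(sample_rate * ms / 1000.0)
        for i in range(delay, len(out)):
            # Each tap adds a quiet, slightly later copy of the dry signal
            out[i] += mix * samples[i - delay]
    return out
```

Note that the taps read from the original dry samples, not from the processed output, so there's no feedback; add a feedback term if you want more "air," but keep it low enough that individual echoes don't become audible.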
  17. THE MIDI LFO: When you can’t do tempo sync in your DAW, do MIDI tempo sync By Craig Anderton Lots of effects parameters have tempo sync—like delay time, tremolo LFO rate, envelope times, etc. But what if you want to sync, say, filter cutoff or resonance variations to tempo? Thanks to MIDI, it’s easy. The key is to use a MIDI controller to control the parameter you want to sync, and fit the controller data to tempo. At least two programs, Cakewalk Sonar and Steinberg Cubase, let you “draw” periodic controller data whose period is quantized to tempo. If your DAW of choice doesn’t offer this option, then create a library of MIDI sequences with rhythmic controller values that you can paste into MIDI tracks. Remember—because it’s MIDI, any controller shapes you create will work at any tempo. Here’s how to create tempo-synched modulation in Cubase and Sonar. STEINBERG CUBASE With the Key Editor open, in a controller lane choose your controller (this example shows controller #7—main volume) and click on the Line tool’s drop-down menu. You’ll see options for Line, Parabola, Sine, Triangle, Square, and Paint. Suppose you want the volume controller to sync to tempo with a triangular wave, with one period of the waveform equal to a sixteenth note. Select Triangle from the Line menu, then choose the period with the Quantize drop-down menu—in the example in Fig. 1, 1/16 is selected (make sure Snap is selected as well). Fig. 1: A MIDI "triangle LFO" set up in Cubase. The Length parameter sets the amount of space between controllers, with 1/16th or 1/32nd note being a good compromise between resolution and data density. However, choosing coarser values can give cool “step-sequenced” effects, so don’t ignore that possibility! If you choose Quantize Link, then the control signal's resolution depends on the Zoom resolution. Once everything is set up, draw as if you were drawing a line; drag the “line” up and down to set the waveform amplitude.
The selected waveform will appear as the “line.” To change the duty cycle with triangle and square waves, hold down Shift-Ctrl (Mac: Shift-Command) as you drag. While still holding down the mouse button and Shift-Ctrl/Cmd, after defining the waveform’s length, drag right or left to change the duty cycle. There are other keyboard shortcut options; refer to the help for details. CAKEWALK SONAR Sonar allows for automated envelope drawing, including MIDI controller shapes. In Track View, open up an automation lane, and choose the envelope you want to create. Right-click on the toolbar's Draw tool, and from the context menu, choose the desired waveform (your choices are Freehand, Line, Sine, Triangle, Square, Saw, and Random—see Fig. 2). As with Cubase, the quantization value sets the waveform period. Fig. 2: Another MIDI triangle LFO, this time in one of Sonar's automation lanes. Note the waveform's varying amplitude, which is based on how high the draw tool is from your initial click. Click in the automation lane where you want the envelope to start; this also sets its midpoint value. Drag up or down from this point to set the amplitude, then drag left or right to set the envelope length. You don’t have to drag in a straight line; if you vary the height, you’ll vary the waveform amplitude, as shown in the screen shot.
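If your DAW offers neither feature, you can also generate the same tempo-synched controller shapes in code and import them as a Standard MIDI File. This minimal pure-Python sketch computes triangle-wave controller values quantized to a note value (the PPQ and events-per-period figures are illustration choices; actually writing the events to a .mid file is left to a MIDI library such as mido):

```python
def triangle_lfo_cc(num_periods=4, period_ticks=120,
                    events_per_period=16, lo=0, hi=127):
    """Return (tick, value) pairs tracing a triangle-wave MIDI controller LFO.

    period_ticks sets the sync value: at the common 480 PPQ resolution,
    a sixteenth note is 120 ticks, so one LFO cycle = one sixteenth note.
    """
    events = []
    step = period_ticks / events_per_period
    for n in range(num_periods * events_per_period):
        phase = (n % events_per_period) / events_per_period
        # Triangle: ramp up for the first half cycle, back down for the second
        frac = 2 * phase if phase < 0.5 else 2 * (1 - phase)
        value = round(lo + (hi - lo) * frac)
        events.append((int(n * step), value))
    return events
```

Because the output is expressed in ticks rather than seconds, the shape automatically follows any tempo, just like the hand-drawn versions described above; coarser events_per_period values give the same "step-sequenced" effect as coarse Length settings.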
  18. ...and the winner is...Tequila!! by Craig Anderton Cables are a hassle to organize. There’s the “use coil ties and coil them up neatly” approach, which is good for storing them but then you have to uncoil them and take off the ties whenever you want to use them. Of course, there’s also the “throw into a drawer and worry about de-tangling them later” protocol, but then you have to deal with the detangling part. Those who are super-neat have the cable wall hangers, which unfortunately take up space and cost actual money. Well, I have a cheapo and effective solution for cable clutter. After a serious drinking binge one night in Jalisco, and waking up with the phone number of someone named “Patricia” written on my hand with a Sharpie, I had the answer! (Okay, I’m kidding...but this is a freakin’ article about the ultra-dull topic of cable clutter, so I had to get your attention somehow.) Anyway, liquor stores throw out the cartons that hold bottles, and they’re more than happy for you to take them away. These cartons have dividers to keep the bottles from smashing into each other during transit, but they have other talents too. As it turns out, Patron’s boxes for their 200 milliliter Patron Silver Tequila bottles are perfect for storing small cables, like USB and FireWire. The corrugated cardboard dividers are thicker than those in most liquor boxes, and you have 12 little compartments for your cables. You don’t need to coil a cable up with the end-over-coil technique to keep it from unraveling; once in the compartment, it won’t come undone. You can even place the cable so you can see the connectors, making it easy to pick the proper cable, and the dividers even sort of "auto-adapt" if some cables take up more space than others. Some compartments are big enough to fit two cables (although they should be of the same type so you don’t have to take them out to find the right one).
Wine boxes are good for bigger cables, and you can generally store the box horizontally or vertically on a shelf. Even better, they’re free, you get to feel good about recycling something, the liquor store owner will be happy to give you more, and your cables will no longer be cluttered. What’s not to like?
  19. Whether for mixing or synth programming, touchscreens are having a major impact by Craig Anderton Mixers used to be so predictable: Sheet metal top, faders, knobs, switches, and often, a pretty hefty price tag. Sure, DAWs started including virtual mixers, but unless you wanted to mix with a mouse (you don’t), you needed a control surface with . . . a sheet metal top, faders, knobs, switches, and a slightly less hefty price tag. Enter the touchscreen—and the paradigm changed. Costly and noisy moving faders have been replaced by the touch of a finger on screen, and the controller’s digital soul provides more functionality at lower cost. And if your application can talk to a wireless network, iOS devices can provide wireless control. GENERAL CONTROL SURFACES iPads now replace expensive mechanical control surfaces. For example, Far Out Labs’ ProRemote for the iPad is Mackie Control Universal-compatible, and offers up to 32 channels (16 simultaneous on an iPad) with metering and 100mm “virtual moving faders.” Ableton Live fans can use Liine’s Griid, a control surface for Live’s clip grid, while Neyrinck’s V-Control Pro serves Pro Tools users but is compatible with several other programs as well. The cross-platform DAW Remote HD from EUM Lab supports the Mackie Control and HUI control surface protocols, and handles pretty much any DAW that can respond to those protocols. MIXER-SPECIFIC CONTROL SURFACES PreSonus is big on straddling the hardware/software worlds with their StudioLive mixers. First came Virtual Studio Live software for computer control; then SL Remote (Fig. 1), which links the computer to iOS devices for wireless mixer remote control. Yes, you can play a CD over your sound system, and tweak your mixer to optimize the sound as you walk around the venue—or control your own monitor mix, EQ, compression, and a lot more from onstage. Fig.
1: PreSonus provides extensive software support for their StudioLive mixers, including an iPad remote and personal monitoring app for all iOS devices. PreSonus also introduced QMix, an iPhone/iPod touch app that basically replaces personal monitoring systems by letting you monitor from the mixer itself through their ingenious “wheel of me”—dial in the proportion of your channel to the rest of the mix (“more me!”). Lots of companies, including high-end ones, like iPad control—Yamaha, Allen & Heath, Behringer, Soundcraft, MIDAS, and others provide remotes for their digital mixers. iPAD ASSISTANCE Some mixers use the iPad as an accessory. Behringer’s XENYX USB Series mixers include an iPad dock; the mixer can send signal both to and from the iPad—use effects processing apps, spectrum analyzers, record into GarageBand, and the like. Alto Professional’s MasterLink Live mixer also has an iPad dock, with the iPad used for mix analysis, recording, and replacing a bunch of rack gear with iPad-controlled signal processing. MIXER MEETS RECORDING Why stop with mixing? The Alesis iO Mix looks like a dock, but it’s a four-channel recorder with an iPad control surface. Take the concept even further with WaveMachine Labs’ brilliant Auria, which packs a full-function 48-track recorder, with a complete mixer interface and plug-ins from PSP Audioware, into an iPad. It works with several tested interfaces; this sounds like science fiction, but it really works. Windows 8 enabled multi-touch for compatible laptops and touch monitors, and Cakewalk’s SONAR adapted the technology to a DAW environment (Fig. 2). Mixing with a touchscreen monitor is an interesting experience—I found it worked best if I laid the monitor on my desk, tilted it up at a slight angle like a regular mixer surface, and combined “swiping” for general mixing moves with a mouse for precise changes. Fig. 2: Starting with Windows 8, Cakewalk SONAR supported touchscreen control.
In the “huge and not exactly cheap” touchscreen category there’s Slate Pro Audio’s Raven MTX (available exclusively from GC Pro), which has not only the same functionality as the big hardware mixers of old, but pretty much the same size as well. And for DJs, SmithsonMartin’s Emulator ELITE is a tour de force of touch control for programs like Native Instruments’ Traktor and Ableton Live. TOUCHSCREEN “SUPERMIXERS” Mackie’s DL1608 (Fig. 3) builds a rugged, pro-level hardware mixer exoskeleton around an iPad brain—although you can also slip out the iPad for wireless remote control. Fig. 3: Mackie’s DL1608 builds a hardware exoskeleton around an iPad brain, with the hardware handling all I/O and audio mixing/processing. It’s a serious mixer with the Mackie pedigree: 16 Onyx preamps with +48V phantom power, balanced outs (XLR mains, 1/4” TRS for the six auxes), and hardware DSP for the mixing and hardware effects—the iPad is solely about control. Each input has 4-band EQ, gate, and compression; the outputs have a 31-band graphic EQ and compressor/limiter, along with global reverb and delay. If you don’t need as many inputs, the 8-channel DL806 also offers iPad control. Line 6’s StageScape M20d (Fig. 4) uses a custom 7” touchscreen for visual mixing based on a graphic, stage-friendly paradigm with icons representing performers or inputs; touching an icon opens up the channel parameters and DSP (including parametric EQs, multi-band compressors, feedback suppression on every input, and more). Fig. 4: The Line 6 M20d uses a custom touch screen whose icons represent an actual stage setup rather than simply showing conventional channel strips. There are also four master stereo effects engines with reverbs, delays and a vocal doubler. You can even do multi-channel recording to a computer, SD card, or USB drive, and it accepts an iPad for remote control.
Like the Mackie, it’s serious: 12 mic/line ins (with automatic mic gain setting), four additional mic ins, and balanced XLR connectors for the auto-sensing main and monitor outputs. But the M20d also incorporates the L6 LINK digital networking protocol, so the mixer can communicate with Line 6’s StageSource speakers for additional setup and configuration options. ARE WE THERE YET? Although touch control hasn’t quite taken over the world yet, it’s making rapid strides in numerous areas. Of course smart phones and iPads started the trend, but we’re now seeing applications from those consumer items creeping into our recording- and live performance-oriented world. Granted, sometimes touch isn’t the perfect solution—there’s something about grabbing and moving a hardware fader that’s tough to beat—so the future will likely be a continuing combination of tactile hardware and virtual software.
  20. Looking for a more swirling, brighter chorus sound? You've come to the right place. By Craig Anderton Avid’s Eleven Rack, a guitar-friendly computer interface as well as a live performance rack unit, is having a well-deserved comeback. Although originally thought of as Pro Tools-specific, it works fine as a general-purpose ASIO or Core Audio interface suitable for any Windows or Mac program. The one aspect that was Pro Tools-specific was an inability to edit presets other than with Pro Tools or via the front panel, but fortunately, that’s changed as Avid has now introduced a stand-alone editor for Eleven. Thank you! Once you start programming, you’ll realize there are a lot of novel ways to combine effects that aren’t initially obvious. For example, the stock chorus sounds like—well, a stock chorus that’s optimized for standard chorus speeds. However, I greatly prefer a much slower, more swirling/randomized chorus sound with a bright, “acoustic guitar” tone. While I couldn’t coax this out of the existing chorus, taking a different approach gave exactly what I wanted. (Note that this concept applies to other guitar effects devices, not just Eleven Rack.) The secret is using a somewhat unconventional order of effects (Fig. 1): first Mod, set to C1 Chorus/Vibrato with Vibrato selected; then FX1, also set to C1 Chorus/Vibrato with Vibrato selected; and finally FX2, set to Graphic EQ. With this program, for the cleanest sound all other effects (and the amp/cabinet) are bypassed. Fig. 1: The basic program. The effects in the Mod, FX1, and FX2 slots are enabled; everything else is bypassed. Note the Vibrato settings in the Chorus/Vibrato effect, and how the rate is synched to tempo. As the Vibrato Rate control doesn’t go slow enough for my purposes, I synched both vibrato rates to tempo, with the Mod Vibrato set to dotted half-note sync and the FX1 Vibrato set to whole note sync.
To obtain a more animated, swirling sound, it’s important that they not sync to the same note value. Depth for both vibrato effects is set between 2/3 and 3/4 of the way up. Adjust the Graphic EQ to taste; Fig. 2 shows the settings I used for a bridge+neck humbucker setting, which provides a very bright, present sound designed to cut through a mix even when mixed in at a relatively low level. These settings also reduce some of the “meat” by pulling back at 370Hz and 800Hz. Fig. 2: The Graphic EQ (FX2) settings used for this program. This is optimized for dry guitar going through a flat system, so if you end up using an amp, you’d likely need to change the settings. Check out the audio example, and you’ll definitely hear the difference—the first part uses the stock chorus and the second part plays through my custom program. What’s more, the patch itself is available online, so you can load the same sound into your own Eleven Rack. Have fun, and don’t forget to tweak the Graphic EQ for a tone that fits your particular axe and playing style.
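The arithmetic behind these sync settings is simple: a quarter note lasts 60/BPM seconds, a whole note four times that, and a dot multiplies a note's duration by 1.5. A quick sketch (the function name is just for illustration):

```python
def sync_rate_hz(bpm, beats_per_cycle):
    """LFO rate in Hz when one cycle spans the given number of quarter notes.

    A whole note is 4 beats; a dotted half note is 3 beats (2 * 1.5).
    """
    seconds_per_beat = 60.0 / bpm
    return 1.0 / (beats_per_cycle * seconds_per_beat)
```

At 120 BPM, the whole-note vibrato runs at 0.5 Hz and the dotted-half vibrato at roughly 0.67 Hz, so the two sweeps only realign every three measures of 4/4, which is what creates the animated swirl.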
  21. Teach your sims the rhythm method By Craig Anderton Everyone likes a tight rhythm; unfortunately, stompboxes don’t always listen to the drummer. Tap tempo is a big improvement, but amp sims—when used as plug-ins in a sequencer—take the art of rhythm one step further with sync-to-tempo options. This feature allows rhythmically-related controls (delay time, flanger rate, vibrato speed, and the like) to follow the tempo at a rate you specify (quarter note, dotted eighth note, etc.). These plug-ins follow the sequencer tempo automatically, but often, you need to enable synchronization. With IK Multimedia’s AmpliTube 3 (Fig. 1), a BPM switch enables sync-to-tempo; when you rotate the Delay knob, the value window shows the delay as a rhythmic value. If BPM isn’t enabled, the window shows delay in milliseconds. Fig. 1: Several AmpliTube 3 effects have a BPM switch to enable sync. Many processors in Native Instruments’ Guitar Rig 4 (Fig. 2) have a button (circled) that opens up a space below the rack with advanced parameters. Fig. 2: Guitar Rig 4 offers tempo sync, but you’ll need to open up the advanced parameters. Here you’ll find a Tempo Sync button (outlined toward the left); when enabled, the delay Time readout shifts from milliseconds to rhythmic values. (Note that this is a Quad Delay module, and another button allows syncing delays to each other.) Fig. 3 shows the Delay stomp box in Waves’ GTR. Click on the Sync button, and again, the display readout shows a rhythmic value instead of a time value. Fig. 3: GTR processors include a sync button for enabling tempo sync. Line 6’s POD Farm (Fig. 4) works similarly, but goes one step further: When you enable sync, the display shows both the rhythmic value and the corresponding time in milliseconds—handy if you want to set devices that don’t have tempo sync to the appropriate number of milliseconds. Fig.
4: POD Farm not only offers a sync button, but displays time in both milliseconds and the corresponding rhythmic value. Finally, Peavey’s ReValver MkIII (Fig. 5) doesn’t have sync to tempo, but with some effects does offer tap tempo (particularly useful for live performance, where there’s no host sequencer) and includes a readout that shows the delay time in milliseconds and as a rhythmic value. If you rotate Delay while holding shift, you can dial in precise tempo values. Fig. 5: ReValver MkIII doesn’t have sync to tempo per se, but you can click on the tap tempo button, as well as set precise tempo values. Some processors, like stereo delays, let you set the tempo sync independently for each channel. This can give some great synchronized ping-ponging echoes. And don’t just stick to the usual eighth and sixteenth notes: dotted values impart a feeling of motion. Tremolo speed is another ideal candidate for sync to tempo, as is chorus rate. Check it out!
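POD Farm's dual readout reflects a conversion you can do yourself for gear that lacks tempo sync: a quarter note lasts 60,000/BPM milliseconds, and other note values scale from there (a minimal sketch; the note-name lookup table is just a convenience):

```python
def delay_ms(bpm, note="1/4", dotted=False):
    """Delay time in milliseconds for a rhythmic value at a given tempo."""
    beats = {"1/1": 4.0, "1/2": 2.0, "1/4": 1.0,
             "1/8": 0.5, "1/16": 0.25}[note]
    if dotted:
        beats *= 1.5          # a dot adds half the note's duration
    return 60000.0 / bpm * beats
```

At 120 BPM an eighth note works out to 250 ms and a dotted eighth to 375 ms, the dotted "motion" value mentioned above.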
  22. Yes, you can make convincing guitar sounds with a sampler—here's how By Craig Anderton I've presented a lot of seminars in my time, and once I found myself in Nashville doing one on synth programming. Someone asked how to get a convincing guitar sound, and after thinking about it, I suggested that hey, you're in Nashville—hire a guitar player! Seriously, nothing sounds or plays like a guitar. But many keyboard players don't play guitar, or don't have someone on call who can record a part. So if you're a keyboard player but would like to add a guitaristic element to your music, read on. RHYTHM GUITAR There are two main types of guitar playing, rhythm (chord-based) and lead (single note-based). Of the two, rhythm guitar is much harder to emulate because of how guitars are voiced and strummed. Guitar voicings tend to be "wider" than piano voicings; for example, consider a simple E major chord on guitar, and where the notes would fall on a keyboard (Fig. 1). Fig. 1: Chord voicings on guitar are quite different compared to typical keyboard voicings; this shows an E major. Furthermore, rhythm guitar often combines open strings, which have a long decay, and fretted strings, which have a shorter decay. And as the strings are strummed, they don't all sound at the same time. One option for a convincing rhythm guitar part is simply to forego the keyboard and use sample CDs. There are several with good strummed, fingerstyle-type patterns, such as Big Fish Audio's Performance Loops - Acoustic Guitars, Sony's Acoustic Excursions, and Ilio's Hot Steel Blues. Also note that there are sample CDs with sampled power chords. These can be effective for simple rock tracks, but usually lack the variations and strums of a "real" guitar part. Still, they can work for non-critical applications. A virtual instrument like Big Fish Audio's Electri6ity (Fig. 2) is another possibility. Fig. 
2: Electri6ity offers multiple expressive possibilities associated with guitar, all under the keyboard player's control. It's designed to emulate the guitar playing experience as closely as possible, including such idiomatic options as up-and-down strummed guitar chords, mapped on a keyboard in such a way that it's fairly easy to create convincing strummed parts, and mapping chords played on keyboard to match how they would be voiced on guitar. Strum Electric GS-1 from AAS is another option; it's dedicated to generating strums rather than being a playable instrument. AAS also offers an acoustic version, Strum Acoustic GS-1, as well as a "lite" version (Fig. 3). Fig. 3: AAS makes a "lite" version of Strum Acoustic called Strum Acoustic Session, which is bundled with Cakewalk Sonar X3. LEAD GUITAR Lead guitar lends itself well to synthesis, because the gestures involved in creating a single-note solo—pitch bending, vibrato, and sometimes, rapid-fire licks—are part of a synth's standard repertoire. Also, when overdriving an amp, a guitar's waveform will clip and "flat top," producing a more pulse-like waveform. Although some sample libraries have lead guitar samples that already include effects and are basically "plug and play," don't be reluctant to try using a clean electric guitar sound, and using effects to give it more of a distorted lead timbre. You'll usually have more control over the sound this way. Pitch manipulation, either by a vibrato tailpiece or finger vibrato, is a guitar trademark. With synths, riding the pitch bend wheel with your hand is your tool of choice; a mod wheel controlling LFO vibrato produces a periodic effect that just isn't guitar-like. When imitating tailpiece effects (like "dive bombs"), remember that these primarily bend pitch down, and can bend up over only a limited range (e.g., a half step). With manual vibrato, a typical gesture is to bend pitch up and then add vibrato—again, a perfect job for manipulating the pitch bend wheel manually.
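That bend-then-vibrato gesture can be sketched as a control curve. This pure-Python sketch (all parameter values are illustration choices) ramps the pitch up a whole step and then wobbles around the held note; converting the semitone offsets to 14-bit MIDI pitch-bend values depends on your synth's bend range:

```python
import math

def bend_then_vibrato(length=100, bend_time=30, vibrato_depth=0.15,
                      vibrato_cycles=4, target_semitones=2.0):
    """Sketch a guitar-style gesture: bend up a whole step, then add vibrato.

    Returns pitch offsets in semitones, one per control step.
    """
    curve = []
    for i in range(length):
        if i < bend_time:
            # Smooth ramp up to the target pitch
            offset = target_semitones * (i / bend_time)
        else:
            # Vibrato wobbles around the held bent pitch
            phase = (i - bend_time) / (length - bend_time)
            offset = target_semitones + vibrato_depth * math.sin(
                2 * math.pi * vibrato_cycles * phase)
        curve.append(offset)
    return curve
```

Slowing the ramp, or deepening the vibrato toward the end of the note, gets closer to a human bend; for "dive bomb" emulations, make target_semitones negative and much larger.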
It takes a little effort to learn the finger motions necessary to add vibrato, but it's worth it. The slight irregularities add interest to the sound. Another common lead guitar effect is holding a note so that it sustains, during which time the timbre changes (sometimes just from natural changes in string harmonics, sometimes from going into feedback). For this, I have two favorite options. The simplest is to change a waveform's duty cycle (e.g., changing a sawtooth wave into more of a triangle wave) during the sustain. While this isn't exactly how a guitar sounds, the effect can add interest. The other option takes more work. With most of my lead guitar note samples, I create a layer with a sine wave an octave or octave + fifth above the fundamental to simulate the "whine" that feedback produces. This gets tied to the mod wheel (or footpedal) so that increasing the mod wheel adds in the pseudo-feedback. Do this toward the tail end of a note's sustain. I've also experimented with using aftertouch to bring in feedback or add pitch bend. However, satisfying results depend on the aftertouch resolution (you need real aftertouch, not "afterswitch" that feels like it's either on or off) and the parameter being controlled. Any "stair-stepping" sounds pretty bad, but if the aftertouch is smooth, this can work well. Note that you may not always want a guitar sample to be the basis of your sound. Many synthesizers can produce a guitar part's vibe with guitar-like pitch-bending, aftertouch, and effects. Sometimes this can be more interesting than the static qualities that are a part of some samples. ADDING EFFECTS Sometimes a guitar sound depends on effects (chorus, flanger, etc.). However, the main "effect" is an amp that creates distortion, and a speaker cabinet, which is actually a very complex filter. Running your synth through a guitar amp is one possibility, but there are some fine guitar amp simulator plug-ins (Fig.
4) such as IK Multimedia AmpliTube, Native Instruments Guitar Rig (which is also part of their Komplete bundle), Waves G|T|R, Line 6 POD Farm, Peavey ReValver, and others. Fig. 4: IK Multimedia's AmpliTube was the first native amp sim; as with some other sims, the latest version offers features like virtual miking and mic placement. Sure, you'll get more authentic results with a Real Guitarist... but not always better results. Even though I play guitar, sometimes I break out a synth or sampler to do a "guitar" part because it has more of the vibe I want for a particular tune. Besides, it's fun! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  23. Dimension Pro isn’t just about sample playback—it’s also a quad REX file player By Craig Anderton Although Cakewalk's cross-platform Dimension Pro is known primarily as a sample-playback synthesizer, it can also load up to four REX files—for example, drums, percussion, bass, and rhythm guitar to create a rhythm section. You can trigger them from MIDI files for conventional playback, alter the timing and positioning of the MIDI notes to create entirely different loops, or improvise by playing different slices in real time from a keyboard. This technique works with Dimension Pro in stand-alone mode or when inserted as a virtual instrument within a host. To start from scratch, click on Dimension Pro’s Program Handling button (to the right of the program selection field), and choose “Initialize Program.” Next, click on the Options button to the right of the Program Handling button, and select “Set Program as Multitimbral” (Fig. 1). Fig. 1: With multitimbral operation, each REX file can be controlled over its own MIDI channel. Select one of the four Element edit buttons E1 - E4, then drag a REX file into the Element’s “Load Multisample” window. If you're using Dimension Pro with Sonar, you can drag REX files in directly from Sonar’s browser (Fig. 2). Fig. 2: Drag REX files into Dimension Pro from the desktop or, with Sonar, from its browser. You can also load a file into an Element by right-clicking on the Element edit button and choosing Load Element, or by clicking in the Load Multisample window and navigating to the REX file you want to open. A Note symbol (eighth note tied with 16th note) appears toward the right of the Load Multisample window. This represents the MIDI file whose notes trigger the slices that replay the REX file. Drag the note symbol into a MIDI track driving Dim Pro to trigger the REX file slices (Fig. 3). Fig. 3: Each note in the MIDI file triggers a different slice in the REX file.
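Conceptually, slice triggering is just a lookup table from MIDI note numbers to audio slices. Here's an illustrative Python sketch; the base key and slice count are assumptions for the example, not Dimension Pro's actual internals:

```python
# Model of REX slice triggering: each successive key above a base key
# plays the next audio slice. Base key and slice count are assumed values.
SLICE_BASE_KEY = 60   # MIDI note that triggers the first slice
NUM_SLICES = 16       # assumed number of slices in the loaded REX file

def slice_for_note(note):
    """Return the slice index a MIDI note triggers, or None if out of range."""
    index = note - SLICE_BASE_KEY
    return index if 0 <= index < NUM_SLICES else None
```

A MIDI clip then reduces to a list of (note, start time) pairs; reordering or re-timing those notes reorders the slices, which is exactly how editing the MIDI track creates new grooves from the same audio.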
The lowest pitch triggers the first slice, the highest pitch triggers the last slice. On your keyboard, C3 (colored red in Fig. 4) plays back the entire REX file. If you lift your finger off the key, the file stops playing; if you hold your finger on the key, the file plays through to the end but does not loop. To transpose the file pitch (yet retain a constant tempo), play over the range of C2 to B3 (colored blue). Fig. 4: Different parts of the keyboard play back the REX file in different ways. Starting upward from C4, the keyboard keys play individual REX slices. This lets you improvise in real time to create all kinds of variations on the original loop. You can also edit in your DAW's MIDI track by altering the note pitches, start times, and durations to change how the REX file plays back. Note that using Dim Pro’s “Transpose” parameter can, if needed, transpose the REX file’s root note. You can similarly load up the other elements and drag the REX files’ data into other tracks (Fig. 5), but remember to assign these to the correct channels—channel 2 drives Element 2, channel 3 drives Element 3, and channel 4 drives Element 4. Fig. 5: Four MIDI files loaded into Sonar. Each track triggers the REX file slices in a different Element. Also note that you can automate volume, panning, and other parameters within Dimension Pro if you want to “remix” the REX files. And here’s one final tip: although the DSP (LoFi, Filter, Drive, EQ, etc.) and FX affect the entire file, envelopes affect each slice. For example, you can set a short, percussive decay for each slice using the Amplitude envelope.
  24. Don’t give up on that garage sale special yet! by Craig Anderton So you finally tracked down an ultra-rare, ultra-retro Phase Warper stomp box manufactured back in the mid-’70s. Not surprisingly, it doesn’t seem to work very well (if at all); sitting unused in someone’s garage for over a decade has taken its toll. But if you know a few basic procedures, you can often restore that antique and give it a new life. Here are some ways that have worked well for me to restore vintage effects. OXIDATION ISSUES One of your biggest problems will likely be oxidation, where metal surfaces become corroded due to stuff in the air (whether pollution in LA or salt spray in Maine). Oxidation shows up as scratchy sounds in pots, intermittent problems with switches, and occasional circuit failure. Fortunately, chemicals called contact cleaners can solve a lot of these problems. I’ve had good luck with DeoxIT from Caig Laboratories; they also make an Audio Survival Kit with cleaners for plastic faders and contact restoration as well as cleaning, but there are many other types (such as “Blue Shower” contact cleaner). Here are some ways you’d typically use contact cleaners. Scratchy pots. Pots work by having a metal wiper rub across a resistive strip, so the pot can become an “open circuit” if oxidation or film prevents these from making contact. To solve this, spray a small amount of contact cleaner into the pot’s case. With unsealed rotary pots, there’s usually an opening next to the pot’s three terminals (Fig. 1). Fig. 1: The red line points to an opening in the pot where you can squirt contact cleaner (photo by Petteri Aimonen). Slider (fader) pots have an obvious opening. Sealed pots are more difficult to spray; sometimes the pot can be disassembled, sprayed, and reassembled, and sometimes you can dribble contact cleaner down the side of the pot’s shaft, and hope some of it makes it to the innards.
Once sprayed, you have to rotate the pot several times to “smear” the cleaner and also flush away the gunk it’s dissolving. After rotating it about 20 times, spray in a little more contact cleaner. If the problem returns, spray again and see if that solves things. However, at some point a pot’s resistive element becomes so worn that no contact cleaner can restore it—you then need to replace the pot with one of equivalent value. Incidentally, people often forget that trimpots need attention too—even more so, given that they’re more exposed than regular pots. Spray them the way you would regular pots, but be very careful not to spray any trimpots that adjust internal voltages. If you have any doubts, it’s probably best to leave trimpots alone. IC sockets. IC sockets are also subject to oxidation. A quick fix is simply to take an IC extractor (these cost about $3), clamp its sides around the chip, and pull up very slightly on the chip (Fig. 2; just enough to loosen it—about 1/16”). Fig. 2: An IC extractor can pull an IC out of its socket, but that’s not what you want to do—just pull up very slightly. This picture shows a digital chip so it’s easier to see the pins; older effects boxes will likely have smaller analog chips. Spray some contact cleaner sparingly on the IC’s pins. Now push the IC back into its socket. Repeat this pull-push routine one more time, and the scraping of the chip pins against the socket in conjunction with the cleaner should have cleaned things enough to make good electrical contact. Afterward, it’s important to check that all the IC pins are not bent and go straight into the socket (Fig. 3). Fig. 3: Verify that the pins are not bent or compromised before re-applying power. However, use extreme caution—IC pins are fragile, which is why you don’t want to pull the chip out too far, nor do this procedure too often. If you destroy an ancient IC, you may not be able to find a replacement. Toggle switches.
Rotary and pushbutton switches respond best to contact cleaners, but toggle switches are often sealed. These are not worth attempting to disassemble, but you may luck out and find a switch that does have some openings where you can squirt some contact cleaner. As with pots, work the switch several times to spread the cleaner. Other connectors. Some effects used nylon “Molex” connectors or similar multipin connectors. Connector pins in general can develop oxidation, and are also candidates for spraying. Sometimes they lift right up from their sockets, but often there are little plastic hooks or tabs to hold the connector in place. If you encounter resistance while trying to remove the connector, don’t force it—look for whatever might be impeding its movement. Battery connectors. Because these connectors carry the most current of anything in the effect, any oxidation here can be a real problem. Spray the connector, and snap/unsnap a battery several times. Two other battery tips: Check the battery connector tabs that mate with the battery’s positive terminal; if they don’t make good contact with the battery, push inward on the connector tabs with pliers or a screwdriver to encourage firmer contact. And if the battery has leaked over the connector, forget about trying to salvage it—solder in a new connector. BLOW IT AWAY Older effects usually come free with large amounts of dust. Take the effect outside, plug a vacuum cleaner’s hose into the exhaust end, let the vacuum blow for a minute or so to clear out any dust stuck in the hose, then blow air on the effect to get rid of as much dust as possible. If you don’t do this, cleaning your pots and connectors may end up being a short-term solution as dust shakes loose over time and works its way back into various components. LOOSE SCREWS While you still have the unit apart, check whether any internal screws are loose—especially if they’re holding circuit boards in place.
Enough vibration can loosen screws, and that could mean bad ground connections (many vintage effects use screws to provide an electrical path between circuit board and ground, or panel and ground). Try to turn each screw to determine if there’s any play. If there is, before tightening the screw check to see if there’s a lockwasher between the nut and the panel or other surface. If not, add a lockwasher before tightening the screw—provided the lockwasher teeth don’t contact something they shouldn’t. THOSE #@$$#^ FOOTSWITCHES Many old stomp boxes used push-on, push-off DPDT footswitches that were expensive then, and are even more expensive (and difficult to find) now. One source for replacements is Stewart-MacDonald’s Guitar Shop Supply. ELECTROLYTIC CAPACITORS Electrolytic capacitors (Fig. 4), which tend to have a blue or black “jacket” and are polarized (i.e., they have a + and – end, like a battery), contain a chemical that dries up over time. Fig. 4: The two capacitors on the right are typical electrolytic capacitors. The three on the left are variations on ceramic capacitors. With very old effects, or ones that have been subject to environmental extremes (e.g., being on the road with a rock and roll band), it can make a major sonic difference to replace old electrolytic capacitors with newer ones of the same value and voltage rating. Note that ceramic capacitors (which are usually disc-shaped), tantalum caps (like electrolytics, but generally smaller for a given value and with a lower voltage rating), and polyester caps like Orange Drops or mylar capacitors don’t dry up and last a long time. SAFER POWER Many older AC-powered boxes did not use fuses or three-conductor AC cords. Although I’m loath to modify a vintage box too much, making a concession to safety is a different matter. Fig. 5 shows wiring for a two-wire cord compared to a fused, three-wire type. A qualified technician should be able to modify your effect to use a three-wire power cord. Fig.
5: The 3-wire cord’s ground typically connects to the effect’s main ground point (usually located near the power supply). Good luck! Your toughest tasks will be finding obsolete parts such as old analog delay chips and custom-made optoisolators, and dealing with effects where the IC identification was sanded off (a primitive form of copy protection). But once you restore an effect, it’s a great feeling…and when it’s closer to like-new condition, it will probably sound better as well.
  25. The right EQ settings are like seasoning for your live guitar sound By Craig Anderton The most important signal processors for any guitar are distortion and EQ. If you don’t agree, then consider this: what is a guitar amp but a distortion box, coupled with EQ from the cabinet/speaker combination? Although amps are fun, they tend to specialize in one particular sound, which is why rack preamps and parametrics (or a good multieffects) can be invaluable. With a little tweaking, you can shade the sound to best suit the music at hand—something that’s a bit more difficult to do with an amp. Fig. 1 shows the Source Audio Programmable EQ, which is a good choice for live EQ applications because it can store four presets. However, any decent EQ should do the job; also note that most multieffects include at least one available stage of EQ. Fig. 1: Having presets makes it easier to have specific, ready-to-go setups when playing live. MORE EXPRESSIVE DISTORTION Patching EQ before distortion can make the distortion seem more "touch-sensitive." Generally, distortion clips all frequencies more or less equally. Adding a gentle midrange boost before the distortion causes the notes in the boosted range to distort at lower levels, which makes the distortion seem more responsive at the selected frequencies. Start by boosting in the range of 200 Hz to 1 kHz. Note that some multieffects don’t let you place EQ before distortion; but as distortion is often the first effect in a multieffects’ signal path, placing a hardware EQ box before the multieffects will solve the problem. “CLEANING UP” THE GUITAR SOUND Adding a bass cut prior to distortion can clean up the guitar sound because the lower strings will cause less intermodulation distortion, creating a more "biting" sound. Another option is to cut around 2 to 2.5 kHz before distortion, which means there’s less distortion on the note harmonics, and more on the fundamental.
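To hear why a pre-distortion midrange boost feels touch-sensitive, you can model the chain digitally. The sketch below is a rough Python/SciPy illustration, not any product's actual algorithm; the 500 Hz center, 6 dB boost, and tanh clipper are assumed example values. Frequencies in the boosted band reach the clipping region at lower playing levels, so they distort first:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Standard (RBJ cookbook) peaking-EQ biquad coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def touch_sensitive_distortion(x, fs, boost_hz=500.0, boost_db=6.0, drive=4.0):
    """Boost the mids, then soft-clip: the boosted frequencies distort first."""
    b, a = peaking_eq(fs, boost_hz, boost_db, q=1.0)
    return np.tanh(drive * lfilter(b, a, x))
```

Swapping the boost for a bass cut ahead of the same clipper models the "cleaning up" trick described above.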
PEAK FUZZ Using a sharp, narrow boost prior to distortion adds an effect that is almost like a synthesizer’s "hard sync" option. Being able to sweep the boost frequency is even better, as it dramatically changes the guitar’s timbre—sort of like a wah-wah, except that the distortion adds a kind of "resonant toughness." LEAD GUITARS THAT LEAD Many times, making a lead guitar stand out in a band doesn’t have to involve turning up the volume. A rock guitar can be a pretty broad-bandwidth instrument, and overlap with the frequency ranges of vocals, piano, upper bass range, etc. If you turn up the volume too much, you run the risk of drowning out other important sounds. EQ can accent a range of the guitar that doesn’t overlap other instruments, thus letting the guitar stand out without disturbing the rest of the track. A few dB boost at 3 to 4 kHz can really accentuate a guitar lead; as that’s above the range of the toms, bass, and most rhythm-oriented keyboard parts, there’s no interference with these instruments. During solos where no vocals appear, a 1 to 2 kHz boost can often give the guitar a more "vocal-like" quality, as well as make it stand out a bit more. VOCAL SUPPORT Here’s another example of using EQ to make an instrument a "better neighbor" for your band. Suppose you’re playing rhythm guitar behind a vocalist, but the guitar and voice conflict because they occupy a similar frequency range. If you add a shallow, fairly broad midrange cut to the guitar, it opens up more bandwidth for the vocal frequencies. The guitar’s high and low frequencies "frame" the vocals. HUM BANISHMENT 60 Hz hum is a drag, but a parametric equalizer can help minimize it. Simply set the equalizer for maximum cut and sharpest bandwidth, then dial in 60 Hz (you’ll know you’re at the right frequency because the hum will disappear). If there’s also a harmonic component at 120 Hz, use a second parametric stage to take care of that.
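For readers who'd rather prototype the hum filter in software first, here's a short Python/SciPy sketch of the same two-stage approach; the Q of 30 and the 44.1 kHz sample rate are assumptions for the example:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 44100  # sample rate (assumed)

def remove_hum(x, fs=FS, fundamental=60.0, q=30.0):
    """Cascade narrow notches at the hum fundamental and its 2nd harmonic,
    mirroring the two parametric stages described above."""
    y = np.asarray(x, dtype=float)
    for freq in (fundamental, 2 * fundamental):
        b, a = iirnotch(freq, q, fs=fs)
        y = lfilter(b, a, y)
    return y
```

In 50 Hz countries, set fundamental=50.0. A higher Q makes each notch narrower, which removes less of the surrounding program material but demands more accurate tuning, exactly as with a hardware parametric.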
QUIETING VINTAGE EFFECTS A stereo graphic or shelving EQ can help reduce hiss in older effects by using one channel to boost treble going into the effect, and the other channel to cut treble after the effect by an equal but opposite amount. Start the boost at around 2 to 5 kHz, and boost a reasonable amount (6 to 10 dB), short of overload. Cut starting at the same frequency, and by the same amount. This will reduce any hiss coming out of the effect, and in theory, the original signal will sound unaltered if the boosting and cutting are symmetrical. ARTIFICIAL STEREO WITH EQ In smaller venues, some guitarists are experimenting with stereo by using delays. However, one problem with using delays is that sometimes, unless you’re careful with the delay settings, strange phase cancellation problems can occur. An alternate approach is to use EQ to spread the signal. For example, suppose you use a Y cord to send a guitar signal to two graphic equalizers (left and right). If you boost the odd-numbered channels and cut the even-numbered channels in one equalizer, and do the opposite with the other (cut odd-numbered channels and boost even-numbered channels), you’ll create a pseudo-stereo spread without any timing delays. Although this doesn’t create as dramatic a stereo effect as stereo miking, it does help fill out a mono guitar part when mixing. TAME YOUR ECHOES You don’t always want a meaty, substantial echo; something subservient to your main guitar sound might be more appropriate. To trim your echo’s frequency response you’ll need a mixer and Y-cord. Split the guitar into two paths; one goes directly to your amp or a mixer, and the other to an equalizer/delay line combination before it hits the amp or mixer. Cut the bass and lower midrange going to the delay line, and the echo will shimmer on top of your main sound. DON’T FORGET THE GUITAR ITSELF!
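The odd/even graphic-EQ trick can be modeled in a few lines. This Python/NumPy sketch applies complementary gains to two copies of a mono signal in the frequency domain; the 10 band centers and 6 dB amounts are illustrative assumptions, not settings from any specific equalizer:

```python
import numpy as np

# Assumed band edges, roughly matching a 10-band octave graphic EQ
BANDS = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]

def pseudo_stereo(x, fs, amount_db=6.0):
    """Boost alternate bands on the left and cut them on the right
    (and vice versa) to spread a mono signal without timing delays."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = np.searchsorted(BANDS, freqs)       # which band each bin falls in
    sign = np.where(band % 2 == 0, 1.0, -1.0)  # alternate boost/cut per band
    left = np.fft.irfft(X * 10 ** (sign * amount_db / 20), len(x))
    right = np.fft.irfft(X * 10 ** (-sign * amount_db / 20), len(x))
    return left, right
```

Because the two channels differ only in per-band level, with no time offsets, summing them back to mono doesn't produce the phase cancellations that delay-based widening can.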
There are a lot of modifications you can do to a stock guitar to greatly alter the sound, such as using tapped pickups, rewiring the tone control, changing pickup phase, etc. Think of this as “mechanical EQ.” But remember, no matter what EQ techniques you use, tweak until everything sounds great—that’s what tone control is all about. The sound you want lies somewhere in those dials; you just have to find it.