Everything posted by Anderton

1. When it’s time to mix a recording, you need a strategy

By Craig Anderton

Mixing is not only an art, it’s the crucial step that turns a collection of tracks into a finished piece of music. A good mix can bring out the best in your music—it spotlights a composition’s most important elements, adds a few surprises to excite the listener, and sounds good on anything from a portable MP3 player with nasty earbuds to an audiophile’s dream setup. Theoretically, mixing should be easy: you just adjust the knobs until everything sounds great. But this doesn’t happen by accident. Mixing is as difficult to master as playing a musical instrument, so let’s take a look at what goes into the mixing process.

POINTS OF REFERENCE

Start by analyzing well-mixed recordings by top-notch engineers and producers such as Bruce Swedien, Roger Nichols, Shelly Yakus, Steve Albini, Bob Clearmountain, and others. Don’t focus on the music, just the mix. Notice how—even with a "wall of sound"—you can pick out every instrument because each element of the music has its own space. Also note that the frequency response balance will be uniform throughout the audio spectrum, with enough highs to sound sparkly but not screechy, sufficient bass to give a satisfying bottom end without turning the mix into mud, and a midrange that adds presence and definition.

One of the best mixing tools is a CD player and a really well-mixed reference CD. Patch the CD player into your mixer, and A-B your mix to the reference CD periodically. If your mix sounds substantially duller, harsher, or less interesting, listen carefully and try to isolate the source of any differences. A reference CD also provides a guideline to the correct relative levels of drums, vocals, etc. Match the CD’s level to the overall level of your mix by matching the peak levels of both signals. If your mix sounds a lot quieter even though its peaks match the reference CD’s peak levels, that probably means that the reference has been compressed or limited a fair amount to restrict the dynamic range (the short sketch at the end of this introduction shows one way to measure this). Compression is something that can always be done at the mastering stage—in fact, it probably should be, because a good mastering suite will have top-of-the-line compressors and someone who is an ace at applying them.

PROPER MONITORING LEVELS

Loud, extended mixing sessions are tough on the ears. Mixing at low levels keeps your ears "fresher" and minimizes ear fatigue; loud mixes may get your juices flowing, but they make it more difficult to hear subtle level variations. Many project studios have noise constraints, so mixing through headphones might seem like a good idea. Although headphones are excellent for catching details that you might not hear over speakers, they are not necessarily good for general mixing because they magnify some details out of proportion. It’s better to use headphones for reality checks.

THE ARRANGEMENT

Scrutinize the arrangement prior to mixing. Solo project studio arrangements are particularly prone to "clutter" because as you lay down the early tracks, there’s a tendency to overplay to fill up all the empty space. As the arrangement progresses, there’s not a lot of room for overdubs. Remember: the fewer the number of notes, the greater the impact of each note. As Sun Ra once said, "Space is the place."

MIXING: THE 12-STEP PROGRAM

Although there aren’t any rules to recording or mixing, until you develop your own mixing "style" it’s helpful to at least have a point of departure. So, here’s what has worked for me.
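As an aside, here’s one way to quantify the reference-CD comparison described above under Points of Reference. This is a minimal sketch, assuming Python with the numpy and soundfile libraries installed; the file names are hypothetical. It compares peak level with RMS (average) level, which tracks perceived loudness far more closely than peaks do.

```python
# Compare peak and RMS levels of a mix against a reference track.
# Assumes numpy and soundfile are installed; file names are examples.
import numpy as np
import soundfile as sf

def peak_and_rms_db(path):
    audio, sr = sf.read(path)          # float samples in -1.0..+1.0
    audio = np.atleast_2d(audio.T).T   # ensure 2D: (samples, channels)
    peak = np.max(np.abs(audio))
    rms = np.sqrt(np.mean(audio ** 2))
    to_db = lambda x: 20 * np.log10(max(x, 1e-12))
    return to_db(peak), to_db(rms)

for name in ("mix.wav", "reference.wav"):
    peak_db, rms_db = peak_and_rms_db(name)
    print(f"{name}: peak {peak_db:+.1f} dBFS, RMS {rms_db:+.1f} dBFS")

# If the peaks match but the reference's RMS is several dB higher, the
# reference has likely been compressed or limited during mastering.
```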
You "build" a mix over time by making a variety of adjustments. There are (at least!) twelve major steps involved in creating a mix, but what makes mixing so difficult is that these steps interact. Change the equalization, and you also change the level because you’re boosting or cutting some element of the sound. In fact, you can think of a mix as an "audio combination lock" since when all the elements hit the right combination, you end up with a good mix. Let’s look at these twelve steps, but remember, this is just one person’s way of mixing—you might discover a totally different approach that works better for you.

Step 1: Mental Preparation

Mixing can be tedious, so set up an efficient workspace. If you don’t have a really good office chair with lumbar support, consider a trip to the local office supply store. Keep paper and a log book handy for taking notes, dim the lighting a little bit so that your ears become more sensitive than your eyes, and in general, psych yourself up for an interesting journey. Take periodic breaks (every 45-60 minutes or so) to "rest" your ears and gain a fresher outlook on your return. This may seem like a luxury if you’re paying for studio time, but even a couple of minutes of down time can restore your objectivity and, paradoxically, complete a mix much faster.

Step 2: Review The Tracks

Listen at low volume to scope out what’s on the multitrack; write down track information, and use removable stick-on labels or erasable markers to indicate which sounds correspond to which mixer channels. Group sounds logically, such as having all the drum parts on consecutive channels.

Step 3: Put On Headphones and Fix Glitches

Fixing glitches is a "left brain" activity, as opposed to the "right brain" creativity involved in doing a mix. Switching back and forth between these two modes can hamper creativity, so do as much cleaning up as possible—erase glitches, bad notes, and the like—before you get involved in the mix. Listen on headphones to catch details, and solo each track. If you’re sequencing virtual tracks, this is the time to thin out excessive controller information, check for duplicate notes, and avoid overlapping notes on single-note lines (such as bass and horn parts).

Fig. 1: Sony's Sound Forge can clean up a mix by "de-noising" tracks.

Also consider using a digital audio editor to do some digital editing and noise reduction (although you may need to export these for editing, then re-import the edited version into your project). Fig. 1 shows a file being "de-noised" in Sony's Sound Forge prior to being re-imported. Low-level artifacts may not seem that audible, but multiply them by a couple dozen tracks and they can definitely muddy things up.

Step 4: Optimize Any Sequenced MIDI Sound Generators

With sequenced virtual tracks, optimize the various sound generators. For example, for more brightness, try increasing the lowpass filter cutoff instead of adding equalization at the console.

Step 5: Set Up a Relative Level Balance Between the Tracks

Avoid adding any processing yet; concentrate on the overall sound of the tracks—don’t become distracted by left-brain-oriented detail work. With a good mix, the tracks sound good by themselves, but sound their best when interacting with the other tracks. Try setting levels in mono at first, because if the instruments sound distinct and separate in mono, they’ll open up even more in stereo. Also, you may not notice parts that "fight" with others if you start off in stereo. The sketch below shows one way to quantify such a mono check.
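Here’s a minimal sketch of that mono check, again assuming numpy and soundfile (the file name is hypothetical). Folding the stereo bus to mono and comparing levels flags phase cancellation between parts; it doesn’t replace listening, but it makes gross problems obvious.

```python
# Fold a stereo rough mix to mono and measure the level change.
# A drop much beyond ~3 dB suggests parts are cancelling each other.
import numpy as np
import soundfile as sf

audio, sr = sf.read("rough_mix.wav")   # hypothetical stereo file
left, right = audio[:, 0], audio[:, 1]

rms = lambda x: np.sqrt(np.mean(x ** 2))
stereo_rms = rms(np.concatenate([left, right]))
mono_rms = rms(0.5 * (left + right))

drop_db = 20 * np.log10(mono_rms / stereo_rms)
print(f"Mono fold-down level change: {drop_db:+.1f} dB")
```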
Step 6: Adjust Equalization (EQ)

EQ can help dramatize differences between instruments and create a more balanced overall sound. Fig. 2 shows the EQ in Cubase; in this case, it's being applied to a clean electric guitar sound. There's a slight lower midrange dip to avoid competing with other sounds in the region, and a lift around 3.7kHz to give more definition (a code sketch of a similar curve appears after Step 7).

Fig. 2: Proper use of EQ is essential to nailing a great mix.

Work on the most important song elements first (vocals, drums, and bass) and once these all "lock" together, deal with the more supportive parts. The audio spectrum has only so much space; ideally, each instrument will stake out its own "turf" in the audio spectrum and when combined together, will fill up the spectrum in a satisfying way. (Of course, this is primarily a function of the tune’s arrangement, but you can think of EQ as being part of the arrangement.) One of the reasons for working on drums early in the mix is that a drum kit covers the audio spectrum pretty thoroughly, from the low thunk of the kick drum to the sizzle of the cymbal. Once that’s set up, you’ll have a better idea of how to integrate the other instruments.

EQ added to one track may affect other tracks. For example, boosting a piano part’s midrange might interfere with vocals, guitar, or other midrange instruments. Sometimes boosting a frequency for one instrument implies cutting the same region in another instrument; to have vocals stand out more, try notching the vocal frequencies on other instruments instead of just boosting EQ on the voice. Think of the song as a spectrum, and decide where you want the various parts to sit. I sometimes use a spectrum analyzer when mixing, not because ears don’t work well enough for the task, but because the analyzer provides invaluable ear training and shows exactly which instruments take up which parts of the audio spectrum. This can often alert you to an abnormal buildup of audio energy in a particular region.

If you really need a sound to "break through" a mix, try a slight boost in the 1 to 3kHz region. Don’t do this with all the instruments, though; the idea is to use boosts (or cuts) to differentiate one instrument from another. To place a sound further back in the mix, sometimes engaging the high cut filter will do the job—you may not even need to use the main EQ. Also, applying the low cut filter on instruments that veer toward the bass range, like guitar and piano, can help trim their low end to open up more space for the all-important bass and kick drum.

Step 7: Add Any Essential Signal Processing

"Essential" doesn’t mean "sweetening," but processing that is an integral part of the sound (such as an echo that falls on the beat and therefore changes the rhythmic characteristics of a part, distortion that alters the timbre in a radical way, vocoding, etc.).
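To make the Fig. 2 example concrete, here’s a minimal sketch of a similar curve — a gentle low-mid dip plus a lift around 3.7 kHz — using standard RBJ cookbook peaking biquads with scipy. The exact frequencies, gains, and Q values here are illustrative, not a recipe.

```python
# Two peaking-EQ bands (RBJ cookbook biquads): a -2 dB low-mid dip at
# 400 Hz and a +3 dB definition lift at 3.7 kHz. Values are illustrative.
import numpy as np
from scipy.signal import sosfilt

def peaking_sos(f0, gain_db, q, sr):
    """Return one second-order section for an RBJ peaking EQ."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a]
    return np.array(b + den) / den[0]   # normalize so a0 = 1

sr = 44100
sos = np.vstack([
    peaking_sos(400.0, -2.0, 1.0, sr),   # low-mid dip
    peaking_sos(3700.0, +3.0, 1.4, sr),  # definition/presence lift
])

# Example: filter one second of noise standing in for a guitar track.
guitar = np.random.randn(sr) * 0.1
eq_guitar = sosfilt(sos, guitar)
```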
Step 8: Create a Stereo Soundstage

Now place your instruments within the stereo field. Your approach might be traditional (i.e., the goal is to re-create the feel of a live performance) or something radical. Pan mono instruments to a particular location, but avoid panning signals to the extreme left or right. For some reason they just don’t sound quite as substantial as signals that are a little bit off from the extremes. Fig. 3 shows the Console view from Sonar. Note that all the panpots are centered, as recommended in step 5, prior to creating a stereo soundstage.

Fig. 3: When you start a mix, setting all the panpots to mono can pinpoint sounds that interfere with each other; you might not notice this if you start off with stereo placement.

As bass frequencies are less directional than highs, place the kick drum and bass toward the center. Take balance into account; for example, if you’ve panned the hi-hat (which has a lot of high frequencies) to the right, pan a tambourine, shaker, or other high-frequency sound somewhat to the left. The same concept applies to midrange instruments as well.

Signal processing can create a stereo image from a mono signal. One method uses time delay processing, such as stereo chorusing or short delays. For example, if a signal is panned to the left, feed some of this signal through a short delay and send its output to another channel panned to the right (a minimal sketch of this trick appears after Step 10). However, it’s vital to check the signal in mono at some point, as mixing the delayed and straight signals may cause phase cancellations that aren’t apparent when listening in stereo.

Stereo placement can significantly affect how we perceive a sound. Consider a doubled vocal line, where a singer sings a part and then doubles it as closely as possible. Try putting both voices in opposite channels; then put both voices together in the center. The center position gives a somewhat smoother sound, which is good for weaker vocalists. The opposite-channel vocals give a more defined, distinct sound that can really help spotlight a good singer.

Step 9: Make Any Final Changes to the Arrangement

Minimize the number of competing parts to keep the listener focused on the tune, and avoid "clutter." You may be extremely proud of some clever effect you added, but if it doesn’t serve the song, get rid of it. Conversely, if you find that a song needs some extra element, this is your final opportunity to add an overdub or two. Never fall in love with your work until it’s done; maintain as much objectivity as you can.

You can also use mixing to modify an arrangement by selectively dropping out and adding specific tracks. This type of mixing is the foundation for a lot of dance music, where you have looped tracks that play continuously, and the mixer sculpts the arrangement by muting parts and doing major level changes.

Step 10: Audio Architecture

Now that we have our tracks set up in stereo, let’s put them in an acoustical space. Start by adding reverberation and delay to give the normally flat soundstage some acoustic depth. Generally, you’ll want an overall reverb to create a particular type of space (club, concert hall, auditorium, etc.) but you may also want to use a second reverb to add effects, such as a gated reverb on toms. But beware of situations where you have to drench a sound with reverb to have it sound good. If a part is questionable enough that it needs a lot of reverb, redo the part.
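Here’s the promised sketch of the short-delay widening trick from Step 8, again assuming numpy and soundfile with a hypothetical mono source file. It pans the dry track left, sends a roughly 15 ms delayed copy right, then folds the result back to mono so you can hear the comb-filter coloration the text warns about.

```python
# Mono-to-stereo widening via a short delay, plus a mono fold-down check.
import numpy as np
import soundfile as sf

audio, sr = sf.read("guitar_mono.wav")   # hypothetical mono track
if audio.ndim == 2:
    audio = audio.mean(axis=1)           # fold to mono if needed
delay = int(0.015 * sr)                  # ~15 ms

left = audio
right = np.concatenate([np.zeros(delay), audio])[: len(audio)]
stereo = np.column_stack([left, right])
sf.write("guitar_wide.wav", stereo, sr)

# Mono compatibility check: the delayed copy comb-filters the original,
# which you won't hear in stereo but may hear clearly summed to mono.
mono = 0.5 * (left + right)
sf.write("guitar_wide_mono_check.wav", mono, sr)
```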
Step 11: Tweak, Tweak, and Re-Tweak

Now that the mix is on its way, it’s time for fine-tuning. If you use automated mixing, start programming your mixing moves. Remember that all of the above steps interact, so go back and forth between EQ, levels, stereo placement, and effects. Listen as critically as possible; if you don’t fix something that bothers you, it will forever haunt you every time you hear the mix. While it’s important to mix until you’re satisfied, it’s equally important not to beat a mix to death. Quincy Jones once offered the opinion that recording with synthesizers and sequencing was like "painting a 747 with Q-Tips."

A mix is a performance, and if you overdo it, you’ll lose the spontaneity that can add excitement. You can also lose that "vibe" if you get too detailed with any automation moves. A mix that isn’t perfect but conveys passion will always be more fun to listen to than one that’s perfect to the point of sterility. As insurance, don’t always erase your old mixes—when you listen back to them the next day, you might find that an earlier mix was the "keeper." In fact, you may not even be able to tell too much difference between your mixes. A veteran record producer once told me about mixing literally dozens of takes of the same song, because he kept hearing small changes which seemed really important at the time. A couple of weeks later he went over the mixes, and couldn’t tell any difference between most of the versions. Be careful not to waste time making changes that no one, not even you, will care about a couple of days later.

Step 12: Check Your Mix Over Different Systems

Before you sign off on a mix, check it over a variety of speakers and headphones, in stereo and mono, and at different levels. The frequency response of the human ear changes with level (we hear less of the highs and lows at lower levels), so if you listen only at lower levels, mixes may sound bass-heavy or too bright at normal levels. Go for an average that sounds good on all systems. With a home studio, you have the luxury of leaving a mix and coming back to it the next day when you’re fresh, after you’ve had a chance to listen over several different systems to decide if any tweaks need to be made. One common trick is to run off some reference CDs and see what they sound like in your car. Road noise will mask any subtleties, and give you a good idea of what elements "jump out" of the mix. I also recommend booking some time at a pro studio to hear your mixes. If the mix sounds good under all these situations, your mission is accomplished.

Craig Anderton is Executive Editor of Electronic Musician magazine and Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
2. Why be normal? Use your footpedal to control parameters other than volume and wah

By Craig Anderton

A lot of guitar hardware multieffects, like the Line 6 POD HD500, Roland ME-70, DigiTech iPB-10 and RP1000, Vox ToneLab ST, and Zoom G3X (Fig. 1) have a footpedal you can assign to various parameters.

Fig. 1: Many multieffects, like Zoom's G3X, have built-in pedals.

However, if a unit doesn't have a built-in pedal, it may have an expression pedal jack so you can still use a pedal with the effects. If you're into amp sims, you're covered there too: Native Instruments' Rig Kontrol has a footpedal you can assign to any amp sim's parameters, and IK Multimedia's StealthPedal (Fig. 2) also works as a controller for amp sim software, not just IK's own AmpliTube.

Fig. 2: IK's StealthPedal isn't only a controller, but includes jacks for plugging in a second expression pedal, as well as a dual footswitch.

In most multieffects, volume and wah are the no-brainer, default pedal assignments. However, there are a whole lot of other parameters that are well-suited to pedal control. Assigning them to a pedal can add real-time expressiveness to your playing, and variety to your sound.

ASSIGNING PEDALS TO PARAMETERS

Some multieffects make this process easy: They have patches pre-programmed to work with their pedals. But sometimes the choices are fairly ordinary and besides, the manufacturer's idea of what you want to do may not be the same as what you want to do. So, it pays to spend a little time digging into the manual so you can figure out how to assign the pedal to any parameter you want.

Effects with a computer interface are usually the easiest for making assignments, and they're certainly easiest to show in an article due to the ease of taking screen shots. For example, with DigiTech's iPB-10, you can use the iPad interface to assign the expression pedal to a particular parameter. In Fig. 3, the pedal has been assigned to the Screamer effect Drive parameter.

Fig. 3: The iPB-10 pedal now controls the Screamer effect's Drive parameter.

Note that you can set a minimum and maximum value for the pedal range; in this case, it's 8 and 58 respectively. The next example shows the POD HD500 Edit program, set to the Controllers page. Here, the EXP-1 (main expression pedal) controller has been assigned to delay Feedback (Fig. 4).

Fig. 4: It's easy to assign the HD500's pedal to various parameters using the POD HD500 Edit program.

Note that like the iPB-10, you can set minimum and maximum values for the pedal range.

Most amp sims have a "Learn" option. For example, with Guitar Rig, you can control any parameter by right-clicking on it and selecting "Learn" (Fig. 5).

Fig. 5: The Chorus/Flanger speed control is about to "learn" the controller to which it should respond, like a pedal that generates MIDI controller data.

With learn enabled, when you move a MIDI controller (like the StealthPedal mentioned previously), Guitar Rig will "learn" that the chosen parameter should respond to that particular controller's motion. Often these assignments are stored with a preset, so the pedal might control one parameter in one preset, and a different parameter in another. (The sketch below shows the simple scaling math behind these pedal-to-parameter assignments.)
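Under the hood, most of these assignments boil down to scaling an incoming controller value (a MIDI expression pedal sends CC values from 0 to 127) onto the parameter's min/max range, like the 8-to-58 Drive range in the iPB-10 example above. Here's a minimal, hypothetical sketch of that mapping; the values are just examples, not any particular unit's internals.

```python
# Map a MIDI expression pedal (CC value 0-127) onto a parameter range,
# like the Drive min/max of 8 and 58 in the iPB-10 example above.
def pedal_to_param(cc_value, param_min, param_max):
    cc_value = max(0, min(127, cc_value))   # clamp to the MIDI CC range
    span = param_max - param_min
    return param_min + span * cc_value / 127.0

# Heel down, halfway, toe down:
for cc in (0, 64, 127):
    print(cc, "->", round(pedal_to_param(cc, 8, 58), 1))
# Prints: 0 -> 8.0, 64 -> 33.2, 127 -> 58.0
```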
THE TOP 10 PEDAL TARGETS

Now that we've covered how to assign a controller to parameters, let's check out which parameters are worth controlling. Some parameters are a natural for foot control; here are ten that can make a big difference to your sound.

Distortion drive

This one's great with guitar. Most of the time, to go from a rhythm to lead setting you step on a switch, and there's an instant change. Controlling distortion drive with a pedal lets you go from a dirty rhythm sound to an intense lead sound over a period of time. For example, suppose you're playing eighth-note chords for two measures before going into a lead. Increasing distortion drive over those two measures builds up the intensity, and slamming the pedal full down gives a crunchy, overdriven lead.

Chorus speed

If you don't like the periodic whoosh-whoosh-whoosh of chorus effects, assign the pedal so that it controls chorus speed. Moving the pedal slowly and over not too wide a range creates subtle speed variations that impart a more randomized chorus effect. This avoids having the chorus speed clash with the tempo.

Echo feedback

Long, languid echoes are great for accenting individual notes, but get in the way during staccato passages. Controlling the amount of echo feedback lets you push the number of echoes to the max when you want really spacey sounds, then pull back on the echoes when you want a tighter, more specific sound. Setting echo feedback to minimum gives a single slapback echo instead of a wash of echoes.

Echo mix

Here's a related technique where the echo effect uses a constant amount of feedback, but the pedal sets the balance of straight and echoed sounds. The main differences compared to the previous effect are that when you pull back all the way on the pedal, you get the straight signal only, with no slapback echo; and you can't vary the number of echoes, only the relative volume of the echoes.

Graphic EQ boost

Pick one of the midrange bands between 1 and 4 kHz to control. Adjust the scaling so that pushing the pedal all the way down boosts that range, and pulling the pedal all the way back cuts the range. For solos, boost for more presence, and during vocals, cut to give the vocals more "space" in the frequency spectrum.

Reverb decay time

To give a "splash" of reverb to an individual note, just before you play the note push the pedal down to increase the reverb decay time. Play the note, and it will have a long reverb tail. Then pull back on the pedal, and subsequent notes will have the original, shorter reverb setting. This works particularly well when you want to accent a drum hit.

Pitch transposer pitch

For guitarists, this is like having a "whammy bar" on a pedal. The effectiveness depends on the quality of the pitch transposition effect, but the basic idea is to set the effect for pitch transposed sound only. Program the pedal so that when it's full back, you hear the standard instrument pitch, and when it's full down, the pitch is an octave lower. This isn't an effect you'd use every day, but it can certainly raise a few eyebrows in the audience as the instrument's pitch slips and slides all over the place. By the way, if the non-transposed sound quality is unacceptable, mix in some of the straight sound (even though this dilutes the effect somewhat).

Pitch transposer mix

This is a less radical version of the above. Program the transposer for the desired amount of transposition – octaves, fifths, and fourths work well (the sketch after this item shows the frequency ratios involved) – and set the pedal so that full down brings in the transposed line, and full back mixes it out. Now you can bring in a harmony line as desired to beef up the sound. Octave lower transpositions work well for guitar/bass unison effects, whereas intervals like fourths and fifths work best for spicing up single-note solos.
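As a quick aside on the arithmetic behind those intervals: a shift of n semitones corresponds to a frequency ratio of 2^(n/12). A tiny sketch:

```python
# Frequency ratios for common transposition intervals: 2 ** (semitones / 12).
def semitones_to_ratio(semitones):
    return 2.0 ** (semitones / 12.0)

for name, st in [("octave down", -12), ("fourth up", 5),
                 ("fifth up", 7), ("octave up", 12)]:
    print(f"{name:12s} ({st:+3d} st): ratio {semitones_to_ratio(st):.3f}")
# octave down 0.500, fourth up 1.335, fifth up 1.498, octave up 2.000
```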
Parametric EQ frequency

The object here is to create a wah pedal effect, although with a multieffects, you have the option of sweeping a much wider range if desired. Set up the parametric for a considerable amount of boost (start with 10 dB), narrow bandwidth, and initially sweep the filter frequency over a range of about 600 Hz to 1.8 kHz. Extend this range if you want a wider wah effect. Increasing the amount of boost increases the prominence of the wah effect, while narrowing the bandwidth creates a more intense, "whistling" wah sweep.

Increasing the output of anything (e.g., input gain, preamp, etc.) before a compressor

This allows you to control your instrument's dynamic range; pulling back on the pedal gives a less compressed (wide dynamic range) signal, while pushing down compresses the signal. This restricts the dynamic range and gives a higher average signal level, which makes the sound "jump out." Also note that when you push down on the pedal, the dynamics will change so that softer playing will come up in volume. This can make a guitar seem more sensitive, as well as increase sustain and make the distortion sound smoother.

And there you have the top ten pedal targets. There are plenty of other options just waiting to be discovered—so put your pedal to the metal, and realize more of the potential in your favorite multieffects or amp sim.

Craig Anderton is Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
3. It's possible to get "hot" masters without losing dynamics

by Craig Anderton

I was driving along one of those Floridian roads that goes between the coasts, and is flatter than the Spice Girls without auto-tuning . . . in other words, a perfect place to crank up my car's CD player. As it segued from a recent CD into Simple Minds' "Real Life," which I hadn't heard in quite a while, I noticed it was somewhat quieter, so I turned up the volume. And in the process, I got to experience dynamics—like they used to have on CDs back in the 80s.

Much has been said about the evils of overcompression, but we're so used to it that sometimes you need to hear great music, intelligently mixed without excessive compression, to remember what we're missing. Dynamics are an essential component of a tune's overall emotional impact. Yet some engineers kill those dynamics, because "everyone else does it," and they don't want their songs to sound "weak" compared to others. So we're stuck in a rut where each song has to be louder than the last one—listener fatigue, anyone? I sometimes wonder if the decline in sales of recorded music has something to do with today's mastering style, which makes music that although loud, is ultimately not that much fun to listen to.

So what's an engineer to do? Compromise—find that sweet spot where you preserve a fair amount of dynamics, but also have a master that's loud enough to be "in the ballpark" of today's music. The following tips are designed to help you do just that. Maybe your tune won't be quite as loud as everyone else's, but I bet it will elicit a more emotional response from those willing to turn up their volume control a bit.

NUKE THE SUBSONICS AND DC OFFSET

Digital audio can record and reproduce energy well below 20Hz from sources like downward transposition/pitch-shifting, and DSP operations that allow control signals (such as fades) to superimpose their spectra onto the audio. While inaudible, these components still take up headroom. You may be able to reclaim a dB or two by simply removing everything below 20Hz (see the sketch below). However, note that if you can find individual tracks that contribute to a subsonics problem and do any needed fixes while mixing (Fig. 1), this eliminates the need to add filtering on the entire tune.

Fig. 1: The low frequencies are being cut at 48dB/octave starting around 30Hz in a Cakewalk Sonar project track, thus eliminating subsonics before they get into the mix.

Another culprit, DC offset, reduces headroom because positive or negative peaks are reduced by the amount of offset. Removing residual DC offset, using the "Remove DC offset" function found in most digital audio editors (Fig. 2) and DAWs, "centers" the waveform around the 0V point. This allows a greater signal level for a given amount of headroom.

Fig. 2: Like many other programs, Sony's Sound Forge includes DSP to remove DC offset.
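Here's a minimal sketch of both cleanups, assuming numpy, scipy, and soundfile (the file names are hypothetical). Subtracting the mean removes residual DC offset, which is essentially what an editor's "Remove DC offset" function does, and a steep Butterworth high-pass clears the subsonics.

```python
# Remove DC offset and filter subsonics (<20 Hz) from a mixed file.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("mix.wav")            # hypothetical stereo mix
audio = audio - np.mean(audio, axis=0)    # center waveform around 0 (DC offset)

# 4th-order Butterworth (24 dB/oct) high-pass at 20 Hz, run forward and
# backward for zero phase -- roughly 48 dB/oct total rejection.
sos = butter(4, 20.0, btype="highpass", fs=sr, output="sos")
cleaned = sosfiltfilt(sos, audio, axis=0)

sf.write("mix_cleaned.wav", cleaned, sr)
```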
DO YOU REALLY NEED MONDO BASS?

As the ear is less responsive to bass frequencies, there's a tendency to crank up the bass, especially among those who lack mixing experience. Reducing bass can open up more headroom for other frequencies. To compensate for this and create the illusion of more bass:

- Use a multiband compressor on just the bass region. The bass will seem as loud, but take up less bandwidth.
- Try the Waves MaxxBass plug-in (Fig. 3; a hardware version is also available), or the Aphex Big Bottom process. MaxxBass isolates the signal's original bass and generates harmonics from it; psycho-acoustically, upon hearing the upper harmonics, your brain "fills in" the bass's fundamental. The Big Bottom process uses a different, but also highly effective, psychoacoustic principle to emphasize bass.

Fig. 3: The Waves MaxxBass isolates the signal's original bass and generates harmonics from it. You can then adjust the blend of the original bass with the bass contributed by the MaxxBass.

FIND/SQUASH PEAKS THAT ROB HEADROOM

Another issue involves peak vs. average levels. To understand the difference, consider a drum hit. There's an initial huge burst of energy (the peak) followed by a quick decay and reduction in amplitude. You will need to set the recording level fairly low to make sure the peak doesn't cause an overload. As a result, there's a relatively low average energy. On the other hand, a sustained organ chord has a high average energy. There's not much of a peak, so you can set the record level such that the sustain uses up the maximum available headroom.

Entire tunes also have moments of high peaks, and moments of high average energy. Suppose you're using a hard disk recorder, and playing back a bunch of tracks. Of course, the stereo output meters will fluctuate, but you may notice that at some points, the meters briefly register much higher than for the rest of the tune. This can happen if, for example, several instruments with loud peaks hit at the same time, or if you're using lots of filter resonance on a synth, and a note falls within that resonant peak. If you set levels to accommodate these peaks, then that reduces the song's average level. You can compensate for this while mastering by using limiting or compression, which brings the peaks down and raises the softer parts. However, if you instead reduce these peaks during the mixing process, you'll end up with a more natural sound because you won't need to use as much dynamics processing while mastering.

The easiest way to do this is as you mix, play through the song until you find a place where the meters peak at a significantly higher level than the rest of the tune. Loop the area around that peak, then one by one, mute individual tracks until you find the one that contributes the most amount of signal. For example, suppose a section peaks at 0dB. You mute one track, and the peak goes to -2dB. You mute another track, and the section peaks at -1dB. You now mute a track and the peak hits -7dB. Found it! That's the track that's putting out the most amount of energy.

Referring to Fig. 4, zoom in on the track, and use automation or audio processing to insert a small dip that brings the peak down by a few dB. Now play that section again, make sure it still sounds okay, and check the meters. In our example above, that 0dB peak may now hit at, say, -3dB. Proceed with this technique through the rest of the tune to bring down the biggest peaks (the sketch below shows one way to locate them). If peaks that were previously pushing the tune to 0 are brought down to -3dB, you can now raise the tune's overall level by 3dB and still not go over 0. This creates a tune with an average level that's 3dB hotter, without having to use any kind of compression or limiting.
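Locating those rogue peaks is easy to automate. Here's a minimal sketch, assuming numpy and soundfile and a hypothetical file name, that prints the times of the loudest sample peaks so you know where to loop playback and start muting tracks.

```python
# Print the times and levels of the biggest sample peaks in a mix.
import numpy as np
import soundfile as sf

audio, sr = sf.read("rough_mix.wav")      # hypothetical file
if audio.ndim == 1:
    audio = audio[:, None]                # treat mono as one channel
level = np.abs(audio).max(axis=1)         # peak of any channel, per sample

# Report the top five peaks, blanking half a second around each find
# so we don't report the same hit repeatedly.
for _ in range(5):
    i = int(np.argmax(level))
    db = 20 * np.log10(max(level[i], 1e-12))
    print(f"peak at {i / sr:7.2f} s: {db:+.1f} dBFS")
    level[max(0, i - sr // 2): i + sr // 2] = 0.0
```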
Fig. 4: (A) shows the original signal. In (B), the highest peak has been located and is about to be attenuated by 3dB using Steinberg Cubase's Gain function. (C) shows what happens after attenuation—it's now only a little higher than the other peaks. In (D), the overall signal has been normalized up to 0.00dB. Note how the signal has a higher average level than in (A)—all the other peaks are higher than they were before—but there was no need to use traditional dynamics processing.

CHEAT!

The ear is most sensitive in the 3-4 kHz range, so use EQ (Fig. 5) to boost that range by a tiny amount, especially in quiet parts.

Fig. 5: A very broad, 0.5dB boost has been added at 3.2kHz in iZotope's Ozone 5.

The tune will have more presence and sound louder. But be extremely careful, as it's easy to go from teeny boost to annoying stridency. Even 1dB of boost may be too much.

If you still need something slightly hotter, bring on a level maximizer or high-quality multiband compressor. However, by implementing the level maximizing tricks mentioned above, you won't need to add much dynamics processing. If you've been adding, for example, four to six dB of maximization, you may be able to get equally satisfying results with only one or two dB of maximization, thus squashing only the highest peaks while leaving everything else pretty much intact.

A final consideration involves mastering for the web. While some engineers add massive amounts of compression to audio that will be streamed, in practice data compression allows for a reasonable amount of dynamics. If you're streaming audio, then the sound quality is already taking quite a hit, so preserving dynamics can help make the music sound at least a little bit more natural. If you work with streaming audio, try the techniques mentioned above instead of heavy squashing, so you can judge whether the resulting sound quality is more satisfying overall.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
4. Fix vocal pitch without nasty correction artifacts

by Craig Anderton

The critics are right: pitch correction can suck all the life out of vocals. I proved this to myself accidentally when working on some background vocals. I wanted them to have an angelic, “perfect” quality; as the voices were already very close to proper pitch anyway, I thought just a tiny bit of manual pitch correction would give the desired effect. (Well, that and a little reverb.) I was totally wrong, because the pitch correction took away what made the vocals interesting. It was an epic fail as a sonic experiment, but a valuable lesson because it caused me to start analyzing vocals to see what makes them interesting, and what pitch correction takes away.

And that’s when I found out that the critics are also totally wrong, because pitch correction—if applied selectively—can enhance vocals tremendously, without anyone ever suspecting the sound had been corrected. There’s no robotic quality, it doesn’t steal the vocalist’s soul, and pitch correction can sometimes even add the kind of imperfections that make a vocal sound more “alive.”

This article uses Cakewalk Sonar’s V-Vocal as a representative example of pitch correction software, but other programs like Melodyne (Fig. 1), Waves Tune (Fig. 2), Nectar (Fig. 3), and of course the grand-daddy of them all, Antares Auto-Tune (Fig. 4), all work fairly similarly. They need to analyze the vocal file, after which they indicate the pitches of the notes. These can all be quantized to a particular scale with “looser” or “tighter” correction, and often you can correct timing and formant as well as pitch. But more importantly, with most pitch correction software you can turn off automatic quantizing to a particular scale, and correct pitch with a scalpel instead of a machete. That's the technique we're going to describe here.

Fig. 1: Celemony Melodyne
Fig. 2: Waves Tune LT
Fig. 3: iZotope Nectar's pitch correction module
Fig. 4: Antares Auto-Tune EVO

BEWARE SIGNAL PROCESSING!

Pitch correction works best on vocals that are "raw," without any processing; effects like modulation, delay, or reverb can make pitch correction at best glitchy and at worst, impossible. Even EQ, if it emphasizes the high frequencies, can create unpitched sibilants that confuse pitch correction algorithms. The only processing you should use on vocals prior to employing pitch correction is de-essing, as that can actually improve the ability of pitch correction to do its work. If your pitch correction processor inserts as a plug-in (e.g., iZotope's Nectar), then make sure it's before any other processors in the signal chain.

WHAT TO AVOID

The key to proper pitch correction use is knowing what to avoid, and the prime directive is don’t ever use any of the automatic correction options—unless you specifically want that hard correction, hip-hop vocal effect (in V-Vocal, these are the controls grouped under the “Pitch Correction” or “Formant Control” boxes). Do only manual correction, and then, only if something actually sounds wrong. Avoid any “labor-saving” devices; don’t use options that add LFO vibrato. In V-Vocal, I always use the pencil tool to change or add vibrato. Manual correction takes more effort to get the right sound (and you’ll become best friends with your program’s Undo button), but the human voice simply does not work the way pitch correction software works when it’s on auto-pilot.
By making all your changes manually, you can ensure that pitch correction works with the vocal instead of against it.

DO NO HARM

One of my synth programming “tricks” on choir and voice patches is to add short, subtle upward or downward pitch shifts at the beginning of phrases. Singers rarely go from no sound to perfectly-pitched sound, and the shifts add a major degree of realism to patches. Sometimes I’ll even put the pitch envelope attack time or envelope amount on a controller so I can play these changes in real time. Pitch correction has a natural tendency to remove or reduce these spikes, which is partially responsible for pitch-corrected vocals sounding “not right.” So, it’s crucial not to correct anything that doesn’t need correcting.

Consider the “spikey” screen shot (Fig. 5), bearing in mind that the orange line shows the original pitch, and the yellow line shows how the pitch was corrected.

Fig. 5: The pitch spikes at the beginning of the notes add character, as do the slight pitch differences compared to the “correct” pitch.

Each note attack goes sharp very briefly before settling down to pitch, and “correcting” these removed any urgency the vocal had. Also, all notes except the last one should have been the same pitch. However, the first note being slightly flat, with the next one on pitch (it had originally been slightly sharp), and the next one slightly sharp, added a degree of tension as the pitch increased. This is a subtle difference, but you definitely notice a loss if the difference is “flattened” to the same pitch. In the last section the pitch center was a little flat; raising it up to pitch let the string of notes resolve to something approximating the correct pitch, but note that all the pitch variations were left in and only the pitch center was changed. The final note’s an interesting case: It was supposed to be a full tone above the other notes, but the orange line shows it just barely reached pitch. Raising the entire note, and letting the peak hit slightly sharp, gave the correct sense of pitch while the slight “overshoot” added just the right amount of tension.

VIBRATO

Another problem is where the vibrato “runs away” from the pitch, and the variations become excessive. Fig. 6 shows a perfect example of this, where the final held note was at the end of a long phrase, and I was starting to run out of breath. Referring to the orange line, I came in sharp, settled into a moderate but uneven vibrato, but then the vibrato got out of control at the end.

Fig. 6: Re-drawing vibrato retains the voice’s human qualities, but compensates for problems.

Bearing in mind the comments on pitch spikes, note that I attenuated the initial spike a bit but did not flatten it to pitch. Next came re-drawing the vibrato curve for more consistency. It’s important to follow the excursions of the original vibrato for the most natural sound. For example, if the original vibrato went up too high in pitch, then the redrawn version should track it, and also go up in pitch—just not as much. As soon as you go in the opposite direction, the correction has to work harder, and the sound becomes unnatural. This emphasizes the need to use pitch correction to repair, not replace, troublesome sections. Also note that at the end, the original pitch went way flat as I ran out of breath. In the corrected version, the vibrato goes subtly sharp as the note sustains—this adds energy as you build to the next phrase. Again, you don’t hear it as “sharp,” but you sense the psycho-acoustic effect. (A tiny sketch of this “track it, but less” idea follows.)
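The common thread in these fixes is partial correction: pull the pitch toward the target while keeping a fraction of the original excursions. Here's a tiny illustrative numpy sketch of that idea — not how V-Vocal or any particular plug-in implements it — operating on a pitch curve expressed in cents.

```python
# Partial pitch correction: move a pitch curve toward the target note,
# but keep some of the original movement so it still sounds human.
import numpy as np

def soften_pitch(pitch_cents, target_cents, strength=0.6):
    """strength=1.0 flattens to the target (the 'robot' sound);
    strength=0.0 leaves the performance untouched."""
    pitch = np.asarray(pitch_cents, dtype=float)
    return target_cents + (1.0 - strength) * (pitch - target_cents)

# A note sung ~30 cents flat, with a sharp spike at the attack and
# a wide 5.5 Hz vibrato (all values invented for illustration):
t = np.linspace(0, 1, 100)
sung = -30 + 60 * np.exp(-t * 20) + 25 * np.sin(2 * np.pi * 5.5 * t)

fixed = soften_pitch(sung, target_cents=0.0, strength=0.6)
# The attack spike and vibrato survive at 40% of their depth, while
# the flat pitch center moves 60% of the way to the correct note.
```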
MAJOR FIXES

Sometimes a vocal can be perfect except for one or two notes that are really off, and you’re loath to punch in a fix. V-Vocal can do drastic fixes, but you’ll need to “humanize” them for best results. In the before-and-after screen shot (Fig. 7), the pitch dropped like a rock at the end of the first note, then overshot the pitch for the second note, and finally the vibrato fell flat (literally). The yellow line in the top image shows what typical hard pitch correction would do—flatten out both notes to pitch. On playback, this indeed exhibited the “robot” vibe, although at least the pitches were now correct.

Fig. 7: The top image shows a hard-corrected vocal, while the lower image shows it after being “humanized.”

The lower image shows how manual re-drawing made it impossible to tell the notes had been pitch-corrected. First, never have a 90 degree pitch transition; voices just don’t do that. Rounding off transitions prevents the “warbling” hard correction sound. Also note that again, the pitch was re-drawn to track the original pitch changes, but less drastically. Be aware that often, the “wrong” singing is instinctively right for the song, and restoring some of the “wrongness” will enhance the song’s overall vibe.

Shifting pitch will also change the formant, with greater shifts leading to greater formant changes. However, even small changes may sound wrong with respect to timbre. Like many pitch correction programs, V-Vocal also lets you edit the formant (i.e., the voice’s characteristic timbre). When you click on V-Vocal’s F button, you can adjust formant as easily as pitch (Fig. 8).

Fig. 8: The formant frequency has been raised somewhat to compensate for the downward timbre shift caused by fixing the pitch.

In the screen shot with formant editing, the upper image shows that the vibrato was not only excessive, but its pitch center was higher than the pitch-corrected version. The lower pitch didn’t exactly give a “Darth Vader” timbre, but didn’t sound right in comparison to the rest of the vocal. The lower image shows how the formant frequency was raised slightly. This offset the lower formant caused by pitch correction, and the vocal’s timbre ended up being consistent with the rest of the part.

A REAL-WORLD EXAMPLE

To hear these kinds of pitch correction techniques—or more accurately, to hear a song using the pitch correction techniques where you can’t hear that there’s pitch correction—check out the following music video. This is a cover version of forumite Mark Longworth’s “Black Market Daydreams” (a/k/a MarkydeSad and before that, Saul T. Nads), and there’s quite a bit of touch-up on my vocals. But listen, and I think you’ll agree that pitch correction doesn’t have to sound like pitch correction.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
5. The scoop on making loops—in 11 steps

By Craig Anderton

Your drummer just came up with the rhythm pattern of a lifetime, or your guitarist played a rhythm guitar hook so infectious you think you might need to soak the studio in Clorox. And you want to use these grooves throughout a song, while cutting some great vocals on top. There’s something about a loop that isn’t the same as the part played over and over again . . . and vice-versa. Sometimes you want to maintain the human variations that occur from measure to measure, but sometimes you want consistent, hypnotic repetition. When it’s the latter, here’s how to create a loop—from start to finish.

1 CHOOSE YOUR PITCH

If you plan to use a loop in different keys, realize that pitch transposition places more demands on a stretching algorithm than time stretching. One solution is to record the loop in two or more keys. Most stretch algorithms can handle three semitones up and down without sounding too unnatural. So, when I was recording loops for the “AdrenaLinn Guitars” loop library, I played each loop in E (to cover the range D–G) and Bb (for G#–C#). In cases where it wasn’t possible to obtain the same chord voicing in the two keys, I used DSP-based time stretching to create the alternate version. This feature is available in several programs, and while files processed with DSP aren’t stretchable, the sound quality is good enough that you can create a loop from the transposed version.

2 PLAY AGAINST A BACKING TRACK

One of the easiest ways to create a loop involves grabbing part of a track from a multitrack recording. But when creating a loop from scratch, it’s difficult to give a good performance if you’re playing solo. Create a MIDI backing track to play against, and you’ll have a better feel.

3 RECORD AT A SLOWER TEMPO

Stretched files sound better when sped up than slowed down, because it’s easier to remove audio and make a loop shorter than try to fill in the gaps caused by lengthening audio. Set the tempo for the right feel, and practice until you nail the part. But before hitting record, slow the tempo down (this is why I recommend a MIDI backing track—not only is it easy to change tempo, you can transpose pitch as needed, and quantize so you have a rhythmic reference). Typically, an Acidized/Apple Loops or REX loop can stretch (if properly sliced and edited) over a range of about –15% to +60% or higher. So, a 100 BPM loop will be viable from about 85 BPM to over 160 BPM (the sketch after step 4 shows the arithmetic). For really downtempo material, like some hip-hop and chill, consider cutting at 70 or 80 BPM instead. As a bonus, you may find it easier to play the part more precisely at the slower tempo; also, any timing errors will become less significant, the more you speed up the loop.

4 TO SWING OR NOT TO SWING

There are two opposing viewpoints about whether to incorporate swing and other “grooves” in a loop, or go for rhythmic rigidity. Some feel that if a loop wants to swing, let it. Unless it has a huge swing percentage, it will usually play okay against something recorded without swing. However, modern DAWs often let you apply swing and groove templates to audio, so there’s a trend toward recording loops to a rhythmic grid so they can be modified within the DAW for swing and other grooves.
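To make step 3's arithmetic explicit: if a properly sliced loop stretches about –15% to +60%, you can compute the usable tempo range for any recording tempo. A minimal sketch:

```python
# Usable tempo range for a loop, assuming roughly -15% / +60% stretchability.
def tempo_range(recorded_bpm, down=0.15, up=0.60):
    return recorded_bpm * (1 - down), recorded_bpm * (1 + up)

for bpm in (70, 80, 100):
    lo, hi = tempo_range(bpm)
    print(f"recorded at {bpm} BPM -> usable from ~{lo:.0f} to ~{hi:.0f} BPM")
# 100 BPM gives ~85 to ~160 BPM, matching the range mentioned above.
```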
5 HOW MANY MEASURES?

Although quite a few loops are one measure long, two-measure loops “breathe” better—the first measure is tension, the second is release. Four-measure loops work well for sounds that evolve over time. Eight- or sixteen-measure loops are more like “construction kits” which you can use in their entirety, but from which you can also extract pieces. It’s easy to shorten a long loop. For example, if you create a four-measure loop that builds over four measures but want to build over eight measures instead, split the loop in the middle, repeat the first two measures twice (to provide the first four measures), then play the full four-measure loop to complete the eight-measure figure (Fig. 1).

Fig. 1: If you make a long loop, you can always cut it into smaller pieces. In this example using Sony Acid Pro 7, the original four-measure loop goes from measure 5 to measure 9. But its first two measures have been copied and pasted in measures 1 and 2, as well as measures 3 and 4.

6 CUTTING THE LOOP

One of the best ways to create loops is to record for several minutes, so you have a choice of performances. Most DAWs let you create a loop bracket and slide it around to isolate particular portions of the track. You can also experiment with changing the loop length—you might find that what you thought would be a one-measure loop works well as a two- or four-measure loop, which gives some subtle, internal variations. After deciding on the optimum length, use the loop brackets to zero in on the best looping candidates.

Say you’re recording rhythm guitar. Solo the track, and listen to the entire rhythm guitar part. Mark off regions (based on the number of measures you want to use) that would make the best loops. After locating the best one, cut the beginning and end to the beat. With human-played loops, neither the beginning nor end will likely land exactly on the beat. Zoom in on the loop beginning, and slide the track so that the loop’s beginning lands exactly at the beginning of a measure. Snap the cursor to the measure beginning, and do a split or cut. You’ll also need to cut at the end of a measure; if the loop extends past the measure boundary or ends before it by a little bit, turn off snap and cut at the end of the loop. Then turn snap back on, and use the DAW’s DSP stretching function to drag the end of the loop to the measure boundary. How to do this varies depending on the program, but it generally involves click-dragging the edge of the audio while holding down a modifier key, like Ctrl or Alt.

If you hear a click when the loop repeats because there’s a level shift between the loop start and end, add a very short (3-10 ms) fade at the loop start and end (see the sketch below).
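Here's a minimal numpy sketch of those boundary fades — a few milliseconds of fade-in and fade-out so the loop start and end meet at near-zero level and the repeat point doesn't click. The file name and fade length are illustrative.

```python
# Add short fades at a loop's start and end to kill clicks at the loop point.
import numpy as np
import soundfile as sf

loop, sr = sf.read("drum_loop.wav")     # hypothetical mono or stereo loop
n = int(0.005 * sr)                     # 5 ms, within the 3-10 ms range above

ramp = np.linspace(0.0, 1.0, n)
if loop.ndim == 2:                      # broadcast over channels if stereo
    ramp = ramp[:, None]
loop[:n] *= ramp                        # fade in
loop[-n:] *= ramp[::-1]                 # fade out

sf.write("drum_loop_faded.wav", loop, sr)
```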
7 PROS AND CONS OF AUDIO QUANTIZATION

Now scan the loop for note attack transients and see if they line up properly with note divisions. Small timing differences are not a problem and, if done musically (e.g., a snare on a loop’s last beat hits just a shade late), will enhance the loop. But if a note is objectionably late or early, you can use an audio quantization function (like Ableton’s Warp as shown in Fig. 2, Sonar AudioSnap, Cubase Multitrack Quantization, and the like) to quantize the audio.

Fig. 2: The upper waveform in Ableton Live has warp markers circled that mark the beginning of a transient, but which aren’t aligned to the beat. The lower waveform shows the results of moving the warp markers on to the beat.

If this degrades the fidelity, another option is to isolate the section that needs to be shifted by splitting at the beginning and end, then sliding the attack into place. If this opens a problematic gap between the end of the note you moved and the beginning of the next note, try the following:

- Add a slight fade to the first note so it glides more elegantly into the gap.
- Copy a portion of the first note’s decay, and crossfade it with the note end to cover the gap.
- Use DSP stretching to extend the decay of the note you moved forward in time.

If the note was early and you shifted it later, then there will be a gap after the previous note, and the end of the note you moved might overlap the next note. If the gap is noticeable, deal with it as described above. As to the end, either:

- Shorten it so it butts up against the beginning of the next note.
- Crossfade it with the next note’s beginning if there’s no strong attack.

If you’ve edited the loop, you’ll need to make it one file again. Bounce the region containing the loop to another track, bounce into the same “clip,” or export it and bring it back into the project.

8 CONSIDER SOME PROCESSING

A “dry” loop is the most flexible — if you add reverb, then the stretching process has to deal with that. Cut a dry loop instead, and add reverb once the loop is in your DAW. If an effect such as tempo-synced delay is an integral part of the loop, embed the effect in the file for a “plug and play” loop. Otherwise, add the effect during playback.

Some people “master” their loops with compression and EQ so the loops really jump out. But when you record other tracks (vocals, piano, etc.) then master the song, if you want to squash the dynamics a bit, then the loop dynamics will be super-squashed, and if you add a bit of brightness, the loop will shatter glass. If there are response anomalies I’ll often add a little EQ, and just enough limiting to tame any rogue peaks, but that’s it. Loops fit better in the track that way, and are more workable when it’s time to mix and master. You can always add processing more easily than you can take it away.

9 CHOOSE YOUR STRETCH METHOD

The three main stretchable audio formats are Acidized WAV files, Apple Loops, and REX files. REX files are arguably the most universally recognized, with Acidized WAV files a close second. Mac programs generally recognize Apple Loops, but few Windows programs do. Several programs on both platforms recognize Acidized files.

Different formats are best for different types of audio. REX files are optimum for percussive audio, as long as prominent sounds don’t decay over other sounds (e.g., a cymbal that lasts for a measure sounding at the same time as a 16th-note hi-hat pattern). A single-note bass line or simple drum part is the ideal candidate for REXing. WAV and Apple Loops aren’t always as good for percussive sounds as REX files, but are better with everything else—particularly sustained sounds.

Your software will likely influence your choice. Apple’s Apple Loops Utility (Fig. 3) is a free program for creating Apple Loops; you’ll need either Sony Acid or Cakewalk Sonar to Acidize WAV files. To create REX files, you’ll need Propellerhead Software’s ReCycle program.

Fig. 3: The Apple Loops Utility is a free program that allows optimizing the stretching characteristics of AIFF or WAV files, as well as tagging them for database retrieval.

10 CREATE AN ACIDIZED OR APPLE LOOPS VERSION

Acidized and Apple Loops are structurally quite similar, and the techniques that help turn a file into a stretchable loop are similar.
Basically, there need to be transient markers at the beginning of each attack transient to turn the loop into a series of slices, each of which represents a distinct “blob” of sound (e.g., kick+snare, bass note, or whatever). The programs themselves take an educated guess as to where these transients need to go, but manual optimization is almost always necessary to create a loop that stretches over the widest possible range (a crude sketch of this kind of automatic transient detection appears after step 11). A non-optimized file will cause artifacts when stretched (e.g., doubled attack transients that sound like “flamming,” and/or a loss of some of the fullness from percussion). Optimization (Fig. 4) involves several steps.

Fig. 4: The upper waveform shows an untweaked version of a difficult-to-Acidize file in Sonar’s Loop Construction window. The lower waveform has been optimized—the markers with the purple handles have been either moved from their original positions, or added.

- Existing strong transients should all have a marker at the transient’s precise beginning. Zoom in if needed to see the transient.
- Secondary transients, such as those caused by a delay or flam, should have markers as well.
- Remove spurious markers (i.e., they don’t fall on transients) as they can degrade the sound.
- With sustained material, add a transient marker at a rhythmic interval like a quarter note or eighth note. This tells the DSP to create a crossfade to help make a more seamless transition; putting it on a beat means that other sounds will likely mask any sonic discontinuities that may result from the stretching process. If you hear a “fluttering” effect during sustained notes, try adding another marker in the middle of the note. Sometimes adding a marker at the end of a note’s decay prevents roughness toward the note’s end.
- Enter the root key for pitched loops. This allows the loop to follow key changes in the host program. For percussive parts, specify no root key so that only the tempo changes.

Transients are not always obvious. For example, a tom fill and cymbal crash might play simultaneously at the end of a drum loop, so you can’t see the individual tom transients. Listen to the part: If there’s a hit every 16th note, then just place a marker at every 16th note. If it’s mostly 16th notes but there are some hits that extend over an 8th note, add markers for the 16th notes but omit them for the sections that are longer.

11 REX FILE TIPS

If you want to create a REX file, import the loop into ReCycle. The basic principles of good stretching are the same as for Acidized/Apple Loops files—you want to identify where transients fall—but with REX files these are hard cuts (Fig. 5), not just markers for the DSP to reference. Creating good REX files is an art in itself that goes beyond the scope of this article, but the tips given above regarding Acidization and Apple Loops should help considerably.

Fig. 5: Once imported into ReCycle, you add markers at transients (indicated with the inverted triangles or lock icons) to create “slices.” The marker that splits the second chord in half is there for a reason—there are two eighth note chords played in quick succession. Even though you can’t see the transient that marks the beginning of the second chord, it still needs to be marked so that it plays back at the right time.

If you followed the above directions and optimized your loops, they should work with a variety of material over a wide range of tempos, while fitting perfectly into a song—and that’s what it’s all about.
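As a footnote to the marker-placement advice: the "educated guess" a loop utility makes is a form of transient detection. Here's a deliberately crude, illustrative numpy sketch of energy-based onset detection — real slicers are considerably smarter, but the principle is the same. The file name and threshold are arbitrary.

```python
# Crude energy-based transient detection: flag frames whose level jumps
# well above the previous frame. Real loop tools refine this considerably.
import numpy as np
import soundfile as sf

audio, sr = sf.read("drum_loop.wav")        # hypothetical loop
if audio.ndim == 2:
    audio = audio.mean(axis=1)              # fold to mono for analysis

frame = 256
n_frames = len(audio) // frame
energy = np.array([np.sum(audio[i * frame:(i + 1) * frame] ** 2)
                   for i in range(n_frames)])

jump = energy[1:] / (energy[:-1] + 1e-9)    # frame-to-frame energy ratio
onsets = np.where(jump > 4.0)[0] + 1        # threshold chosen arbitrarily

for i in onsets:
    print(f"possible transient at {i * frame / sr:.3f} s")
```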
Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
6. Does the XP MIDI Port Limitation Still Exist in Windows 7? It Sure Seems that Way . . .

by Craig Anderton

I don't like to write articles that describe what may be a solution to what may be a problem, but I don't really have a choice . . . let me explain.

Windows XP originally had a limit of 10 MIDI ports. If you exceeded that amount, MIDI devices simply wouldn't show up as available in DAWs and other programs. I believe this was eventually increased to 32 ports, but still, if you exceeded the limit you needed to dive into the registry and delete unused or duplicate ports. Part of the problem was from Windows creating duplicate ports if you plugged a USB MIDI device into different USB ports. Remembering to plug a device into the same port each time you used it, and deleting any duplicates, was an easy way to free up ports.

I recently tried installing Korg's KONTROL software and USB-MIDI driver for the nano- and microKEY series devices, and while Korg's driver software showed the devices as existing and connected, the KONTROL software insisted they weren't connected. This seemed like the problem I'd run into before with port limitations, when programs couldn't access something that was connected. Google was of limited help, but the general consensus seemed to be that the port limitation problem still persisted in versions of Windows past XP, even though some thought there was an unlimited number of ports. Who knows? If someone reading this has a definitive answer, let me know so I can update this article.

Anyway, I tried the "XP registry diving" approach, but that didn't work with Windows 7. However, on the Cakewalk forums, I found a very simple batch process that lets you see hidden devices in Device Manager. Simply type the following in Notepad and save it as a .BAT file (e.g., Hidden.BAT):

set devmgr_show_nonpresent_devices=1
start Devmgmt.msc

Right-click on the .BAT file, then choose Run as administrator from the context menu; this opens Device Manager. Go to View > Show hidden devices, then open Sound, Video, and Game Controllers. A little speaker icon to the left of each item will be solid if the device is connected, and grayed out if not. Note that the following picture shows two entries for the Line 6 POD HD500. With the HD500 plugged in, one driver was active, and the other was not. So I right-clicked on the grayed-out driver, and chose Uninstall. A dialog box showed a checkbox for deleting the driver software; I believe you need to leave it unchecked so as not to render the "real" port inoperable.

I found multiple duplicates for multiple pieces of gear, and deleted them. After doing so, the Korg KONTROL software worked perfectly. So while I can't guarantee this is the optimal fix, or even that this is exactly what solved the problem, the problem was nonetheless resolved. Again, let me emphasize this all falls under the "gee, I dunno, I guess it works" category, so I'd welcome any comments from people who have a definitive answer for all this!

Craig Anderton is Editorial Director of Harmony Central and Founding Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
7. Use your iPad to create a reference source for all your gear. It's easy, simple, and free!

by Craig Anderton

Although some people still like printed manuals, it's great that so many manufacturers include PDF files with distribution media, or online as downloadable files. The search function alone makes PDFs handy, but of course, they also save costs and are environmentally responsible (if you really want a paper manual, you can always print out the PDF). With the iPad's ability to conveniently store PDFs in a library, you can gather all this material in one place for easy reference.

If you have an older piece of gear without a PDF manual, scan the pages, then download OpenOffice from www.openoffice.org, a free (and excellent) office suite. You can insert each scanned image as a page within a text document, then export it as a PDF.

THE iPAD CONNECTION

Go to the App Store and download iBooks, a free app that's a host for buying books, but also has the option to store PDFs. There are several ways to transfer PDFs into iBooks; with some PDFs you access online, you'll briefly see an "open in iBooks" option (Fig. 1). If this goes away, tap the document's top right to restore it, and tap "open in iBooks." This stores the manual in iBooks. This isn't just a link to the online doc; if you have no Wi-Fi, you'll still be able to read it.

Fig. 1: If you download a PDF document and can "open in iBooks" (see upper right), that automatically saves the file and makes it available for future reference.

If there is no "open in iBooks" option, or you're grabbing a PDF you made, then email the file to yourself from your computer. Open your email program in the iPad, and download it. When it opens, you'll see the "open in iBooks" option.

EDITING IN iBOOKS

You can move, delete, and otherwise edit how your manuals are arranged. You can also create "Collections" of a particular type of gear, manufacturer, etc. (Fig. 2).

Fig. 2: Tapping the Collections button creates another "bookshelf" you access by swiping.

For example, I created a category for documentation for the Casio XW series of keyboards, including the manuals, appendices of sounds, and MIDI implementation (Fig. 3).

Fig. 3: This collection consists of only Casio-related manuals.

Finally, here's a shot of the main bookshelf screen (Fig. 4). If it's too difficult to read the manual "covers," you can also choose to show a list of manuals.

Fig. 4: This shows the pre-categorized manuals from the main bookshelf page.

Pretty cool, eh? But credit where credit is due: Thanks to engineer/producer Peter Ratner for suggesting this idea. I've found it really helpful to just reach for the iPad when I have a question about a piece of gear.
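One footnote on the scanning route above: if you'd rather skip the word-processor step entirely, a few lines of code can bind scanned pages straight into a PDF. Here's a minimal sketch, assuming Python with the Pillow imaging library installed; the scan filenames and page count are placeholders for your own files.

from PIL import Image

# Gather the scanned pages in order (filenames are placeholders).
pages = [Image.open(f"scan_{n:02d}.jpg").convert("RGB") for n in range(1, 25)]

# The first page "hosts" the PDF; the rest are appended in order.
pages[0].save("manual.pdf", save_all=True, append_images=pages[1:])

Email the resulting PDF to yourself, and it's ready for the "open in iBooks" treatment described above.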
Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

8. Song order and transitions are a crucial part of the recording and mastering processes

By Craig Anderton

Your songs are superbly mixed, expertly mastered, and ready to be unleashed on a public thirsting for the soul-stirring slices of artistic triumph that only you can deliver. But before you start thinking about trading in your Toyota Corolla for a Lamborghini, don't forget the final step of the recording process—assembly.

Although there's talk about the "death of the CD," the reality is that it's still a common form of music distribution, particularly at a band's merch table. And it's still the main way of distributing an entire album in one package. The purpose of assembling a CD is to make sure all the disparate pieces hang together as a cohesive listening experience. There are several elements involved in assembling:

- Running order. Which song should start? Which should close? Is there a particular order that is more satisfying than others?
- Total length. There's a reason why most pro bands cut and mix more songs than they're going to use: It gives you the luxury of weeding out the weaker ones.
- Transitions. The default space between songs on a standard Red Book CD is 2 seconds, but that's not a law. Songs can butt right up against each other, or have a longer space if a breather is required.
- Crossfades. Some songs were meant to intertwine their ends and beginnings, producing a seamless transition that immediately sucks the listener into the next track.

Let's look at these issues in depth, but first, consider the tools you'll use to assemble your CD.

ASSEMBLY TOOLS

The greatest thing ever for album assembly is the portable music player (which of course now includes smartphones). You can do your assembly, create an MP3 or AAC file, and listen in a variety of contexts so you can live with your work until you get it exactly as desired. The same could be said of recordable CDs that you can play in cars, over various stereo systems, and the like. Either sure beats the old-school options—acetate copies you could play only a couple times, or "safety" tapes with more hiss than an ancient PA mic preamp.

Many programs will let you assemble cuts in order and burn a CD, but make sure the software supports Disk At Once (DAO) burning. Track At Once (TAO) burning means you're stuck with a space between tracks, so you will not be able to do crossfades, or place markers during the applause in a live set without hearing a disturbing gap.

My favorite multitrack programs with sophisticated Red Book CD assembling options are PreSonus Studio One Pro 2 and Magix Samplitude Pro X. Either one is adept at album assembly, but Studio One Pro 2 also has the unusual feature of integrating with its multitrack recording page (Fig. 1).

Fig. 1: With Studio One Pro 2, if you make any changes in a multitrack mix, you can update the modified file that's being assembled on the mastering (project) page. Or, as this screen shot shows, you can update the files en masse if multiple changes have been made to the multitrack songs.

What this means is that if, while assembling an album, you decide the vocals are just a little too low on one cut, you can zip over to the multitrack project, make your changes, and they're automatically reflected on the mastering page. Of course this works only with multitrack projects created in Studio One Pro, but still—it's pretty slick.

IS EVERYTHING IN ORDER?

You may already think you have an optimum order, but keep an open mind.
In particular, you only get one chance to make a good first impression, so the first few seconds of a CD are crucial. If you don't grab the ear of the listener/program director/booking agent immediately, they're going to move on. Sorry, but that song with the long, slow build that ends up with everyone in the house shaking their butts is probably better off as your closer than your opener. There are some exceptions; dance music often starts off with something more ambient to set a mood before the beat comes in. Or, you may intend your CD to be an experience that should be listened to from start to finish. That's fine, but understand that these days, it's by and large a singles-oriented world . . . the stronger your opener, the better the odds a listener will actually hear the rest of the CD.

You also have to plan the overall flow. Will it build over time? Hit a peak in the middle, then cool down? Provide a varied journey from start to finish? Do you want to close with a quiet ballad that will add a sense of completion, or with a rousing number intended to take people to the next level?

One of the best models for album assembly is, well, sex. Sometimes it starts off slow and teasing, then proceeds with increasing intensity. Or there might be that instant, almost desperate attraction that starts off high-energy but over the course of time evolves into something more gentle and spiritual. Or hey, maybe we're just talking straight-ahead lust from start to finish! In any event, think about whether the CD is making love with your audience or not, and whether it follows that kind of flow.

FUN WITH SPREADSHEETS

When I assemble an album, I boot up Open Office and make a spreadsheet. Aside from title, the categories are key, tempo, core emotion (joy, revenge, discovery, longing, etc.), length, and lead instrument (male vocal, female vocal, instrumental, etc.). This can help you discover problems, like having three songs in a row that are all in the same key, or wild tempo variations that upset the flow. For more information, check out an article I wrote about using spreadsheets to help optimize song orders.

In one project I was able to pretty much start out strong, have the tempo increase over the course of the album (with a few dips in the middle to vary the flow), and have a general upward movement with respect to key, except for a few downward changes to add a little unpredictability. Although there were several instrumental songs, I never had one follow another immediately; they were there to break up strings of songs with vocals. As a result of all this planning, the album had a good feel—it followed a general pattern, but had some cool variations that kept the experience from becoming too predictable.

WHAT ABOUT LENGTH?

With vinyl, coming up with an order was actually a bit easier. Albums were shorter, so you only had to keep someone's attention for about 35-40 minutes instead of 70 or more. The natural break between album sides gave the opportunity for two "acts," each with an opener and closer.

Today some people seem to feel that if you don't use all the available bits in a CD, you're cheating the consumer. Nonsense. Many people don't have an hour or more just to sit and listen to music anyway. As a consumer, I'd rather have 40 strong minutes that hang together than 30 minutes of all the best material "front-loaded" at the beginning, followed by 40 minutes of average material that peters out into nothing.
As O.J. Simpson's lawyer Johnnie Cochran once said, "Less CD time is surely no crime." (Well okay, he didn't say that, but you get the point.)

TRANSITIONS

I have to admit to a prejudice here, which is that I like a continuous musical flow more than a collection of unrelated songs. I've been doing continuous live sets most of my life, and that carries over into CDs. I want transitions between songs to feel like a smooth downshift on a Porsche as you take a curve, not something that lurches to a stop and then starts up again. As a result, I pay a lot of attention to crossfades and transitions.

On a CD I assembled years ago for the group Function, they had already decided on an order, and it was a good one. However, one song ended with a fading echo; while cool, this had such a sense of completeness that when the next song hit, you weren't really ready. After wrestling with the problem a bit, I copied the decay, reversed it, and crossfaded it with the end of the tune. So the end result was that the tune faded out, but before it was gone, faded back in with the reversed echo effect. As reversed audio tends to do, this ended with an abrupt stop, which turned out to be the perfect setup to launch people right into a butt splice that started the next song.

Be alert for "song pairs" that work well together, then figure out a good way to meld them. One lucky accident was assembling a CD where one song ended with a percussive figure, and the song that followed it started with a different percussive figure. With a space between them, the transition just didn't work. But I took the beginning of the second tune, matched it to the end of the previous tune, and crossfaded the two sections so that during the crossfade, the two percussion parts played together. Instead of a yawning, awkward gap between the tunes, the first tune pushed you into the second, which was simultaneously pulling you in, thanks to the crossfade.

Don't be afraid to adjust the default space between songs, either (Fig. 2). If there's a significant mood change, leave a little space. If there's a long fade-out, you might not want any space before the next song begins, lest the listener's attention drift.

Fig. 2: In this transition, not only is there no space between the two cuts, but a crossfade has been added. Note that the crossfade curves in Studio One Pro 2 can be customized to a linear, concave, or convex shape—whatever makes for the smoothest transition.

BURN, BABY, BURN

Once you have everything figured out, test each transition (start playback about 20 seconds before the end of a song, then listen to about 20 seconds of the next song and see if the transition works), then listen from start to finish. If you don't hear the need for any changes, fine. But burn a CD and live with it for a few days. Listen to it in the background, in your car, on an MP3 player while you're doing the food shopping, whatever. Listen for parts where you lose interest, any awkward transitions, and other glitches. Next, make all necessary changes, then burn another CD or transfer to a portable music player, and start the process over. At some point, the various strands of the CD will hang together like a well-woven tapestry . . . and assembly is complete.
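You don't have to wait for a full DAW session to experiment with these transitions. Here's a minimal sketch, assuming Python with the pydub library installed (the filenames are placeholders), that renders both a standard two-second gap and a 1.5-second crossfade between two songs so you can compare them. Consider it a scratchpad, not a substitute for your mastering program's crossfade editor.

from pydub import AudioSegment

song_a = AudioSegment.from_wav("song_a.wav")   # placeholder filenames
song_b = AudioSegment.from_wav("song_b.wav")

# Standard Red Book-style transition: two seconds of silence between cuts.
gapped = song_a + AudioSegment.silent(duration=2000) + song_b

# Seamless transition: overlap the last 1.5 seconds of A with the start of B.
crossfaded = song_a.append(song_b, crossfade=1500)

gapped.export("transition_gapped.wav", format="wav")
crossfaded.export("transition_crossfaded.wav", format="wav")

Load the two renders onto a portable player, and you can audition the transition in the car or on earbuds before committing to it.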
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
9. Just how much of a DAW can you get for under $150? The answer might surprise you

$199.99 MSRP, $149.99 street
www.acoustica.com

by Craig Anderton

I was checking stats for Harmony Central's YouTube channel and was shocked to see that our Mixcraft 6 video had 53,000 views, making it the second most-watched video in the last year—bested only by a gear interview with Rush's Alex Lifeson. (And almost a year after its release, the Mixcraft 6 video is still in the top 10 every month.) What's so special about DAW software from a relatively small company for it to garner that level of attention and curiosity?

Mixcraft doesn't try to be a "Pro Tools killer," nor is it so "lite" that it just floats away from lack of substance. It has always had the reputation for being inexpensive and easy to use, and pulled off the delicate balancing act of being powerful enough to be satisfying, yet intuitive enough not to be frustrating. Mixcraft 6 manages to keep that balancing act alive, despite adding more depth and power. As a result, Mixcraft has managed to acquire a cult following—a pretty large cult, actually.

When it first appeared, Mixcraft appealed primarily to musicians on a budget who didn't want to deal with something more sophisticated and potentially confusing. But these days, Mixcraft is also picking up some new fans—people who just don't need all the bells and whistles of more complex programs, and just want something that's fast, stable, and easy to use. In fact, for some types of projects, Mixcraft is the fastest program I've found for getting from start to finish . . . more on that later.

This review doesn't need to go into excessive detail, because you can download a trial version and check the program out for yourself. However, like all software, you still need to invest some time into learning the ins and outs before you can decide whether it's right for you or not. So, we'll concentrate on what Mixcraft has to offer, and then you can decide whether it might be the program you've been seeking.

DIFFERENT VERSIONS

Mixcraft comes in four different versions. This review focuses on Mixcraft Pro Studio 6, which is the line's flagship. What differentiates it from the standard version of Mixcraft ($74.95 download, $84.95 boxed) are additional plug-ins and virtual instruments, so if you just subtract the Pro Studio 6 plugs (covered later), you'll know what the standard version is all about. (Mixcraft 6 Home Studio, which lists for $49.95, limits the track count, includes only basic plug-ins, has no automation, and includes about 1/3 of the content included with the other versions. It's not the droid you're looking for.) Another version, to be introduced at Winter NAMM 2013, bundles a USB mic . . . details will be retrofitted to this review after the official announcement.

INSTALLATION

Mixcraft runs under Windows from version XP onward as a 32-bit program, although it also runs fine under 64-bit versions of Windows. CPU and memory requirements are relatively modest (1GHz and 2GB respectively), and its "footprint" is more like a slipper than a boot. Mixcraft 6 Pro Studio is available boxed or as a download, and copy protection is a simple activation code—no dongles or jumping through hoops.

BASICS

Mixcraft's "lay of the land" isn't significantly different from other DAWs (Fig. 1): It has tracks and buses, a mixer, accepts VST or DirectX plug-ins, offers tabbed views of various sections, and wraps all this in a "unified," single-screen graphic interface where you can nonetheless undock selected elements if desired.
However, if you look a little deeper, Mixcraft has some philosophical differences that relate mostly to creating a faster workflow.

Fig. 1: The main Mixcraft graphic user interface.

For example, MIDI, instrument, and video tracks have no structural distinction and are treated similarly. In fact, Mixcraft doesn't even bother with MIDI tracks, on the assumption that you'll be using them primarily for virtual instruments—insert a virtual instrument track, and it takes MIDI in and produces audio out. However, if you have something like a MIDI-aware plug-in, you can just pick up the MIDI from an existing virtual instrument, or insert a new one and de-select any instrument that's loaded. Mixcraft also does ReWire, but treats ReWire devices as it would any other instrument plug-in.

Mixcraft's instrument tracks also do something I haven't seen in any other DAW (Fig. 2): when inserting the instrument, you can define volume, pan, keyboard range, transposition, velocity range, and outputs—so rather than inserting instruments and then defining splits and layers later in the course of the project, you can define any splits and layers from the get-go (as well as modify them later). This architecture also makes it easy to layer multiple virtual instruments.

Fig. 2: Mixcraft has a way to insert instruments that's so simple and obvious that apparently, no one thought of it before.

However, MIDI's transparency doesn't mean it's ignored. Mixcraft has tabbed sections for editing, and one of them covers MIDI editing—so when it comes to tweaking, MIDI is roughly on the same footing as audio.

Audio tracks offer automation lanes, as do instrument tracks—but the latter are based on MIDI controller information. Audio track automation can be added with "rubber band," line-style automation, but not recorded from a control surface; however, you can record MIDI controller data from a control surface to automate virtual instrument and effect plug-in parameters. It's also possible to use a control surface for remote control of functions like track arming, transport control, loop toggle, insert track, etc. Mixcraft has an easy "MIDI learn" function for control surfaces. Clip automation is also available for audio and MIDI clips, offering volume, pan, low-pass filter with resonance, and high-pass filter with resonance.

There are other track types, like output tracks (essentially buses that can go to various interface outputs), aux/bus tracks for sends, submix (group) tracks, and a master track, which is typically where your stereo mix terminates. Mixcraft doesn't do surround, but that's hardly surprising given the price, or how many people actually work in surround.

LOOPOLOGY

Mixcraft isn't an Ableton Live or FL Studio type of looping program, yet it incorporates looping in a painless and clever way. Acoustica has partnered with zplane to use their digital audio stretching algorithms; files are simply stretched to fit (the downside is that you can't import REX files directly into Mixcraft). In fact, one of the very coolest features—and again, this is a "why don't all programs do this?" feature—is that when you first bring a loop into Mixcraft, it asks if you want to conform the project tempo to the loop tempo or vice-versa; if it's a pitched loop, you can also decide whether to conform the key to the project or the loop. Subsequent loops are then matched to that initial default. Loops can of course be "rolled out," edited, and the like; I also like the "+1" button, where clicking creates one additional iteration.
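For the curious, the arithmetic behind that conform-on-import step is simple, even though Mixcraft and its zplane engine handle it invisibly. Here's a minimal sketch in Python; the tempos and keys are hypothetical values, not anything read from Mixcraft.

import math

loop_bpm, project_bpm = 126.0, 120.0
loop_key, project_key = 9, 7      # semitones above C: A = 9, G = 7

# Time-stretch ratio: >1.0 means the loop must play longer (slower tempo).
stretch_ratio = loop_bpm / project_bpm           # 1.05

# Key conform: pick the smaller pitch move (down 2 semitones, not up 10).
shift = (project_key - loop_key) % 12
if shift > 6:
    shift -= 12                                  # A down to G = -2 semitones

print(f"stretch by {stretch_ratio:.3f}x, shift {shift:+d} semitones")

Choosing the smaller move matters because smaller pitch shifts generally sound more natural, whichever stretching engine is doing the work.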
Note also that Mixcraft reads tempo and key information from Acidized and GarageBand loops, so you can use these as you would in their respective programs, and they work identically to Mixcraft's own loops.

As if to drive the point home about looping, Mixcraft comes with a sound library of over 6,300 royalty-free loops and effects. What's more, these aren't "bonus filler" loops, but an eminently usable collection that spans a wide variety of genres, from Acid Techno to Zombie Rock. They're arranged pretty much as construction kits, but files are searchable, tagged, and categorized, making it easy to mix and match among different kits—especially because you can also sort based on tempo, key, instrument, etc. Fig. 3 shows what happens when you click on the Library tab in the Details area.

Fig. 3: The Library not only contains a wide selection of material, but makes it easy to find and use particular sounds and loops.

All this content comes with the boxed version, so you might think this would make for a hellacious download. But for the downloaded version, Mixcraft essentially loads "placeholders" for the various loops and samples; clicking on a loop's play button downloads what you've selected to disk. Over time, as you audition more samples, you eventually end up with everything on your hard drive, although you can also download it all in one fell swoop, or one category at a time. You can also import your own loops and integrate them into the library structure.

I can't emphasize enough how useful this content is, even for pros. Many times I need to come up with a quick music bed at (for example) trade shows when something needs to slide under the video coverage; I have yet to find a program that gets this done faster than Mixcraft.

COMPING

Here's another feature where Mixcraft got it right. Each take can go into its own lane, and you can loop or punch within tracks to comp very specific sections. You can even punch within a loop, and choose whether new takes mute old takes or overdub them. While comping is generally a fairly sophisticated feature, Mixcraft makes it quite straightforward.

EDITING

There are four "Details" tabs for editing and other functions, and this entire section can be undocked. Undocking is primarily important for the mixer, as you can place it on a separate monitor in a two-monitor setup, allowing you to see more tracks in the main monitor.

Project is the most basic tab—it offers tempo, key, time signature, auto beat match on or off, the project folder location, and a notepad for entering what's essentially project metadata. It also provides an alternate location to insert individual effects into the master track, although you can do that at the master track as well (which offers the added benefit of effects chains, covered in the plug-ins section).

The Sound tab is a little more involved (Fig. 4).

Fig. 4: The Sound tab showing an audio clip.

The screen shot (which shows Sound undocked) is pretty self-explanatory, except for the Noise Reduction option: this lets you isolate a "noise fingerprint," then reduce noise matching that fingerprint by a selectable percentage. If the clip you're editing is MIDI, then Sound shows MIDI data (Fig. 5).

Fig. 5: The Sound tab showing a MIDI clip.

This keeps improving with newer versions, and now includes several MIDI editing options, a controller strip you can assign to whatever controller you want to edit, a primitive notation view, drum maps, snap, and the like.
MIDI editing isn't on a Cubase/Sonar/Logic level by any means, but it gets the job done.

The Mixer tab (Fig. 6) is your basic hardware mixer emulation (complete with virtual wood end panels!).

Fig. 6: Mixcraft's mixer in action.

While it looks pretty cool, it does have limitations; the EQ is a fixed three-band design, and you can't customize channel placement, strip width, etc. It's definitely something I'd reserve for mixing, while sticking to the main track view when tracking and editing.

VIDEO

Okay, so Mixcraft is a surprising DAW. But it really doesn't get more surprising than this: Mixcraft has more sophisticated video capabilities than any other DAW I've used. If any program has the right to call itself "direct from your garage to YouTube," this is it.

You can load multiple video clips (with their associated audio) into a single video track, mix WMV and AVI formats (but no MP4 or MOV), and even do some editing, like trimming, crossfading clips, and changing video and audio stream lengths independently. Furthermore, you can insert still images in the video track (JPG, BMP, PNG, GIF) to supplement the video, or create "slide show" videos by crossfading between images. Text "clips" can be inserted into a text lane and include fades, different background and text colors, and basic intro and outro text animations (like move and reveal). Topping it all off: 25 bundled video effects, including brightness, posterize, color channel strength and inversion, emboss, and more (Fig. 7).

Fig. 7: As if video wasn't enough, Mixcraft also includes automatable video effects.

These are added to the video track just like adding automation lanes to audio, with automatable video effect parameters. Sure, Mixcraft isn't exactly Sony Vegas Pro—but if Vegas Pro had a baby, it might look somewhat like this.

PLUG-INS

The biggest difference between Mixcraft 6 Home Studio, Mixcraft 6, and Mixcraft Pro Studio 6 is the included plug-ins. I'm going to cop out of listing them all here, as Acoustica has a comparison chart on their site that lists the various plug-ins, as well as which version has which. Suffice it to say there's a wide range of plug-ins that cover all the usual bases (Fig. 8), with the Pro Studio 6 version expanding the repertoire considerably—for example, you get a grand piano, two additional vintage synths, and a lot more effect plug-ins, including some mastering plugs from iZotope. If you're happy with your existing collection of plug-ins, then the standard version of Mixcraft will take care of the rest of your DAW needs; but bear in mind that all the additional plug-ins essentially cost you $75, so you get quite a lot in return to add to your collection.

Fig. 8: The Messiah virtual analog synthesizer is just one of many virtual instrument plug-ins.

Of course, Mixcraft can also load third-party plug-ins. The only problem I experienced was loading UA's version 6.4 powered plug-ins, but similar problems with this version have also been reported with other 32-bit Windows programs; if UA or Acoustica come up with a workaround or patch, I'll update this review.

In the "pleasant surprise" category, you can create effects chains, as well as save and load them. This even extends to a chain consisting of a virtual instrument with effects. What's more, Mixcraft can handle instruments with multiple outputs—older versions couldn't do that.
In any event, if you find yourself needing more instrument plug-ins, as well as the ability to use REX files, I'd recommend ReWiring Propellerhead Software's Reason Essentials into Mixcraft. It's a powerful combination, and still a major bargain from a financial standpoint.

WHAT'S NOT INCLUDED

As you've probably gathered by now, Mixcraft is truly a full-featured program. However, it is missing a few significant features that are often found in more expensive DAWs:

- No recording of mixer fader movements. You can use automation lanes to draw/edit automation envelopes, but not record on-screen or control surface fader movements into these lanes.
- No MIDI plug-ins
- No audio quantization
- No direct REX support, although you can insert instruments that play back REX files
- No VST3 support

CONCLUSIONS

Mixcraft isn't a cut-down version of a flagship program; it is the flagship program. As a result, only Mixcraft 6 Home Studio actually removes features to meet a sub-$50 price point, and the Pro Studio 6 version is more about adding extra features to the core program that, while welcome, may not be features all users would want (like mastering effects). So whether you're interested in Mixcraft 6 or Mixcraft Pro Studio 6, you get a very complete program—and don't forget about the video features—at what can only be considered a righteous price.

What's also interesting is how many "little touches" are included that show someone's really thinking about what musicians need in a program. For example, every track has a built-in tuner you can access with one click. Simple? Yes. Effective? Yes. If you hover the mouse over a track's FX button, you'll see a list of inserted effects; right-clicking on it opens the GUIs for all effects, including ones that are part of an effects chain. And when you save a file, Mixcraft automatically generates a backup. I can't believe all programs don't do this, but at least Mixcraft got the memo that backups are a Good Thing. You can burn projects to CD, but you can also mix down to MP3, WMA, or OGG formats as well as WAV files—no separate encoder needed.

Is Mixcraft for you? It's easy enough to find out: Download the demo. I think you'll be as surprised as I was at what this low-cost, efficient, and user-friendly DAW can do.

Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
10. Sony expands its wireless line with multiple digital offerings—including versions for guitar/bass and handheld mic, designed especially for musicians

DWZ-B30GB Guitar/Bass ($499.99 MSRP, $399.99 street)
DWZ-M50 Handheld Mic ($699.99 MSRP, $549.99 street)

by Craig Anderton

I was never a fan of analog wireless, because it sometimes had the potential to turn from "wireless" into "w1reL3ss," if you catch my drift. I also didn't like the companding that was usually employed to overcome the inherently questionable signal-to-noise ratio. My initial experience with digital wireless was Line 6's XD-V70 wireless mic, and it made me a believer. First, of course, was sound quality—no companding, just linear PCM digital audio. Second was what happened when you got out of range: It just stopped. There was no noise, chattering, weird fades, or artifacts; either the receiver picked up the audio, or it didn't. Chalk up another area where digital has bested the analog world, although in the case of wireless, the tradeoff can be a higher price point.

Now Sony has entered the affordable digital wireless arena with their DWZ line. We'll look at the DWZ-B30GB for guitar/bass first, and then proceed to the DWZ-M50 handheld wireless mic for vocals.

DWZ-B30GB BASICS

This is a license-free 2.4GHz system that includes a bodypack transmitter, receiver, belt clip, guitar-to-bodypack cable, printed manual, and CD-ROM with manuals in English, French, Spanish, German, and Italian (Fig. 1).

Fig. 1: The package contents. Clockwise from top: AC adapter, receiver, cable, bodypack transmitter, belt/strap clip, CD-ROM with the manual in five languages, and printed manual in English.

The digital audio format is 24-bit linear PCM, with no compression or other processing, so the sound quality blows away the average analog wireless system. The bodypack is about 2.5" x 3.75" x 0.75", with a stubby antenna protruding about 7/8" from the body. It has two switches, for mic/instrument level and attenuation (0, -10, and -20dB). The instrument jack is an 1/8" type, which mates with the included 1/4"-to-1/8" cable (it's about 32" long). The only other controls are switches for lock/unlock (to prevent accidental changes of channel or power/muting) and channel select (complemented by a seven-segment LED readout to show the channel). Power comes from two AA cells; an LED indicator shows battery strength (but only two states—"good" and "almost dead"), while another LED shows the audio state—signal present, excessive level, weak, or mute enabled. In other words, it's pretty easy to deal with.

The receiver is light and compact—about 5-1/8" x 2-7/8" x 1-5/8". There's an XLR balanced out (Fig. 2), an additional 1/4" main out jack (both can be used simultaneously), and a 1/4" tuner out jack.

Fig. 2: You can send audio to the front-of-house mixer from the XLR, while driving a guitar amp from the 1/4" output and feeding a tuner with the second 1/4" jack.

A very nice touch is that muting audio at the bodypack doesn't mute the tuner output, so you can still tune no matter what. A switch chooses between narrow and wide RF modes (more on this later), with one six-position rotary switch for Channel, and an eight-position rotary switch for Cable Tone.
The latter lets you match the wired and wireless sounds as closely as possible by using high-cut filtering to emulate the loading effect of different cable lengths on your pickups; the switch is calibrated in meters, from 1 to 25 (but really, any guitarist who uses a 25m cable is certainly the target market for a wireless system!). There are also several indicator/status LEDs. Power comes from either the included 12V adapter, a 9V negative-tip input (e.g., from a pedalboard power source), or a 9V transistor-radio-type battery. The pedalboard power feature is great when you have a wired connection from the pedalboard back to your amp, but want to be liberated from patching into your pedalboard. With alkaline batteries, Sony estimates about 10 hours' battery life for the belt pack, and 3.5 hours for the receiver. Note that both units also have mystery USB micro-B connectors; these aren't referenced in the documentation, but Sony confirmed they're included for potential software updates.

DWZ-B30GB OPERATION

It's very easy to get the system going. There are two RF modes: Wide Band (optimized to reduce interference to other wireless equipment) and Narrow Band (optimized for avoiding interference from other wireless equipment). You do need to stick with one mode or the other when using multiple units on different channels.

For the bodypack, you choose one mode or the other on power-up—hold the channel select button down while turning on power, choose the mode, then do a long press to confirm the mode and set the channel. If you want to change the channel, another long press lets you do so, and short presses cycle through the channels. Normally I wouldn't go into this level of detail in a review—this is more the province of manuals, right?—but I wanted to get across what's involved in doing setup, as it's pretty painless. At the receiver end, just switch-select the mode, then spin the channel dial until it matches what you set on the bodypack. There are six channels total, and as long as the transmitter and receiver are set to the same mode and channel, there's not much that can go wrong other than a dead battery or going out of range.

As to the Cable Tone function, it's both weird and brilliant—weird because I'm always trying not to load down my guitar, but brilliant because cords do load down guitars with passive pickups, and that has become part of some musicians' sound. Now they can dial in the desired amount of degradation.

THE DWZ-M50 MICROPHONE

The DWZ-M50 is somewhat more ambitious and expensive than the DWZ-B30GB, but is equally easy to use and also works in the 2.4GHz band with 24-bit PCM audio. Here's what's included (Fig. 3).

Fig. 3: The package contents. Clockwise from top: AC adapter, receiver, CD-ROM with the manual in five languages, printed manual in English, cable, handheld microphone, and the two antennae. The mic stand clip is in the center.

Let's start with the cardioid, unidirectional dynamic mic. Even with batteries it feels a little lighter than an SM58, despite an overall slightly larger diameter and a body that extends about 3.25" beyond that of an SM58. However, when you consider that the SM58 is invariably wedded to an XLR connector at the end, and of course the Sony isn't (hey, it's wireless!), then practically speaking the Sony is only about 1.25" longer, due to its protruding, stubby antenna. I'm not sure what mic capsule Sony is using, but it's in the same sonic league as popular stage-oriented dynamic mics.
I did find it necessary to use a windscreen (as I do with all mics), and being a dynamic, I appreciated the 5-band EQ included in the receiver so I could boost the highs just a bit. The mic has a removable/interchangeable windscreen, and a removable/interchangeable capsule (specified as needing a 31.3mm diameter and a 1.0mm thread pitch; Sony says the mic is compatible with their CU-C31, CU-F31, and CU-F32 mic capsules). Unscrewing the element lets you access an attenuator with settings of 0, -6, and -12dB. Furthermore, you can unscrew the mic grip to reveal the lock/unlock slide switch, channel display, channel selector button, and (like the guitar system) a USB micro-B connector. The power/muting button is always accessible, and the battery/muting indicator is always visible; like the DWZ-B30GB, the battery indicator displays one of two states: good, or "you-better-put-in-new-batteries-soon."

THE DWZ-M50 RECEIVER

The receiver is larger than the one for the DWZ-B30GB, although it has the same complement of output jacks (including—yes—a USB micro-B connector); one difference is that the XLR is switchable between mic and line levels (Fig. 4). There are also connectors for the two antennae. The receiver can't be battery-powered, but uses the included 12V (positive-tip) adapter.

Fig. 4: The receiver's rear panel has balanced XLR and 1/4" jacks, as well as all other connectors.

The front panel is dominated by a large and extremely readable color LCD, and all adjustments are menu-driven (Fig. 5). Again, you can choose between wide and narrow band operation, but channel setup works somewhat differently than you might expect; instead of choosing a channel on the mic and having the receiver home in on it, you can have the receiver scan for the optimum channel, or scan for clear channels and display which ones have low, moderate, or high interference. In either case, you then set the mic channel to match. You can also select channels manually, but I don't see any reason not to let the receiver do the work for you unless you're using multiple units.

Fig. 5: The display prior to turning on the mic.

Goodies in addition to the graphic EQ include the option to set whether the aux/tuner out jack passes or blocks signal when the mic is muted, and the ability to optimize the remaining-transmitter-battery-time display for alkaline, Ni-MH, or lithium batteries. In use, the display shows the selected channel, signal strength for each antenna, audio levels, estimated remaining battery time for the transmitter, and whether the equalizer is on or off (Fig. 6).

Fig. 6: The display indicates signal strength, audio levels, and other parameters.

CONCLUSIONS

I tested both systems for range. It's important to find a location that's not next to a major interference source; out of curiosity, I set the receiver up within a few inches of a wireless modem, and not surprisingly the receiver couldn't find a clear channel. Moving it just a few feet away gave a couple of clear channels, and a little further away, all the channels were available with minimal interference.

Under real-world conditions, when using both the guitar and mic transmitters indoors with various objects in between them and the receivers (as well as some random RF interference), I was able to get a 100% reliable connection at 70 feet away. I ran out of open space at that point, but Sony says the maximum line-of-sight range can be up to 200 feet for the DWZ-B30GB and 300 feet for the DWZ-M50.
When I put three walls between the transmitter and receiver at 70 feet, the connection was no longer reliable, which, based on prior experience, was expected. The squelching when going from signal to no signal wasn't as elegant as on some more expensive systems, but thanks to digital technology, as long as I was in range the sound quality didn't change—no cut-outs or pops. You probably can sing from the balcony seats if you're line-of-sight and there's not a lot of interference; for typical distances—i.e., anywhere on a big stage—you're good to go.

It's clear Sony's intention was to combine performance with cost-effectiveness. Of course, being digital, the DWZ systems start off with an inherent advantage; but the implementation is also noteworthy. The guitar/bass version is simpler, less expensive, and slightly easier to set up, but also includes clever options, like the cable emulator and the ability to run the receiver from a pedalboard's power supply in case you want to go wireless to your pedalboard rather than your amp (or if you just don't want to carry around one more AC adapter). The mic is equally adept at performing its duties, but with a somewhat heftier feel (and price). Again, there are extras—like the graphic EQ, excellent display, and ability to use other capsules, as well as somewhat greater range. It also helps that the mic "feels" right, with sound quality comparable to "industry standard" mics; the battery life is excellent, too.

This is Sony's first foray into affordable digital wireless for musicians, but they got it right from a technical standpoint, as well as in terms of the user interface. The bottom line is that as long as the power sources are doing their thing, you're not in a super-dirty RF environment, and the transmitter and receiver are set to the same channel, you really can't go wrong.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
11. If you're getting started in desktop mastering, these five tips will serve you well

by Craig Anderton

Mastering is a specialized skill; but if you want to be able to master your own material, the only way you'll get good at it is to do it as much as possible. While we'd need a book to truly cover desktop mastering (I like Steve Turnidge's Desktop Mastering book so much I endorsed it), these five essential tips will make your life a lot easier, regardless of your level of expertise.

1. Save all of a song's plug-in processor settings as presets. After listening to the mastered version for a while, if you decide to make "just one more" slight tweak—and the odds are you will—it will be a lot easier if you can return to where you left off. (For analog processors, take a photo of the panel knob positions.) Saving successive presets makes it easy to return to earlier versions.

2. With loudness maximizers, never set the "ceiling" (maximum level) to 0dB. Some CD pressing plants will reject CDs if they consistently hit 0dB for more than a certain number of consecutive samples, as it's assumed that indicates clipping. Furthermore, any additional editing—even just crossfading the song with another during the assembly process—could increase the level above 0. Don't go above -0.1dB; -0.3dB is safer. Setting an output ceiling (i.e., maximum output level) below 0dB will ensure that a CD duplicator doesn't think you've created a master with distortion. Typical values are -0.1dB to -0.5dB.

3. Halve that change. Even small changes can have a major impact—add one dB of boost to a stereo mix, and you've effectively added one dB of boost to every single track in that mix. If you're fairly new to mastering, after making a change that sounds right, cut it in half. For example, if you boost 3dB at 5kHz, change it to 1.5dB. Live with the setting for a while to determine if you actually need more—you probably don't.

4. Bass management for the vinyl revival. With vinyl, low frequencies must be centered and mono. iZotope Ozone has a multiband image widener, but pulling the bass range width fully negative collapses it to mono. In Ozone, for example, the bass region (Band 1) can be narrowed to mono with a setting of -100.0%. Another option is to use a crossover to split off the bass range, convert it to mono, then mix it back with the other split; see the sketch after this list. Either way, narrowing the bass frequencies can make a more "vinyl-friendly" recording.

5. The "magic" EQ frequencies. While there are no rules, problems involving the following frequencies crop up fairly regularly.
- Below 25Hz: Cut it—subsonics live there, and virtually no consumer playback system can reproduce those frequencies anyway.
- 300-500Hz: So many instruments have energy in this range that there can be a build-up; a slight, broad cut helps reduce potential "muddiness."
- 3-5kHz: A subtle lift increases definition and intelligibility. Be sparing, as the ear is very sensitive in this range.
- 15-18kHz: A steep cut above these frequencies can impart a warmer, less "brittle" sound to digital recordings.
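Here's the crossover approach from tip 4 as a minimal sketch, assuming Python with scipy and soundfile installed. The 120Hz crossover point and fourth-order filters are illustrative judgment calls, and this is not how Ozone works internally; adjust to taste.

import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("mix.wav")   # stereo file, shape (samples, 2); placeholder name

lp = butter(4, 120, btype="lowpass", fs=sr, output="sos")
hp = butter(4, 120, btype="highpass", fs=sr, output="sos")

low = sosfiltfilt(lp, audio, axis=0)    # bass band
high = sosfiltfilt(hp, audio, axis=0)   # everything else

low_mono = low.mean(axis=1, keepdims=True)   # collapse the bass band to mono
vinyl_friendly = high + low_mono             # recombine with the stereo highs

sf.write("mix_mono_bass.wav", vinyl_friendly, sr)

Because sosfiltfilt is zero-phase, the two bands recombine cleanly; a mastering engineer might prefer a true Linkwitz-Riley crossover, but for checking vinyl-friendliness this gets you close.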
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

12. So is this a crazy or brilliant idea? Better read the entire review before making up your mind

$999.99 MSRP, $499.99 street
www.peavey.com
www.autotuneforguitar.com

by Craig Anderton

When Gibson introduced their Robot self-tuning technology, I took a lot of flak on forums for defending the idea. A typical comment was "I already know how to tune a guitar, that's a really stupid idea," to which my response was "yes, but can you tune all six strings perfectly in under 15 seconds?" In my world, time is money. Sure, I can tune a guitar. But when I was recording sample and loop libraries with guitar, I'd spend 30-40% of my time tuning, not playing, because libraries have to be perfectly in tune. To pick up a guitar, pull up a knob, strum, and get back to work was a revelation. And as a side benefit, being able to switch to alternate tunings live in the blink of an eye, and get back to perfect tuning without making the audience wait, were powerful recommendations for automatic tuning.

Which brings us to the AT-200. It's based on an entirely different approach and technology compared to Robot tuning, but accomplishes many of the same goals—and has its own unique attributes that are made possible only by clever application of DSP. Robot tuning works by using electronics to monitor the string pitch, and servo motors to tune the strings physically by turning the machine heads. The AT-200 is based on Antares' Auto-Tune—yes, the same vilified/praised technology used on vocalists to do everything from turning their voices into machine-like gimmickry to touching up a vocal line so transparently and subtly you don't even know it's being used. Sure, Auto-Tune is used to make lousy singers sound bearable. But it's also a savior for great singers who nail the vocal performance of a lifetime except for that one flat note at the end of a phrase.

With the AT-200, Auto-Tune uses DSP-based pitch transposition to correct each string's audio output so it sounds in tune (Fig. 1). As a result, the physical string itself can be out of tune, but it doesn't matter; what you hear coming out of the amp is in tune. This leads to a disconnect for some people, because the physically vibrating string may not match what comes out of your amp (this also happens with the Line 6 Variax when you use alternate tunings; Robot technology doesn't do this, because it's adjusting the actual string pitch).

Fig. 1: The board that serves as the AT-200's brain.

This is a little bizarre at first, but it simply means turning up the amp to where it's louder than the strings (not too hard, given that the AT-200 is a solid-body guitar). In the studio, if you're using headphones while laying down a part, you won't hear the strings anyway. As a result, there can be times when your brain is saying "it's not in tune" while your ears are telling you "it's in tune." Believe your ears! If you tune close enough to begin with, Auto-Tune doesn't have to work too hard, and the most you'll hear is a chorusing effect if the strings are slightly off-pitch.

There's a sonic difference between the Auto-Tuned sound and that of the straight pickups; the level is lower, and the sound lacks some of the treble "snap" of the magnetic pickups (I really like the pickups, by the way). However, what you don't hear are the artifacts typically associated with pitch-shifting. When recording, I simply increased the input level on the interface and added some high-frequency shelving to compensate.
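To make the correction idea concrete, here's a crude sketch of the logic in Python. To be clear, this is my illustration of the general concept, not Antares' actual algorithm: snap a near-static pitch to the nearest equal-tempered semitone, and pass larger deviations through as intentional bends. (The real system also tracks how each string's pitch moves over time, which is covered below.)

import math

A4 = 440.0   # reference pitch in Hz

def correct(freq_hz, cents_window=35.0):
    # Snap a near-static pitch to the nearest semitone; pass bends through.
    semitones = 12 * math.log2(freq_hz / A4)   # distance from A4 in semitones
    nearest = round(semitones)
    cents_off = (semitones - nearest) * 100
    if abs(cents_off) <= cents_window:
        return A4 * 2 ** (nearest / 12)        # lock to the tempered pitch
    return freq_hz                             # treat as an intentional bend

print(correct(443.0))   # ~12 cents sharp of A: corrected to 440.0
print(correct(450.0))   # ~39 cents sharp: passed through as a bend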
More importantly, the "native" Auto-Tuned sound needs to be fairly neutral to allow for the upcoming guitar emulations; if there's too much "character" that's weighted toward a specific guitar, then you have to "undo" that before you can start emulating other guitar sounds.

BUT IF YOU THINK THAT'S ALL THERE IS TO IT . . .

This might seem like a good time to stop reading if you have other things to do—okay, there are signal processors that tune each string, great, I get it. But keep reading.

One of the side benefits is perfect intonation (what Antares calls "Solid-Tune™") as you play. You know those chords with really difficult fingerings where you end up pushing a string slightly sharp? No more, as long as you strum the chord after fretting (if the pitch changes after strumming but remains within a small pitch window, the AT-200 will correct it; otherwise it will think you're bending, and not correct it). It's freakish to play a guitar where no matter how difficult the fingerings or where you are on the neck, the intonation is perfect. Not only is this aesthetically pleasing, but there's a "domino effect" with distortion: You hear the same kind of "focused" distortion normally associated with simply playing tonics and fifths. Note that it's not doing Just Intonation; everything still relates to the western 12-tone scale (but I'd love to see an add-on for different intonations).

If you think this would cause problems with bends or vibrato, Antares has figured that out. If a pitch is static, Auto-Tune will correct it. But as soon as the pitch starts to move outside of a small pitch window because you're bending a note or adding vibrato, the correction "unlocks" automatically for that string. You simply don't run into situations where Auto-Tune tries to correct something you don't want corrected.

The system also allows for alternate tunings, as long as the tuning involves shifting down (future add-ons are slated to address alternate tunings where pitches are shifted up from standard). Auto-Tune works based on the pitch at the nut, but you can fool it into thinking the nut is somewhere else. For example, suppose you want a dropped-D tuning. Fret the second fret on the sixth string (F#), strum the strings, and initiate tuning. Auto-Tune will "think" the F# is the open E, and tune F# to E. So now when you play the open E string, you'll hear a D, as the string is transposed down two semitones.

It gets better. Want that heavy metal drop tuning? Barre, for example, the fourth fret while tuning, and now whatever you play will be transposed down four semitones. Being a wise guy, I tried this at the 12th fret and—yes, I was now playing bass. What's more, it actually sounds like a bass. Say what? Or try this: fret the 12th fret on only the 5th and 6th strings. Now when you play chords, you'll have one helluva bottom end. The manual gives suggested fingerings to create various alternate tunings—open G, baritone, DADGAD, open tunings, and the like.

The only caution with alternate tunings is that you need to press lightly on the strings when engaging the Auto-Tune process. If you press too hard and a string goes slightly sharp, Auto-Tune will obligingly tune that fretted string slightly flat to compensate.

WHAT ABOUT THE GUITAR?

Of course, all the technology in the world doesn't matter if the guitar is sketchy. It seems Peavey wanted to avoid the criticisms the original Variax endured ("great electronics, but what's with the funky guitar?").
Obviously Line 6 made course corrections fairly quickly with subsequent models, and the recent James Tyler Variax is a honey of a guitar by any standards. But Peavey needed to walk the fine line between a guitar you’d want to play, and a price you’d want to pay. They chose the basic Predator ST “chassis,” which is pretty much Peavey’s poster child for cost-effectiveness. Read the reviews from owners online; I’ve seen several where someone bought a Predator as a replacement or second guitar, but ended up using it as their main axe. The general consensus—which includes me—is that the Predator is a highly playable, fine-sounding guitar whose quality belies its price, with solid action and out-of-the-box setup. Not surprisingly, so is the AT-200.

Spec-wise, it has a bolt-on, North American rock maple neck with a 25.5" scale, 24 frets, 15.75" radius, and rosewood fingerboard (Fig. 2).

Fig. 2: The AT-200 features a bolt-on neck.

The body is solid basswood, with a quilted maple cap; available finishes are black and candy apple red. The pickups are humbuckers with alnico 5 magnets (Fig. 3), and one of the most welcome AT-200 features is that you can use it like a regular guitar—if the batteries die during the gig, just pull up on the tone knob and the pickups go straight to the audio output.

Fig. 3: Pickups and the complement of controls.

Other features are a three-way pickup selector, and string-through-body construction for maximum sustain (Fig. 4).

Fig. 4: Detail of the bridge pickup and bridge; note the string-through-body construction.

The tuners are decent. They’re diecast types with a 15:1 gear ratio, mounted on a functional but plain headstock (Fig. 5). The guitar doesn’t come with a case, so factor that into the price; also figure you’ll want the breakout box, described later.

Fig. 5: AT-200 headstock and tuners.

EASE OF USE

The guitar ships with a removable “quick start” overlay and frankly, it could double as the manual (Fig. 6).

Fig. 6: This pretty much tells you everything you need to know to get up and running.

Make sure four AA cells are inserted (see Fig. 7; alkalines last about nine hours); plugging in the guitar turns on the electronics. Push down on the Tone control to activate the Auto-Tune technology, strum all six strings, and push down on the volume knob to initiate tuning. Done. Yes, it’s that simple. If you want Auto-Tune out of the picture, pull up on the Tone knob.

Fig. 7: The battery compartment is closed, and to the right of the exposed cavity with the electronics.

THE FUTURE

I never advise buying a product for what it “might” do, only for what it does, because you never know what the future will bring. That said, though, it’s clear Peavey and Antares have plans. There’s a clear division of labor here: Peavey provides the platform, while Antares provides the software.

In addition to the standard 1/4" audio output, the AT-200 has an 8-pin connector (plus ground) that connects to an upcoming breakout box. This is expected early in 1Q 2013, and is slated to sell for under $100. It will provide power to the guitar so you don’t need batteries, as well as an audio output. There will also be MIDI for use with external MIDI footswitches for tasks like preset selection, as well as for doing updates. If you want to do updates but don’t want the breakout box, a “MIDI update cable” with the 8-pin connector on one end and MIDI on the other will cost $13 and allow doing updates from your computer.
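Since preset selection via the breakout box is described as plain MIDI, a footswitch (or any MIDI-capable computer) should presumably be able to do it with a standard Program Change message. Here’s a purely hypothetical sketch using Python’s mido library; the port name and program numbering are guesses, as Peavey hadn’t published a MIDI implementation chart at review time.

```python
import mido

# Hypothetical port name and preset numbering; the AT-200's actual
# MIDI implementation chart wasn't available when this was written.
with mido.open_output('AT-200 Breakout Box') as port:
    port.send(mido.Message('program_change', channel=0, program=3))
```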
At the Antares end of things, this is a software-based platform, so there are quite a few options. They’ve already announced an upcoming editor for live performance that runs on iOS devices; it lets you specify pickup sounds, alternate tunings, pitch shifting, “virtual capo” settings, and the like. I saw this software in prototype form at a press event that introduced the technology, so I would imagine it’s coming very soon. Antares has also announced AT-200 Software Feature Packs that add optional-at-extra-cost capabilities in three versions—Essential, Pro, and Complete. For example, Essential includes processing for three different guitar sounds, Pro has six, and Complete has nine unique guitar voicings as well as bass. They also include doubling options (including 12-string), various tunings, and the like. These are all described on the www.autotuneforguitar.com web site.

ROBOT OR AUTO-TUNE?

This review wouldn’t be complete without a comparison. Both work and both are effective, but they’re fundamentally different. The biggest difference is that with the Robot system, because it works directly on tuning physical strings, “what you hear is what you get.” With alternate tunings, the guitar is actually tuned to those tunings. Also, the audio output is the sound of the string; there’s no processing. As a result, there’s zero difference between the sound made by the guitar and the sound coming out of the amp. Robot tuning is for those who prioritize tonal purity, and are willing to pay for the privilege.

Auto-Tune trades off the physical string/resulting sound disconnect for more flexibility. You’ll never be able to tune physical strings up or down an octave, but you can do that with virtual strings—and the tuning process is close to instant. Although the audio is processed, the impact on the sound is minimal; still, there’s a layer of electronics between the string and you. On the other hand, that’s also what allows for emulating different characteristic guitar sounds. What’s surprising, though, is that there’s no discernible latency. (Well, there has to be some; laws of physics, and all that. But it’s not noticeable, and I’m very sensitive to timing.) Furthermore, the fact that this processing doesn’t add artifacts to the guitar’s tone is, to me, an even more impressive technical accomplishment than changing pitch.

BELIEVE IT

With apologies to Peavey and Antares, there’s something about this concept that makes you want to dismiss it. C’mon . . . Auto-Tune on a guitar? Taking out-of-tune strings and fixing them? Perfect intonation no matter where you play? Add-on software packs? What the heck does this have to do with my PRS or Les Paul or Strat? Now, those are real guitars!

Except for one thing: the AT-200 is a real guitar (Fig. 8). Unless you notice the 8-pin connector, you’d never know there was anything different about this guitar. Play it, and it plays like a guitar . . . and it feels and looks like a guitar. All the magic is “under the hood,” and you don’t know it’s there until you start playing. The ease of use is off the hook. If it takes you more than a minute or two to be up and running, you might want to consider a different career.

Fig. 8: The AT-200 doesn’t exactly look like a high-tech marvel . . . which is one of its strengths.

Yes, it’s priced so that those getting serious about guitar can afford an AT-200, and derive the benefits of not having to hassle with tuning or worry about intonation.
But I suspect a lot of veterans will add this to their guitar collection as well. After I got used to Robot tuning, it was always strange to go back to guitars where I just couldn’t push a button and be in tune. After getting used to the AT-200, it’s disorienting to go back to guitars that don’t have perfect intonation. Nor is it like vocals, where using Auto-Tune arbitrarily to create perfect pitch takes the humanity out of the performance; with guitar chords, out-of-tune notes just sound . . . well, wrong, and well worth fixing. And if you want to bend and slide, go ahead—the correction will wait in the wings until you want it again.

Overall, this is a surprising, intelligent, and novel application of technology with extraordinarily practical consequences. After seeing prototypes, I expected to think the AT-200 was clever; I didn’t expect to think it was brilliant . . . but it is.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. Check out this collection of tips and techniques from one of today’s most prolific UK Garage producers

by Jeremy Sylvester

Garage has been around since the 1990s, but it continues to influence other EDM genres as well as retain its own following. Whether you’re interested in creating “pure” Garage music using UK Garage loops or want to incorporate some of its elements in other forms of music, the following tips should help get you off to a good start.

THE GROOVE

Drums are the backbone of any Garage production, and a solid drum groove is the most essential element in any UK Garage track. Before getting into choosing your sounds, remember that timing is everything. Shuffling, swung beats give UK Garage its unique stamp—so when building your drum pattern, it’s important to set your quantize/swing groove to between 50-56% (Fig. 1). This will set the tone for the rest of the elements added later on.

Fig. 1: Setting a little swing for MIDI grooves or quantized audio grooves gives more of a Garage “feel.” This screen shot shows the Swing parameter in Logic Pro's Ultrabeat virtual drum machine.

BUILDING THE DRUM KIT

Creating good drum patterns requires a good drum kit, so let’s start with the kick drum. Spend time searching for good sounds; for 4x4 Garage tracks, a strong, punchy kick drum that’s naturally not too bass-heavy, and with some midrange presence, is the perfect starting point for any groove. This will leave some headroom for when you start to look for bass sounds to create bass line patterns later on; you don’t want the kick to take over the low end completely.

Once you’ve decided on a kick (of course, with DAWs you can always change this later on), search for a nice crispy clap. If it has too much sustain, try to take some release off it and shorten its length. You want it to sound quite short and sharp, but not too short, as you still want to hear its natural sound.

Next, begin to add all of the other elements for your pattern. It’s very important to keep the groove simple, with enough space in the groove to add all your other sounds later on. Lots of people make the mistake (myself included!) of over-complicating the drums—as they say, less is more. The key is to make sure every element of your pattern has a distinct role, so that every drum element is there for a reason. When programming drums, imagine you are a “drummer” and concentrate on how a drummer plays to help you construct patterns. Another good tip is to make several patterns, all slightly different, to give your overall groove some variety. Also, keep your hi-hats neat and tidy; you don’t want them to sound undefined and “soupy.”

PLACEMENT AND EFFECTS

Keep the kick drum and other bass parts in mono, with other drum elements (such as hi-hats) in stereo to give the groove a nice spread. Maintaining bass frequencies in mono is particularly important if you ever expect a track to appear on vinyl. Resist temptation, and keep effects on the drums to a bare minimum. Too many effects (such as reverb) can drown out the groove and make it too wet, which sacrifices the energy of the drums. This will be very noticeable over a club sound system, more so than in the studio. Additionally, try playing around with the pitch of the sounds (Fig. 2). De-tuning kick drums or percussive elements of your groove will bring another dimension to your pattern and completely change the overall vibe.

Fig. 2: Most samplers and drum modules (this screen shot shows Native Instruments' Battery) provide the option to vary pitch for the drum sounds.
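Before moving on to melodic elements, it’s worth making that 50-56% swing figure concrete. Swing delays every second 16th note within each 8th-note pair: at 50% the notes are evenly spaced, and at 56% the off-beat 16th lands 56% of the way through the pair. A minimal sketch of the arithmetic (not any particular DAW’s algorithm):

```python
def apply_swing(onset_beats, swing=0.56, grid=0.25):
    """Shift every second 16th note later within its 8th-note pair.
    swing=0.50 leaves the groove straight; 0.56 is a typical UKG feel.
    onset_beats: note start in quarter-note beats; grid: a 16th = 0.25."""
    pair = 2 * grid                      # one 8th-note pair of 16ths
    pos = onset_beats % pair
    if abs(pos - grid) < 1e-6:           # this is the "off" 16th
        return onset_beats - grid + swing * pair
    return onset_beats

# Straight 16ths across beat one: 0.0, 0.25, 0.5, 0.75
print([round(apply_swing(t), 3) for t in (0.0, 0.25, 0.5, 0.75)])
# -> [0.0, 0.28, 0.5, 0.78] at 56% swing
```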
CHORDS, STABS, AND MELODIES

As well as the drum groove pattern, another important element of UK Garage is the melodic structure. If, like many people, you don’t play keyboard, then you can always use one-shots/hits to help you. One-shots can be in the form of short keyboard chord hits, bass notes, percussive sounds, or synth stabs. When adding melodic elements to create a pattern, listen to the drum groove you have and work with it, not against it. The rhythmic pattern of your melody must complement the groove; in other words, the drum pattern and melody line must “talk to each other” and the melody must become part of the groove.

Try using lowpass filters automated by an envelope, as well as effects, to manipulate and create movement with the sound; then add reverb for depth and warmth. Use parameter controls over velocity maps, for example, to control cutoff and decay and add variations. This will create shape, and adding some compression will really bring out some new life in your sound.

If you are going for a rhythmic UK Garage 4x4 style, space is important. When I mentioned above that “less is more,” it really means something here. Picture a melody in your head and imagine how people will be “dancing” to it. This will determine the way you create your melodic groove pattern. UKG melodic patterns tend to be “off-beat” grooves, not straight-line groove patterns. This is what gives Garage its unique style and vibe. When choosing sounds, try to look for rich harmonic sounds; some good options are obscure jazzy chords, deep house chord stabs, or even sounds sampled from classic keyboard synths (such as Korg’s M1 keyboard for those classic organ and house piano patches).

ARRANGEMENT

When arranging your song, always keep the DJ in mind and imagine how he/she will be mixing your track within their DJ set. The intro is very important for DJs, as this allows them enough room to mix your track into another. Make your arrangement progress in 16-bar sections, so the DJ and the clubber know when to expect changes within the song. Within each of these sections, some elements of the groove may consist of 1-, 2-, 4-, or 8-bar repeating patterns. These elements tend to move around by adding, removing, or altering every four or eight bars.

Breakdowns tend to be in the middle of the track, so if you have a track that is six minutes long, you can drop the breakdown around the three-minute mark. There is no hard and fast rule to this, so use your imagination; this is intended only as a guide. You could also have a mini-breakdown on either side of this, for instance, right after the intro and just before the first major section of the song when everything is in. Be imaginative, and experiment with different arrangement ideas. You could start with drums, then lead into some intro vocals and then the mini drop, or you could start with a non-percussive intro that builds up into a percussive drum section and then goes into the song’s main section; it’s totally up to you and depends on the elements you have within your song. It’s also a good idea to finish the final section of your song with drums. This is something a DJ really likes, as it once again allows them to start mixing in another track within their DJ set.

VOCALS AND VOCAL CHOPS

Garage is known for its very percussive vocal chops; this is an essential part of the genre, especially when you are doing “dub” versions.
You can use various kinds of MIDI-based samplers and software instruments to do this. Back in the day, Akai samplers were very popular—you would chop up and edit sounds within the device, map them across a keyboard, and play them manually. Nowadays there are many different ways of doing this, with instruments such as Ableton Live’s Simpler or Logic’s EXS24 being the most popular. Another option is to slice a file (e.g., the REX format; see Fig. 3), then map the individual slices to particular notes.

Fig. 3: Slicing a file and mapping the slices to MIDI notes makes it easy to re-arrange and play vocal snippets on the fly, or drop them into a production. Furthermore, you can often re-arrange slices within the host program. In this screen shot from Reason, the original REX file mapping is on the right; the slice assignments have been moved around in the version on the left.

Play around with vocals by chopping up samples at every syllable. You could have a short vocal phrase of 5-6 words, but once it’s chopped up and edited you can create double or even triple the number of samples; this gives you the flexibility to manipulate the phrase in any way you want, even completely disguising the original vocal hook. Map out these vocals across a keyboard or matrix editor, and have fun coming up with interesting groove vocal patterns over your instrumental groove pattern. Also try adding effects and filters, and play around with the sound envelopes in much the same way you would with the one-shot chord sounds (as explained earlier). Treat the vocals as a percussive element of the track, but listen to the melody and lyrical content so it still makes sense with what the track is about. It’s a good idea to program 4-5 variations from which you can choose.

I hope you find these tips useful; now go make some great music!

This article is provided courtesy of Producer Pack, who produce a wide variety of sample and loop libraries. This includes the Back to 95 Volume 3 library from the article's author, Jeremy Sylvester.
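As a postscript to the vocal-chop section: if you’d rather experiment outside a sampler, the slice-at-every-syllable idea is easy to rough out in code. The sketch below (file name and threshold are placeholders you’d tune by ear) finds quiet gaps in a phrase and writes each chunk to its own file, ready to map across a keyboard.

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read('vocal_phrase.wav')      # hypothetical source file
if audio.ndim > 1:
    audio = audio.mean(axis=1)               # fold to mono

# Crude syllable detection: find gaps where a short RMS window stays quiet.
win = int(0.02 * sr)                          # 20 ms windows
rms = np.sqrt(np.convolve(audio**2, np.ones(win) / win, mode='same'))
quiet = rms < 0.02                            # threshold is a guess

# Slice boundaries = transitions from quiet back to loud
starts = np.flatnonzero(quiet[:-1] & ~quiet[1:]) + 1
starts = np.unique(np.concatenate(([0], starts, [len(audio)])))
for i in range(len(starts) - 1):
    sf.write(f'slice_{i:02d}.wav', audio[starts[i]:starts[i + 1]], sr)
```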
  14. This highly cost-effective controller makes an auspicious debut

$299.99 MSRP, $199.99 street
samsontech.com

by Craig Anderton

Keyboard controllers are available in all flavors—from “I just want a keybed and a minimal hit on my wallet” to elaborate affairs with enough faders and buttons to look like a mixing console with a keyboard attached. Samson’s Graphite 49 falls between those two extremes—but in terms of capabilities leans more toward the latter, while regarding price, leans more toward the former. It’s compact, slick, cost-effective, and well-suited to a wide variety of applications onstage and in the studio.

OVERVIEW

There are 49 full-size, semi-weighted keys, and in addition to velocity, Graphite 49 supports aftertouch (it’s quite smooth, and definitely not the “afterswitch” found on some keyboards; see Fig. 1).

Fig. 1: Applying and releasing what seemed like even pressure to me produced this aftertouch curve.

Controllers include nine 30mm faders, eight “endless” rotary encoders, 16 buttons, four drum pads, transport controls, octave and transpose buttons, mod wheel, and pitch bend (Fig. 2).

Fig. 2: There are dedicated left-hand controls for octave, transpose, pitch bend, and mod wheel.

Connectors consist of a standard-sized USB connector, 5-pin MIDI out, sustain pedal jack, and a jack for a 9V adapter. The adapter is generally not needed, as Graphite 49 is bus-powered; but if you’re using it with something like an iPad and Camera Connection Kit that offers reduced power, an external tone module, or other hardware where you're using the 5-pin MIDI connector instead of USB, you’ll need an AC adapter.

One question I always have with attractively-priced products is how they’ll hold up over time. This is of course difficult to test during the limited time of having a product for review, but apparently UPS decided to contribute to this review with some pro-level accelerated life testing. The box containing Graphite 49 looked like it had been used as a weapon by King Kong (against what, I don’t know); it was so bad that the damage extended into the inner, second box that held Graphite 49. Obviously, the box had not only been dropped, but smashed into by something else . . . possibly a tractor, or the Incredible Hulk. But much to my surprise, Graphite worked perfectly as soon as I plugged it in. I did take it apart to make sure all the ribbon connectors were seated (and took a photo while I was at it—see Fig. 3), but they were all in place. Pretty impressive.

Fig. 3: Amazingly, Graphite 49 survived UPS’s "accelerated life testing."

OPERATIONAL MODES

Graphite 49 is clearly being positioned as keyboard-meets-control surface, and as such, offers four main modes. Performance mode is optimized for playing virtual synthesizers or hardware tone modules, and gives full access to the various hardware controllers. Zone mode has a master keyboard orientation, with four zones to create splits and layers; the pitch bend, modulation, and pedal controllers are in play, but not the sliders, rotaries, and button controllers. Preset mode revolves around control surface capabilities for several popular programs, and is a very important feature. Setup mode is for creating custom presets or doing particular types of edits.

There’s a relationship among these modes; for example, any mode you choose will be based on the current preset. So, if you create a preset with Zone assignments and then go to Performance mode without changing presets, the Performance will adopt Zone 1’s settings.
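Zone mode is essentially a lookup table from key ranges to channel/transpose pairs, which is easy to picture in code. A toy sketch of the idea (my own illustration; Graphite 49’s firmware obviously isn’t Python, and its actual zone parameters may differ):

```python
# Each zone: (low key, high key, MIDI channel, transpose in semitones).
# A hypothetical bass/pad split with an overlapping layer in the middle.
ZONES = [
    (36, 59, 0, -12),   # lower half -> channel 1, down an octave
    (55, 84, 1,   0),   # upper half -> channel 2
    (55, 59, 2,  12),   # overlap layered an octave up on channel 3
]

def route(note):
    """Return the (channel, note) events a single key press generates."""
    return [(ch, note + semi) for lo, hi, ch, semi in ZONES
            if lo <= note <= hi]

print(route(57))  # a key in the overlap zone fires all three layers
```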
PRESET MODE: DAW CONTROL

Although many keyboards now include control surface capabilities, Graphite 49 provides a lot of options at this price in the form of templates for popular programs (Fig. 4). Unfortunately, though, the control surface capabilities are under-documented; the manual doesn’t even mention that Graphite 49 is Mackie Control-compatible. However, it works very well with a variety of DAWs, so I’ve written a companion article (don't miss it!) with step-by-step instructions for using Graphite 49 and similar Mackie Control-compatible devices with Apple Logic, Avid Pro Tools, Ableton Live, Cakewalk Sonar, Propellerhead Reason, MOTU Digital Performer, Sony Acid Pro (also Sony Vegas), Steinberg Cubase, and PreSonus Studio One Pro. (I found that Acid Pro and Vegas didn’t recognize Graphite 49 as a Mackie Control device, but they both offer the option to choose an “emulated” Mackie Control device, and that works perfectly.)

Fig. 4: Graphite 49 contains templates for multiple DAWs, including Ableton Live.

The faders control level, the rotaries edit pan, and the buttons usually control solo and mute, but with some variations based on how the DAW’s manufacturer decided to implement Mackie Control (for example, with Logic Pro the button that would normally choose solo controls record enable). The Bank buttons change the group of 8 channels being controlled (e.g., from 1-8 to 9-16), while the Channel buttons move the group one channel at a time (e.g., from 1-8 to 2-9), and there are also transport controls. (Note that as Pro Tools doesn’t support Mackie Control, you need to select HUI mode, which doesn’t support the Bank and Channel shifting.)

Reason works somewhat differently, as Graphite 49 will control whichever device has the focus—for example, if SubTractor is selected, the controls will vary parameters in SubTractor, and if the Mixer 14:2 is selected, then Graphite 49 controls the mixer parameters the same way it controls the mixers in other DAWs. However, Reason 6, which integrates the “SSL Console” from Record, treats each channel as its own device; therefore Graphite 49 controls one channel at a time with that particular mixer.

I tested all the programs listed above with Graphite 49, but there are additional presets for Nuendo, Mackie Tracktion, MK Control (I’m not quite sure what that is), Adobe Audition, FL Studio, and Magix Samplitude. There are also 14 user-programmable presets, and a default, general-purpose Graphite preset. This preset provides a good point of departure for creating your own presets (for example, when coming up with control assignments for specific virtual instruments). The user-programmable presets can’t be saved via sys ex, but 14 custom presets should be enough for most users.

The adoption of the Mackie Control protocol is vastly more reassuring than, for example, M-Audio’s proprietary DirectLink control for their Axiom keyboards, which usually lagged behind current software versions. We’ll see whether these presets can be updated in the future, but it seems that the “DAW-specific preset element” relates mostly to labeling what the controls do, as the Mackie protocol handles the inherent functionality. There’s also a certain level of “future-proofing” because you can create your own presets, so if some fabulous new DAW comes out in six months, with a little button-pushing you’re covered.
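If you’re curious what “Mackie Control-compatible” means on the wire, it’s ordinary MIDI with agreed-upon meanings. As commonly documented by third-party implementations (the official spec is licensed, so treat the details as conventional wisdom rather than gospel), faders travel as 14-bit pitch-bend messages, one MIDI channel per fader, and transport buttons as note-ons. A sketch that eavesdrops on such a device using Python’s mido library; the port name is hypothetical.

```python
import mido

# Transport note numbers as commonly documented for Mackie Control
TRANSPORT = {91: 'rewind', 92: 'fast-fwd', 93: 'stop', 94: 'play', 95: 'record'}

# Port name is hypothetical; list real ones with mido.get_input_names().
with mido.open_input('Graphite 49 MIDI 2') as port:
    for msg in port:
        if msg.type == 'pitchwheel':            # fader move, one channel each
            print(f'fader {msg.channel + 1}: {msg.pitch}')
        elif msg.type == 'note_on' and msg.note in TRANSPORT:
            print('button:', TRANSPORT[msg.note])
```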
CREATING YOUR OWN PRESETS

Editing custom assignments follows the usual cost-saving arrangement of entering Setup mode, then using the keyboard keys (as well as some of the hardware controls) to enter data. Thankfully, the labels above the keys are highly legible—it seems that in this case, the musicians won out over the graphic designers. The relatively large and informative display (Fig. 5) is also helpful.

Fig. 5: When you adjust various parameters, the display gives visual confirmation of the parameter name and its value.

Although I’d love to see Samson develop a software editor, the front-panel programming is pretty transparent.

CONTROLS AND EDITS

Let’s take a closer look at the various controls, starting with the sliders (Fig. 6).

Fig. 6: There are nine 30mm sliders. While 30mm is a relatively short throw, the sliders aren’t hard to manipulate, and their size contributes to Graphite 49’s compact form factor.

One very important Graphite 49 feature is that there are two virtual banks of sliders—essentially doubling the number of physical controls. For example, the sliders could control nine parameters in a soft synth, but then with a bank switch, they could control another nine parameters. Even better, the rotaries and buttons (Fig. 7), as well as the pads, also have two banks to double the effective number of controls.

Fig. 7: The rotary controls are endless encoders. Note that there are 16 buttons, and because there are two banks, that’s 32 switched controls per preset.

Speaking of pads (Fig. 8), these provide comfortably big targets that respond not only to velocity, but also to aftertouch.

Fig. 8: The pads are very useful for triggering percussion sounds, as well as repetitive sounds like effects or individual notes.

Rather than describe all the possible edits, some of the highlights are choosing one of seven velocity curves as well as three fixed values (individually selectable for the keyboard and pads), reversing the fader direction for use as drawbars with virtual organ instruments, assigning controls to the five virtual MIDI output ports, changing the aftertouch assignment to a controller number, and the like.

Don’t overlook the importance of the multiple MIDI output ports. In its most basic form, this allows sending the controller data for your DAW over one port while using another port to send keyboard notes to a soft synth—but it also means that you can control multiple parameters in several instruments or MIDI devices within a single preset.

Finally, the bundled software—Native Instruments’ Komplete Elements—is a much-appreciated addition. I’m a huge fan of Komplete, so it was encouraging to see that NI didn’t just cobble together some throwaway instruments and sounds; Elements gives you a representative taste of what makes the full version such a great bundle. A lot of “lite” versions are so “lite” they don’t really give you much incentive to upgrade, but Elements will likely leave you wanting more, because what is there is quite compelling.

CONCLUSIONS

I’ve been quite impressed by Graphite 49, and very much enjoy working with it. The compact form factor and light weight make it very convenient to use in the studio, and UPS (along with the keyboard’s inherent capabilities) proved to my satisfaction that Graphite 49 would hold up very well for live performance. During some instances when my desktop was covered with devices I was testing, I’ve simply put Graphite 49 on my lap.
There are few, if any, keyboard controllers that could fit on my lap so easily while offering this level of functionality.

My only significant complaint is that the documentation could be more in-depth—not necessarily because there’s a problem with the existing documentation, but because I suspect that Graphite 49’s cost-effective pricing will attract a lot of newbies who may not be all that up to speed on MIDI. Veterans who are familiar with MIDI and have used controllers will have no problem using Graphite 49, but it would be a shame if newbies didn’t take full advantage of Graphite 49’s considerable talents because they didn’t know how to exploit them.

Samson is a new name in controllers; my first experience with their line was Carbon, which I also reviewed for Harmony Central. Its iPad-friendly design and exceptionally low cost got my attention, but Graphite 49 gives a clearer picture of where the company is heading: not just inexpensive controllers, but cost-effective ones that are suitable for prosumer and pro contexts. Graphite 49’s full-size keys, compact footprint, comfortable keybed, control surface capabilities, and pleasing aesthetic design are a big deal—and at this price, you’re also getting serious value. I’d be very surprised if Samson doesn’t have a hit on their hands.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  15. I like things that solve problems . . .

$32.99 MSRP, $19.99 street
www.planetwaves.com

by Craig Anderton

Reviewing a guitar strap may seem ridiculous, but this isn’t your normal guitar strap—here’s why. Have you ever had a strap slip off an end pin? I sure have. Fortunately, thanks to quick reflexes developed in my errant youth by playing excessive amounts of pinball, I was usually able to grab the guitar before it went crashing to the floor. Except twice: Once when it happened to a blonde Rickenbacker 360 12-string, which was heartbreaking, and once with a Peavey Milano. (Fortunately, it landed on its end pin and survived unscathed. Then again, this guitar has survived Delta Airlines’ baggage handlers on transcontinental trips, so it has proven its inherent indestructibility.)

Since then, I’ve tried various arcane ways of holding straps to end pins—the kind of strap that’s screwed in between the end pin and guitar, custom straps, and the like. They all worked, but had some kind of limitation—usually that it was hard to remove the strap to use on a different guitar, or before slipping the guitar in its case.

IT’S A LOCK

Then I got turned on to the Planet Waves Planet Lock Guitar Strap. I don’t know who at the company thinks up these weirdly genius things, but it’s pretty cool. Each end of the strap has an open and closed position. In the open position, a rotating disc exposes an opening (Fig. 1). The large hole fits over the end pin head; then you pull on the strap end so that the end pin’s bevelled section fits in the small hole.

Fig. 1: The strap end in the open position.

Rotating a clickwheel/thumbwheel rotates the disc around the end pin’s bevelled section (Fig. 2), gripping it firmly. The disc doesn’t have to rotate around it completely in order to be effective.

Fig. 2: The strap end in the closed position.

The clickwheel has ratchets to hold it in place. If you want to remove the strap end, you simply push a release button; this allows rotating the disc to the open position so you can slide the strap off the end pin.

ADDITIONAL OPTIONS

There are several variations on the strap I reviewed, including multiple styles (Fig. 3).

Fig. 3: Different Planet Lock strap styles.

There’s also a slightly more costly polypropylene version, and a Joe Satriani model. Although the strap works with most end pins, it doesn’t work with all of them. If you have incompatible end pins, Planet Waves will send you a set of guaranteed-to-work end pins (black, gold, or silver; see Fig. 4) if you send them a copy of your store receipt and $2.50 shipping/handling.

Fig. 4: Universal end pins for the Planet Lock strap.

These are also available for sale individually for $7.99 street if you have multiple guitars with incompatible end pins, and want to use the strap with them. And because these end pins weren’t designed to work exclusively with the Planet Lock strap, they’ll work with other straps as well.

INDEED . . . IT’S A LOCK

For $20 there’s not much to complain about, except that the strap lacks heavy padding (and also, that it didn’t exist when I bought my Rickenbacker). However, the 2” width distributes weight evenly, and I haven’t found it tiring to wear for hours at a time. But more importantly, I don’t have to worry about the guitar turning into a runaway and crashing to the floor.

Craig Anderton is Editor Emeritus of Harmony Central.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  16. Ignore Casio at your own peril.

I’m going to start this review with a story about the Peavey DPM-3 keyboard. Say what?!? Well, you’ll see why. The DPM-3 came out in 1989 and I took an immediate liking to it, in large part because it had non-volatile flash RAM so it could store its own samples (and the RAM was expandable to a whopping 1 megabyte! Well, it was 23 years ago). But it also had a very “analog” filter, and could transpose sounds over a wide range and still sound musical.

Shortly after it was introduced, I stopped by the offices of Keyboard magazine to visit a friend, and saw a DPM-3. “Hey, are you doing a review of that?” I asked. My friend laughed, and basically said “Yeah, right. It’s a Peavey keyboard. C’mon, give me a break.” Or as someone else said to me, “They make amps for country musicians, I’m sure they don’t know how to make a synth.”

Fast forward six months . . . same friend, same Keyboard offices. He asked if I’d been doing any music lately, so I brought out a copy of a new song I’d recorded. He listened to it, then said “Wow, that sounds incredible!! What synthesizer did you use?”

You guessed it. Indeed, you can’t judge a book by looking at its cover. So if you think Casio can’t make a good synth, put tape over the name on the back, then get back to me.

Casio has made quite the comeback, which became abundantly clear when I reviewed their WK-7500 workstation. I’d seen it at trade shows, thought it was pretty cool, and assumed the review would basically say it was a good consumer keyboard with a few pro features folded in. Instead, it’s more like a pro keyboard with some good consumer features folded in. It hits on all cylinders: solid feature set, transparent operating system, arranger options (or what I call an "audio for video soundtrack generator"), a variety of fine sounds, recording/sequencing options, and a $499 street price. Amazing. Of course, some people on the internet ridiculed it, and in true internet fashion, the most vocal critics had never tried it. “Oh, it couldn’t possibly be any good, it’s a Casio.” But then people who actually used it started to chime in, which helped set the stage for NAMM 2012.

Before the show, Casio hyped the introduction of two new synths. But by now, people were intrigued. Casio had pulled off a bunch of seriously cool low-cost instruments, and there was curiosity about the next step. Well, here's one of them: the XW-P1 (the XW-G1, a more groove-oriented keyboard, isn't out yet). [Edit: It has hit the stores since this post was written.] If you want to do some prep work, check out Casio's XW-P1 landing page for background info and specs. The list price is $699.99, but the street price is typically $500—if you can find one. I was fortunate to get one of the very first units, and it appears the “retail pipeline” hasn’t been filled yet. However, Guitar Center seems to have them, as shown on their XW-P1 web page with a “just arrived” banner.

As is traditional with pro reviews, we’ll start off with a photo tour of the unit. But I have to admit that I’ve already plugged in headphones and poked around the sounds. The XW-P1 can adopt an "analog" or "digital" character, and the step sequencer is extremely cool. It’s a very versatile synth that’s optimized for live performance, and it’s easy to get around on the buttons. And while the emphasis is on a combination of vintage and cutting-edge sounds, you can still find solid electric pianos, drums, and a wonderful drawbar organ.
Of course, like any other product it’s not perfect; I could hear some stepping when sweeping the filter cutoff, the headphone jack is on the back, and the pitch bend wheel is just a bit smaller than I’d like—although its positioning with respect to the mod wheel makes it easy to do dual-wheel modulation moves that are killer for leads, so I can't complain too much.

Most of all, though, my first impression is that this is a fun and inspiring keyboard. It was a major challenge to resist the temptation to start laying down tracks when I hit some really grooving sequences, but I was able to maintain my professional demeanor and keep typing . . . I’ll have time enough to play when we get to the audio examples, which I'm pretty sure will turn a few heads (assuming I have the discipline to stop playing and start recording).

So here’s a preview of what to expect from the photo tour I’ll be posting tomorrow. I'm really looking forward to digging into this instrument in true pro review fashion—see you then!

Note: Posts that contain attachments, like audio examples or patches, are shown with blue text to make it easier to find them as you scan through the thread.
  17. Go ahead—shake the floor (and the rafters)!

by Craig Anderton

There’s a saying that “You can never be too thin or too rich.” That may work for models, but it’s only half-right for keyboard synth bass: Rich is good, but thin isn’t. Looking for a truly corpulent bass sound that’s designed to dominate your mix? These techniques will take you there.

LAYERS GOOD, LAYERS BAD

A common approach to crafting a bigger sound is to layer slightly detuned oscillators. However, that can actually create a thinner sound, because slight detunings cause volume peaks but also volume valleys. This not only diffuses the sound, but often makes it hard for a bass to sit solidly in the mix because of the constant sonic shapeshifting. Following are some layering approaches that do work.

Three oscillators with two pitch detunings. Pan your main oscillator to center, and set it to be the loudest of the three by a few dB. Pan the other two oscillators somewhat left and right of center, and detune both of them four cents sharp. Yes, this will skew the overall sound a tiny bit sharp; think of it as the synth bass equivalent of “stretch” tuning.

Dual oscillators with detuning. You can get away with detuning more easily if there are only two oscillators, as the volume peaks and valleys are more predictable. Pan the two oscillators slightly left and right of center, set them to the same approximate level, tune one oscillator four cents sharp, and tune the other four cents flat. If that’s still too diffused, pan them both to center, tune one to pitch, detune the other one eight cents sharp, and reduce the level of the detuned oscillator by 3 to 6dB.

Three oscillators with multiple detunings. If you must shift one oscillator sharp and one flat in a three-oscillator setup, consider mixing the two shifted oscillators somewhat lower (e.g., -3 to -6dB) than the on-pitch oscillator panned to center. This will still give an animated sound, but reduce any diffusion.

Three oscillators with layered octaves. This is one of the most common Minimoog bass patches (Fig. 1), and yes, it sounds very big. Adding a slight amount of detuning to the lowest and highest oscillators thickens the sound even more, as this simulates the drift of a typical analog synthesizer.

Fig. 1: This Arturia Minimoog V shot shows an archetypal Minimoog patch, with three oscillators set an octave apart via the Range controls. Note that the lowest and highest oscillators are tuned a bit off-pitch to add more sonic animation.

Two oscillators with layered octaves. While this doesn’t sound quite as huge as three oscillators with layered octaves, removing the third oscillator creates a tighter, more “compact” sound that will cede some low-end territory to other instruments (e.g., kick drum).

Sub-bass layer. Drum ’n’ bass fans, this one’s for you! Layer a triangle wave one octave below any other waveforms you’re using. (You can also try a sine wave, but at that low a frequency, a little harmonic content helps the bass cut through a mix better.) For a really low bass end, layer three triangle waves with two tuned to the same octave (offset one by +10 cents), and the third tuned one octave lower and offset by -10 cents. Sub-bass patches also are excellent candidates for added “punch,” which provides the perfect segue to . . .

PUNCH!

There are two main ways to add punch to a synth sound.

Percussive punch. This requires adding a rapid amplitude decay from maximum level to about 66% of maximum level over a period of 20-25ms (Fig. 2, top).

Fig. 2: The upper envelope generator picture from Cakewalk’s Rapture shows a quick percussive decay that adds punch. The lower envelope setting achieves a more sustained punch effect by kicking the envelope full on for a couple dozen milliseconds.
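Two of the numbers above are worth unpacking. A cent is 1/100 of a semitone, so a detune ratio is 2 raised to (cents/1200); and the percussive-punch envelope just drops from full level to about 66% over 20-25 ms. A quick sketch of both (my own back-of-the-envelope code, not any synth’s actual implementation):

```python
def cents_to_ratio(cents):
    """100 cents = 1 semitone, so the frequency ratio is 2^(cents/1200)."""
    return 2 ** (cents / 1200)

print(cents_to_ratio(4))    # ~1.0023: +4 cents barely shifts the pitch,
                            # but beats slowly against an in-tune oscillator

def percussive_punch(t_ms, punch_ms=22.0, floor=0.66):
    """Amplitude envelope: start at full level, decay linearly to ~66%
    of maximum over 20-25 ms, then sustain (Fig. 2, top)."""
    if t_ms >= punch_ms:
        return floor
    return 1.0 - (1.0 - floor) * (t_ms / punch_ms)
```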
To emphasize the percussiveness even further, if a lowpass filter is in play, give its cutoff a similarly rapid decay. However, for the filter, bring the envelope down from maximum to about 50% of maximum over about 20-25ms.

Sustained punch. This emulates the characteristics of the Minimoog’s famous “punchy” sound. (Interestingly, the amplitude envelopes in Peavey’s DPM-3 produced the same kind of punch; after I described why this phenomenon occurred in Keyboard magazine, Kurzweil added a “punch” switch to their keyboards to create this effect.) Sustained punch is simple to create with most envelope generators: Program an amplitude envelope curve that stays at maximum for about 20-25ms (Fig. 2, bottom). This is too short for your ear to perceive as a “sustained sound,” but instead comes across as “punch.”

PSEUDO-HARD SYNC

If your soft synth doesn’t do hard sync, there’s a nifty trick that gives a very similar sound—providing you can add distortion following the filter section. Fig. 3 shows Rapture set up for a hard sync sound on one of its elements.

Fig. 3: Feeding a lowpass filter with a reasonable amount of resonance through distortion can create a sound that resembles hard sync. Note the setting of the Cutoff, Reso(nance), and BitRed controls (Bit Reduction is set for a Tube distortion effect, as shown in the label above the control). The envelope shown toward the bottom sweeps the filter cutoff from high to low.

As it sweeps, the filter’s resonant frequency distorts, producing a hard-sync-like sound. The crucial parameter here is resonance; too little and the effect disappears, too much and the effect becomes overbearing . . . not that there’s necessarily anything wrong with that . . .

CHOOSING THE RIGHT WAVEFORM

For most big synth bass sounds, a sawtooth wave passed through lowpass filtering (to tame excessive brightness) is the waveform of choice. As a bonus, if you kick the lowpass filter up a tad more, it brings in higher harmonics that add a “brash” quality. For a “rounder” sound that’s more P-Bass than synth bass, try a pulse waveform instead (Fig. 4).

Fig. 4: Remember Native Instruments’ Pro-53? It's one of many soft synths that provides pulse waveforms.

Better yet, a pulse width control determines whether the pulse is narrow or wide. I prefer narrow pulses (around a 10-15% duty cycle), but wider pulse widths can also be effective. The same layering techniques mentioned earlier work well with pulse waves, but also experiment with layering a combination of pulse and sawtooth waves. This produces a timbre somewhere between “tough” and “round.”

Triangle and sine waves have a hard time cutting through a mix because they contain so few harmonics. If you want a very muted bass sound, use a waveform with more harmonics like sawtooth or pulse, then close a lowpass filter way down to reduce the harmonic content. This provides a rougher, grittier sound due to the residual harmonics that remain despite the filtering. However, while triangle waves aren’t necessarily great solo performers, they’re excellent for layering with pulse and sawtooth waveforms to provide more low-end girth.

THE ALL-IMPORTANT MOD WHEEL

Just because you’re playing in the lower registers doesn’t let you off the hook for adding as much expressiveness as possible.
Some programmers get lazy and do the default move of programming the mod wheel to add vibrato, but that’s of limited use with bass. If you want vibrato, tie it to aftertouch, and reserve the mod wheel for parameters where you need more control over the sound. Creative use of modulation could take up an article in itself, but these quick tips on useful modulation targets will help you get started.

Filter cutoff. This lets you control the timbre easily. If the filter is being modulated by an envelope, assigning the mod wheel to filter cutoff (Fig. 5) can also create a more percussive effect when you lower the cutoff frequency. Try negative modulation, so that rotating the mod wheel forward reduces highs.

Fig. 5: Cubase’s Monologue synth makes it easy to assign the mod wheel to filter cutoff as one of the filter modulation sources (circled in red).

Volume envelope attack. To transform a sound’s character from percussive and punchy to something “mellower,” edit the mod wheel to increase attack time as you rotate it forward.

Layer level. Assign the mod wheel to bring in the octave-lower layer of a sub-bass patch. This pumps up the level and really fills out the bottom end.

Distortion. Yeah, baby! Kick up the distortion for a bass that cuts through the mix like a buzzsaw, then pull back when the sound needs to be more polite.

Resonance. I’m not a fan of highly resonant synth bass sounds (they sound too “quacky” to me), but tying resonance to the mod wheel provides enough control to make resonance more useable.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
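A quick addendum to the mod wheel tips above: “negative modulation” simply means the controller value is scaled by a negative depth before it offsets the target parameter. Here’s a generic sketch of mapping mod wheel (CC 1) data to a cutoff value; the scaling, center value, and use of a 0-127 range are illustrative conventions, not any particular synth’s implementation.

```python
def modwheel_to_cutoff(cc1_value, polarity=-1, depth=1.0, center=64):
    """Map mod wheel (CC 1, 0-127) to a filter-cutoff value.
    polarity=-1 gives the 'rotate forward to reduce highs' response
    suggested above; depth scales how far the wheel can push cutoff."""
    offset = polarity * depth * cc1_value
    return max(0, min(127, int(center + offset)))

# Wheel fully forward with negative modulation slams the cutoff down:
print(modwheel_to_cutoff(127))   # -> 0
print(modwheel_to_cutoff(0))     # -> 64 (resting cutoff)
```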
  18. You’ve Recorded the Vocal—But Don’t Touch that Mixer Fader Quite Yet

By Craig Anderton

As far as I’m concerned, the vocal is the most important part of a song: It’s the conversation that forms a bond between performer and listener, the teller of the song’s story, and the focus to which other instruments give support. And that’s why you must handle vocals with kid gloves. Too much pitch correction removes the humanity from a vocal, and getting overly aggressive with composite recording (the art of piecing together a cohesive part from multiple takes) can destroy the continuity that tells a good story. Even too much reverb or EQ can mean more than bad sonic decisions, as these can affect the vocal’s emotional dynamics. But you also want to apply enough processing to make sure you have the finest, cleanest vocal foundation possible—without degrading what makes a vocal really work. And that’s why we’re here.

THE GROUND RULES

Vocals are inherently noisy: You have mic preamps, low-level signals, and significant amounts of amplification. Furthermore, you want the vocalist to feel comfortable, and that too can lead to problems. For example, I prefer not to sing into a mic on a stand unless I’m playing guitar at the same time; I want to hold the mic, which opens up the potential for mic handling noise. Pop filters are also an issue, as some engineers don’t like to use them but they may be necessary to cut out low-frequency plosives. In general, I think you’re better off placing fewer restrictions on the vocalist and having to fix things in the mix rather than having the vocalist think too hard about, say, mic handling. A great vocal performance with a small pop or tick trumps a boring, but perfect, vocal. Okay, now let’s prep that vocal for the mix.

REMOVE HISS

The first thing I do with a vocal is turn it into one long track that lasts from the start of the song to the end, then export it to disk for import into a digital audio editing program. Despite the sophistication of host software, with a few exceptions (Adobe Audition and Samplitude come to mind), we’re not quite at the point where the average multitrack host can replace a dedicated digital audio editor.

Once the track is in the editor, the first stop is generally noise reduction. Sound Forge, Adobe Audition, and Wavelab have excellent built-in noise reduction algorithms, but you can also use stand-alone programs like iZotope’s outstanding RX 2. The general procedure is to capture a “noiseprint” of the noise; then the noise reduction algorithm subtracts that from the signal. This requires finding a portion of the vocal that consists only of hiss, saving that as a reference sample, then instructing the program to subtract anything with the sample’s characteristics from the vocal (Fig. 1).

Fig. 1: A good noise reduction algorithm will not only reduce mic preamp hiss, but can help create a more “transparent” overall sound. This shot from iZotope RX (the precursor to RX 2) shows the waveform in the background that's about to be de-noised, and in the front window, a graph that shows the noise profile, input, and output.

There are two cautions, though. First, make sure you sample the hiss only. You’ll need only a hundred milliseconds or so. Second, don’t apply too much noise reduction; 6-10dB should be enough, especially for reasons that will become obvious in the next section. Otherwise, you may remove parts of the vocal itself, or add artifacts, both of which contribute to artificiality.
Removing the hiss makes for a much more open vocal sound that also prevents “clouding” the other instruments.

DELETE SILENCES

Now that we’ve reduced the overall hiss level, it’s time to delete all the silent sections (which are seldom truly silent) between vocal passages. If we do this, the voice will mask hiss when it’s present, and when there’s no voice, there will be no hiss at all. Some programs offer an option to essentially gate the vocal, and use that as a basis to remove sections below a particular level. While this semi-automated process saves time, sometimes it’s better (albeit more tedious) to remove the space between words manually. This involves defining the region you want to remove; from there, different programs handle creating silence differently. Some will have a “silence” command that reduces the level of the selected region to zero. Others will require you to alter level, like reducing the volume by “-Infinity” (Fig. 2).

Fig. 2: Cutting out all sound between vocal passages will help clean up the vocal track. Note that with Sound Forge, an optional automatic crossfade can help reduce any abrupt transition between the processed and unprocessed sections.

Furthermore, the program may introduce a crossfade between the processed and unprocessed sections, thus creating a less abrupt transition; if it doesn’t, you’ll probably need to add a fade-in from the silent section to the next section, and a fade-out when going from the vocal into a silent section.

REDUCE BREATHS AND ARTIFACTS

I feel that breath inhales are a natural part of the vocal process, and it’s a mistake to get rid of these entirely. For example, an obvious inhale cues the listener that the subsequent vocal section is going to “take some work.” That said, though, applying any compression later on will bring up the levels of any vocal artifacts, possibly to the point of being objectionable. I use one of two processes to reduce the level of artifacts.

The first option is to simply define the region with the artifact, and reduce the gain by 3-6dB (Fig. 3). This will be enough to retain the essential character of an artifact, but make it less obvious compared to the vocal.

Fig. 3: The highlighted section is an inhale, which is about to be reduced by about 7dB.

The second option is to again define the region, but this time, apply a fade-in (Fig. 4). This also may provide the benefit of fading up from silence if silence precedes the artifact.

Fig. 4: Imposing a fade-in over an artifact is another way to control a sound without killing it entirely.

Speaking of fade-ins, they're also useful for reducing the severity of "p-pops" (Fig. 5). This is something that can be fixed within your DAW as well as in a digital audio editing program.

Fig. 5: Splitting a clip just before a p-pop, then fading in, can minimize the p-pop. The length of the fade can even control how much of the "p" sound you want to let through.

Mouth noises can be problematic, as these are sometimes short, “clicky” transients. In this case, sometimes you can just cut the transient and paste some of the adjoining signal on top of it (choose an option that mixes the signal with the area you removed; overwriting might produce a discontinuity at the start or end of the pasted region).
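If you want to batch-prototype the silence-stripping and breath-taming passes above before committing to edits in your audio editor, a few lines of Python get you surprisingly close. All thresholds, fade times, and the region location below are placeholder guesses you’d tune by ear:

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read('vocal_take.wav')   # hypothetical mono vocal take

# Gate: zero out regions whose 50 ms RMS falls below a guessed threshold
win = int(0.05 * sr)
rms = np.sqrt(np.convolve(audio**2, np.ones(win) / win, mode='same'))
gate = np.where(rms < 0.01, 0.0, 1.0)

# Smooth the gate with ~10 ms ramps so transitions become short fades
fade = int(0.01 * sr)
gate = np.convolve(gate, np.ones(fade) / fade, mode='same')
cleaned = audio * gate

# Breath taming: drop a known inhale (placeholder location) by about 6 dB
lo, hi = int(12.00 * sr), int(12.30 * sr)
cleaned[lo:hi] *= 10 ** (-6 / 20)

sf.write('vocal_prepped.wav', cleaned, sr)
```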
PHRASE-BY-PHRASE NORMALIZATION

A lot of people rely on compression to even out a vocal’s peaks. That certainly has its place, but there’s something else you can try first: phrase-by-phrase normalization. Unless you have the mic technique of a K. D. Lang, the odds are excellent that some phrases will be softer than others—not because of intentional, natural dynamics, but as a result of poor mic technique, running out of breath, etc. If you apply compression, the lower-level passages might not be affected very much, whereas the high-level ones will sound “squashed.” It’s better to edit the vocal to a consistent level first, before applying any compression, as this will retain more overall dynamics. If you need to add an element of expressiveness later on that wasn’t in the original vocal (e.g., the song gets softer in a particular place, so you need to make the vocal softer), you can do this with judicious use of automation.

Unpopular opinion alert: Whenever I mention this technique, self-appointed “audio professionals” complain in forums that I don’t know what I’m talking about, because no real engineer ever uses normalization. However, no law says you have to normalize to zero—you can normalize to any level. For example, if a vocal is too soft but part of that is due to natural dynamics, you can normalize to, say, -6dB or so in comparison to the rest of the vocal’s peaks. (On the other hand, with narration I often do normalize everything to as consistent a level as possible, as most dynamics with narration occur within phrases.)

Referring to Fig. 6, the upper waveform is the unprocessed vocal; the lower waveform shows the results of phrase-by-phrase normalization. Note how the level is far more consistent in the lower waveform.

Fig. 6: In the lower waveform, the sections in lighter blue have been normalized. Note that these sections have a higher peak level than the equivalent sections in the upper waveform.

However, be very careful to normalize entire phrases. You don’t want to get so involved in this process that you start normalizing, say, individual words. Within any given phrase there will be certain internal dynamics, and you definitely want to retain them.
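In code terms, phrase-by-phrase normalization is nothing more exotic than computing a separate gain for each hand-picked region. A sketch (region boundaries and targets are placeholders; in practice you’d choose phrases by ear, and as noted above, never normalize individual words):

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read('vocal_prepped.wav')   # mono, from the earlier sketch

# (start sec, end sec, target peak in dBFS) -- placeholder phrase list
phrases = [(0.0, 3.2, -3.0), (3.5, 7.1, -3.0), (7.4, 10.0, -6.0)]

for start, end, target_db in phrases:
    lo, hi = int(start * sr), int(end * sr)
    peak = np.max(np.abs(audio[lo:hi]))
    if peak > 0:
        # Gain that brings this phrase's peak to its own target level
        audio[lo:hi] *= 10 ** (target_db / 20) / peak

sf.write('vocal_normalized.wav', audio, sr)
```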
ARE WE PREPPED YET?

DSP is a beautiful thing: Now our vocal is cleaner, of a more consistent level, and has any annoying artifacts tamed—all without reducing any natural qualities the vocal may have. At this point, you can start doing more elaborate processes like pitch correction (but please, apply it sparingly and rarely!), EQ, dynamics control, and reverb. But as you add these, you’ll be doing so on a firmer foundation.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

  19. Keyboard controller series with advanced hardware-to-software parameter mapping

25-key $329.99 MSRP, $249.99 street
49-key $449.99 MSRP, $349.99 street
61-key $499.99 MSRP, $399.99 street
www.novationmusic.com

By Craig Anderton

It’s not exactly like we’re experiencing The Great Keyboard Controller Shortage of 2012—from basic models with keys ’n’ wheels to sophisticated control surfaces, never has so much been available, from so many, for so little. Yet Novation has jumped into the fray with a new line of keyboard controllers, and they think they can bring something new to the party . . . so let’s see if they’re right.

BASICS

The Impulse series consists of 25-, 49-, and 61-key USB keyboard controllers. All are functionally equivalent, except that the 25-key model doesn’t have room for the full nine-fader control surface, and instead has a single fader. They all have eight assignable “endless” knob encoders, eight backlit drum pads with velocity and pressure, transport controls, pitch and mod wheels, USB and 5-pin DIN MIDI I/O (props to Novation for remembering that 5-pin DIN still matters), and jacks for a sustain switch and an expression pedal.

All units are bus-powered. Not only do you not need an AC adapter, you can’t add an AC adapter. As a result, those with laptops may need to power their computer with an AC adapter if the batteries are running low. If the Impulse is serving as a stand-alone controller, you can use any USB power adapter.

LET'S SEE ACTION

As soon as I started playing the keys, I immediately noticed that the action has been improved—the semi-weighted keyboard action has a little more resistance than the average synth keyboard, but not so much as to detract from the “fly across the keys” appeal of typical synth keyboards. There’s also predictable channel aftertouch and velocity—if I used the same amount of pressure or dynamics, I heard the same results. The blue, backlit LCD has large, readable characters and doesn’t suffer from the lack of a contrast control; the knobs and pads have a positive feel. Two fader caps were a little close to the panel and I could feel some friction; pulling up slightly on the caps solved that. (Note: Upon reading of the issue I had with the two faders—which really was minor enough that I almost didn't mention it—Novation nonetheless took it quite seriously, and said their project manager will work with the factory to make sure this is tested more rigorously.)

The control surface on the 49- and 61-key models includes nine faders, typically for channels and master. Several of the 20 presets are loaded with factory defaults (Basic MIDI Control, Reason, GarageBand, MainStage, Kontakt, FM8, and a few others) but of course, you can create, save, and load your own.

REGARDING SELF-CONTROL

Impulse supports Mac OS X 10.6.8 (32/64-bit) and Lion 10.7.2 or higher, as well as Windows XP SP3 (32-bit) and 7 (32- or 64-bit). In theory, Vista isn’t supported, yet I checked out the system with 64-bit Vista and everything worked as expected. Of course I’m not suggesting you go against Novation’s recommendations (and if you do, you’re on your own), but this indicates to me that they’re pretty conservative in how they spec system requirements.

The keyboard is class-compliant so you don’t need drivers, but Novation’s software is necessary to run Automap, which automatically and intelligently correlates hardware controllers to virtual effect, instrument, and DAW parameters.
Although Impulse includes the Automap 4 application on its bundled DVD-ROM, I of course checked the web site for a newer version, and found Automap 4.2. Installation was painless; just click and go—the latest version also updated the firmware automatically. When setting up the software you can choose templates for any of the following programs that are installed on your computer: Cubase 6, Pro Tools, Live, Sonar X1, Reason, Logic, or “advanced,” which involves general-purpose MIDI control for programs like Studio One Pro or Reaper. Also note that Impulse is HUI-compatible (but not Mackie Control-compatible), so you can vary level, solo, mute, etc. with programs that accept HUI messages. After selecting the VST effects path, I chose setup for Sonar X1.

Using Impulse with Ableton Live offers some additional mojo, as you can launch clips with the percussion pads. In this mode, the pads glow yellow, green, or red depending on whether a clip is available, playing, or recording, respectively. The lights flash if Live is waiting for the specified quantization timing before firing the clip.

AUTOMAP 4

Automap creates a “wrapped” version of your plug-ins (VST, AU, RTAS, and TDM, but not DirectX) so the program can read and edit their parameters. The setup program walks you through setting up your DAW with Automap, and Novation makes the process transparent and automatic. Automap can correlate the eight rotary encoders to processor and instrument parameters. Note the transport controls below the knobs.

When I started using Automap with various effects, logical mappings between controls and parameters were already in place; Novation claims they’ve developed mappings for many effects, so that’s not too surprising. Of course, you can also come up with your own custom mappings, as well as exchange mappings with other users. The Impulse LCD shows an abbreviated name of the parameter being controlled. With instruments, because there are only eight encoders, there can be dozens of scrollable parameter pages to cover all available parameters—but note that you can edit mappings to move your most-used parameters to the first couple of pages for easy access. Although many of the mappings make logical sense, you can re-assign parameters as needed. Note in the lower screen shot that you can also modify a parameter's range, as well as invert the control's response.

Furthermore, the Automap 4 user interface is really slick, and shows mappings on-screen; sometimes it’s easier to use this to find particular pages, especially for lesser-used parameters. Also, a small pop-up balloon shows the parameter being controlled by the hardware—helpful, although you can disable this if it’s distracting. The pop-up in the lower right confirms the parameter currently being adjusted.

ARPEGGIATOR AND ROLL

The pads are also where you control some of the arpeggiator's characteristics. The arpeggiator is very cool, as you can use the pads to alter how it plays—drop out notes, insert notes, and even use pad velocity to affect note levels. Other options include gate time, note quantize (sync), pattern type (up, down, random, etc.), octave range, sequence length, and swing. You can also set the pads to create rolls—for example, repetitive drum hits. Both of these functions can follow the song tempo or tap tempo.
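To make the tempo-following concrete: an arpeggiator derives its step length from the current tempo and the note quantize (sync) value, then applies gate time and swing to each step. Here’s a rough Python model of that arithmetic; the function, parameter names, and the swing convention (delaying every second step) are my own assumptions rather than Novation’s implementation:

```python
# Rough model of how an arpeggiator might derive step timing from tempo.
# All names and conventions here are illustrative assumptions.

def arp_step_times(bpm, sync=1/16, gate=0.8, swing=0.0, steps=8):
    """Return (start, duration) pairs in seconds for each arpeggiator step.

    sync:  note value per step (1/16 = sixteenth notes)
    gate:  fraction of each step during which the note sounds (0-1)
    swing: 0 = straight; positive values delay every second step
    """
    beat = 60.0 / bpm           # one quarter note, in seconds
    step = beat * (sync * 4)    # step length relative to a quarter note
    events = []
    for i in range(steps):
        start = i * step
        if i % 2 == 1:          # swing pushes the off-beat steps late
            start += step * swing
        events.append((start, step * gate))
    return events

# Example: 120 BPM, sixteenth notes, moderate swing
for start, dur in arp_step_times(120, swing=0.2):
    print(f"start {start:.3f}s  gate {dur:.3f}s")
```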
CONCLUSIONS Those are the main “sexy” features, but there are also the expected ones—transmit program changes, split the keyboard into four independent MIDI zones, set four velocity curves (or full velocity) for the keyboard and three curves (or again, full velocity) for the pads, local control on/off, sys ex dump (save settings with your DAW project), and more. There’s even a help menu whose scrolling messages give hints on how particular functions work. The Impulse series is priced competitively, while offering several features you won’t find elsewhere. Of these, Automap 4 is the most significant; the program has matured to the point where using a controller soon becomes a natural part of your workflow. As a bonus, new wizards make it easier to set up than previous versions, and it’s also easier to tweak. Wizards simplify setting up Automap by showing which DAWs are installed on your computer; after specifying a particular DAW, Automap takes over the setup and optimization process. And while the templates are welcome, because the control surface generates MIDI continuous controller data, you can always use the target program’s learn function to create custom mappings. Throw in a generous software bundle (including Ableton Live Lite, the Novation BassStation plug-in, and a bunch of samples and loops) and while there’s certainly no lack of keyboard controllers available, Novation has indeed brought something new to the party. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
20. Check out the latest advances and techniques for mixing with DAWs

by Craig Anderton

The best mics, recording techniques, and players don’t guarantee great results unless they’re accompanied by a great mix. But the face of mixing has changed dramatically with the introduction of the DAW, both for better and worse. Better, because you don’t need to spend $250,000 for a huge mixer with console automation, but worse because we’ve sacrificed hands-on control and transparent workflow.

Or have we? Today’s DAWs have multiple options—from track icons to color-coding to configurable mixers—that help overcome the limitations of displaying tons of tracks on a computer monitor. While this can’t replace the one-function/one-control design of analog gear, some tasks (such as grouping and automation) are now actually easier to do than they were back in the days when analog ruled the world. As to hands-on control, controller-related products keep expanding and offering more possibilities, from standard control surfaces with motorized faders, to FireWire or USB mixers, to pressing keyboard workstations (such as Yamaha’s Motif XS/XF series or Korg’s M3 or Kronos) into service as controllers. These all help re-create “the analog experience.”

Although we’ll touch a bit on gear in this article, it’s only to illustrate particular points—the main point of interest here is techniques, not features, and how those techniques are implemented in various DAWs. And speaking of DAWs, if you’ve held off on upgrading your DAW of choice, now might be the time to reconsider. As DAW feature sets mature, more companies focus their efforts on workflow and efficiency. While these kinds of updates may not seem compelling when looking over specs on a web site, in practice they can make the recording and mixing process more enjoyable and streamlined. And isn’t that what we all want in the studio? So pull up those faders, dim the lights, and let’s get started.

GAIN-STAGING

The typical mixer has several places where you can set levels; proper gain-staging makes sure that levels are set properly to avoid either distortion (levels too high) or excessive noise (levels too low). There’s some confusion about gain-staging because the way it works in hardware and software differs. With hardware, you’re always dealing with a fixed, physical amount of headroom and dynamic range, which must be respected. Modern virtual mixers (with 32-bit floating point resolution and above) have almost unlimited dynamic range in the mixer channels themselves—you can go “into the red” yet never hear distortion. However, at some point the virtual world meets the physical world, and is again subject to hardware limitations.

Gain-stage working backward from the output: you need to make sure that the output level doesn’t overload the physical audio interface. I also treat -6 to -10dB output peaks as “0.” Leave some headroom to allow for inter-sample distortion (Fig. 1), and also because converters seem to like a little “breathing room.”

Fig. 1: SSL’s X-ISM metering measures inter-sample distortion, and is available as a free download from solidstatelogic.com.

Remember, these levels can—and usually will—be brought up during the mastering process anyway. Then, set individual channel levels so that the mixed output’s peaks don’t exceed that -6 to -10dB range.
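To put numbers on that headroom target: peak level in dBFS is 20·log10 of the largest absolute sample value. Here’s a minimal Python sketch for checking a bounced mix against the -6 to -10dB guideline above; it assumes float samples normalized to ±1.0, and the function names are mine:

```python
# Check the peak level of a bounced mix against a headroom target.
# Assumes float samples normalized to +/-1.0; the -6 dB threshold follows
# the guideline in the text, not any fixed standard.
import numpy as np

def peak_dbfs(samples):
    """Return the peak level in dBFS for an array of float samples."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# Example: a 1 kHz tone at half scale, i.e., about -6.02 dBFS
sr = 44100
t = np.arange(sr) / sr
mix = 0.5 * np.sin(2 * np.pi * 1000 * t)

level = peak_dbfs(mix)
print(f"peak: {level:.2f} dBFS")
if level > -6.0:
    print("headroom target exceeded; consider pulling channel levels down")
```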
CONFIGURABLE MIXERS One of the most useful features of virtual mixers is that you can configure them to show only what’s needed for the task at hand, thus reducing screen clutter (Fig. 2). Fig. 2: This collage outlines in red the toolbars that show/hide various mixer elements (left Steinberg Cubase 5, middle Cakewalk Sonar 8.5, and right Ableton Live 8). Mixing often happens in stages: First you adjust levels, then EQ, then stereo placement, aux busing, etc. Granted, you’ll go back and forth as you tweak sounds—for example, changing EQ might affect levels—but if you save particular mixer configurations, you can recall them as needed. Here are some examples of how to use the configurable mixer feature when mixing. The meter bridge. This is more applicable to tracking than mixing, but is definitely worth a mention. If you hide everything except meters (and narrow the mixer channel strips, if possible), then you essentially have a meter bridge. As software mixers often do not adjust incoming levels from an interface when recording (typically, the interface provides an applet for that task), you can leave the “meter bridge” up on the screen to monitor incoming levels along with previously-recorded tracks. Hiding non-essentials. Visual distractions work against mixing; some people even turn off their monitors, using only a control surface, so they can concentrate on listening. While you might not want to go to that extreme, when mixing you probably don’t need to see I/O setups, and once the EQ settings are nailed, you probably won’t need those either. You may want to adjust aux bus sends during the course of a mix, but that task can be relegated to automation, letting you hide buses as well. Channel arrangement. With giant hardware mixers, it was common to re-patch tape channel outs to logical groupings on the mixer, so that all the drum faders would be adjacent to each other; ditto vocals, guitars, etc. With virtual mixers, you can usually do this just by dragging the channels around: Take that final percussion overdub you added on track 26, and move it next to the drums. Move the harmony vocals so they’re sitting next to the lead vocal, and re-arrange the rhythm tracks so they flow logically. And while you’re at it, think about having a more or less standardized arrangement in the future—for example starting off with drums on the lowest-numbered tracks, then bass, then rhythm guitars and keyboards, and finally moving on to lead parts and “ear candy” overdubs. The less you need to think about where to find what you want, the better. Track icons. When I first saw these on GarageBand, I thought the concept was silly—who needs cute little pictures of guitars, drums, etc.? But I loaded track icons once when I wanted to make an article screen shot look more interesting, and have been using them ever since. The minute or two it takes to locate and load the icons pays off in terms of parsing tracks rapidly (Fig. 3). Coupled with color-coding, you can jump to a track visually without having to read the channel name. Fig. 3: Acoustica’s Mixcraft 5 is one of several programs that offers track icons to make quick, visual identification of DAW tracks. Color coding. Similarly, color-coding tracks can be tremendously helpful if done consistently. I go by the spectrum mnemonic: Roy G. Biv (red, orange, yellow, green, blue, indigo, violet). Drums are red, bass orange, melodic rhythm parts yellow, vocals green, leads blue, percussion indigo, and effects violet. 
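If your DAW happens to be scriptable (REAPER, for instance, exposes track colors to ReaScript), a convention like this can even be applied automatically. As a minimal illustration, here’s the mnemonic as a Python lookup table; the role names are my own, and the printout is just a reference chart:

```python
# The "Roy G. Biv" track color convention from the text as a lookup table.
# Role names are illustrative; apply them via your DAW's scripting API if
# it has one, or just keep this as a studio reference chart.
TRACK_COLORS = {
    "drums":      "red",
    "bass":       "orange",
    "rhythm":     "yellow",   # melodic rhythm parts
    "vocals":     "green",
    "leads":      "blue",
    "percussion": "indigo",
    "effects":    "violet",
}

for role, color in TRACK_COLORS.items():
    print(f"{role:<11} -> {color}")
```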
When you have a lot of tracks, color-coding makes it easy to scroll to the correct section of the mixer (if scrolling is necessary, which I try to avoid if possible).

WHY YOU NEED A DUAL MONITOR SETUP

If you’re not using two (or even three) monitors, you’ll kick yourself when you finally get an additional monitor and realize just how much easier DAW-based mixing can be—especially with configurable mixers. Dedicate the second monitor to the mixer window and the main monitor to showing tracks, virtual instrument GUIs, etc., or stretch the mixer over both monitors to emulate old-school hardware-style mixing.

Your graphics card will need to handle multiple monitors; most non-entry-level cards do these days, and some desktop and laptop computers have that capability “out of the box.” However, combining different monitor technologies can be problematic—for example, you might want to use an old 19” CRT monitor along with a new LCD monitor, only to find that the refresh rate has to be set to the lowest common frequency. If the LCD wants 60Hz, then you’re stuck with 60Hz (i.e., flicker city!) on the CRT. If possible, use matched monitors, or at least matching technology.

CHANNEL STRIPS

Several DAWs include channel strips with EQ and dynamics control (Fig. 4), or even more esoteric strips (e.g., a channel strip dedicated to drums or vocals).

Fig. 4: Cakewalk Sonar X1 (left) and Propellerhead Reason (right) have sophisticated channel strips with EQ, dynamics control, and with X1, saturation.

However, also note that third-party channel strips are available—see Fig. 5.

Fig. 5: Channel strips, clockwise from top: iZotope Alloy, Waves Renaissance Channel, Universal Audio Neve 88RS.

If there are certain settings you return to frequently (I’ve found particular settings that work well with my voice for narration, so I have a vocal channel strip narration preset), these can save time compared to inserting individual plug-ins. Although I often do make minor tweaks, it’s easier than starting from scratch. Even if you don’t have specific channel strips, many DAWs let you create track presets that include particular plug-in configurations. For example, I made a “virtual guitar rack” track preset designed specifically for processing guitar with an amp sim, compression, EQ, and spring reverb.

BUSING

There are three places to insert effects in a typical mixer:

- Channel inserts, where the effect processes only that channel
- Master inserts, where the processor affects the entire mix (e.g., overall limiting or EQ)
- Buses, where the processor affects anything feeding that bus

Proper busing can simplify the mixing process (Fig. 6), and make for a happier CPU.

Fig. 6: Logic Pro’s “Inspector” for individual channels shows not only the channel’s level on the left but also, on the right, the parameters for whatever send you select (or the output bus).

In the days of hardware, busing was needed because unlike plug-ins, which you can instantiate until your CPU screams “no more,” a hardware processor could process only one signal path at a time. Therefore, to process multiple signals, you had to create a signal path that could mix together multiple signals—in other words, a bus that fed the processor.

The most common effects bus application is reverb, for two reasons. First, high-quality reverbs (particularly convolution types) generally use a lot of CPU power, so you don’t want to open up multiple instances. Second, there’s an aesthetic issue.
If you’re using reverb to give a feeling of music being in an acoustic space, it makes sense to have a single, common acoustic space. Increasing a channel’s reverb send places the sound more in the “back,” and less send places it more in the “front.”

A variation on this theme is to have two reverb buses and two reverbs, one for sustained instruments and one for percussive instruments. Use two instances of the same reverb, with very similar settings except for diffusion. This is because you generally want lots of diffusion with percussive sounds to avoid hearing discrete echoes, and less diffusion with sustained instruments (like vocals or lead guitar) so that the reverb isn’t too “thick,” thus muddying the sustained sound. You’ll still have the feeling of a unified acoustic space, but with the advantage of being able to decide how you want to process individual tracks.

Of course, effects buses aren’t good only for reverb. I sometimes put an effect with very light distortion in a bus, and feed in signals that need a little “crunch”—for example, adding a little grit to kick and bass can help them stand out more when playing the mix through speakers that lack bass response. Tempo-synced delay for dance music cuts also lends itself to busing, as you may want a similar rhythmic delay feel for multiple tracks.

GROUPING

Grouping is a way to let one fader control many faders, and there are two main ways of doing this. The classic example of old-school grouping is a drum set with multiple mics; once you nail the relative balance of the individual channels, you can send them to a bus, which allows raising and lowering the level of all mics with a single control. With this method, the individual fader levels don’t change. The other option is not to use a bus, but to assign all the faders to a group (Fig. 7).

Fig. 7: In PreSonus Studio One Pro, the top three tracks have been selected, and are about to be grouped so edits applied to one track apply to the other grouped tracks.

In this case, moving one fader causes all the other faders to follow. Furthermore, with virtual mixers it’s often possible to choose whether group fader levels move linearly or ratiometrically. With a linear change, moving one fader a certain number of dB raises or lowers all faders by the same number of dB. With ratiometric changes, raising or lowering a fader’s level by a certain percentage raises or lowers all grouped fader levels by the same percentage, not by a specific number of dB. In almost all cases you’ll want to choose a ratiometric response.

Another use for grouping is to fight “level creep,” where you raise the level of one track, then another, and then another, until you find the master is creeping up to zero or even exceeding it (see the section on Gain-Staging). Temporarily group all the faders ratiometrically, then bring them down (or up, if your level creep went in the opposite direction) until the output level is in the right range.

CONTROL SURFACES

Yes, I know people mix with a mouse. But I highly recommend using a control surface, not because I was raised with hardware mixers, but because a control surface is a “parallel interface”—you can control multiple aspects of your mix simultaneously—whereas a mouse is more like a serial interface, where you can control only one aspect of a mix at a time. Furthermore, I prefer a mix to be a performance.
You can add a lot more life to a mix by using faders not just to set static levels, but to add dynamic and rhythmic variations (i.e., moving faders subtly in time with the music) that impart life and motion to the mix. In any event, you have a lot of options when it comes to control surfaces (Fig. 8).

Fig. 8: A variety of hands-on controllers. Clockwise from upper left: Behringer BCF2000, Novation Nocturn, Avid MC Mix, and Frontier Design AlphaTrack.

One option is to use a control surface dedicated to mixing functions, which produces control signals your DAW can interpret. Typical models include the Avid Artist Series (formerly from Euphonix), Mackie Control, Cakewalk VS-700C, Behringer BCF2000, Alesis Master Control, etc. The more advanced models use motorized faders, which simplify the mixing process because you can overdub automation moves just by grabbing faders and punching in. If that option is too expensive, there are less costly alternatives, such as the Frontier Design AlphaTrack, PreSonus FaderPort, and the guitar-oriented Cakewalk VS-20. These generally have fewer faders and options, but are still more tactile than using a mouse.

There’s yet another option that might work even better for you: an analog or digital mixer. I first got turned on to this back in the (very) early days of DAWs, when I had a Panasonic DA7 digital mixer. It had great EQ and dynamics that often sounded better than what was built into DAWs, as well as motorized faders and decent hardware busing options. It also had two ADAT cards so I could run 16 digital audio channels into the mixer, and I used the Creamware SCOPE interface with two ADAT outs. So, I could assign tracks to the SCOPE ADAT outs, feed these into the DA7, and mix using the DA7. Syncing the motorized fader moves to the DAW allowed for automated mixes.

This had several advantages, starting with hands-on control. Also, by using the DA7’s internal effects, I not only had better sound quality but lightened the computer’s CPU load. And it was easier to interface hardware processors with the DA7 compared to interfacing them with a DAW (although most current DAWs make it easy to treat outboard hardware gear like plug-ins if your audio interface can dedicate I/O to the processors). Finally, the DA7 had a MIDI control layer, so it was even possible to control MIDI parameters in virtual instruments and effects plug-ins from the same control surface that was doing the mixing. While the DA7 is long gone, Yamaha offers the 02R96VCM and other digital mixers that provide the same general advantages; also check out the StudioLive series from PreSonus.

However, that’s just one way to deal with deploying a control surface. You can use a high-quality analog mixer, or something like the Dangerous Music 2-BUS and D-BOX. Analog mixing has a somewhat different sonic character compared to digital mixing, although I wouldn’t go so far as to say one is inherently better than the other (it’s more like a Strat vs. Les Paul situation—different strokes for different folks). The main issue will be I/O limitations, because you have to get the audio out of the DAW and into the mixer. If you have 43 tracks and your interface has only 8 discrete outs—trouble. The workaround is to create stems by assigning related tracks (e.g., drums, background vocals, rhythm guitars, etc.) to buses, then sending the bus outputs to the interface; a sketch of that signal flow follows.
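Conceptually, a stem is just a sub-mix: several related tracks summed to one bus, with each bus feeding one physical output or output pair. Here’s a bare-bones Python model of that routing; the track names, groupings, and random placeholder “audio” are purely illustrative:

```python
# Minimal model of stem creation: sum related tracks into a few bus outputs
# so a 40+ track session fits an interface with limited physical outs.
# Track names, groupings, and the random "audio" are placeholders.
import numpy as np

sr = 44100
n = sr  # one second of audio per track

track_names = ["kick", "snare", "overheads", "bass",
               "rhythm_gtr_L", "rhythm_gtr_R", "lead_vox", "bgv_1", "bgv_2"]
tracks = {name: np.random.randn(n) * 0.05 for name in track_names}

stems = {
    "drums":   ["kick", "snare", "overheads"],
    "bass":    ["bass"],
    "guitars": ["rhythm_gtr_L", "rhythm_gtr_R"],
    "vocals":  ["lead_vox", "bgv_1", "bgv_2"],
}

# Each stem is the sum of its member tracks; each stem feeds one interface out
stem_outs = {stem: sum(tracks[t] for t in members)
             for stem, members in stems.items()}

for stem, audio in stem_outs.items():
    print(f"{stem}: peak {np.max(np.abs(audio)):.3f}")
```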
In some ways this is a fun way to mix, as you have a more limited set of controls and it’s harder to get “lost in the tracks.” Today’s FireWire and USB 2.0 mixers (M-Audio, Alesis, Phonic, Mackie, etc.) can provide a best-of-both-worlds option. These are basically traditional mixers that can also act as DAW interfaces—and while recording, they have enough inputs to record a multi-miked drum set and several other instruments simultaneously. Similarly, when it’s time to mix you might have enough channels to mix each channel individually, or at least mix a combination of individual channels and stems.

SCREEN SETS

Different programs call this concept by different names, but basically, it’s about being able to call up a particular configuration of windows with a simple keyboard shortcut or menu item (Fig. 9) so you can switch instantly among various views.

Fig. 9: Logic Pro 9’s Screensets get their own menu for quick recall and switching among views.

Like many of today’s DAW features (track icons, color-coding, configurable mixers, and the like) it requires some time and thought to create a useful collection of screen sets, so some people don’t bother. But this initial time investment is well worth it, because you’ll save far more time in the future. Think of how often you’ve needed to leave a mixer view to do a quick edit in the track or arrange view: You resize and move windows, make your changes, then resize and move all over again to get back to where you were. It’s so much simpler to have a keyboard shortcut that says “hide the mixer, pull up the arranger view, and have the piano roll editing window ready to go” and, after doing your edits, another shortcut that says “hide all that other stuff and just give me the mixer.”

DIGITAL METERING LIMITATIONS

And finally . . . they may be digital, but you can’t always trust digital metering. As just one example, to indicate clipping, digital meters sometimes require that several consecutive samples clip. Therefore, if only a few samples clip at a time, your meters may not indicate that clipping has occurred. Also, not all digital gear is totally consistent—especially hardware. In theory, a full-strength digital signal where all the bits are “1” should always read 0 dB; however, some designers provide a little headroom before clipping actually occurs—a signal that causes a digital mixer to hit -1dB might show as 0dB on your DAW.

It's a good idea to use a test tone to check out the metering characteristics of all your digital gear. Here are the steps:

1. Set a sine wave test tone oscillator to about 1 kHz, or play a synthesizer sine wave two octaves above middle C (a little over 1 kHz).

2. Send this signal into an analog-to-digital converter.

3. Patch the A/D converter's digital out to the digital in of the device you want to measure.

4. Adjust the oscillator signal level until the indicator for the device being tested just hits -6dB. Be careful not to change the oscillator signal level!

5. Repeat step 3 for any other digital audio devices you want to test.

In theory, all your other gear should indicate -6dB, but if not, note any variations in your studio notebook for future reference.
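If you don’t have a hardware test oscillator, you can generate the tone in software and play it out through your converter. Here’s a short Python sketch that writes a 1 kHz sine with a -6dB peak to a WAV file; the filename and ten-second length are arbitrary choices:

```python
# Generate a 1 kHz sine test tone peaking at -6 dBFS and save it as a WAV
# file, for checking meter calibration across digital gear as described above.
import numpy as np
from scipy.io import wavfile

sr = 44100               # sample rate in Hz
seconds = 10
amp = 10 ** (-6.0 / 20)  # -6 dB below full scale as a linear amplitude (~0.501)

t = np.arange(sr * seconds) / sr
tone = amp * np.sin(2 * np.pi * 1000 * t)

# Write as 16-bit PCM, scaling to the int16 range
wavfile.write("test_tone_1k_minus6.wav", sr, (tone * 32767).astype(np.int16))
print("wrote test_tone_1k_minus6.wav with a -6 dBFS peak")
```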
Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
21. Once Again, We Ask the Question: Why Be Normal?

by Craig Anderton

Many synthesizers and samplers, whether hardware or software, combine digital sample-based oscillators with synthesis techniques like filtering and modulation. These synthesis options can turn on-board samples into larger-than-life acoustic timbres, impart expressiveness to static sounds, and create entirely new types of sounds—but only if you know how to do a little editing.

Don’t believe the hype that editing a synth preset is difficult. All you really need to know is how to select parameters for adjustment, and how to change parameter values. Then, just play around: Vary some parameter values and listen to what happens. As you experiment, you’ll build up a repertoire of techniques that produce sounds you like. When it comes to using oscillators creatively, remember that just because a sample says “Piano” doesn’t mean it can only make piano sounds. As with so many aspects of recording, doing something “wrong” can be extremely right. Such as . . .

1. BOMB THE BASS

Transpose bass samples up by two octaves or more, and their character changes completely: So far I’ve unearthed great dulcimer, zither, and clavinet sounds. Furthermore, because transposing up shortens the attack time, bass samples can supply great attack transients for other samples that lack punch (although it may be necessary to add an amplitude envelope with a very short attack time so that you hear only the attack). Also, bass samples sometimes make very “meaty” keyboard sounds when layered with traditional keyboard samples.

2. THE VIRTUAL 12-STRING

Many keyboards include 12-string guitar samples, but these are often unsatisfying. As an alternative, layer three sets of guitar multisamples (Fig. 1). The first multisample becomes the “main” sample and extends over the full range of the keyboard. Transpose the second set of multisamples an octave higher, and remember that the top two strings of a 12-string are tuned in unison, not octaves. So, limit the range of the octave-higher set of multisamples to A#3. Detune the third multisample set a bit compared to the primary sample, and limit its range to B3 on up. (You may want to fudge with the split point between octave and unison a bit, as a guitarist may play the doubled third string higher up on the neck.)

Fig. 1: A simple 12-string guitar patch in Reason’s NN-XT sampler. The octave-above samples are colored red for clarity, while the unison samples are colored yellow. (This example uses a limited number of samples to keep the artwork at a reasonable size.)

If you can delay the onset of the notes in the octave-above and unison layers by around 20 to 35ms, the effect will be more realistic.

3. THE ODD COUPLE

Combining samples with traditional synth waveforms can create a much richer overall effect, as well as mask problems that may exist in the sample, such as obvious loops or split points. For example, mixing a sawtooth wave with a string section sample gives a richer overall sound (the sawtooth envelope should mimic the strings’ amplitude envelope). Combining triangle waves with nylon string guitars and flutes also works well. And to turn a sax patch into a sax section, mix in a sawtooth wave set for a bit of an attack time, then detune it compared to the main sax.

Sometimes combining theoretically dissimilar samples works well too. For example, on one synth I felt the piano sample lacked a strong bottom end.
Layering an acoustic bass sample way in the background, with a little bit of attack time so you didn’t hear the characteristic acoustic bass attack, solved the problem. Sometimes adding a sine wave fundamental to a sound also increases the depth; this worked well with a Chapman Stick sample to increase the low end “boom.” Try other “unexpected” combinations as well, such as mixing choir and bell samples together, or high-pitched white noise and choir.

4. FUN WITH INTERGALACTIC COSMIC EXPLOSIONS

Transpose percussion sounds (cymbals, drums, tambourines, shakers, etc.) way down—at least two octaves—for weird sound effects and digital noises. If this adds any quantization noise or grunge to the sound, you may want to keep it; if not, consider closing the lowpass filter down a bit to take out some of the high frequencies, where any artifacts will be most noticeable. For truly massive thunder effects, spaceship sounds, and exploding galaxies (which are always tough to sample!), choose a complex waveform, transpose it down as far as it will go, and close the filter way down . . . then layer it with a similar sound.

5. GENTLEMEN, START YOUR SAMPLES

Changing the start point of a sample (a feature available on most synths and samplers) can radically affect the timbre and add dynamics. Move the start point further into the sample (Fig. 2) until you obtain the desired “minimum dynamics” sound, then tie the start point time to keyboard velocity so that more velocity moves the start point closer to the beginning of the sample (this usually requires negative modulation, but check your manual).

Fig. 2: The green line indicates the initial sample start point (minimum velocity). Hitting higher velocities moves the start point further to the left, toward the beginning of the sample, so the sound picks up more of the attack. The red part of the waveform is the area affected by velocity.

This seems to work best with percussive sounds, as changing the start point dynamically can cause clicks that are obvious with sustained sounds, but blend in with percussion. An alternative is to use two versions of the same sample, with one sample’s start time set into the sample and the other left alone; then use velocity switching to switch from the altered sample to the unaltered one as velocity increases.

6. DETUNING: WHO SAYS SUBTLE IS GOOD?

Detuning isn’t just about subtle changes. When creating an unpitched sound such as drums or special effects, use two versions of the same sample for the two oscillators, but with their pitches offset by a few semitones to thicken the sound. You may need to apply a common envelope to both of them in case the transposition is extreme enough that one sample has a noticeably longer decay than the other.

7. THE REVENGE OF HARRY PARTCH

Microtonal scales (17-tone, 21-tone, exotic even-tempered scales) are good for experimental music, but they’re also useful for special effects. After all, car crashes are seldom even-tempered, and you may want a somewhat more “stretched” sound—either higher or lower—than what the sample provides. To get these kinds of scales (or even a 1-tone scale where all notes on the keyboard play at the same pitch), assign note position (keyboard tracking) as an oscillator pitch modulation source. Adjusting the degree of modulation can “stretch” or “compress” the keyboard so that an octave takes up more or fewer keys than the usual 12; the sketch below shows the underlying math. Note that you may need to adjust the tuning so that the “base” key of a scale falls where you want it.
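The arithmetic behind that keyboard “stretching” is simple: scaling the keyboard-tracking modulation changes how much of an octave each key spans, so an N-tone scale allots 1/N octave per key instead of 1/12. A small Python sketch of the math (the function and parameter names are mine):

```python
# Math behind "stretching" a keyboard into an N-tone equal-tempered scale:
# each key step spans 1/N octave instead of 1/12. Names are illustrative.
def key_to_freq(key, base_key=69, base_freq=440.0, tones_per_octave=12):
    """Map a MIDI key number to a frequency in an N-tone equal-tempered
    scale, anchored so base_key still plays base_freq."""
    return base_freq * 2 ** ((key - base_key) / tones_per_octave)

# Compare 12-tone and 17-tone tunings, both anchored at A440:
print(key_to_freq(81))                       # 12 keys up = 880 Hz (an octave)
print(key_to_freq(81, tones_per_octave=17))  # 12 keys up = ~719 Hz
print(key_to_freq(86, tones_per_octave=17))  # now 17 keys span the octave
```

The 1-tone scale is the limiting case: with the per-key pitch change modulated down to zero, every key returns the same frequency.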
8. CROSSING OVER

Use waveform crossfading to cover up samples with iffy loops. For example, one keyboard had a very realistic flute sound, but the manufacturer assumed you’d be playing the flute in its “normal” range, so the highest sample was looped and stretched to the top of the keyboard. This flute sound actually was very usable in the upper ranges, except that past a certain point the loop became overly short and “tinny.” So, I used the flute sample for one oscillator and a triangle wave for the other, and faded out the flute as it hit the looped portion, while fading in the triangle wave (Fig. 3).

Fig. 3: As the natural flute loop fades out, a looped triangle wave fades in to provide a smoother looped sound for the decay.

The flute sample provided the attack, and the triangle wave a smooth, consistent post-attack sound. Similar techniques work well for brass, but you’ll probably want to crossfade with a sawtooth wave or other complex waveform.

9. BETTER LIVING THROUGH LAYERING

Try layering two samples, and assigning velocity control to the secondary sample’s amplitude so that hitting the keys harder brings in the second sample. This can be very effective in creating more complex sounds. One option for the second sample is to bring in a detuned version, so that playing harder brings in a chorusing effect; or, you can use variations on the same basic sound (e.g., nylon and steel string guitars) so that velocity “morphs” through the two sounds.

10. TAKE THE LEAD WITH GUITAR “FEEDBACK”

With lead guitar patches, tune one lead sample an octave higher than the other lead sample and tie both sample levels to keyboard pressure. However, set the initial volume of the main sample to maximum level, with pressure adding negative modulation that lowers the level; the octave-higher sample should start at minimum level, with pressure adding positive modulation that increases the level. Pressing down on the key during a sustaining note brings in the octave-higher “feedback” sound and fades out the fundamental. For a variation on this theme, have pressure introduce vibrato and perhaps bend pitch up a semitone at maximum pressure. Also experiment with other waveforms and pitches for the octave-higher sound; a sine wave tuned an octave and a fifth above the fundamental gives a very convincing “feedback” effect.
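The feedback patch in tip 10 is really just a crossfade driven by channel pressure: one layer’s gain ramps down as the other ramps up. Here’s a sketch of that modulation math in Python; the linear response and the 0-127 aftertouch range are assumptions, since many synths let you choose response curves:

```python
# Pressure-driven crossfade between the fundamental and the octave-up
# "feedback" layer, as in the lead guitar patch above. A linear response
# and a 0-127 aftertouch range are assumed for simplicity.
def feedback_layer_gains(pressure):
    """Return (main_gain, feedback_gain) for channel pressure 0-127."""
    x = max(0, min(pressure, 127)) / 127.0
    main_gain = 1.0 - x      # negative modulation: full level at rest
    feedback_gain = x        # positive modulation: fades in with pressure
    return main_gain, feedback_gain

for p in (0, 32, 64, 96, 127):
    main, fb = feedback_layer_gains(p)
    print(f"pressure {p:3d}: main {main:.2f}, feedback {fb:.2f}")
```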
Craig Anderton is Editor Emeritus of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.

22. Are you fighting technology, or flowing with it?

By Craig Anderton

Technology can be overwhelming. But does it have to be? Why can some people roll with any technological punch that’s thrown their way, while others struggle to keep up? Some musicians and engineers feel that technology “gets in the way” of the recording or music-making process. Conversely, there’s also no denying that technology makes possible music that simply couldn’t have existed before, and can even provide the means to streamline its production.

If you feel there’s some kind of dichotomy between technology and music, you’re not imagining things: Your brain’s “firmware” is hardwired to deal with artistic and technological tasks differently. In this article, we’ll explore why this division exists, describe how your brain’s firmware works, and provide some tips on how to stay focused on the art when you’re up to your neck in tech.

COOPERATION AND CONFLICT

Technology and art cooperate in some areas, but conflict in others. Regarding cooperation, think of how technology has always pushed the instrument-making envelope (the piano was quite high-tech in its day). And recording defies time itself: We can not only enjoy music from decades ago, but also sing a harmony with ourselves—essentially, going backward in time to sing simultaneously with our original vocal. Cool.

Then there’s the love affair between music and mathematics. Frequencies, tempos, rhythms, SMPTE time code—they’re all based on math. Music loves math. When my daughter was getting into fractions, I created a sequence that included half notes, quarter notes, sixteenth notes, etc. She immediately “got” the concept upon hearing fractions expressed as rhythms.

As to conflicts, first there’s the dichotomy of how the brain processes information (as we’ll discuss next); and second, there are a few societally-induced conflicts. For example, some people think that using technology is somehow cheating (e.g., lowering a sequence’s tempo so you can play along more easily, then speeding it back up). Furthermore, the accelerated rate of technological change itself causes conflicts. Which gear should I buy? Which platform is better? And why do the skills I learned just a few years ago no longer matter? Let’s look at how physiology influences our perceptions of both technology and art, as this will provide some clues on how best to reconcile the two.

THE MAN WITH TWO BRAINS

Our brain has two hemispheres; each one processes information differently. Consider the following quote from the essay “2044: One Hundred Years of Innovation,” presented by William Roy Kesting (founder of Kesting Ventures) and Kathy Woods (VP and Principal of Woods Creative Services) at a 1994 meeting of the Commercial Development Association: “The right brain is the older of the two hemispheres and functions in an all-at-once mode to produce a complete picture. In contrast, the left hemisphere excels in sequential functions such as words, abstract thinking and numbers.”

Essentially, the right brain is the “Macintosh GUI” side that handles intuitive, emotional tasks—like being creative. The left brain is more like the “MS-DOS command line interface” side that works in a more linear fashion, and deals with sequential thought processes.

Use Color to Your Advantage. The right brain parses color rapidly. Many programs let you customize color schemes, and hardware companies are becoming more aware of this too.
For example, the Alesis Ion synth changed the transpose LED’s intensity when transposing by different octaves, making it easy to see the transposition range without having to read anything. And its programs were arranged in four banks by color rather than letters or numbers.

The “breakthrough” in understanding this difference between the hemispheres comes from the work of Drs. Roger W. Sperry, David H. Hubel, and Torsten N. Wiesel, who shared the 1981 Nobel Prize in Physiology or Medicine. Later studies have modified their findings a bit, but some comments in the Nobel Award presentation speech by David Ottoson are well worth noting.

“The left brain half is . . . superior to the right in abstract thinking, interpretation of symbolic relationships and in carrying out detailed analysis. It can speak, write, carry out mathematical calculations and in its general function is rather reminiscent of a computer. It is with this brain half that we communicate. The right cerebral hemisphere is mute. . . It cannot write, and can only read and understand the meaning of simple words in noun form. It almost entirely lacks the ability to count and can only carry out simple additions up to 20. However . . . [it] is superior to the left in the perception of complex sounds and in the appreciation of music . . . it is, too, absolutely superior to the left hemisphere in perception of nondescript patterns. It is with the right hemisphere we recognize the face of an acquaintance, the topography of a town, or landscape earlier seen.

“Pavlov . . . [suggested] that mankind can be divided into thinkers and artists. Pavlov was perhaps not entirely wrong. Today we know from Sperry’s work that the left hemisphere is cool and logical in its thinking, while the right hemisphere is the imaginative, artistically creative half of the brain.”

As a result, one option is to explain the art/technology dichotomy as the hemispheres being not necessarily in conflict, but working at cross-purposes. Once “stuck” in a hemisphere’s mode of thought, it’s difficult to transition seamlessly into working in the other one, let alone integrate the two.

The “Unified Interface” and the Brain. A “unified interface,” which avoids opening multiple overlapping windows in favor of a single screen where elements can be shown or hidden as needed, speaks to both hemispheres. The right brain takes in the “big picture,” while the left brain can focus on details if needed. Ableton Live has two unified interfaces—a “right brain” one optimized for live improvisation, and a “left brain” one optimized for “offline” editing.

But if that’s the case, why are so many good programmers musicians? And why have many mathematicians—going back as far as Pythagoras—been fascinated with music, and vice-versa?

THE MUSICIAN’S "FIRMWARE"

The NAMM campaign “music makes you smarter” is rooted in truth. Recent research shows that many musicians indeed use both halves of the brain to a greater extent than non-musicians. According to Prof. Dr. Lars Heslet (Professor of Intensive Care Medicine at Copenhagen State Hospital in Denmark, and a researcher into the effects of music on the body): “The right brain hemisphere is specialized in the perception of spatial musical elements, that is the sense of harmony and pitch, whereas the left hemisphere perceives the progress of the melody, which requires musical memory.” In other words, both halves of the brain need to be in play to fully appreciate music.
This may explain why musicians, critics, and average listeners have seemingly different tastes in music: The critics listen with the analytical (left) side of their brain, the non-musicians react emotionally with their right brain, and the musicians use both hemispheres.

Here’s an interesting quote from Frederick Turner (Founders Professor of Arts and Humanities at the University of Texas at Dallas) and Ernst Pöppel, the distinguished German neuropsychologist: “Jerre Levy . . . characterizes the relationship between right and left as a complementarity of cognitive capacities. She has stated in a brilliant aphorism that the left brain maps spatial information into a temporal order, while the right brain maps temporal information onto a spatial order.”

Does that sound like a sequencer piano roll to you? Indeed, it uses both temporal and spatial placement. The same thing goes for hard disk recording where you can “see” the waveforms. Even though some programs allow turning off waveform drawing, I’d bet very few musicians do: We want to see the relationship between spatial and temporal information.

We Want Visual Feedback. Which track view do you like better—the one that shows MIDI and audio data, or the blank tracks? Odds are you prefer a relationship between spatial and temporal information.

Again, from Turner and Pöppel: “[The fact that] experienced musicians use their left brain just as much as their right in listening to music shows that their higher understanding of music is the result of the collaboration of both ‘brains,’ the music having been translated first from temporal sequence to spatial pattern, and then ‘read,’ as it were, back into a temporal movement.”

HEMISPHERIC INTEGRATION: JUST DO IT!

The ideal bridge between technology and art lies in “hemispheric integration”—the smooth flow of information between the two hemispheres, so that each processes information as appropriate. For example, the right brain may intuitively understand that something doesn’t sound right, while the left brain knows which EQ settings will fix the problem. Or for a more musical example, a songwriter may experience a distinct emotional feeling in the right hemisphere, but the left hemisphere knows how to “map” this onto a melody or chord progression. Without hemispheric integration, the brain has to bounce back and forth between the two hemispheres, which (as noted earlier) is difficult. This is why integration may expedite the creative process.

Here’s another quote from William Roy Kesting and Kathy Woods: “ . . . just as creative all-at-once activities like art need left-sided sequence, so science and logic depend on right-sided inspiration. Visionary physicists frequently report that their insights occur in a flash of intuition . . . Einstein said: ‘Invention is not the product of logical thought, even though the final product is tied to a logical structure.’”

Mozart also noted the same phenomenon. He once stated that, when his thoughts flowed best and most abundantly, the music became complete and finished in his mind, like a fine picture or a beautiful statue, with all parts visible simultaneously. He was seeing the whole, not just the individual elements.

MEET THE INFORMATION SUPERHIGHWAY

The physical connection between the two hemispheres is called the corpus callosum. As Dr. Lars Heslet notes, “To attain a complete musical perception, the connection and integration between the two brain hemispheres (via the corpus callosum) is necessary.
This interaction via the corpus callosum can be enhanced by music.” Interestingly, according to the article “Music of the Hemispheres” (Discover, 15:15, March 1994), “The corpus callosum—that inter-hemisphere information highway—is 10-15% thicker in musicians who began their training while young than it is in non-musicians. Our brain structure is apparently strongly molded by early training.”

Bingo. Musical training forges connections between the left and right hemispheres, resulting in a measurable, physical change. And that also explains why some musicians are just as much at home reading about some advanced hardware technique in our articles library as they are listening to music: They have the firmware to handle it.

THE RIGHT/LEFT BRAIN “GROOVE”

Producer/engineer Michael Stewart (who produced Billy Joel’s “Piano Man”), while studying interface design, noticed that someone involved in a mostly left- or right-brain activity often had difficulty switching between the two, and sometimes worked better when able to remain mostly in one hemisphere. (Some of his research was presented in an article in EQ magazine called “Recording and the Conscious Mind.”) For example, as a producer, he would often have singers who played guitar or keyboards do so while singing, even if he didn’t record the instruments. He felt this kept the left brain occupied instead of letting it be too self-critical or analytical, thus allowing the right brain to take charge of the vocal. Another one of his more interesting findings was that you could sort of “restart” the right brain by looking at pictures—the right brain likes visual stimulation.

Stewart was also the person who came up with the “feel factor” concept, quantifying the effects that small timing differences have on the brain’s perception of music, particularly with respect to “grooves.” This is a fine example of using left-brain thinking to quantify more intuitive, right-brain concepts.

Quantization and Feel. Quantization can hinder or help a piece of music, depending on how you use it. For example, set any quantization “strength” parameter to less than 100% (e.g., 70%) to move a note closer to the rhythmic grid but retain some of the original feel. Also, quantization “windows” can avoid quantizing notes that are already close to the beat, and “groove” quantizing (which quantizes parts to another part’s rhythm, not a fixed rhythmic grid) can give a more realistic feel. Timing shifts for notes are also important. For example, if in rock music you shift the snare somewhat later than the kick, the sound will be “bigger.” If you move the hi-hat a little bit ahead of the kick, the feel will “push” the beat more.

TECHNOLOGICAL TRAPS

Technology has created a few traps that meddle with hemispheric integration. When the left hemisphere is processing information, it wants certainty and a logical order. Meanwhile, the right brain craves something else altogether. As mentioned earlier with the examples regarding Michael Stewart, in situations where hemispheric integration isn’t strong—or where you don’t want to stress out the brain to switch hemispheres—trying to stay in one hemisphere is often the answer to a good performance or session.

Quite a few people believe pre-computer age recordings had more “feel.” But I think they may be looking in the wrong place for an answer as to why. Feel is not found in a particular type of tube preamp or mixer; I believe it was found in the recording process.
When Buddy Holly was cutting his hits, he didn’t have to worry about defragmenting hard drives. In his day, the engineer handled the left brain activities, the artist lived in the right brain, and the producer integrated the two. The artist didn’t have to be concerned about technology, and could stay in that “right brain groove.”

Cycle Recording: Let the Computer Be Your Engineer. Cycle (or loop) recording repeats a portion of music over and over, adding a new track with each overdub. You can then sort through the overdubbed tracks and “splice” together the best parts. This lets you slip into a right-brain groove, then keep recording while you’re in that groove without having to worry about arming new tracks, rewinding, etc.

If you record by yourself, you’ve probably experienced a situation where you had some great musical idea and were just about to make it happen, but then you experienced a technical glitch (or ringing phone, or whatever). So you switched back into left brain mode to work on the glitch or answer the phone. But when you tried to get back into that “right brain groove,” you couldn’t . . . it was lost. That’s an example of the difficulty of switching back and forth between hemispheres. In fact, some people will lose that creative impulse just in the process of arming a track and getting it ready to record. Now, if you have an Einsteinian level of hemispheric integration, maybe you would see the glitch or phone call as merely a thread in the fabric of the creative process, and never leave that right-brain zone.

We’ll always be somewhat beholden to the differences between hemispheres, but at least we know one element of reprogramming your firmware: Get involved with music, early on, in several different facets, and keep fattening up that corpus callosum. And it’s probably not a bad idea to exercise both halves of your brain. For example, given that the left hand controls the right brain and the right hand controls the left brain, try writing with the hand you normally don’t use from time to time and see if that stimulates the other hemisphere.

JUST BECAUSE WE CAN . . . SHOULD WE?

Technology allows us to do things that were never possible before. And maybe we were better off when they weren’t possible! For example, technology makes it possible to be artist, engineer, and producer. But this goes against our very own physiology, as it forces constant switching between the hemispheres. Would some of our greatest songwriters have written such lasting songs if they’d engineered or produced themselves? Maybe, but then again, maybe not. And what about mixing with a mouse? Sure, it’s possible to have a studio without a mixing console, but this reduces the mixing process to a linear, left-brain activity. A hardware mixing console (or control surface) allows seeing “the big picture,” where all the channels, EQ, pans, etc. are mapped out in front of you.

AVOIDING OPTION OVERLOAD

Part of the fix for hemispheric integration is to use gear you know intimately, so you don’t have to drag yourself into left brain mode every time you want to do something. When using gear becomes second nature, you can perform left-brain activities while staying in the right brain. As just one example, if you’re a guitarist and want to play an E chord, when you were first learning you probably had to use your left brain to remember which fingers to place on which frets. Now you can do it instinctively, even while you stay in the right brain. The same principle holds true for using any gear, not just a guitar.
Ultimately, simplification is a powerful antidote to option overload. When you’re writing in the studio, the point isn’t to record the perfect part, but to get down ideas. Record fast before the inspiration goes away, and worry about fixing any mistakes later. Don’t agonize over level-setting; just be conservative so you don’t end up with distortion. Find a good “workstation” plug-in or synthesizer and master it, then use that one plug-in as a song takes shape. You can always substitute fine-tuned parts later. Also maintain a small number of carefully selected presets for signal processors and instruments, and tweak them later if needed. And if you’re a plug-o-holic, remove the ones you don’t use. How much time do you waste scrolling through long lists of plug-ins? Use placeholders for parts if needed, and don’t edit as you go along—that’s a left brain activity.

With software, templates and shortcuts are powerful simplifying tools that let you stay in right brain mode. Templates mean you don’t have to get bogged down setting up something, and hitting computer keys (particularly function keys) is more precise than mouse movements. Efficiency avoids bogging down the creative process.

MAKING MUSICAL INSTRUMENTS MAGICAL

As Robert Pirsig’s “Zen and the Art of Motorcycle Maintenance” says, “If the machine produces tranquility, it’s right.” Reviews and other opinions don’t matter if something feels right to you.

Which Type Of Graphic Interface Works for You? The interface is crucial to making an instrument feel right. Compare the screen shot for one of the earliest software synths, Seer Systems’ Reality, to that of G-Media’s Oddity. Reality has more of a spreadsheet vibe, whereas the Oddity portrays the front panel of the instrument it emulates; this makes the signal flow more obvious.

Companies can supply technology, but only you can supply the magic that makes technology come alive. No instrument includes soul; fortunately, you do. As we’ve seen, though, to let the soul and inspiration come through, you need to give the creative right hemisphere full rein, while the left brain makes its seamless contribution toward making everything run smoothly. Part of mastering the world of technology is knowing when not to use it.

Remember, all that matters in your music is the emotional impact on the listener. Listeners don’t want perfection; they want an emotionally satisfying experience. Be very careful when identifying “mistakes”—they can actually add character to your recording. And finally, remember that no amount of editing can fix a bad musical part . . . yet almost nothing can obscure a good one.

The bottom line is that you need to master the technology you use so that operating it becomes automatic, then set up a workflow that makes it easier to put your left brain on autopilot. That frees up the right brain to help you keep the “art” in the state of the art. We’ll leave the last word on why you want to do this to Rolf Jensen, director of the Copenhagen Institute for Futures Studies: “We are in the twilight of a society based on data. As information and intelligence become the domain of computers, society will place a new value on the one human ability that can’t be automated: Emotion.”

Craig Anderton is Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
23. USB memory sticks give huge performance gains with Ableton Live

By Craig Anderton

Many musicians use Ableton Live with a laptop for live performance, but this involves a compromise. Laptops often have a single, fairly slow (5400 RPM) disk drive and a limited amount of RAM compared to desktop computers. Live gives you the choice of storing clips in RAM or on the hard disk, but you have to choose carefully. If you assign too many clips to disk, eventually the disk won't be able to stream them all successfully, and there will be audio gaps and dropouts. But if you assign too many clips to RAM, there won't be enough memory left for your operating system and startup programs.

Fortunately, there's a very simple solution that solves all these problems: store your Ableton projects on USB 2.0 memory sticks. That way, you can assign all the clips to stream from the stick's solid-state "disk," so Live treats them as disk clips, but without any of a hard disk's seek-time or mechanical limitations. Best of all, the clips place no demands on your laptop's hard drive or RAM, leaving both free for other uses.

Here's how to convert your project to one that works from a USB memory stick.

1. Plug your USB 2.0 memory stick into your computer's USB port.

2. Call up the Live project you want to save on your memory stick.

3. If the project hasn't been saved before, select "Save" or "Save As" and name the project to create a project folder.

Fig. 1: The "Collect All and Save" option lets you make sure that everything used in the project, including samples from external media, is saved with the project.

4. Go to File > Collect All and Save (Fig. 1), then click on "OK" when asked if you are sure.

Fig. 2: This is where you specify what you want to save as part of the project.

5. When you're asked to specify which samples to copy into the project, select "Yes" for all options, and then click OK (Fig. 2). Note that if you're using many instruments with multisamples, this can require a lot of memory! But if you're mostly using audio loops, most projects will fit comfortably on a 1GB stick.

6. Copy the project folder containing the collected files to your USB memory stick (a short script at the end of this article can automate this step).

7. From the folder on the USB memory stick, open the main .ALS Live project file.

8. Select all audio clips by drawing a rectangle around them, typing Ctrl-A, or Ctrl-clicking (Windows) on each clip.

Fig. 3: All clips have been selected. Under "Samples," click on RAM until it's disabled (i.e., the block is gray).

9. Select Live's Clip View, and under Samples, uncheck "RAM" (Fig. 3). This converts all the audio clips to "disk" clips that "stream" from your USB stick.

Now when you play your Live project, all your clips will play from the USB stick's solid-state memory, and your laptop's hard disk and RAM can take a nice vacation. This technique really works—try it!
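If you convert projects often, step 6 is easy to script. Here's a minimal Python sketch, assuming hypothetical paths (a "MySet Project" folder and an E: drive letter for the stick); it verifies that the collected project fits before copying:

import shutil
from pathlib import Path

# Hypothetical locations; substitute your own collected project folder
# and the drive letter your USB stick mounts as.
project = Path.home() / "Live Projects" / "MySet Project"
stick = Path("E:/")

# Total size of the collected project, in bytes
size = sum(f.stat().st_size for f in project.rglob("*") if f.is_file())
free = shutil.disk_usage(stick).free

if size > free:
    raise SystemExit(f"Project needs {size / 1e6:.0f} MB but the stick "
                     f"only has {free / 1e6:.0f} MB free.")

# Copy the whole folder (the .als file plus all collected samples)
shutil.copytree(project, stick / project.name)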
24. This Simple Technique Can Make Amp Sims Sound Warmer and More Organic

by Craig Anderton

All amp sims that I've used exhibit, to one degree or another, what I call "the annoying frequency." For some reason this seems to be inherent in modeling, and it adds a sort of "fizzy," whistling sound that I find objectionable. It may be the result of pickup characteristics, musical style, playing technique, etc. adding up in the wrong way and emphasizing a resonance, or it may be something else...but in any event, it detracts from the potential richness of the amp sound.

This article includes audio examples from Avid's Eleven Rack and Native Instruments' Guitar Rig 4, but I'm not picking on them – almost every amp sim program I've used has at least one or two amps that exhibit this characteristic. It also seems to be an unpredictable problem; one amp might have this "fizz" only when using a particular virtual mic or cabinet, while the same mic or cabinet on a different amp might sound fine.

Normally, if you found this sound, you'd probably just say "I don't like that" and try a different cabinet, amp, or mic (or change the amp settings). But you don't have to if you know the secret of fizz removal. All you need is a stage or two of parametric (not quasi-parametric) EQ, a good set of ears, and a little patience.

BUT FIRST...

Before getting into fizz removal, you might try a couple of other techniques. Physical amps don't have a lot of energy above 5kHz because of the physics of cabinets and speakers, but amp sims don't have physical limitations. So even if the sim is designed to reduce highs, you'll often find high-frequency artifacts, particularly if you run the sim at lower sample rates (e.g., 44.1kHz). One way to obtain a more pleasing distorted amp sim sound is simply to enable any oversampling options; if none are available, run the sim at an 88.2kHz or 96kHz sample rate.

Another option is removing unneeded high frequencies. Many EQs offer a lowpass filter response that attenuates levels above a certain frequency. Set this for around 5-10kHz, with as steep a rolloff as possible (specified in dB/octave; 12dB/octave is good, 24dB/octave is better). Vary the frequency until any high-frequency "buzziness" goes away.

Similarly, it's a good idea to trim the very lowest bass frequencies. Physical cabinets—particularly open-back cabinets—have a limited low-frequency response; besides, recording engineers often roll off the bass a bit to give a "tighter" sound. A quality parametric EQ will probably have a highpass filter function. As a guitar's lowest string is just below 100Hz, set the frequency for a sharp low-frequency rolloff around 70Hz or so to minimize any "mud."
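For readers who'd rather apply these two trims offline, here's a minimal Python/SciPy sketch of the same idea: a 24dB/octave highpass at 70Hz plus a 24dB/octave lowpass at 8kHz. The sample rate and the placeholder signal are assumptions; substitute your own rendered amp-sim track.

import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100                # assumed sample rate
x = np.zeros(sr)          # placeholder; load your amp-sim audio here

# 4th-order Butterworth = 24dB/octave, matching the "steep rolloff" advice
highpass = butter(4, 70, btype="highpass", fs=sr, output="sos")
lowpass = butter(4, 8000, btype="lowpass", fs=sr, output="sos")

# Trim the sub-bass "mud" first, then the fizzy extreme highs
y = sosfilt(lowpass, sosfilt(highpass, x))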
FIZZ/ANNOYING FREQUENCY REMOVAL

Although amp sims can do remarkably faithful amp emulations, with real amps the recording process often "smooths out" undesirable resonances and fizz due to miking, mic position, the sound traveling through air, etc. When going direct, though, any "annoying frequencies" tend to be emphasized.

Please listen to this audio example on the Harmony Central YouTube channel. The sound is from Avid's Eleven Rack; the combination of the Digidesign Custom Modern amp, 2x12 Black Duo Cab, and on-axis Dyn 421 mic creates a somewhat "fizzy" sound. Listen carefully while the section labeled original file plays, and you'll hear a high, sort of "whistling" quality that doesn't sound at all organic or warm, but "digital." Follow these steps to reduce this whistling quality.

1. Turn down your monitors, because there may be some really loud levels as you search for the annoying frequency (or frequencies).

2. Enable a parametric equalizer stage. Set a sharp Q (resonance), and boost the gain to at least 12dB.

3. Sweep the parametric frequency as you play. There will likely be a frequency where the sound gets extremely loud and distorted—more so than at any other frequency. Zero in on this frequency.

4. Now use the parametric gain control to cut gain, thus reducing the annoying frequency.

In the part of the video that says sweeping filter to find annoying frequency, I've created a sharp, narrow peak to localize where the whistle is. You'll hear the peak sweep across the spectrum, and while the sharp peak is sort of unpleasant in itself, toward the end (in the part that says here it is!) you'll notice that it's settled on the whistling sound we heard in the first example. In this case, after sweeping the parametric stage, the annoying whistle is centered around 7.9kHz.

In the next example, which says now we'll notch it out, you'll hear the whistle for the first couple of seconds, then hear it disappear magically as the peak turns into a notch (check out the filter response in Fig. 1). Note how the amp now sounds richer, warmer, more organic, and just plain more freakin' wonderful. A little past the halfway point of the clip, I switched the filter out of the circuit so the response was flat (no dip), and you'll hear the whistle come back.

Fig. 1: Here's what was used to remove the fizz. This single parametric notch makes a huge difference in terms of improving the sound quality.

DUAL NOTCH TECHNIQUES AND EXAMPLES

Sometimes finding and removing a second fizz frequency can improve the sound even more; check out Example 2 in the video. First you'll hear the original file from Guitar Rig's AC30 emulation. It sounds okay, but there's a certain harshness in the high end. Let's find the fizzy frequencies and remove them, using the same procedure we used with the Eleven Rack.

After sweeping the parametric stage, I found an annoying whistle centered at 9,645 Hz. The part called annoying frequency at 9645 Hz uses the parametric filter to emphasize this frequency, while the part labeled notch at 9645 Hz has a much smoother high end. But we're not done yet; let's see if we can find any other annoying frequencies. The section labeled annoying frequency at 5046 Hz again uses a filter to emphasize this frequency. The next section, with notches at 9645 Hz and 5046 Hz, has notches at both frequencies (Fig. 2). Compare this to the original file at the end, without any notches; note how the version without notches sounds more "digital," and lacks the "warmth" of the filtered versions.

Fig. 2: The above image shows the parametric EQ notches that were applied to the signal, using the Sonitus EQ in Cakewalk's SONAR DAW.

MUCH BETTER!

Impressive, eh? This is the key to getting good amp sim sounds. Further refinements on this technique:

Experiment with the notch bandwidth. You want the narrowest notch that still gets rid of the whistle; otherwise you'll diminish the highs...although that may be what you want. As I said, experiment!

Some amp sims exhibit multiple annoying frequencies; sometimes three notches is exactly right. Generally, the more notches you need, the narrower they should be.

When you're done, between the high/low frequency trims and the midrange notches, your amp sim should sound smoother, creamier, and more realistic. Enjoy your new tone!
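If you want to experiment with this outside a DAW, the sketch below implements the same kind of parametric cut digitally, using the standard "audio EQ cookbook" peaking biquad. The gain, Q, and the two frequencies from Example 2 are just starting points, not magic numbers; sweep with positive gain first to find your own fizz, then flip the gain negative.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    # Robert Bristow-Johnson's cookbook peaking filter; a negative
    # gain_db turns the boost into the narrow cut described above.
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 44100
x = np.zeros(fs)  # placeholder; load your rendered amp-sim track here

# Cut the two fizz frequencies found in Example 2
for f0 in (9645.0, 5046.0):
    b, a = peaking_eq(f0, gain_db=-12.0, q=8.0, fs=fs)
    x = lfilter(b, a, x)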
25. Use MIDI Controller Data to Add Expressiveness to Software Synths

by Craig Anderton

When Sonic Foundry's Acid made its debut in 1998, it was a breakthrough product: prior to that time, you couldn't simply drop a digital audio clip into a digital audio workstation track and "stretch" both tempo and pitch in real time. (Propellerhead Software had introduced the REX file format four years earlier, which also allows time and pitch stretching. However, REX was a specialized file format, whereas Acid could work with any digital audio file and "Acidize" it, more or less successfully, for stretching.)

Over the years other programs started to acquire similar capabilities, and as Sonic Foundry's fortunes declined, so did Acid's. However, Sony bought the Sonic Foundry family of programs in 2003 and started the rebuilding process. Acid's hard disk recording capabilities caught up with those of other programs, and more recently, MIDI has been beefed up to where Acid can handle software synthesizers, MIDI automation, and external controllers with ease. In this article, we'll show how to add MIDI controller messages to MIDI tracks (for clarity, MIDI note data isn't shown).

Begin by selecting a MIDI track, then choosing "Automation Write (Touch)" from the Automation Settings drop-down menu. If you want to overwrite existing automation data rather than write new data, choose Latch (right below the Touch option). Latching creates envelope points when you change a control; if you stop moving the control, its current setting overwrites existing envelope points until you stop playback.

You'll see four control sliders toward the bottom of the MIDI track. If you don't see the controller you want, click on a controller's label; this reveals a pop-up menu with additional controller options, and you can then select the desired controller from this menu. In the screen shot, Modulation is replacing Aftertouch.

As with other programs (e.g., Cakewalk Sonar), it's not necessary to enter record mode to record automation data. Simply click on the Play button, then click and drag the appropriate controller slider to create an automation envelope in real time.

However, note that MIDI controllers can generate a lot of data. When computers were slower, this could cause problems because older processors couldn't keep up with the sheer amount of data. While this is less of an issue with today's fast machines, lots of tracks with controller data can still "clog" the MIDI stream, particularly if you're driving external MIDI hardware rather than an internal software synthesizer. Acid has an option that lets you thin the amount of controller data: click on the Envelope button to the right of the controller's slider, then select "Thin Envelope Data" from the drop-down menu. What's more, Acid offers automatic smoothing and thinning of automation data. To set this up, go to Options > Preferences > External Control & Automation and check "Smooth and thin automation data after recording or drawing."
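To see what thinning does conceptually, here's a minimal, self-contained Python sketch (not Acid's actual algorithm, which isn't published): it walks a list of (tick, value) controller events and keeps an event only when the value has moved enough, or enough time has passed, since the last kept event.

def thin_cc(events, value_tol=2, max_gap=240):
    """Thin (tick, value) controller events; the endpoints are always
    kept so the envelope still starts and ends where it should."""
    if len(events) < 3:
        return list(events)
    kept = [events[0]]
    for tick, value in events[1:-1]:
        last_tick, last_value = kept[-1]
        if abs(value - last_value) >= value_tol or tick - last_tick >= max_gap:
            kept.append((tick, value))
    kept.append(events[-1])
    return kept

# A dense one-event-per-tick modulation ramp thins to a handful of points
dense = [(t, t // 4) for t in range(480)]
print(len(dense), "->", len(thin_cc(dense)))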
To add a point (what some other programs call a node) manually but still use the slider to set the value, choose the Pencil tool and click at the time where you want to add the point, then move the slider to change the newly added point's value. To add a point manually that can be moved in any direction, place the cursor over the automation curve until it turns into a pointing hand, then double-click to create a point; click and drag on the point to move it. In this example, a modulation value of 27 is being entered at measure 1, beat 2, 192 ticks.

Another way to add an automation point is to right-click on the automation curve and select "Add Point." Click and drag on the point to move it. Note that this same pop-up menu also lets you change the shape of the curve between points; in this example, Fast Fade has been chosen. You can continue to add and edit automation until the automation "moves" are exactly as desired.

So why bother? Because automation can add expressiveness to synthesizer parts by keeping sounds dynamic and moving rather than static. The next step would be to add an external control surface, so you can create these changes manually using physical faders...but that's another story, for another time!