Everything posted by Anderton

  1. Check out the latest advances and techniques for mixing with DAWs
by Craig Anderton

The best mics, recording techniques, and players don’t guarantee great results unless they’re accompanied by a great mix. But the face of mixing has changed dramatically with the introduction of the DAW, both for better and worse. Better, because you don’t need to spend $250,000 for a huge mixer with console automation, but worse because we’ve sacrificed hands-on control and transparent workflow. Or have we?

Today’s DAWs have multiple options—from track icons to color-coding to configurable mixers—that help overcome the limitations of displaying tons of tracks on a computer monitor. While this can’t replace the one-function/one-control design of analog gear, some tasks (such as grouping and automation) are now actually easier to do than they were back in the days when analog ruled the world. As to hands-on control, controller-related products keep expanding and offering more possibilities, from standard control surfaces with motorized faders, to FireWire or USB mixers, to pressing keyboard workstations (such as Yamaha’s Motif XS/XF series or Korg’s M3 or Kronos) into service as controllers. These all help re-create “the analog experience.”

Although we’ll touch a bit on gear in this article, it’s only to illustrate particular points—the main point of interest here is techniques, not features, and how those techniques are implemented in various DAWs. And speaking of DAWs, if you’ve held off on upgrading your DAW of choice, now might be the time to reconsider. As DAW feature sets mature, more companies focus their efforts on workflow and efficiency. While these kinds of updates may not seem compelling when looking over specs on a web site, in practice they can make the recording and mixing process more enjoyable and streamlined. And isn’t that what we all want in the studio? So pull up those faders, dim the lights, and let’s get started.
GAIN-STAGING

The typical mixer has several places where you can set levels; proper gain-staging makes sure that levels are set properly to avoid either distortion (levels too high) or excessive noise (levels too low). There’s some confusion about gain-staging, because the way it works in hardware and software differs. With hardware, you’re always dealing with a fixed, physical amount of headroom and dynamic range, which must be respected. Modern virtual mixers (with 32-bit floating point resolution and above) have almost unlimited dynamic range in the mixer channels themselves—you can go “into the red” yet never hear distortion. However, at some point the virtual world meets the physical world, and is again subject to hardware limitations.

Gain-stage working backward from the output: make sure that the output level doesn’t overload the physical audio interface. I also treat -6 to -10dB output peaks as “0.” Leave some headroom to allow for inter-sample distortion (Fig. 1) and also, it seems converters like to have a little “breathing room.”

Fig. 1: SSL’s X-ISM metering measures inter-sample distortion, and is available as a free download from solidstatelogic.com.

Remember, these levels can—and usually will—be brought up during the mastering process anyway. Then, set individual channel levels so that the mixed output’s peaks don’t exceed that -6 to -10dB range.

CONFIGURABLE MIXERS

One of the most useful features of virtual mixers is that you can configure them to show only what’s needed for the task at hand, thus reducing screen clutter (Fig. 2).

Fig. 2: This collage outlines in red the toolbars that show/hide various mixer elements (left Steinberg Cubase 5, middle Cakewalk Sonar 8.5, and right Ableton Live 8).

Mixing often happens in stages: First you adjust levels, then EQ, then stereo placement, aux busing, etc.
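Backing up to gain-staging for a moment: the -6 to -10dB peak target described above is easy to check numerically. Here is a minimal sketch, assuming floating-point samples normalized to ±1.0; the function names are mine, not from any DAW API.

```python
import math

def peak_dbfs(samples):
    """Peak level of a block of float samples, in dB relative to full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return float("-inf")
    return 20.0 * math.log10(peak)

def within_headroom(samples, target_db=-6.0):
    """True if the mix peak stays at or below the target ceiling."""
    return peak_dbfs(samples) <= target_db

# A peak sample value of 0.5 is about -6.02 dBFS, so it just clears
# a -6 dB ceiling; a peak of 1.0 (0 dBFS) does not.
```

Note this only measures sample peaks; true inter-sample peaks (what X-ISM flags) require oversampled metering, which is one more reason to leave the extra few dB of margin.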
Granted, you’ll go back and forth as you tweak sounds—for example, changing EQ might affect levels—but if you save particular mixer configurations, you can recall them as needed. Here are some examples of how to use the configurable mixer feature when mixing.

The meter bridge. This is more applicable to tracking than mixing, but is definitely worth a mention. If you hide everything except meters (and narrow the mixer channel strips, if possible), then you essentially have a meter bridge. As software mixers often do not adjust incoming levels from an interface when recording (typically, the interface provides an applet for that task), you can leave the “meter bridge” up on the screen to monitor incoming levels along with previously-recorded tracks.

Hiding non-essentials. Visual distractions work against mixing; some people even turn off their monitors, using only a control surface, so they can concentrate on listening. While you might not want to go to that extreme, when mixing you probably don’t need to see I/O setups, and once the EQ settings are nailed, you probably won’t need those either. You may want to adjust aux bus sends during the course of a mix, but that task can be relegated to automation, letting you hide buses as well.

Channel arrangement. With giant hardware mixers, it was common to re-patch tape channel outs to logical groupings on the mixer, so that all the drum faders would be adjacent to each other; ditto vocals, guitars, etc. With virtual mixers, you can usually do this just by dragging the channels around: Take that final percussion overdub you added on track 26, and move it next to the drums. Move the harmony vocals so they’re sitting next to the lead vocal, and re-arrange the rhythm tracks so they flow logically.
And while you’re at it, think about having a more or less standardized arrangement in the future—for example starting off with drums on the lowest-numbered tracks, then bass, then rhythm guitars and keyboards, and finally moving on to lead parts and “ear candy” overdubs. The less you need to think about where to find what you want, the better.

Track icons. When I first saw these on GarageBand, I thought the concept was silly—who needs cute little pictures of guitars, drums, etc.? But I loaded track icons once when I wanted to make an article screen shot look more interesting, and have been using them ever since. The minute or two it takes to locate and load the icons pays off in terms of parsing tracks rapidly (Fig. 3). Coupled with color-coding, you can jump to a track visually without having to read the channel name.

Fig. 3: Acoustica’s Mixcraft 5 is one of several programs that offers track icons to make quick, visual identification of DAW tracks.

Color coding. Similarly, color-coding tracks can be tremendously helpful if done consistently. I go by the spectrum mnemonic: Roy G. Biv (red, orange, yellow, green, blue, indigo, violet). Drums are red, bass orange, melodic rhythm parts yellow, vocals green, leads blue, percussion indigo, and effects violet. When you have a lot of tracks, color-coding makes it easy to scroll to the correct section of the mixer (if scrolling is necessary, which I try to avoid if possible).

WHY YOU NEED A DUAL MONITOR SETUP

If you’re not using two (or even three) monitors, you’ll kick yourself when you finally get an additional monitor and realize just how much easier DAW-based mixing can be—especially with configurable mixers. Dedicate the second monitor to the mixer window and the main monitor to showing tracks, virtual instrument GUIs, etc., or stretch the mixer over both monitors to emulate old-school hardware-style mixing.
Your graphics card will need to handle multiple monitors; most non-entry-level cards do these days, and some desktop and laptop computers have that capability “out of the box.” However, combining different monitor technologies can be problematic—for example, you might want to use an old 19” CRT monitor along with a new LCD monitor, only to find that the refresh rate has to be set to the lowest common frequency. If the LCD wants 60Hz, then you’re stuck with 60Hz (i.e., flicker city!) on the CRT. If possible, use matched monitors, or at least matching technology.

CHANNEL STRIPS

Several DAWs include channel strips with EQ and dynamics control (Fig. 4), or even more esoteric strips (e.g., a channel strip dedicated to drums or vocals).

Fig. 4: Cakewalk Sonar X1 (left) and Propellerhead Reason (right) have sophisticated channel strips with EQ, dynamics control, and with X1, saturation.

However, also note that third-party channel strips are available—see Fig. 5.

Fig. 5: Channel strips, clockwise from top: iZotope Alloy, Waves Renaissance Channel, Universal Audio Neve 88RS.

If there are certain settings you return to frequently (I’ve found particular settings that work well with my voice for narration, so I have a vocal channel strip narration preset), these can save time compared to inserting individual plug-ins. Although I often do make minor tweaks, it’s easier than starting from scratch. Even if you don’t have specific channel strips, many DAWs let you create track presets that include particular plug-in configurations. For example, I made a “virtual guitar rack” track preset designed specifically for processing guitar with an amp sim, compression, EQ, and spring reverb.
BUSING

There are three places to insert effects in a typical mixer:

  • Channel inserts, where the effect processes only that channel
  • Master inserts, where the processor affects the entire mix (e.g., overall limiting or EQ)
  • Buses, where the processor affects anything feeding that bus

Proper busing can simplify the mixing process (Fig. 6), and make for a happier CPU.

Fig. 6: Logic Pro’s “Inspector” for individual channels shows not only the channel’s level on the left, but also, on the right, the parameters for whatever send you select (or the output bus).

In the days of hardware, busing was needed because unlike plug-ins, which you can instantiate until your CPU screams “no more,” a hardware processor could process only one signal path at a time. Therefore, to process multiple signals, you had to create a signal path that could mix together multiple signals—in other words, a bus that fed the processor.

The most common effects bus application is reverb, for two reasons. First, high-quality reverbs (particularly convolution types) generally use a lot of CPU power, so you don’t want to open up multiple instances. Second, there’s an aesthetic issue. If you’re using reverb to give a feeling of music being in an acoustic space, it makes sense to have a single, common acoustic space. Increasing a channel’s reverb send places the sound more in the “back,” and less send places it more in the “front.”

A variation on this theme is to have two reverb buses and two reverbs, one for sustained instruments and one for percussive instruments. Use two instances of the same reverb, with very similar settings except for diffusion. This is because you generally want lots of diffusion with percussive sounds to avoid hearing discrete echoes, and less diffusion with sustained instruments (like vocals or lead guitar) so that the reverb isn’t too “thick,” thus muddying the sustained sound.
You’ll still have the feeling of a unified acoustic space, but with the advantage of being able to decide how you want to process individual tracks. Of course, effects buses aren’t good only for reverb. I sometimes put an effect with very light distortion in a bus, and feed in signals that need a little “crunch”—for example, adding a little grit to kick and bass can help them stand out more when playing the mix through speakers that lack bass response. Tempo-synched delay for dance music cuts also lends itself to busing, as you may want a similar rhythmic delay feel for multiple tracks.

GROUPING

Grouping is a way to let one fader control many faders, and there are two main ways of doing this. The classic example of old-school grouping is a drum set with multiple mics; once you nail the relative balance of the individual channels, you can send them to a bus, which allows raising and lowering the level of all mics with a single control. With this method, the individual fader levels don’t change. The other option is not to use a bus, but assign all the faders to a group (Fig. 7).

Fig. 7: In PreSonus Studio One Pro, the top three tracks have been selected, and are about to be grouped so edits applied to one track apply to the other grouped tracks.

In this case, moving one fader causes all the other faders to follow. Furthermore, with virtual mixers it’s often possible to choose whether group fader levels move linearly or ratiometrically. With a linear change, moving one fader a certain number of dB raises or lowers all faders by the same number of dB. When using ratiometric changes, raising or lowering a fader’s level by a certain percentage raises or lowers all grouped fader levels by the same percentage, not by a specific number of dB. In almost all cases you’ll want to choose a ratiometric response.
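The two group behaviors can be sketched numerically. This is a toy model, not any particular DAW's implementation: the fader values are hypothetical, and I'm representing ratiometric moves as a common scaling of linear gain (1.0 = unity) and linear moves as a common dB offset.

```python
def group_trim_ratiometric(gains, factor):
    """Scale every grouped fader's linear gain by the same factor,
    preserving the relative balance between the channels."""
    return [g * factor for g in gains]

def group_trim_linear_db(gains_db, delta_db):
    """Offset every grouped fader by the same number of dB."""
    return [g + delta_db for g in gains_db]

# Trimming a kick/snare group to 80% keeps their balance intact:
kick, snare = group_trim_ratiometric([0.9, 0.6], 0.8)  # 0.72, 0.48
```

Either way, a group trim leaves the mix inside the group alone and only moves its overall level, which is exactly what you want when taming a drum submix.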
Another use for grouping is to fight “level creep,” where you raise the level of one track, then another, and then another, until you find the master is creeping up to zero or even exceeding it (see the section on Gain-Staging). Temporarily group all the faders ratiometrically, then bring them down (or up, if your level creep went in the opposite direction) until the output level is in the right range.

CONTROL SURFACES

Yes, I know people mix with a mouse. But I highly recommend using a control surface not because I was raised with hardware mixers, but because a control surface is a “parallel interface”—you can control multiple aspects of your mix simultaneously—whereas a mouse is more like a serial interface, where you can control only one aspect of a mix at a time. Furthermore, I prefer a mix to be a performance. You can add a lot more life to a mix by using faders not just to set static levels, but to add dynamic and rhythmic variations (i.e., moving faders subtly in time with the music) that impart life and motion to the mix. In any event, you have a lot of options when it comes to control surfaces (Fig. 8).

Fig. 8: A variety of hands-on controllers. Clockwise from upper left: Behringer BCF2000, Novation Nocturn, Avid MC Mix, and Frontier Design AlphaTrack.

One option is to use a control surface, dedicated to mixing functions, that produces control signals your DAW can interpret and understand. Typical models include the Avid Artist Series (formerly from Euphonix), Mackie Control, Cakewalk VS-700C, Behringer BCF2000, Alesis Master Control, etc. The more advanced models use motorized faders, which simplify the mixing process because you can overdub automation moves just by grabbing faders and punching in. If that option is too expensive, there are less costly alternatives, like the Frontier Design AlphaTrack, PreSonus Faderport, Cakewalk VS-20 for guitarists, and the like.
These generally have fewer faders and options, but are still more tactile than using a mouse. There’s yet another option that might work even better for you: An analog or digital mixer. I first got turned on to this back in the (very) early days of DAWs, when I had a Panasonic DA7 digital mixer. It had great EQ and dynamics that often sounded better than what was built into DAWs, as well as motorized faders and decent hardware busing options. It also had two ADAT cards so I could run 16 digital audio channels into the mixer, and I used the Creamware SCOPE interface with two ADAT outs. So, I could assign tracks to the SCOPE ADAT outs, feed these into the DA7, and mix using the DA7. Syncing the motorized fader moves to the DAW allowed for automated mixes.

This had several advantages, starting with hands-on control. Also, by using the DA7’s internal effects, I not only had better sound quality but lightened the computer’s CPU load. And it was easier to interface hardware processors with the DA7 compared to interfacing them with a DAW (although most current DAWs make it easy to treat outboard hardware gear like plug-ins if your audio interface can dedicate I/O to the processors). Finally, the DA7 had a MIDI control layer, so it was even possible to control MIDI parameters in virtual instruments and effects plug-ins from the same control surface that was doing the mixing. While the DA7 is long gone, Yamaha offers the 01V96VCM and 02R96VCM digital mixers, which offer the same general advantages; also check out the StudioLive series from PreSonus.

However, that’s just one way to deal with deploying a control surface. You can use a high-quality analog mixer, or something like the Dangerous Music 2-BUS and D-BOX. Analog mixing has a somewhat different sonic character compared to digital mixing, although I wouldn’t go so far as to say one is inherently better than the other (it’s more like a Strat vs. Les Paul situation—different strokes for different folks).
The main issue will be I/O limitations, because you have to get the audio out of the DAW and into the mixer. If you have 43 tracks and your interface has only 8 discrete outs—trouble. The workaround is to create stems by assigning related tracks (e.g., drums, background vocals, rhythm guitars, etc.) to buses, then sending the bus outputs to the interface. In some ways this is a fun way to mix, as you have a more limited set of controls and it’s harder to get “lost in the tracks.”

Today’s FireWire and USB 2.0 mixers (M-Audio, Alesis, Phonic, Mackie, etc.) can provide a best-of-both-worlds option. These are basically traditional mixers that can also act as DAW interfaces—and while recording, they have enough inputs to record a multi-miked drum set and several other instruments simultaneously. Similarly, when it’s time to mix you might have enough channels to mix each channel individually, or at least mix a combination of individual channels and stems.

SCREEN SETS

Different programs call this concept by different names, but basically, it’s about being able to call up a particular configuration of windows with a simple keyboard shortcut or menu item (Fig. 9) so you can switch instantly among various views.

Fig. 9: Logic Pro 9’s Screensets get their own menu for quick recall and switching among views.

Like many of today’s DAW features (track icons, color-coding, configuring mixers, and the like) it requires some time and thought to create a useful collection of screen sets, so some people don’t bother. But this initial time investment is well worth it, because you’ll save far more time in the future. Think of how often you’ve needed to leave a mixer view to do a quick edit in the track or arrange view: You resize and move windows, make your changes, then resize and move all over again to get back to where you were.
It’s so much simpler to have a keyboard shortcut that says “hide the mixer, pull up the arranger view, and have the piano roll editing window ready to go” and, after doing your edits, another shortcut that says “hide all that other stuff and just give me the mixer.”

DIGITAL METERING LIMITATIONS

And finally . . . they may be digital, but you can’t always trust digital meters: As just one example, to indicate clipping, digital meters sometimes require that several consecutive samples clip. Therefore, if only a few samples clip at a time, your meters may not indicate that clipping has occurred. Also, not all digital gear is totally consistent—especially hardware. In theory, a full-strength digital signal where all the bits are “1” should always read 0 dB; however, some designers provide a little headroom before clipping actually occurs—a signal that causes a digital mixer to hit -1dB might show as 0dB on your DAW. It’s a good idea to use a test tone to check out the metering characteristics of all your digital gear. Here are the steps:

  1. Set a sine wave test tone oscillator to about 1 kHz, or play a synthesizer sine wave two octaves above middle C (a little over 1 kHz).
  2. Send this signal into an analog-to-digital converter.
  3. Patch the A/D converter’s digital out to the digital in of the device you want to measure.
  4. Adjust the oscillator signal level until the indicator for the device being tested just hits -6dB. Be careful not to change the oscillator signal level!
  5. Repeat step 3 for any other digital audio devices you want to test.

In theory, all your other gear should indicate -6dB; if not, note any variations in your studio notebook for future reference.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
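As a footnote to the metering test above, the two reference numbers (a test frequency "a little over 1 kHz" and the -6dB level) fall out of two standard conversions. A small sketch; the helper names are mine:

```python
MIDDLE_C_HZ = 261.626  # C4 in twelve-tone equal temperament

def octaves_above(freq_hz, octaves):
    """Each octave doubles the frequency."""
    return freq_hz * (2 ** octaves)

def db_to_amplitude(db):
    """dBFS level to linear amplitude, where 1.0 is digital full scale."""
    return 10 ** (db / 20.0)

test_freq = octaves_above(MIDDLE_C_HZ, 2)  # about 1046.5 Hz
test_amp = db_to_amplitude(-6.0)           # about half of full scale
```

So a sine two octaves above middle C lands at roughly 1046.5 Hz, and a -6dB tone should sit at almost exactly half of full-scale amplitude on any gear that meters consistently.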
  2. Once Again, We Ask the Question: Why Be Normal?
by Craig Anderton

Many synthesizers and samplers, whether hardware or software, combine digital sample-based oscillators with synthesis techniques like filtering and modulation. These synthesis options can turn on-board samples into larger-than-life acoustic timbres, impart expressiveness to static sounds, and create entirely new types of sounds—but only if you know how to do a little editing.

Don’t believe the hype that editing a synth preset is difficult. All you really need to know is how to select parameters for adjustment, and how to change parameter values. Then, just play around: Vary some parameter values and listen to what happens. As you experiment, you’ll build up a repertoire of techniques that produce sounds you like. When it comes to using oscillators creatively, remember that just because a sample says “Piano” doesn’t mean it can only make piano sounds. As with so many aspects of recording, doing something “wrong” can be extremely right. Such as . . .

1. BOMB THE BASS

Transpose bass samples up by two octaves or more, and their characters change completely: So far I’ve unearthed great dulcimer, zither, and clavinet sounds. Furthermore, because transposing up shortens the attack time, bass samples can supply great attack transients for other samples that lack punch (although it may be necessary to add an amplitude envelope with a very short attack time so that you hear only the attack). Also, bass samples sometimes make very “meaty” keyboard sounds when layered with traditional keyboard samples.

2. THE VIRTUAL 12-STRING

Many keyboards include 12-string guitar samples, but these are often unsatisfying. As an alternative, layer three sets of guitar multisamples (Fig. 1). The first multisample becomes the “main” sample and extends over the full range of the keyboard.
Transpose the second set of multisamples an octave higher, and remember that the top two strings of a 12-string are tuned in unison, not octaves. So, limit the range of the octave-higher set of multisamples to A#3. Detune the third multisample set a bit compared to the primary sample, and limit its range to B3 on up. (You may want to fudge with the split point between octave and unison a bit, as a guitarist may play the doubled third string higher up on the neck.)

Fig. 1: A simple 12-string guitar patch in Reason’s NN-XT sampler. The octave-above samples are colored red for clarity, while the unison samples are colored yellow. (This example uses a limited number of samples to keep the artwork at a reasonable size.)

If you can delay the onset of the notes in the octave-above and unison layers by around 20 to 35ms, the effect will be more realistic.

3. THE ODD COUPLE

Combining samples with traditional synth waveforms can create a much richer overall effect, as well as mask problems that may exist in the sample, such as obvious loops or split points. For example, mixing a sawtooth wave with a string section sample gives a richer overall sound (the sawtooth envelope should mimic the strings’ amplitude envelope). Combining triangle waves with nylon string guitars and flutes also works well. And to turn a sax patch into a sax section, mix in a sawtooth wave set for a bit of attack time, then detune it compared to the main sax.

Sometimes combining theoretically dissimilar samples works well too. For example, on one synth I felt the piano sample lacked a strong bottom end. Layering an acoustic bass sample way in the background, with a little bit of attack time so you didn’t hear the characteristic acoustic bass attack, solved the problem.
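Two small conversions underpin the 12-string recipe above: a slight detune for the unison layer, and a 20-35ms onset delay. A sketch, assuming a 44.1kHz sample rate and a hypothetical 8-cent detune (the exact amount is taste):

```python
def cents_to_ratio(cents):
    """Pitch offset in cents to a frequency ratio (100 cents = 1 semitone)."""
    return 2 ** (cents / 1200.0)

def ms_to_samples(ms, sample_rate=44100):
    """Delay time in milliseconds to the nearest whole number of samples."""
    return round(ms * sample_rate / 1000.0)

detune = cents_to_ratio(8)  # multiply the layer's pitch by about 1.005
delay = ms_to_samples(25)   # a 25ms onset delay for the doubled layers
```

If your sampler specifies detune in cents and delay in milliseconds directly (most do), you won't need the conversions; they're here to show how small the offsets really are.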
Sometimes adding a sine wave fundamental to a sound also increases the depth; this worked well with a Chapman Stick sample to increase the low end “boom.” Try other “unexpected” combinations as well, such as mixing choir and bell samples together, or high-pitched white noise and choir.

4. FUN WITH INTERGALACTIC COSMIC EXPLOSIONS

Transpose percussion sounds (cymbals, drums, tambourines, shakers, etc.) way down—at least two octaves—for weird sound effects and digital noises. If this adds any quantization noise or grunge to the sound, you may want to keep it; if not, consider closing the lowpass filter down a bit to take out some of the high frequencies, where any artifacts will be most noticeable. For truly massive thunder effects, spaceship sounds, and exploding galaxies (which are always tough to sample!), choose a complex waveform, transpose it down as far as it will go, and close the filter way down . . . then layer it with a similar sound.

5. GENTLEMEN, START YOUR SAMPLES

Changing the start point of a sample (a feature available on most synths and samplers) can radically affect the timbre and add dynamics. Move the start point further into the sample (Fig. 2) until you obtain the desired “minimum dynamics” sound, then tie the start point time to keyboard velocity so that more velocity moves the start point closer to the beginning of the sample (this usually requires negative modulation, but check your manual).

Fig. 2: The green line indicates the initial sample start point (minimum velocity). Hitting higher velocities moves the sample point further to the left, toward the beginning of the sample, so the sound picks up more of the attack. The red part of the waveform is the area affected by velocity.

This seems to work best with percussive sounds, as changing the start point dynamically can cause clicks that are obvious with sustained sounds, but blend in with percussion.
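The negative modulation described above boils down to a simple mapping. A sketch, assuming MIDI velocity (0-127) and a start offset measured in samples; the function name and numbers are mine:

```python
def start_offset(velocity, max_offset):
    """Negative velocity modulation of the sample start point: soft notes
    start deep into the sample (attack skipped), hard notes start near
    the beginning (full attack plays)."""
    v = max(0, min(velocity, 127))  # clamp to the MIDI velocity range
    return round(max_offset * (1 - v / 127))

# start_offset(0, 2000) -> 2000 (softest note: the whole attack is skipped)
# start_offset(127, 2000) -> 0  (hardest note: play from the very start)
```

Intermediate velocities land proportionally in between, which is what gives the effect its dynamic feel.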
An alternative is to use two versions of the same sample, with one sample’s start time set into the sample and the other left alone; then use velocity switching to switch from the altered sample to the unaltered one as velocity increases.

6. DETUNING: WHO SAYS SUBTLE IS GOOD?

Detuning isn’t just about subtle changes. When creating an unpitched sound such as drums or special effects, use two versions of the same sample for the two oscillators, but with their pitches offset by a few semitones to thicken the sound. You may need to apply a common envelope to both of them in case the transposition is extreme enough that one sample has a noticeably longer decay than the other one.

7. THE REVENGE OF HARRY PARTCH

Microtonal scales (17-tone, 21-tone, exotic even-tempered scales) are good for experimental music, but they’re also useful for special effects. After all, car crashes are seldom even-tempered, and you may want a somewhat more “stretched” sound—either higher or lower—than what the sample provides. To get these kinds of scales (or even a 1-tone scale where all notes on the keyboard play at the same pitch), assign note position (keyboard) as an oscillator modulation source. Adjusting the degree of modulation can “stretch” or “compress” the keyboard so that an octave takes up more or fewer keys than the usual 12. Note that you may need to adjust the tuning so that the “base” key of a scale falls where you want it.

8. CROSSING OVER

Use waveform crossfading to cover up samples with iffy loops. For example, one keyboard had a very realistic flute sound, but the manufacturer assumed you’d be playing the flute in its “normal” range, so the highest sample was looped and stretched to the top of the keyboard.
This flute sound actually was very useable in the upper ranges, except that past a certain point the loop became overly short and “tinny.” So, I used the flute sample for one oscillator and a triangle wave for the other, and faded out the flute as it hit the looped portion, while fading in the triangle wave (Fig. 3).

Fig. 3: As the natural flute loop fades out, a looped triangle wave fades in to provide a smoother looped sound for the decay.

The flute sample gave the attack, and the triangle wave a smooth and consistent post-attack sound. Similar techniques work well for brass, but you’ll probably want to crossfade with a sawtooth wave or other complex waveform.

9. BETTER LIVING THROUGH LAYERING

Try layering two samples, and assigning velocity control to the secondary sample’s amplitude so that hitting the keys harder brings in the second sample. This can be very effective in creating more complex sounds. One option for the second sample is to bring in a detuned version, so that playing harder brings in a chorusing effect; or you can use variations on the same basic sound (e.g., nylon and steel string guitars) so that velocity “morphs” through the two sounds.

10. TAKE THE LEAD WITH GUITAR “FEEDBACK”

With lead guitar patches, tune one lead sample an octave higher than the other lead sample and tie both sample levels to keyboard pressure. However, set the initial volume of the main sample to maximum level, with pressure adding negative modulation that lowers the level; the octave-higher sample should start at minimum level, with pressure adding positive modulation that increases the level. Pressing down on the key during a sustaining note brings in the octave-higher “feedback” sound and fades out the fundamental. For a variation on this theme, have pressure introduce vibrato and perhaps bend pitch up a half-tone at maximum pressure.
Also experiment with other waveforms and pitches for the octave-higher sound; a sine wave tuned an octave and a fifth above the fundamental gives a very convincing “feedback” effect.
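Several of the tips in this article come down to simple pitch arithmetic: transposing by octaves multiplies frequency by powers of two, and "stretching" keyboard tracking (tip 7) changes how many keys span an octave. A sketch in equal-temperament terms; the function name and defaults are mine:

```python
def key_to_freq(key, base_key=60, base_freq=261.626, steps_per_octave=12):
    """Equal-tempered frequency for a key number. Changing steps_per_octave
    stretches or compresses the scale: with 19 steps, one octave spans
    19 keys instead of the usual 12."""
    return base_freq * 2 ** ((key - base_key) / steps_per_octave)

up_two_octaves = key_to_freq(84)                   # 4x the base frequency
microtonal = key_to_freq(79, steps_per_octave=19)  # one octave up, 19-tone
```

A 1-tone scale is just the limiting case where the keyboard-tracking modulation is zero, so every key returns the base frequency.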
  3. Are you fighting technology, or flowing with it?
by Craig Anderton

Technology can be overwhelming. But does it have to be? Why can some people roll with any technological punch that’s thrown their way, while others struggle to keep up? Some musicians and engineers feel that technology “gets in the way” of the recording or music-making process. Conversely, there’s also no denying that technology makes possible music that was never possible before, and can even provide the means to streamline its production. If you feel there’s some kind of dichotomy between technology and music, you’re not imagining things: Your brain’s “firmware” is hardwired to deal with artistic and technological tasks differently. In this article, we’ll explore why this division exists, describe how your brain’s firmware works, and provide some tips on how to stay focused on the art when you’re up to your neck in tech.

COOPERATION AND CONFLICT

Technology and art cooperate in some areas, but conflict in others. Regarding cooperation, think of how technology has always pushed the instrument-making envelope (the piano was quite high-tech in its day). And recording defies time itself: We can not only enjoy music from decades ago, but also sing a harmony with ourselves — essentially, going backward in time to sing simultaneously with our original vocal. Cool.

Then there’s the love affair between music and mathematics. Frequencies, tempos, rhythms, SMPTE time code — they’re all based on math. Music loves math. When my daughter was getting into fractions, I created a sequence that included half notes, quarter notes, sixteenth notes, etc. She immediately “got” the concept upon hearing fractions expressed as rhythms.

As to conflicts, first there’s the dichotomy of how the brain processes information (as we’ll discuss next); and second, there are a few societally-induced conflicts.
For example, some people think that using technology is somehow cheating (e.g., lowering a sequence's tempo so you can play along more easily, then speeding it back up). Furthermore, the accelerated rate of technological change itself causes conflicts. Which gear should I buy? Which platform is better? And why do the skills I learned just a few years ago no longer matter? Let's look at how physiology influences our perceptions of both technology and art, as this will provide some clues on how best to reconcile the two.

THE MAN WITH TWO BRAINS

Our brain has two hemispheres; each one processes information differently. Consider the following quote from the essay "2044: One Hundred Years of Innovation," presented by William Roy Kesting (founder of Kesting Ventures) and Kathy Woods (VP and Principal of Woods Creative Services) at a 1994 meeting of the Commercial Development Association: "The right brain is the older of the two hemispheres and functions in an all-at-once mode to produce a complete picture. In contrast, the left hemisphere excels in sequential functions such as words, abstract thinking and numbers."

Essentially, the right brain is the "Macintosh GUI" side that handles intuitive, emotional tasks — like being creative. The left brain is more like the "MS-DOS command line" side that works in a more linear fashion and deals with sequential thought processes.

Use Color to Your Advantage. The right brain parses color rapidly. Many programs let you customize color schemes, and hardware companies are becoming more aware of this too. For example, the Alesis Ion synth changed the transpose LED's intensity when transposing by different octaves, making it easy to see the transposition range without having to read anything. And its programs were arranged in four banks by color rather than letters or numbers.

The "breakthrough" in understanding this difference between the hemispheres comes from the work of Drs. Roger W. Sperry, David H. Hubel, and Torsten N. Wiesel, who shared the 1981 Nobel Prize in Physiology or Medicine. Later studies have modified their findings a bit, but some comments in the Nobel award presentation speech, by David Ottoson, are well worth noting.

"The left brain half is . . . superior to the right in abstract thinking, interpretation of symbolic relationships and in carrying out detailed analysis. It can speak, write, carry out mathematical calculations and in its general function is rather reminiscent of a computer. It is with this brain half that we communicate. The right cerebral hemisphere is mute . . . It cannot write, and can only read and understand the meaning of simple words in noun form. It almost entirely lacks the ability to count and can only carry out simple additions up to 20. However . . . is superior to the left in the perception of complex sounds and in the appreciation of music . . . it is, too, absolutely superior to the left hemisphere in perception of nondescript patterns. It is with the right hemisphere we recognize the face of an acquaintance, the topography of a town, or landscape earlier seen.

"Pavlov . . . that mankind can be divided into thinkers and artists. Pavlov was perhaps not entirely wrong. Today we know from Sperry's work that the left hemisphere is cool and logical in its thinking, while the right hemisphere is the imaginative, artistically creative half of the brain."

As a result, one option is to explain the art/technology dichotomy as the hemispheres being not necessarily in conflict, but working at cross-purposes. Once "stuck" in a hemisphere's mode of thought, it's difficult to transition seamlessly into working in the other one, let alone integrate the two.

The "Unified Interface" and the Brain. A "unified interface," which avoids opening multiple overlapping windows in favor of a single screen where elements can be shown or hidden as needed, speaks to both hemispheres.
The right brain takes in the "big picture," while the left brain can focus on details if needed. Ableton Live has two unified interfaces — a "right brain" one optimized for live improvisation, and a "left brain" one optimized for "offline" editing.

But if that's the case, why are so many good programmers musicians? And why have many mathematicians — going back as far as Pythagoras — been fascinated with music, and vice-versa?

THE MUSICIAN'S "FIRMWARE"

The NAMM campaign "music makes you smarter" is rooted in truth. Recent research shows that many musicians indeed use both halves of the brain to a greater extent than non-musicians. According to Prof. Dr. Lars Heslet (Professor of Intensive Care Medicine at Copenhagen State Hospital in Denmark, and a researcher into the effects of music on the body): "The right brain hemisphere is specialized in the perception of spatial musical elements, that is the sense of harmony and pitch, whereas the left hemisphere perceives the progress of the melody, which requires musical memory."

In other words, both halves of the brain need to be in play to fully appreciate music. This may explain why musicians, critics, and average listeners have seemingly different tastes in music: The critics listen with the analytical (left) side of their brain, the non-musicians react emotionally with their right brain, and the musicians use both hemispheres.

Here's an interesting quote from Frederick Turner (Founders Professor of Arts and Humanities at the University of Texas at Dallas) and Ernst Pöppel, the distinguished German neuropsychologist: "Jerre Levy . . . characterizes the relationship between right and left as a complementarity of cognitive capacities. She has stated in a brilliant aphorism that the left brain maps spatial information into a temporal order, while the right brain maps temporal information onto a spatial order."

Does that sound like a sequencer piano roll to you? Indeed, it uses both temporal and spatial placement.
The same thing goes for hard disk recording, where you can "see" the waveforms. Even though some programs allow turning off waveform drawing, I'd bet very few musicians do: We want to see the relationship between spatial and temporal information.

We Want Visual Feedback. Which track view do you like better — the one that shows MIDI and audio data, or the blank tracks? Odds are you prefer a relationship between spatial and temporal information.

Again, from Turner and Pöppel: "[The fact that] experienced musicians use their left brain just as much as their right in listening to music shows that their higher understanding of music is the result of the collaboration of both 'brains,' the music having been translated first from temporal sequence to spatial pattern, and then 'read,' as it were, back into a temporal movement."

HEMISPHERIC INTEGRATION: JUST DO IT!

The ideal bridge between technology and art lies in "hemispheric integration" — the smooth flow of information between the two hemispheres, so that each processes information as appropriate. For example, the right brain may intuitively understand that something doesn't sound right, while the left brain knows which EQ settings will fix the problem. Or for a more musical example, a songwriter may experience a distinct emotional feeling in the right hemisphere, while the left hemisphere knows how to "map" this onto a melody or chord progression. Without hemispheric integration, the brain has to bounce back and forth between the two hemispheres, which (as noted earlier) is difficult. This is why integration may expedite the creative process.

Here's another quote from William Roy Kesting and Kathy Woods: " . . . just as creative all-at-once activities like art need left-sided sequence, so science and logic depend on right-sided inspiration. Visionary physicists frequently report that their insights occur in a flash of intuition . . .
Einstein said: 'Invention is not the product of logical thought, even though the final product is tied to a logical structure.'"

Mozart also noted the same phenomenon. He once stated that, when his thoughts flowed best and most abundantly, the music became complete and finished in his mind, like a fine picture or a beautiful statue, with all parts visible simultaneously. He was seeing the whole, not just the individual elements.

MEET THE INFORMATION SUPERHIGHWAY

The physical connection between the two hemispheres is called the corpus callosum. As Dr. Lars Heslet notes, "To attain a complete musical perception, the connection and integration between the two brain hemispheres (via the corpus callosum) is necessary. This interaction via the corpus callosum can be enhanced by music."

Interestingly, according to the article "Music of the Hemispheres" (Discover, 15:15, March 1994), "The corpus callosum — that inter-hemisphere information highway — is 10-15% thicker in musicians who began their training while young than it is in non-musicians. Our brain structure is apparently strongly molded by early training."

Bingo. Musical training forges connections between the left and right hemispheres, resulting in a measurable, physical change. And that also explains why some musicians are just as much at home reading about some advanced hardware technique in our articles library as they are listening to music: They have the firmware to handle it.

THE RIGHT/LEFT BRAIN "GROOVE"

Producer/engineer Michael Stewart (who produced Billy Joel's "Piano Man"), while studying interface design, noticed that someone involved in a mostly left- or right-brain activity often had difficulty switching between the two, and sometimes worked better when able to remain mostly in one hemisphere.
(Some of his research was presented in an article in EQ magazine called "Recording and the Conscious Mind.") For example, as a producer, he would often have singers who played guitar or keyboards do so while singing, even if he didn't record the instruments. He felt this kept the left brain occupied instead of letting it be too self-critical or analytical, thus allowing the right brain to take charge of the vocal. Another of his more interesting findings was that you could "restart" the right brain by looking at pictures — the right brain likes visual stimulation.

Stewart was also the person who came up with the "feel factor" concept, quantifying the effects that small timing differences have on the brain's perception of music, particularly with respect to "grooves." This is a fine example of using left-brain thinking to quantify more intuitive, right-brain concepts.

Quantization and Feel. Quantization can hinder or help a piece of music, depending on how you use it. For example, set any quantization "strength" parameter to less than 100% (e.g., 70%) to move a note closer to the rhythmic grid while retaining some of the original feel. Also, quantization "windows" can avoid quantizing notes that are already close to the beat, and "groove" quantizing (which quantizes parts to another part's rhythm, not a fixed rhythmic grid) can give a more realistic feel.

Timing shifts for notes are also important. For example, if in rock music you shift the snare somewhat later than the kick, the sound will be "bigger." If you move the hi-hat a little bit ahead of the kick, the feel will "push" the beat more.

TECHNOLOGICAL TRAPS

Technology has created a few traps that meddle with hemispheric integration. When the left hemisphere is processing information, it wants certainty and a logical order. Meanwhile, the right brain craves something else altogether.
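The partial-strength and windowed quantization described earlier can be sketched in a few lines of code. This is a generic illustration, not any particular DAW's algorithm; the function name and parameters are hypothetical.

```python
def quantize(times, grid=0.25, strength=0.7, window=None):
    """Move note start times (in beats) toward the nearest grid line.

    strength: 0.0 = leave notes alone, 1.0 = snap exactly to the grid.
    window:   if set, notes already within this distance of the grid are
              left untouched, preserving their feel.
    """
    out = []
    for t in times:
        nearest = round(t / grid) * grid
        offset = nearest - t
        if window is not None and abs(offset) <= window:
            out.append(t)  # already close enough; keep the original timing
        else:
            out.append(t + offset * strength)  # move partway to the grid
    return out

# A slightly rushed sixteenth-note part (grid = 0.25 beats), tightened
# to 70% strength so some of the push remains:
print(quantize([0.02, 0.27, 0.46, 0.74], strength=0.7))
```

At strength 1.0 this is ordinary hard quantization; at 0.7 each note keeps 30% of its original deviation, which is the "retain some feel" behavior the tip describes.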
As mentioned earlier with the examples regarding Michael Stewart, in situations where hemispheric integration isn't strong — or where you don't want to stress the brain by switching hemispheres — staying in one hemisphere is often the key to a good performance or session.

Quite a few people believe pre-computer-age recordings had more "feel," but I think they may be looking in the wrong place for an answer as to why. Feel is not found in a particular type of tube preamp or mixer; I believe it was found in the recording process. When Buddy Holly was cutting his hits, he didn't have to worry about defragmenting hard drives. In his day, the engineer handled the left-brain activities, the artist lived in the right brain, and the producer integrated the two. The artist didn't have to be concerned about technology, and could stay in that "right brain groove."

Cycle Recording: Let the Computer Be Your Engineer. Cycle (or loop) recording repeats a portion of music over and over, adding a new track with each overdub. You can then sort through the overdubbed tracks and "splice" together the best parts. This lets you slip into a right-brain groove, then keep recording while you're in that groove without having to worry about arming new tracks, rewinding, etc.

If you record by yourself, you've probably experienced a situation where you had some great musical idea and were just about to make it happen, but then you hit a technical glitch (or ringing phone, or whatever). So you switched into left-brain mode to work on the glitch or answer the phone. But when you tried to get back into that "right brain groove," you couldn't . . . it was lost. That's an example of the difficulty of switching back and forth between hemispheres. In fact, some people will lose that creative impulse just in the process of arming a track and getting it ready to record.
Now, if you have an Einsteinian level of hemispheric integration, maybe you would see the glitch or phone call as merely a thread in the fabric of the creative process, and never leave that right-brain zone. We'll always be somewhat beholden to the differences between hemispheres, but at least we know one key to reprogramming your firmware: Get involved with music, early on, in several different facets, and keep fattening up that corpus callosum. It's probably not a bad idea to exercise both halves of your brain, either. For example, given that the right brain controls the left hand and the left brain controls the right hand, try writing with the hand you normally don't use from time to time, and see if that stimulates the other hemisphere.

JUST BECAUSE WE CAN . . . SHOULD WE?

Technology allows us to do things that were never possible before. And maybe we were better off when they weren't possible! For example, technology makes it possible to be artist, engineer, and producer all at once. But this goes against our very physiology, as it forces constant switching between the hemispheres. Would some of our greatest songwriters have written such lasting songs if they'd engineered or produced themselves? Maybe, but then again, maybe not.

And what about mixing with a mouse? Sure, it's possible to have a studio without a mixing console, but this reduces the mixing process to a linear, left-brain activity. A hardware mixing console (or control surface) lets you see "the big picture," with all the channels, EQ, pans, etc. mapped out in front of you.

AVOIDING OPTION OVERLOAD

Part of the fix for hemispheric integration is to use gear you know intimately, so you don't have to drag yourself into left-brain mode every time you want to do something. When using gear becomes second nature, you can perform left-brain activities while staying in the right brain.
As just one example: If you're a guitarist and want to play an E chord, when you were first learning you probably had to use your left brain to remember which fingers to place on which frets. Now you can do it instinctively, even while you stay in the right brain. The same principle holds true for using any gear, not just a guitar.

Ultimately, simplification is a powerful antidote to option overload. When you're writing in the studio, the point isn't to record the perfect part, but to get down ideas. Record fast before the inspiration goes away, and worry about fixing any mistakes later. Don't agonize over level-setting; just be conservative so you don't end up with distortion. Find a good "workstation" plug-in or synthesizer and master it, then use that one plug-in as a song takes shape; you can always substitute fine-tuned parts later. Also maintain a small number of carefully selected presets for signal processors and instruments; you can always tweak them later. And if you're a plug-o-holic, remove the ones you don't use. How much time do you waste scrolling through long lists of plug-ins? Use placeholders for parts if needed, and don't edit as you go along — that's a left-brain activity.

With software, templates and shortcuts are powerful simplifying tools that let you stay in right-brain mode. Templates mean you don't have to get bogged down setting things up, and hitting computer keys (particularly function keys) is more precise than mouse movements. Efficiency keeps the creative process from bogging down.

MAKING MUSICAL INSTRUMENTS MAGICAL

As Robert Pirsig wrote in "Zen and the Art of Motorcycle Maintenance," "If the machine produces tranquility, it's right." Reviews and other opinions don't matter if something feels right to you.

Which Type of Graphic Interface Works for You? The interface is crucial to making an instrument feel right. Compare the screen shot for one of the earliest software synths, Seer Systems' Reality, to that of G-Media's Oddity.
Reality has more of a spreadsheet vibe, whereas the Oddity portrays the front panel of the instrument it emulates; this makes the signal flow more obvious.

Companies can supply technology, but only you can supply the magic that makes technology come alive. No instrument includes soul; fortunately, you do. As we've seen, though, to let soul and inspiration come through, you need to give your creative right brain full rein, while the left brain makes its seamless contribution toward making everything run smoothly.

Part of mastering the world of technology is knowing when not to use it. Remember, all that matters in your music is its emotional impact on the listener. Listeners don't want perfection; they want an emotionally satisfying experience. Be very careful when identifying "mistakes" — they can actually add character to your recording. And finally, remember that no amount of editing can fix a bad musical part . . . yet almost nothing can obscure a good one.

The bottom line is that you need to master the technology you use so that operating it becomes automatic, then set up a workflow that makes it easy to put your left brain on autopilot. That frees up the right brain to help you keep the "art" in the state of the art.

We'll leave the last word on why you want to do this to Rolf Jensen, director of the Copenhagen Institute for Futures Studies: "We are in the twilight of a society based on data. As information and intelligence become the domain of computers, society will place a new value on the one human ability that can't be automated: Emotion."
  4. USB memory sticks give huge performance gains with Ableton Live
By Craig Anderton

Many musicians use Ableton Live with a laptop for live performance, but this involves a compromise. Laptops often have a single, fairly slow (5400 RPM) disk drive, and a limited amount of RAM compared to desktop computers. Live gives you the choice of storing clips in RAM or on hard disk, but you have to choose carefully. If you assign too many clips to disk, eventually the disk will not be able to stream all of them successfully, and there will be audio gaps and dropouts. But if you assign too many clips to RAM, there won't be enough memory left for your operating system and startup programs.

Fortunately, there's a simple solution to all of these problems: Store your Ableton projects on USB 2.0 memory sticks. You can then assign all the clips to stream from the solid-state "disk," so Live treats them as disk clips — but with none of a hard disk's seek-time or mechanical limitations. Best of all, the clips place no demands on your laptop's hard drive or RAM, leaving them free for other uses. Here's how to convert a project to one that works from a USB memory stick.

1. Plug your USB 2.0 memory stick into your computer's USB port.

2. Call up the Live project you want to save on your memory stick.

3. If the project hasn't been saved before, select "Save" or "Save As" and name the project to create a project folder.

Fig. 1: The "Collect All and Save" option lets you make sure that everything used in the project, including samples from external media, is saved with the project.

4. Go File > Collect All and Save (Fig. 1), then click "OK" when asked if you are sure.

Fig. 2: This is where you specify what you want to save as part of the project.

5. When you're asked to specify which samples to copy into the project, select "Yes" for all options, and then click OK (Fig. 2).
Note that if you're using many instruments with multisamples, this can require a lot of memory! But if you're mostly using audio loops, most projects will fit comfortably on a 1GB stick.

6. Copy the project folder containing the collected files to your USB memory stick.

7. From the folder on the USB memory stick, open the main .ALS Live project file.

8. Select all audio clips by drawing a rectangle around them, typing Ctrl-A, or Ctrl-clicking (Windows) on the clips.

Fig. 3: All clips have been selected. Under "Samples," click on RAM until it's disabled (i.e., the block is gray).

9. Select Live's Clip View, and under Samples, uncheck "RAM" (Fig. 3). This converts all the audio clips to "disk" clips that stream from your USB stick.

Now when you play your Live project, all your clips will play from the USB memory stick, and your laptop's hard disk and RAM can take a nice vacation. This technique really works — try it!
  5. This Simple Technique Can Make Amp Sims Sound Warmer and More Organic
by Craig Anderton

All amp sims that I've used exhibit, to one degree or another, what I call "the annoying frequency." For some reason this seems to be inherent in modeling, and it adds a "fizzy," whistling sound that I find objectionable. It may be the result of pickup characteristics, musical style, playing technique, etc. adding up in the wrong way and emphasizing a resonance, or it may be something else. In any event, it detracts from the potential richness of the amp sound. This article includes audio examples from Avid's Eleven Rack and Native Instruments' Guitar Rig 4, but I'm not picking on them; almost every amp sim program I've used has at least one or two amps that exhibit this characteristic. It also seems like an unpredictable problem: One amp might have this "fizz" only when using a particular virtual mic or cabinet, but the same mic or cabinet on a different amp might sound fine.

Normally, if you found this sound, you'd probably just say "I don't like that" and try a different cabinet, amp, or mic (or change the amp settings). But you don't have to, if you know the secret of fizz removal. All you need is a stage or two of parametric (not quasi-parametric) EQ, a good set of ears, and a little patience.

BUT FIRST . . .

Before getting into fizz removal, you might try a couple of other techniques. Physical amps don't have a lot of energy above 5kHz because of the physics of cabinets and speakers, but amp sims don't have physical limitations. So even if the sim is designed to reduce highs, you'll often find high-frequency artifacts, particularly if you run the sim at lower sample rates (e.g., 44.1kHz). One way to obtain a more pleasing distorted amp sim sound is simply to enable any oversampling options; if none are available, run the sim at an 88.2kHz or 96kHz sample rate. Another option is removing unneeded high frequencies.
Many EQs offer a lowpass filter response that attenuates levels above a certain frequency. Set this for around 5-10kHz, with as steep a rolloff as possible (specified in dB/octave; 12dB/octave is good, 24dB/octave is better). Vary the frequency until any high-frequency "buzziness" goes away.

Similarly, it's a good idea to trim the very lowest bass frequencies. Physical cabinets — particularly open-back cabinets — have a limited low-frequency response; besides, recording engineers often roll off the bass a bit to give a "tighter" sound. A quality parametric EQ will probably have a highpass filter function. As a guitar's lowest string is just below 100Hz, set the frequency for a sharp low-frequency rolloff around 70Hz or so to minimize any "mud."

FIZZ/ANNOYING FREQUENCY REMOVAL

Although amp sims can do remarkably faithful amp emulations, with real amps the recording process often "smooths out" undesirable resonances and fizz due to miking, mic position, the sound traveling through air, etc. When going direct, though, any "annoying frequencies" tend to be emphasized.

Please listen to this audio example on the Harmony Central YouTube channel. The sound is from Avid's Eleven Rack; the combination of the Digidesign Custom Modern amp, 2x12 Black Duo Cab, and on-axis Dyn 421 mic creates a somewhat "fizzy" sound. Listen carefully while the section labeled "original file" plays, and you'll hear a high, "whistling" quality that doesn't sound at all organic or warm, but "digital." Follow these steps to reduce this whistling quality.

1. Turn down your monitors, because there may be some really loud levels as you search for the annoying frequency (or frequencies).

2. Enable a parametric equalizer stage. Set a sharp Q (resonance), and boost the gain to at least 12dB.

3. Sweep the parametric frequency as you play. There will likely be a frequency where the sound gets extremely loud and distorted — more so than at any other frequency. Zero in on this frequency.

4. Now use the parametric gain control to cut gain, thus reducing the annoying frequency.

In the part of the video labeled "sweeping filter to find annoying frequency," I've created a sharp, narrow peak to localize where the whistle is. You'll hear the peak sweep across the spectrum, and while the sharp peak is unpleasant in itself, toward the end (in the part labeled "here it is!") you'll note that it has settled on the whistling sound we heard in the first example. In this case, after sweeping the parametric stage, the annoying whistle is centered around 7.9kHz. In the next example, labeled "now we'll notch it out," you'll hear the whistle for the first couple of seconds, then hear it disappear as the peak turns into a notch (check out the filter response in Fig. 1). Note how the amp now sounds richer, warmer, more organic, and just plain more freakin' wonderful. A little past the halfway point through the clip, I switched the filter out of the circuit so the response was flat (no dip); you'll hear the whistle come back.

Fig. 1: Here's what was used to remove the fizz. This single parametric notch makes a huge difference in terms of improving the sound quality.

DUAL NOTCH TECHNIQUES AND EXAMPLES

Sometimes finding and removing a second fizz frequency can improve the sound even more; check out Example 2 in the video. First you'll hear the original file from Guitar Rig's AC30 emulation. It sounds okay, but there's a certain harshness in the high end. Let's find the fizzy frequencies and remove them, using the same procedure we used with the Eleven Rack. After sweeping the parametric stage, I found an annoying whistle centered at 9,645 Hz. The part labeled "annoying frequency at 9645 Hz" uses the parametric filter to emphasize this frequency, while the part labeled "notch at 9645 Hz" has a much smoother high end. But we're not done yet; let's see if we can find any other annoying frequencies.
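The high/low trims and the narrow notch described above can be sketched with a standard DSP library. This is an illustrative sketch (using SciPy), not the plug-in chain from the article; the corner frequencies simply match the values in the text (70Hz highpass, ~8kHz lowpass, 7.9kHz notch).

```python
import numpy as np
from scipy.signal import butter, iirnotch, sosfilt, tf2sos

fs = 44100  # sample rate in Hz

# Highpass around 70 Hz to trim "mud" below the guitar's low E
hp = butter(4, 70, btype="highpass", fs=fs, output="sos")

# Steep lowpass (4th-order Butterworth, 24 dB/octave) around 8 kHz
lp = butter(4, 8000, btype="lowpass", fs=fs, output="sos")

# Narrow notch at the "annoying frequency" found by sweeping (7.9 kHz);
# a high Q keeps the notch narrow so the highs aren't dulled overall
b, a = iirnotch(7900, Q=30, fs=fs)
notch = tf2sos(b, a)

def defizz(x):
    """Apply the highpass, lowpass, and notch stages in series."""
    for sos in (hp, lp, notch):
        x = sosfilt(sos, x)
    return x

# Quick check: a 7.9 kHz sine is strongly attenuated, a 1 kHz sine passes
t = np.arange(fs) / fs
fizz = defizz(np.sin(2 * np.pi * 7900 * t))
mid = defizz(np.sin(2 * np.pi * 1000 * t))
```

The Q of 30 gives a bandwidth of roughly 260 Hz at 7.9 kHz, which mirrors the article's advice to use the narrowest notch that still removes the whistle.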
The section labeled "annoying frequency at 5046 Hz" again uses a filter to emphasize this frequency. The next section, with notches at 9645 Hz and 5046 Hz, has notches at both frequencies (Fig. 2). Compare this to the original file at the end, without any notches; note how the version without notches sounds more "digital," and lacks the "warmth" of the filtered versions.

Fig. 2: The above image shows the parametric EQ notches that were applied to the signal, using the Sonitus EQ in Cakewalk's SONAR DAW.

MUCH BETTER!

Impressive, eh? This is the key to getting good amp sim sounds. Further refinements on this technique:

Experiment with the notch bandwidth. You want the narrowest notch that nonetheless gets rid of the whistle; otherwise you'll diminish the highs . . . although that may be what you want. As I said, experiment!

Some amp sims exhibit multiple annoying frequencies; sometimes three notches is perfect. Generally, the more notches you use, the narrower they need to be.

When you're done, between the high/low frequency trims and the midrange notches, your amp sim should sound smoother, creamier, and more realistic. Enjoy your new tone!
  6. Use MIDI Controller Data to Add Expressiveness to Software Synths
by Craig Anderton

When Sonic Foundry's Acid made its debut in 1998, it was a breakthrough product: Prior to that time, you couldn't simply drop a digital audio clip into a digital audio workstation track and "stretch" both tempo and pitch in real time. (Propellerhead Software had introduced the REX file format four years previously, which also allows for time and pitch stretching. However, it was a specialized file format, whereas Acid could work with any digital audio file and "Acidize" it - more or less successfully - for stretching.) Over the years other programs started to acquire similar capabilities, and as Sonic Foundry's fortunes declined, so did Acid's. However, Sony bought the Sonic Foundry family of programs in 2003, and started the rebuilding process. Acid's hard disk recording capabilities became on a par with those of other programs, and more recently, MIDI has been beefed up to where Acid can handle software synthesizers, MIDI automation, and external controllers with ease. In this article, we'll show how to add MIDI controller messages to MIDI tracks (for clarity, MIDI note data isn't shown).

Begin by selecting a MIDI track, then choosing "Automation Write (Touch)" from the Automation Settings drop-down menu. If you want to overwrite existing automation data instead of writing new data, choose Latch (right below the Touch option). Latching creates envelope points when you change a control; if you stop moving the control, its current setting overwrites existing envelope points until you stop playback.

You'll see four control sliders toward the bottom of the MIDI track. If you don't see the controller you want, click on a controller's label; this reveals a pop-up menu with additional controller options, and you can then select the desired controller from this menu. In the screen shot, Modulation is replacing Aftertouch.
As with other programs (e.g., Cakewalk Sonar), it's not necessary to enter record mode to record automation data. Simply click on the Play button, then click and drag the appropriate controller slider to create an automation envelope in real time. However, note that MIDI controllers can generate a lot of data. When computers were slower, this could sometimes cause problems because older processors couldn't keep up with the sheer amount of data. While this is less of an issue with today's fast machines, lots of tracks with controller data can "clog" the MIDI stream, particularly if you're driving external MIDI hardware rather than an internal software synthesizer. Acid has an option that lets you thin the amount of controller data. To do this, click on the Envelope button to the right of the controller's slider, then select "Thin Envelope Data" from the drop-down menu. What's more, Acid offers automatic smoothing/thinning of automation data. To set this up, go to Options > Preferences, click the External Control & Automation tab, and check "Smooth and thin automation data after recording or drawing." To add a point (what some other programs call a node) manually but still use the slider to set the value, choose the Pencil tool and click at the time where you want to add the point. Then, move the slider to change the newly-added point's value. To add a point manually that can be moved in any direction, place the cursor over the automation curve until it turns into a pointing hand, then double-click to create a point. Click and drag on the point to move it. In this example, a modulation value of 27 is being entered at measure 1, beat 2, 192 ticks. Another way to add an automation point is to right-click on the automation curve, and select "Add Point." Click and drag on the point to move it. Note that this same pop-up menu also lets you change the shape of the curve between points. In this example, Fast Fade has been chosen. 
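To illustrate what "thinning" means in principle (this is a generic sketch, not Acid's actual algorithm, which isn't published): drop any interior controller point whose value hasn't moved far enough from the last point you kept.

```python
def thin_cc(points, tolerance=2):
    """Thin a list of (tick, value) controller points.
    Keeps the first and last points, plus any interior point whose value
    moves at least `tolerance` away from the last point kept."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for tick, value in points[1:-1]:
        if abs(value - kept[-1][1]) >= tolerance:
            kept.append((tick, value))
    kept.append(points[-1])
    return kept

# A slow mod-wheel ramp recorded at every tick thins down dramatically:
ramp = [(t, t // 10) for t in range(100)]   # 100 points, values 0..9
thinned = thin_cc(ramp, tolerance=2)        # only a handful survive
```

The trade-off is the same one the article describes: a larger tolerance unclogs the MIDI stream but coarsens the controller curve, so thin external-hardware tracks more aggressively than soft-synth tracks.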
You can continue to add and edit automation until the automation "moves" are exactly as desired. So why bother? Because automation can add expressiveness to synthesizer parts by keeping sounds dynamic and moving, rather than static. The next step would be to add an external control surface, so you can create these changes manually using physical faders...but that's another story, for another time!
7. If You're not Yet Conversant with Programming SFZ Files, It's Time to Learn by Craig Anderton

The SFZ file format maps samples to virtual instruments, and is used primarily in virtual instruments made by Cakewalk, including DropZone, RXP, SFZ, Session Drummer 2, and LE versions of Dimension and Rapture. However, it's an open standard, and others (such as Garritan) are using it as well; furthermore, there's a free SFZ Player VST instrument, so you can create your own virtual instrument by creating an SFZ file, then loading it into the player. Overall, this is a protocol whose time has come, and we'll walk you through the basics.

THE SFZ FILE FORMAT

The SFZ file format is not unlike the concept of SoundFonts, where you can load a ready-to-go multisampled sound - not just the samples - as one file. Unlike SoundFonts, which are monolithic files, the SFZ file format has two components: a group of samples, and a text file that "points" to these samples and defines what to do with them. The text file describes, for example, a sample's root key and key range. But it can also define the velocity range over which the sample should play, filtering and envelope characteristics, whether notes should play based on particular controller values, looping, level, pan, effects, and many, many more parameters. However, note that not all SFZ-compatible instruments respond to all these commands; if you try to load an SFZ file with commands that an instrument doesn't recognize (possibly due to an SFZ version 2 definition file being loaded into an SFZ version 1-compatible instrument), the program will generate an error log in the form of a text message. Fortunately, nothing crashes, and the worst that can happen is that the file won't load until you eliminate (or fix, in the case of a typo or syntax error) the problematic command. It's worth mentioning that the SFZ spec is license-free, even for commercial applications. 
For example, if you want to sell a set of SFZ-compatible multisamples for use in the Cakewalk synths, you needn't pay any kind of fee or royalty.

WHY BOTHER LEARNING ABOUT SFZ FILES?

There are three main reasons:

1. If you like to create your own sounds, you can create far more sophisticated ones for SFZ-compatible instruments if you know how the SFZ format works. And, the files you create will load into other SFZ-aware instruments (particularly if you limit yourself to using commands from the version 1.0 SFZ spec).

2. By editing SFZ files, you can overcome some of the limitations in the LE versions of Rapture and Dimension included in Sonar. It's been pointed out that you can't adjust tuning in the LE versions...and you can't, which can be a real problem if you recorded a piano track where the piano was in tune with itself, but not tuned to concert pitch - and you want to add an overdub. However, you can edit the tuning of the SFZ file itself loaded into the instrument, and compensate for tuning that way.

3. SFZ files provide a way for cross-host collaboration. The SFZ Player (Fig. 1) included in Sonar, a VST plug-in that works in any VST-compatible host, is available as a free download.

Fig. 1: The SFZ Player is a free download that works in any VST-compatible host, not just Sonar.

As to why this is important, suppose you're using Sonar, a friend is using Ableton Live, and you want to collaborate on a part based on some samples you've grabbed. String those samples together into an SFZ file, have your friend download the player, send the SFZ file to your friend, and you can swap parts back and forth. You can even use really big samples, because the SFZ Player supports compressed Ogg Vorbis files. So, you can create a compressed, "draft" version of the SFZ file, then substitute a full version with WAV files when it's mixdown time.

CREATING YOUR FIRST SFZ FILE

Creating an SFZ file is not unlike writing code, but don't panic: It's easier than writing music! 
Despite the many commands, you don't need to learn all of them, and the syntax is pretty straightforward. Although you can "reverse-engineer" existing SFZ files to figure out the syntax, it's helpful to have a list of the available commands - you can find one at http://www.cakewalk.com/DevXchange/sfz.asp (Fig. 2), or check out Appendix A in the book "Cakewalk Synthesizers" by Simon Cann (published by Thomson Course Technology).

Fig. 2: All the opcodes (commands) for the 1.0 version of the SFZ spec are listed and described on Cakewalk's web site.

As an example of how the SFZ protocol can dress up a sample, suppose you've sampled a guitar power chord in D and extracted a wavetable from it - a short segment, with loop points added in an audio editor (we'll call the sample GuitWavetable_D1.WAV). It won't sound like much by itself, but let's create an SFZ file from it, and load it into SFZ Player.

Arguably the two most crucial SFZ concepts are "region" and "group." Region defines a particular waveform's characteristics, while Group defines the characteristics of a group of regions. For example, a typical Region command would be to define a sample's key range, while a typical Group command might add an attack time to all samples in an SFZ multisample. Another important element is the Comment. You can add comments to the definition file simply by adding a couple of slashes in front of the comment, on the same line; the slashes tell SFZ to ignore the rest of what's on the line.

Here's a suggested procedure for getting started with SFZ files.

1. Create a folder for the samples you plan to use. In this case, I called mine "GuitarWavetables."

2. Drag the sample(s) you want to use into the folder you created. In this example, I used only one sample to avoid complications.

3. Open up a text editor, like Notepad (the simpler, the better; you don't need formatting and other features that add extraneous characters to the underlying text file). 
If you do use a word processor like Word, make sure you save the file as plain MS-DOS text.

4. Add some comments (putting // before text turns it into a comment) to identify the SFZ file, like so...

// SFZ Definition File
// Simple Guitar Wavetable File

5. Let's turn this wavetable into a region that spans the full range of the keyboard. To do this we need to add a line that specifies the root key and the key range, and tells the file where to find the sample. Here's the syntax:

<region> pitch_keycenter=D1 lokey=C0 hikey=C8 sample=GuitWavetable_D1.WAV

That's all pretty obvious: pitch_keycenter is the root key, lokey is the lowest key the sample should cover, hikey is the highest key the sample should cover, and sample defines the sample's name. As the definition file and sample are in the same folder, there's no need to specify the folder that holds the sample. If the definition file is "outside" the folder, you'd change the sample= line to include the folder, like so:

sample=GuitarWavetables\GuitWavetable_D1.WAV

6. Save this text file under the file name you want to use (e.g., "GuitarPowerChordWave.sfz") in the GuitarWavetables folder. You could actually save it anywhere, but this way if you move the folder, the text definition file and samples move together. (Note that you can right-click on an SFZ file and "open with" Notepad - you don't have to change the suffix to TXT.)

7. Open up an SFZ-compatible instrument, like Dimension LE. Click in the Load Multisample window that says "Empty," then navigate to the desired SFZ file (Fig. 3). Double-click on it, and now you should hear it when you play Dimension. If you don't, there might be a typo in your text file; check any error message for clues as to what's wrong.

Fig. 3: Click in the Load Multisample field in Dimension or Rapture, and a Load Multisample browser will appear; navigate to what you want to load. The Garritan Pocket Orchestra samples for Dimension LE are a rich source of SFZ files. 
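The text-editor steps above can also be captured in a short script, for those who'd rather generate definition files programmatically. This is just a convenience sketch using the article's example names; the WAV file itself is assumed to already exist in the folder (the script writes the same plain text Notepad would):

```python
from pathlib import Path

# Step 1: the folder for the samples
# (assumed to already hold GuitWavetable_D1.WAV)
folder = Path("GuitarWavetables")
folder.mkdir(exist_ok=True)

# Steps 4-5: comments plus one full-range region
lines = [
    "// SFZ Definition File",
    "// Simple Guitar Wavetable File",
    "<region> pitch_keycenter=D1 lokey=C0 hikey=C8 sample=GuitWavetable_D1.WAV",
]

# Step 6: save the definition file next to the sample
(folder / "GuitarPowerChordWave.sfz").write_text("\n".join(lines) + "\n")
```

Because the .sfz sits inside the folder with its sample, the sample= opcode needs no folder prefix, exactly as described in step 5.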
TAKING IT FURTHER

Okay, we can play back a waveform...big deal. But let's make it more interesting by loading two versions of the same waveform, then detuning them slightly. This involves adding a tune= descriptor; we'll tune one down 5 cents, and the other up 5 cents. Here's how the file looks now:

<region> pitch_keycenter=D1 lokey=C0 hikey=C8 tune=-5 sample=GuitWavetable_D1.WAV
<region> pitch_keycenter=D1 lokey=C0 hikey=C8 tune=5 sample=GuitWavetable_D1.WAV

Now let's pan one waveform toward the right, and the other toward the left. This involves adding a pan= descriptor, whose value must be between -100 and 100. Next up, we'll add one more version of the waveform in the center of the stereo image, but dropped down an octave to give a big bass sound. We basically add a line like the ones above, but omit tune= and add a transpose=-12 command. Loading the SFZ file now loads all three waveforms, panned as desired, with the middle waveform dropped down an octave. But it sounds a little buzzy for a bass, so let's add some filtering, with a decay envelope. This is a good time for the <group> function, as we can apply the same filtering to all three oscillators with just one line. And here is that line, which should be placed at the top of the file:

<group> fil_type=lpf_2p cutoff=300 ampeg_decay=5 ampeg_sustain=0 fileg_decay=.5 fileg_sustain=0 fileg_depth=3600

Here's what each opcode means:

fil_type=lpf_2p: The filter type is a lowpass filter with 2 poles.
cutoff=300: The filter cutoff is 300 Hz.
ampeg_decay=5: The amplitude envelope generator has a decay of 5 seconds.
ampeg_sustain=0: The amplitude envelope generator has a sustain of 0 percent.
fileg_decay=.5: The filter envelope generator has a decay of 0.5 seconds.
fileg_sustain=0: The filter envelope generator has a sustain of 0 percent.
fileg_depth=3600: The filter envelope generator depth is 3600 cents (3 octaves). 
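A quick check of the arithmetic behind that last opcode: SFZ expresses pitch and filter-envelope depths in cents, where 1200 cents equals one octave (a frequency ratio of 2). This is generic math, not tied to any particular instrument:

```python
def cents_to_ratio(cents):
    """Convert cents to a frequency ratio: 1200 cents = one octave (x2)."""
    return 2.0 ** (cents / 1200.0)

# With cutoff=300 and fileg_depth=3600, the envelope can push the
# filter cutoff up three octaves from 300 Hz:
peak_cutoff = 300.0 * cents_to_ratio(3600)  # 2400.0 Hz
```

The same conversion explains the detuning earlier in this section: tune=5 raises pitch by a ratio of about 1.0029, a barely perceptible shift that thickens the doubled waveforms without sounding out of tune.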
As you work with SFZ files, you'll find they're pretty tolerant - for example, the sample names can have spaces and include any special characters other than =, and you can insert blank lines between lines in the SFZ definition text file. But one inviolable rule is that there can't be a space on either side of the = sign.

OVERCOMING LE-MITATIONS

Rapture LE and Dimension LE are useful additions to Sonar, but as playback-oriented instruments, they have limitations compared to the full versions. For example, with Dimension LE you can edit two stages of DSP, a filter, and some global FX, but nothing else; tuning, transpose, envelope attack, and other important parameters are off-limits. However, if the sound you want to load into either of these LE versions is based on an SFZ file, you can modify it well beyond what you can do with the instruments themselves. (Note that these instruments often load simple WAV or other file types instead of the more complex SFZ types; in this case, editing becomes more difficult because you have to first turn the WAV file into an SFZ file, and if you're going to put that much effort into programming, you might want to upgrade to the full versions that have increased editability.)

Let's look at a Dimension patch, Hammond Jazz 3. This loads an SFZ file called Hammond Jazz.sfz, so it's ripe for editing. We'll take that Hammond sound and turn it into a pipe organ by creating two additional layers, one an octave above the original sound, and one an octave below. We'll pan the octave-higher and main layers right and left respectively, with the lower octave panned in the middle. Then we'll tweak attack and release times, as well as add some EQ. Here's how.

1. To find the SFZ file, go to C:\Program Files\Cakewalk\Dimension LE\Multisamples\Organs and open Hammond Jazz.sfz in Notepad. 
Here's what it looks like:

<region> sample=Hammond Jazz\HBj1slC_2H-S.wav key=c3 hikey=f3
<region> sample=Hammond Jazz\HBj1slC_3H-S.wav key=c4 hikey=f4
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 hikey=f5
<region> sample=Hammond Jazz\HBj1slD_5H-S.wav key=d6 hikey=f6
<region> sample=Hammond Jazz\HBj1slC_6H-S.wav key=c7 hikey=f7
<region> sample=Hammond Jazz\HBj1slF#1H-S.wav key=f#2 hikey=b2 lokey=c1
<region> sample=Hammond Jazz\HBj1slF#2H-S.wav key=f#3 hikey=b3
<region> sample=Hammond Jazz\HBj1slF#3H-S.wav key=f#4 hikey=b4
<region> sample=Hammond Jazz\HBj1slF#4H-S.wav key=f#5 hikey=c#6
<region> sample=Hammond Jazz\HBj1slF#5H-S.wav key=f#6 hikey=b6

2. This shows that the SFZ definition file basically points to 10 samples, with root keys at various octaves of C or F#, and spreads them across the keyboard as a traditional multisample. Note that it doesn't use the pitch_keycenter= statement, for two reasons: First, Dimension LE doesn't recognize it, and second, the key= statement sets the root key, low key, and high key to the same value. You can add modifiers to this, like lokey= and hikey= statements, as needed.

3. Before this block of <region> statements, add a <group> statement as follows to modify all of these regions:

<group> ampeg_attack=0.2 ampeg_release=2 pan=-100 eq1_freq=4000 eq1_bw=2 eq1_gain=20

These parameters add an amplifier envelope generator attack of 0.2 seconds, an amplifier envelope generator release time of 2 seconds, a full-left pan, and one stage of EQ (with a frequency of 4kHz, a bandwidth of two octaves, and 20dB of gain).

4. Now we'll add another region an octave lower, and put a similar <group> statement before it. We'll simply reuse one of the existing samples; to minimize memory consumption, and because this sample plays more of a supportive role, we'll just stretch it across the full keyboard range. 
<group> ampeg_attack=0.2 ampeg_release=1 transpose=-12 eq1_freq=2000 eq1_bw=4 eq1_gain=20
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 lokey=c0 hikey=c8

The group statement is very similar to the previous one, except that the sample has been transposed down 12 semitones, the pan statement is omitted so the sample pans to center, and the EQ's center frequency is 2kHz instead of 4kHz. The sample's root key is C5, and it's stretched down to C0 and up to C8.

5. Next, we'll add the final new region, which is an octave higher. Again, we'll put a <group> statement in front of it.

<group> ampeg_attack=0.2 ampeg_release=1 transpose=12 pan=100
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 lokey=c0 hikey=c8

The group statement adds the familiar attack and release, but transposes up 12 semitones and pans full right. The region statement takes the same sample used for the octave-lower sound and stretches it across the full keyboard.

I should add that although we've made a lot of changes to the SFZ file, it's still being processed by Dimension LE's Hammond Jazz 3 patch. As a result, if you take this SFZ file and load it into SFZ Player, it won't sound the same, because it won't be using the various Dimension parameters that are part of the Hammond Jazz 3 patch.

ARE WE THERE YET?

Explaining all this on paper may make the process of creating SFZ files seem complex, but it really isn't, as long as you have the list of SFZ opcodes in front of you. After a while, the whole process becomes second nature. For example, I found an SFZ bass patch that produced a cool, sort of clav-like sound when I transposed it up a couple of octaves - but the attack sounded cartoonlike when transposed up that high. So, I just used the offset= command to start playback of the samples past the attack. And while I was at it, I added a very short attack time to cover up the click caused by starting partway through the sample, and a decay time to give a more percussive envelope. 
The editing took a couple of minutes at most; I saved the SFZ file so I could use this particular multisample again. Sure, creating SFZ files might not replace your favorite leisure-time activity - but it's a powerful protocol that's pretty easy to use. Modify some files to do your bidding, and you'll be hooked.
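Since SFZ definitions are plain text, they're also easy to manipulate programmatically. Here's a hypothetical helper (not part of any Cakewalk tool) that splits one SFZ line into its headers and opcodes. Note how it tolerates spaces inside sample names, as the format allows, while relying on the inviolable no-spaces-around-= rule mentioned earlier:

```python
import re

def parse_sfz_line(line):
    """Split one SFZ line into (headers, opcodes).
    Values may contain spaces (e.g. sample paths), so each value runs
    until the next opcode= token. Assumes no spaces around '='."""
    line = line.split('//')[0]                 # strip // comments
    headers = re.findall(r'<(\w+)>', line)     # e.g. ['region'] or ['group']
    body = re.sub(r'<\w+>', '', line)
    opcodes = {}
    tokens = list(re.finditer(r'(\w+)=', body))
    for i, m in enumerate(tokens):
        end = tokens[i + 1].start() if i + 1 < len(tokens) else len(body)
        opcodes[m.group(1)] = body[m.end():end].strip()
    return headers, opcodes
```

Run against one of the Hammond Jazz region lines above, it correctly returns the space-containing sample path ("Hammond Jazz\HBj1slC_2H-S.wav") as a single value, along with key and hikey, which is exactly the parsing subtlety a naive split-on-spaces approach would get wrong.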
8. Don't you just LOVE BBS software? I started the review and copied over some of the excellent posts on the VL4 that were in the VL2 thread, and something happened during one of the copy operations that nuked the VL4 thread. So I'm starting over...major apologies to those who had submitted such great tips; I'm really sorry the BBS killed them. I'll try to see if there's some way to recover them later tonight. Anyway, back to the VL4, which thankfully was not programmed by the same people who did the BBS software! The image shows an overview of the piece. You'll note there's a MusIQ switch to turn off the function that ties harmonies to your guitar playing, which means you can specify particular scales and keys. You'll also notice four footswitches instead of the two on the VL2: Effects, Harmony, and preset up/down. The other obvious difference is that there are a lot more editable parameters -- you'll see the familiar DigiTech "matrix of parameters" printed on the front panel. You have five knobs to tweak these (AFAIC a big improvement over the one knob/increment-decrement button approach), and three mix knobs.