Everything posted by Anderton

  1. Creating files that can stretch to the project tempo doesn’t have to be difficult By Craig Anderton Sonar was the first program after Sony Acid itself to allow turning standard WAV files into the “Acidized” format. This format adds metadata to the file that indicates where transients exist, and links these to tempo so that the file can “stretch” to accommodate tempos other than the tempo at which it was recorded. For example, you could drop a two-measure Acidized loop that was recorded at 120BPM into a project with a tempo of 140BPM, and the file will be shortened intelligently so that it lasts two measures at the faster tempo. Although Acidizing a file so it stretches over a wide range of tempos can be difficult with complex material, with simple rhythmic loops you can “Acidize” the file in a few easy steps if you’re primarily interested in speeding up rather than slowing down the tempo. That’s because it’s much more difficult to add material to a file so that it’s longer (the additional material has to be synthesized), compared to removing material to make it shorter, as is needed with a faster tempo. Fig. 1: Open the Loop Construction window. Begin by double-clicking on the file to be stretched, as this opens the Loop Construction window. Fig. 2: Click the Enable Looping button. Next, click the Enable Looping button. Sonar estimates the number of beats in the file; if correct, the Orig. BPM field displays the original tempo. If not, enter the correct number of beats in the Beats in Clip field. Fig. 3: Transient detection is an essential part of the Acidization process. For Transient Detect, enter 0 and hit Return. The transient markers that control slicing will jump to the value in the Slices field. Fig. 4: The Slices field lets you choose a rhythmic value that’s most compatible with the part. In the Slices field, choose the rhythmic value that matches the pattern (e.g., with a 16th note-based pattern, choose 16th notes). Fig. 
5: You can add markers for any transients that Sonar misses. Sonar detects transients automatically, but the process isn’t perfect. Acidization works best if there’s a transient marker on every transient, so if a beat doesn’t have a transient marker (like a 32nd note accent in a 16th note pattern), add a marker in the strip with the other marker triangles by double-clicking above the transient. Fig. 6: You can also remove transient markers if there isn’t really a transient. If there’s an unneeded transient marker (e.g., in the middle of a sustained eighth note cymbal crash where nothing happens underneath it), remove the marker by clicking on the Erase tool, then clicking on the marker you want to remove. Note that with many patterns—especially those generated by a drum machine or step sequencer—you probably won’t need to add or subtract transients, as 16th note slicing will work more often than not. Also note that if the drum pattern was played by a human, transients likely won’t fall right on the beat. Move them to line up with the transient start by clicking on the red triangle and dragging. Fig. 7: You can also have the file follow the project pitch. If the material is pitched (e.g., synth arpeggiation), click the Follow Project Pitch button and select the file’s original key from the drop-down menu. With unpitched material, leave this field grayed-out. Fig. 8: Save the file once it’s Acidized properly. Now it’s time to save the file. Click on the floppy disk button; the file will contain the additional stretching metadata. You can also save the file simply by dragging the clip to the desktop, but hold down the mouse button until the file is finished copying. By the way, if you’re creating a file from scratch that you want to stretch, choose a slow tempo, like 60 to 90BPM. As mentioned at the beginning, tempo-stretching works better for speeding up than slowing down. Craig Anderton is Editor Emeritus of Harmony Central. 
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
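Incidentally, the arithmetic Sonar applies when it conforms an Acidized loop is easy to sanity-check yourself. Here's a minimal sketch (Python, purely for illustration, not Sonar's actual code) using the 120-to-140BPM example above:

```python
# A sketch of the tempo-stretch arithmetic behind Acidization: a loop's
# duration scales with the tempo ratio, and each transient slice is
# repositioned proportionally.

def stretched_duration(orig_bpm, project_bpm, beats):
    """Duration in seconds of a loop conformed to the project tempo."""
    return beats * 60.0 / project_bpm

def slice_positions(orig_positions_sec, orig_bpm, project_bpm):
    """Shift each transient marker's time by the tempo ratio."""
    ratio = orig_bpm / project_bpm
    return [t * ratio for t in orig_positions_sec]

# A two-measure (8-beat) 120 BPM loop lasts 4.0 s; conformed to 140 BPM
# it must last 8 * 60 / 140, roughly 3.43 s, so material between slices
# is discarded rather than synthesized.
print(round(stretched_duration(120, 140, 8), 2))
```

At faster tempos every slice moves earlier and the gaps between slices shrink, which is exactly why speeding up (removing material) works better than slowing down (synthesizing material).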
  2. Hey all - I'm closing this thread because there hasn't been a new post since 2008, and it's becoming a magnet for spambots. If you have any questions or issues with pro reviews, you can PM me directly; if it's of general interest, I'll re-open the thread long enough to post it here.
  3. Use Live's Drum Rack feature to create complete, editable drum kits by Craig Anderton Starting with version 7, Ableton Live added a feature called the Drum Rack. The Drum Rack makes it easy to assemble complete drum kits, with processing, transposition, different sound characteristics, and other options. Although the Drum Rack is a very deep feature, this introduction will get you started with the basic concepts – we’ll create and edit a simple drum set, and sequence a quick drum part. Fig. 1: Drum Racks require MIDI tracks. In Session View, create a MIDI track or select an existing one. Fig. 2: Drag the Drum Rack from the Browser’s Instruments folder into the drop zone. Open the Browser if it isn’t already open. In the Browser toolbar toward the left side, click on the Live Devices button. Next, locate the Instruments folder, then unfold it. A Drum Rack instrument appears within the Instruments folder; click on the Drum Rack, then drag it into the “drop zone” section of the Track View Selector. Fig. 3: The Drum Rack note selector chooses the range of notes to which you can assign drum samples. The right side of the Drum Rack has a note selector scroll bar. Move the note selector up or down to select the desired note range where you can assign drum (or other) sounds. Fig. 4: Create your kit by dragging samples. In the Browser, locate the hits you want to collect into a drum kit. Drag the desired hits from the Browser into the Drum Rack device (you can drag one hit per “pad”). Fig. 5: Program a drum part. You’ll need a MIDI clip to create your drum sequence. If a MIDI clip doesn’t yet exist, double-click on a MIDI clip slot (not the slot’s Record button), then program the desired drum part within the MIDI Clip Overview. If you want to work with an existing MIDI clip, single-click on its clip slot; you’ll see the sequence in the MIDI Clip Overview. Fig. 6: You can edit particular drum parameters within the Drum Rack.
There are several editable parameters (volume, pan, MIDI note receive and play, choke for drum groups, solo, etc.) within the Drum Rack, called rack chain parameters. You can edit these parameters in the Track View Selector’s Chain List (click on the button shown circled in red), or the Session View Overview (click on the button shown circled in blue at the top of the “parent” drum track). Choosing the Session View Overview option “unfolds” additional mixer channels that correlate to the various drum sounds. Fig. 7: Here’s how to edit drum sound parameters. To edit an instrument’s sound, click on the Track View Selector, then click on the “pad” with the instrument you want to edit. In the “mini toolbar” toward the left, under the Device Title Bar, enable Show/Hide Devices to see the drum sound parameters for editing. Note that if you hide Macro Controls and Chain List (these are hidden in the screen shot), you can see more of the instrument editor without having to scroll in the Track View Selector. Note that Drum Racks have their own sends and return chains. In the “mini toolbar” toward the left, under the Device Title Bar, click on the Show/Hide Sends and Show/Hide Return Chains to see these. To create a return chain, simply drag an effect into the space indicated when showing Return Chains. And of course, if you’re going to do all this work creating a Drum Rack, you’ll want to save it. To save the current Drum Rack, click on the save button (floppy disk icon) toward the right side of the Drum Rack’s Device Title Bar. Resources: Electronic Drums Buying Guide
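Under the hood, a Drum Rack is essentially a mapping from MIDI notes to per-pad sample chains. Here's a minimal conceptual sketch (Python for illustration only; the file names are hypothetical, and the note numbers follow the standard General MIDI drum map, not anything Live mandates):

```python
# Conceptual sketch of a Drum Rack: each MIDI note (pad) maps to one
# sample. File names here are hypothetical placeholders.

drum_rack = {
    36: "kick.wav",       # C1 -- GM kick drum
    38: "snare.wav",      # D1 -- GM snare
    42: "closed_hat.wav", # F#1 -- GM closed hi-hat
}

def trigger(note):
    """Return the sample a pad would play, or None if unassigned."""
    return drum_rack.get(note)

print(trigger(38))
```

The real Drum Rack adds per-pad volume, pan, choke groups, and effect chains on top of this basic note-to-sound lookup, but the lookup is the core idea.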
  4. Here's how to use Eleven Rack with ASIO programs including Sonar, Live, Acid, Mixcraft, Record, and Cubase by Craig Anderton The success of Avid’s Eleven Rack has exceeded expectations, and it’s not undeserved. I did a Pro Review on it, and over the course of doing the review, enjoyed a real chemistry with it—it’s a fine, well-thought-out unit. However, one of the first questions in the Pro Review was whether Eleven Rack would work with other DAWs, because quite a few people had the misconception that it worked only with Pro Tools LE. The answer is yes—and no. So, let’s look at the pros and cons of using Eleven Rack with other, non-Avid programs. SO WHAT IS ELEVEN RACK, ANYWAY? For those who need a quick refresher course, Eleven Rack (Fig. 1) is a combination live performance multieffects and USB interface that comes with Pro Tools LE. Fig. 1: You can think of Eleven Rack as a multieffects that just happens to work as an interface for Pro Tools LE, or a Pro Tools LE interface you can just happen to take onstage as a multieffects. Although based on Avid’s Eleven amp simulator plug-in, the rack version is something else altogether. It’s sturdily built, has guitar/mic/MIDI I/O, and some effects emulations that are up there with anything else I’ve heard. It has a high fun factor, and the cost is reasonable considering that it can do double-duty for stage and studio. And while it serves as an Avid-approved device that allows you to run Pro Tools LE, as with other modern Avid interfaces it also works as an ASIO interface with other DAWs. FIRST, THE BAD NEWS The biggest problem is that Eleven Rack's on-screen editor loads under Pro Tools; it's not going to work in any other program, or for that matter, in Pro Tools 11. Note: Since January 2014, the on-screen editor has been removed from Pro Tools and now operates as a stand-alone editor that's compatible with 64-bit Mac and Windows systems. For more information, please refer to the FAQ. Fig.
2: You won’t see this GUI in anything other than Pro Tools, but note that you won't see it in Pro Tools 11 and up. But, there’s a mitigating factor. Because Eleven Rack was designed to double as a live performance rig, front-panel editing is surprisingly painless. You don’t get the cool graphics and all, but if you want to call up presets and tweak them, you can certainly do so without the computer editor. In most other respects, though, you don’t give up anything by using Eleven Rack with other DAWs. USING ELEVEN RACK AS AN ASIO INTERFACE We’ll talk about using Eleven Rack with Sonar, Live, Acid, Mixcraft, Record, and Cubase, but the principle of operation is pretty much the same for any ASIO-compatible program. Note: There are a lot of screen shots in this article due to covering so many different programs, so I’ve made them small to avoid them taking over the page. INPUT DRIVERS Eleven Rack presents four input drivers to an ASIO DAW. We’ll start with Sonar because it shows these very clearly (Fig. 3). Fig. 3: Eleven Rack’s input and output drivers, shown in Sonar’s Audio Options window. Note that these can all be used simultaneously, e.g., you can feed what’s going into the Eleven Rack S/PDIF input to one DAW track, the line in to another track, and the guitar being processed through the rack to yet another track. Here’s the story on the four drivers. Remember, you need to enable Input Monitoring for the track you’re feeding with Eleven Rack to hear its output. Eleven Rack Guitar/Mic In. The guitar comes in on the left channel, and the mic on the right. This is basically a DI connection to the guitar, and is very useful if you want to drive amp sims within your DAW. If you want guitar only, choose the left input and if you want only mic, choose the right. To select only the straight guitar/mic sound and send it to your DAW: 1. From the Eleven Rack main screen that shows the presets, press and hold the Edit/Back button to access the User Options. 2. Use the Scroll wheel to select Rig Input. 3. Press SW1. 4. Adjust the second knob from the left (red indicator) so the display shows anything except RIG INPUT: Guitar or RIG INPUT: Mic; otherwise, you’ll hear the guitar or mic through the rack’s processing. 5. Once that’s set, hit Edit/Back twice to get to the main editing page. If you want to use Eleven Rack's True-Z input feature (a clever way to emulate the loading of various types of amps and processors) for your guitar before feeding your DAW or any subsequent amp sims—yet still go essentially direct—there’s an easy workaround. Choose the Guitar input as described in Step 4 above, but then bypass all the effects: 1. Press the Edit/Back button once. 2. Turn the Scroll wheel to highlight any effect other than Input (e.g., Mod, Dly, etc.). 3. Press SW1 to edit the effect controls, and use SW1 to choose Bypass. 4. Now use the Scroll wheel to scroll through the various effects in the chain, and set each one to Bypass. 5. Hit the Edit/Back button again. 6. Turn the Scroll wheel counter-clockwise until the display shows the Input stage as being selected, then rotate the second control from the left to choose the desired True-Z input characteristics. Eleven Rack Rig In. This driver picks up the Eleven Rack output—what you hear at the Rack out is what you get. The Rig Input selector obviously makes a difference here, because what you select at the input (Guitar, Mic, Line or Digital) is what will be feeding Eleven Rack, and therefore, what you’ll hear coming in to your DAW. Eleven Rack Digital In. This picks up whatever’s feeding the S/PDIF or AES/EBU input, independently of what’s going on with the rack. Eleven Rack Line Input. This picks up whatever’s feeding the 1/4” phone or XLR line inputs, independently of what’s going on with the rack. Now let’s see how other programs handle this. Fig. 4: With Sony Acid, choose Eleven Rack ASIO as the audio device type.
In this screen shot, the Guitar In/Mic In driver is being chosen as the default audio recording device, but you can change this within any Acid audio track using the Record Input selector. Fig. 5: Use Cubase’s Device Setup to specify Eleven Rack as the VST Audio System. Under VST Connections – Input, add buses that correspond to the inputs you want to use. These will then show up as “Active” in the Device Setup window under “State.” Fig. 6: In Mixcraft Preferences, choose the default recording device (here, it’s the Rig itself) and default output. Like Acid, you can modify the input within individual tracks. Fig. 7: Input settings are obvious with Propellerhead Record. Under Preferences, click on the Active Input Channels button, and enable the Eleven Rack inputs you want to use. Ableton Live is a special case, because it doesn’t name the drivers. Setting Preferences is straightforward for the audio device, but the input configuration requires some explanation (Fig. 8). Fig. 8: With Live, enable the inputs you want to use from Eleven Rack in the input config section. In Live, 1/2 is the Guitar/Mic input. 3/4 is the Rig, 5/6 the Digital in, and 7/8 the Line in. If you want to use the left and right channels as independent inputs, you also need to select the equivalent Mono inputs. In the screen shot, 1 and 2 are also selected as possible mono inputs so that only the guitar or only the mic can be selected. OUTPUT DRIVERS If you know how to choose the input drivers in your program of choice, then you almost certainly know how to set the output drivers, so there’s no need to go into that here. The output choices are pretty straightforward, with one exception. Main Out. This is what you’d typically use to monitor what’s happening with your DAW, e.g., if you have headphones plugged into the Phones jack, and the Volume control up. Digital Out. This sends the signal to the digital outs, which presumably feed a monitoring system with a digital audio input. Re-Amp.
And this is the exception...so let’s get into re-amping with Eleven Rack. RE-AMPING This has always been one of Eleven Rack’s strongest features, and fortunately, it translates to programs other than Pro Tools. Here’s the process (Fig. 9). Fig. 9: Re-amping with Eleven Rack in Sonar. Other programs follow the same basic principle. 1. For the track you want to re-amp (presumably straight guitar, but why be normal? It could be anything!), set its output to ReAmp instead of, for example, the master audio output. Input echo should not be on for this track. The screen shot shows this selected for track 1. 2. Set Eleven Rack’s Rig Input to RIG INPUT: Re-Amp. 3. Create a track to record the re-amped sound, and set its input to Stereo Eleven Rig (track 2 in the screen shot). This track’s output will usually go to the Master bus, e.g., Eleven Main Out, so you can monitor it. 4. Enable Input Echo on this new track so you can hear the re-amped sound. 5. Start recording from the beginning of the track to be re-amped, and you’ll record the re-amped sound into the new track. Now you can re-amp through Eleven Rack’s processors. Hint: Check out the article on how to make amp sims sound more organic, where you put a parametric EQ after the sim output to tame “rogue” resonances—it makes all the difference in the world in terms of “smoothing” out the sound. MIDI I/O Eleven Rack does MIDI too, but you need to be a little careful here. There are two flavors of MIDI. If you look at the available MIDI devices, you’ll see two associated with Eleven Rack. Eleven Rack: This does MIDI-over-USB for MIDI parameter control of the Eleven Rack. It has nothing to do with the 5-pin DIN connectors on the back. External: This works with the 5-pin MIDI DIN connectors. For example, suppose you have a master keyboard with a physical MIDI out. You would patch this into the Eleven Rack’s physical MIDI in jack, then enable External as a MIDI input so that your DAW can record the data coming from your keyboard.
Here’s how to select the 5-pin MIDI I/O for various programs. Fig. 10: In Sonar, both Eleven Rack and External are selected in the MIDI devices window. Fig. 11: Acid’s Preferences has a tab for MIDI. In this screenshot, Eleven Rack and External are selected. Fig. 12: Cubase’s Device Setup page lets you click on MIDI port setup, which shows the Eleven Rack and External options. These become active when used in a project. Fig. 13: Mixcraft’s Preferences page allows choosing External as your port for MIDI recording. Fig. 14: With Record, you choose a control surface, then the port to which it connects. Fig. 15: Access the MIDI section of Live’s Preferences page to assign External as the source for MIDI input. And there you have it—how to use Eleven Rack with a variety of ASIO programs. If your favorite program isn’t shown here, the procedure for doing assignments will be similar.
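Since Live doesn't name the Eleven Rack input pairs, it can be handy to restate the channel mapping described above as a lookup table. This is just an illustrative sketch (Python), not anything Live or Avid actually provides:

```python
# The four Eleven Rack input drivers, keyed by the unnamed channel
# pairs Live presents, as described in the article.

ELEVEN_RACK_INPUTS = {
    (1, 2): "Guitar/Mic In",  # guitar = left (1), mic = right (2); DI feed
    (3, 4): "Rig In",         # the rack's processed output
    (5, 6): "Digital In",     # S/PDIF or AES/EBU input
    (7, 8): "Line In",        # 1/4" phone or XLR line inputs
}

def input_name(channel):
    """Find the driver name for a 1-based Live input channel."""
    for pair, name in ELEVEN_RACK_INPUTS.items():
        if channel in pair:
            return name
    raise ValueError("channel %d not provided by Eleven Rack" % channel)

print(input_name(3))
```

Remember that to record only the guitar or only the mic, you also enable channels 1 and 2 as mono inputs in Live's input config, per Fig. 8.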
  5. Do loop-based, time-stretched music with Pro Tools by Craig Anderton Pro Tools 7.4 introduced the “Elastic Audio” feature, which allows using audio loops of various tempos within a single Pro Tools project. These loops aren’t just limited to working with constant tempos, but can also follow tempo changes. Elastic Audio has some pretty deep features, and can do far more than just simple time-stretching. For now, we’ll keep things simple with a basic automatic warping application for Pro Tools 8 that integrates loops of varying tempos within a Pro Tools project. However, note that you can obtain even more detailed control with manual warping, using the Warp Track view. Fig. 1: Going to the workspace is the first step in navigating to loops. (Click to enlarge.) The first step is to find the loops you want to use. With a Pro Tools project open, go Window > Workspace (Fig. 1). Fig. 2: Locate the first file you want to use in your project. (Click to enlarge.) Navigate to the loop you want to use in the project (Fig. 2). This can be an AIF, WAV, Acidized WAV, or RX2 format file; it doesn’t matter, because Elastic Audio will condition the file. If needed, click on a folder’s expand button to reveal its contents. Fig. 3: The speaker button lets you audition the selected file at its native tempo. (Click to enlarge.) You can audition the file to confirm that you want to use it. Click on the Preview button (the speaker button, either the one toward the top of the window, or under the “Waveform” column; see Fig. 3) to audition the loop at its native tempo. When you do this, the speaker icon turns green during playback (Fig. 4), and you’ll see the waveform drawn in the Waveform column. Note that you can click anywhere on this waveform to begin playback from that point. Fig. 4: The “Conform to Tempo” button lets you audition the file at the project’s tempo. (Click to enlarge.)
Click the “Conform to Tempo” button (the one that looks like a metronome, next to the level meter – see Fig. 4) to hear the loop at the current project tempo. Fig. 5: Drag and drop the file into the Track List. (Click to enlarge.) Now drag the file into the Track List (Fig. 5). A track containing the loop appears automatically in the Edit and Mix windows. If there are no tracks in the session, and the file is tick-based, a dialog box will appear asking if you want to import the file’s tempo. If you answer “Don’t Import,” the session will use the existing session tempo. Fig. 6: It’s easy to specify the number of loop iterations. (Click to enlarge.) Right-click on the loop, choose Loop, and you can specify the number of loop iterations as well as some other loop parameters (Fig. 6). Editing these isn’t necessary to conform the loop to tempo. Note that if you change the tempo (by turning off the Conductor track and entering a new tempo or using the tap tempo feature), the loop will conform to the selected tempo. Incidentally, Elastic Audio analysis is based on finding transients. If these are ambiguous (e.g., a string pad), an Analysis view allows adding or removing transient markers to optimize the stretching process.
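Since Elastic Audio's analysis is based on finding transients, it's worth understanding the general idea: flag the points where short-term energy jumps sharply compared to the previous frame. This toy sketch (Python; emphatically not Avid's actual algorithm, and the frame size and threshold are made-up values) shows the concept:

```python
# Toy energy-based onset detection: compare each frame's mean energy to
# the previous frame's, and mark a transient where it jumps sharply.

def detect_transients(samples, frame=512, threshold=2.0):
    """Return sample indices of frames where energy rises sharply."""
    onsets = []
    prev_energy = None
    for i in range(0, len(samples) - frame, frame):
        energy = sum(s * s for s in samples[i:i + frame]) / frame
        if (prev_energy is not None and prev_energy > 0
                and energy / prev_energy > threshold):
            onsets.append(i)
        prev_energy = energy
    return onsets
```

This also makes clear why a string pad gives ambiguous results: with no sharp energy jumps, the detector has nothing solid to latch onto, which is when the Analysis view's manual markers earn their keep.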
  6. Take advantage of Digital Performer’s pitch correction options by Craig Anderton Pitch correction is controversial, probably because too many people overuse it too much too many times, and that identifiable “robot” quality gets old after a while. But there’s no law that says you have to overuse it; subtle pitch correction can take that vocal that would be perfect if only the singer hadn’t gone flat on one note, fix the note, and retain all the feel of the original. Although pitch correction is available as a plug-in from companies like Antares and Celemony, some DAWs include built-in pitch correction...and MOTU’s Digital Performer is one of them. Because Digital Performer has very sophisticated pitch correction options for monophonic vocals and instruments, it’s easy to overlook the fact that you can make simple, effective vocal edits very quickly by drawing in correction curves with the pencil tool. Although there are different ways to do this, here’s one possible workflow. Fig. 1: Selecting Memory Cycle makes it easier to work on a specific vocal section. Begin by selecting Memory Cycle (Fig. 1) and loop the area you want to edit. Although you can work on the entire vocal, it’s often more efficient to isolate specific areas that need fixes; looping that section makes the process even easier. Fig. 2: Select the appropriate soundbite. After looping the section with the vocal that you need to fix, select the soundbite (Fig. 2) that requires processing. Fig. 3: Choose the desired processing mode. You have a variety of pitch correction options; for this type of application, it’s generally best to choose “Set Pitch Mode for Track and Selected Bites” unless you need different pitch modes for the bites in a track. Then, set the processing mode to “Vocals” (Fig. 3). It’s important to do this step before making any adjustments, as Digital Performer needs to analyze the vocal. The processing mode influences the analysis process. Fig.
4: Choosing the Sound File tab gives a good view of what you need to edit. Click on the Sound File tab (Fig. 4). Although you can edit pitch in the Sequence view by selecting Pitch (instead of Soundbites, Volume, Pan, etc.), I prefer the more expansive Sound File view. Fig. 5: When you choose Pitch, you can see the pitch curves easily. Click on the Pitch tab (Fig. 5). However, you probably won’t see the pitch curves immediately; use the scroll bar on the left to bring the pitch curves into view. Fig. 6: Choose Tools next. Go to the Studio menu, then select Tools (Fig. 6). This lets you use the pencil. Fig. 7: The Pencil tool can draw in a new pitch curve. Use the Pencil tool to modify the pitch correction curve (Fig. 7) as much or as little as you want. Be careful not to turn everything into a straight line; real singers have slight pitch changes that add interest to a vocal. While this technique is a great way to do touch-ups on vocals, Digital Performer has additional ways to tailor the pitch analysis process, as well as multiple methods (including pitch quantization) you can use to modify pitch. It’s well worth spending some quality time with the manual—and experimenting!—to exploit these features to the fullest.
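If you're curious how "flat on one note" translates into numbers, pitch offsets are measured in cents: 1200 times the base-2 log of the ratio between the sung and target frequencies. A quick sketch (Python, purely for illustration):

```python
import math

# Pitch error in cents between a sung frequency and its target.
# A correction curve drives this difference toward zero -- though, as
# noted above, not all the way if you want the vocal to stay human.

def cents_off(f_sung, f_target):
    return 1200.0 * math.log2(f_sung / f_target)

# A singer hitting 435 Hz instead of A440 is roughly 20 cents flat --
# clearly audible on a sustained note, and a prime pencil-tool target.
print(round(cents_off(435.0, 440.0), 1))
```

A semitone is 100 cents, so small curve edits of a few cents are far subtler than snapping notes to the nearest semitone with pitch quantization.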
  7. Are Amp Sims for You? Before Deciding, Check Out These Tips By Craig Anderton Guitar amp simulators are controversial. Some say they don’t sound or feel like real tube amps. Others say sims deliver sounds you can’t get any other way. Guess what? They’re both right. For the sound of a Fender Twin, play through a Fender Twin—but if you want a Fender Twin layered with a Plexi-type head going through a Peavey cabinet, with part of the sound filtered in time with the drums and the guitar’s bottom two strings going through an octave divider...believe me, you’re better off with amp sims. 10 THINGS YOU NEED TO KNOW ABOUT AMP SIMS Latency is becoming a non-issue. If you got turned off to sims because of latency—the delay between hitting a note and hearing it—take another listen. With today’s fast computers, the delay can be well under 10ms. In practical terms, that’s the same delay as having your ears 10 feet away from your amp. Peavey’s ReValver MkIII even offers two modes, one for minimum latency while playing, the other for maximum fidelity during playback. Peavey’s ReValver MkIII has a “Real-time” mode for minimum latency, and a “mixdown” mode for maximum fidelity. Sims do re-amping for free. When you load a sim into software like Pro Tools, Sonar, Live, etc., you’re not recording the processed sound: You’re recording the dry sound of the guitar, and monitoring through the sim while playing back or recording. So, you can change your guitar’s sound right up to the final mixdown by changing the sim’s settings. You must tweak the presets. When I first try out a sim, I always think it sounds terrible—until I tweak the presets to match my playing style with my guitar. If a preset was created by someone using a single coil and light gauge strings and you’re using a humbucker with heavy gauge strings, it won’t sound as intended. If possible (especially in stand-alone mode), run the sim at 88.2kHz or 96kHz.
While I think 44.1kHz is fine for listening to CDs, running a sim at a higher sample rate allows it to reproduce distortion characteristics with better fidelity. Try it; I bet you’ll hear a difference. Many sims have “high resolution” options—use them. Recognizing that sims suck a lot of juice from computers, programs like IK AmpliTube and Native Instruments’ Guitar Rig have options that provide higher fidelity, but increase the load on your computer. Use these unless they load down your computer so much the audio starts to glitch. IK’s AmpliTube has three oversampling options and a high resolution option—if your computer can handle the load, check them all for the best fidelity. Download any available updates. As computers have become more powerful, designers have tweaked their simulation algorithms to take advantage of the extra power. The result: Better effects, and a sweeter sound. Today’s sims sound way better than ones from even just a few years ago. Sims are not an “all or nothing” proposition. Miss the sound of speakers in a cabinet pumping air? Then bypass the sim’s cabinet, and feed the sim preamp output into your physical amp. Conversely, if you love your pedalboard but hate carrying around amps and cabinets, plug the pedalboard output into the sim input, use only the sim amp and cab, then plug the sim output into a mixer or PA system. Want to use a physical cabinet and speakers? In Waves’ GTR, bypass the emulated cabinet, and send the output to a guitar amp. Watch levels like a hawk. Make sure your sims never go “into the red.” This creates nasty digital distortion that is totally unlike the “good” distortion you get from a tasty preamp or amp. Guitar Rig even has a “learn” function (as do other sims): Play your very loudest, and Guitar Rig will adjust levels automatically. Native Instruments Guitar Rig’s Learn function optimizes level automatically to avoid unwanted digital distortion. Amp sims are not just for guitars.
A typical amp sim program includes a bunch of effects—chorusing, delay, pitch shifting, reverb, and more. I’ve used sims with great results on vocals, drums, and keyboards. In fact, amp sims were the "secret ingredient" in my Turbulent Filth Monsters sample library of twisted drum sounds. Sorry, but there’s no “best” sim. The algorithms that create amp sounds are as much art as science. Just as I have several guitars, I have several amp sims because each has its own character: Some might excel at clean tones, others at distortion, and still others might have great effects but not-so-hot amps. Sometimes I even put two amp sims in series so I can use the preamp and effects from one sim and the amp and cabinet from another. Sims are definitely not as “plug and play” as standard amps, which have far fewer options. But dig deep enough and learn to tweak: You’ll get some mind-blowing sounds that are impossible to achieve any other way.
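About the latency tip: the delay-equals-distance comparison is simple arithmetic, since the round number comes from buffer length divided by sample rate, and sound covers roughly 1125 feet per second. A quick sketch (Python; the 256-sample buffer is just an example figure, not a recommendation):

```python
# Buffer size in samples -> monitoring delay in milliseconds, and the
# equivalent "distance from your amp" at the speed of sound
# (~343 m/s, or ~1125 ft/s at room temperature).

def buffer_latency_ms(buffer_samples, sample_rate):
    return 1000.0 * buffer_samples / sample_rate

def equivalent_distance_ft(latency_ms, speed_ft_per_s=1125.0):
    return speed_ft_per_s * latency_ms / 1000.0

# A 256-sample buffer at 44.1 kHz is about 5.8 ms of buffer delay --
# like standing roughly 6.5 ft from your amp. At 10 ms, the equivalent
# distance is about 11 ft, the ballpark behind the "10 feet" figure.
ms = buffer_latency_ms(256, 44100)
print(round(ms, 1), round(equivalent_distance_ft(ms), 1))
```

Note this counts only the audio buffer; converter and plug-in latency add a bit more, which is why "well under 10ms" takes a fast computer and a small buffer.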
8. This Simple Technique Can Make Amp Sims Sound Warmer and More Organic

by Craig Anderton

All amp sims that I've used exhibit, to one degree or another, what I call "the annoying frequency." For some reason this seems to be inherent in modeling, and it adds a sort of "fizzy," whistling sound that I find objectionable. It may be the result of pickup characteristics, musical style, playing technique, etc. adding up in the wrong way and emphasizing a resonance, or it may be something else...but in any event, it detracts from the potential richness of the amp sound. This article includes audio examples from Avid's Eleven Rack and Native Instruments' Guitar Rig 4, but I'm not picking on them - almost every amp sim program I've used has at least one or two amps that exhibit this characteristic. It also seems like an unpredictable problem; one amp might have this "fizz" only when using a particular virtual mic or cabinet, but the same mic or cabinet on a different amp might sound fine.

Normally, if you found this sound, you'd probably just say "I don't like that" and try a different cabinet, amp, or mic (or change the amp settings). But you don't have to if you know the secret of fizz removal. All you need is a stage or two of parametric (not quasi-parametric) EQ, a good set of ears, and a little patience.

BUT FIRST...

Before getting into fizz removal, you might try a couple of other techniques. Physical amps don't have a lot of energy above 5kHz because of the physics of cabinets and speakers, but amp sims don't have physical limitations. So even if the sim is designed to reduce highs, you'll often find high-frequency artifacts, particularly if you run the sim at lower sample rates (e.g., 44.1kHz). One way to obtain a more pleasing distorted amp sim sound is simply to enable any oversampling options; if none are available, run the sim at an 88.2kHz or 96kHz sample rate. Another option is removing unneeded high frequencies.
Many EQs offer a lowpass filter response that attenuates levels above a certain frequency. Set this for around 5-10kHz, with as steep a rolloff as possible (specified in dB/octave; 12dB/octave is good, 24dB/octave is better). Vary the frequency until any high-frequency "buzziness" goes away. Similarly, it's a good idea to trim the very lowest bass frequencies. Physical cabinets—particularly open-back cabinets—have a limited low frequency response; besides, recording engineers often roll off the bass a bit to give a "tighter" sound. A quality parametric EQ will probably have a highpass filter function. As a guitar's lowest string is just below 100Hz, set the frequency for a sharp low-frequency rolloff around 70Hz or so to minimize any "mud."

FIZZ/ANNOYING FREQUENCY REMOVAL

Although amp sims can do remarkably faithful amp emulations, with real amps the recording process often "smooths out" undesirable resonances and fizz due to miking, mic position, the sound traveling through air, etc. When going direct, though, any "annoying frequencies" tend to be emphasized. Please listen to this audio example on the Harmony Central YouTube channel. The sound is from Avid's Eleven Rack; the combination of the Digidesign Custom Modern amp, 2x12 Black Duo Cab, and on-axis Dyn 421 mic creates a somewhat "fizzy" sound. Listen carefully while the section plays that says original file, and you'll hear a high, sort of "whistling" quality that doesn't sound at all organic or warm, but "digital." Follow these steps to reduce this whistling quality.

1. Turn down your monitors, because there may be some really loud levels as you search for the annoying frequency (or frequencies).

2. Enable a parametric equalizer stage. Set a sharp Q (resonance), and boost the gain to at least 12dB.

3. Sweep the parametric frequency as you play. There will likely be a frequency where the sound gets extremely loud and distorted, more so than at any other frequency. Zero in on this frequency.

4.
Now use the parametric gain control to cut gain, thus reducing the annoying frequency.

In the part of the video that says sweeping filter to find annoying frequency, I've created a sharp, narrow peak to localize where the whistle is. You'll hear the peak sweep across the spectrum, and while the sharp peak is sort of unpleasant in itself, toward the end (in the part that says here it is!) you'll note that it's settled on that whistling sound we heard in the first example. In this case, after sweeping the parametric stage, the annoying whistle is centered around 7.9kHz. In the next example that says now we'll notch it out, you'll hear the whistle for the first couple seconds, then hear it disappear magically as the peak turns into a notch (check out the filter response in Fig. 1). Note how the amp now sounds richer, warmer, more organic, and just plain more freakin' wonderful. A little past the halfway point through the clip, I switched the filter out of the circuit so the response was flat (no dip). You'll hear the whistle come back.

Fig. 1: Here's what was used to remove the fizz. This single parametric notch makes a huge difference in terms of improving the sound quality.

DUAL NOTCH TECHNIQUES AND EXAMPLES

Sometimes finding and removing a second fizz frequency can improve the sound even more; check out Example 2 in the video. First you'll hear the original file from Guitar Rig's AC30 emulation. It sounds okay, but there's a certain harshness in the high end. Let's find the fizzy frequencies and remove them, using the same procedure we used with the Eleven Rack. After sweeping the parametric stage, I found an annoying whistle centered at 9,645 Hz. The part called annoying frequency at 9645 Hz uses the parametric filter to emphasize this frequency, while the part labelled notch at 9645 Hz has a much smoother high end. But we're not done yet; let's see if we can find any other annoying frequencies.
The section labelled annoying frequency at 5046 Hz again uses a filter to emphasize this frequency. The next section, with notches at 9645 Hz and 5046 Hz, has notches at both frequencies (Fig. 2). Compare this to original file at the end, without any notches; note how the version without notches sounds more "digital," and lacks the "warmth" of the filtered versions.

Fig. 2: The above image shows the parametric EQ notches that were applied to the signal, using the Sonitus EQ in Cakewalk's SONAR DAW.

MUCH BETTER!

Impressive, eh? This is the key to getting good amp sim sounds. Further refinements on this technique:

Experiment with the notch bandwidth. You want the narrowest notch possible that nonetheless gets rid of the whistle; otherwise you'll diminish the highs...although that may be what you want. As I said, experiment!

Some amp sims exhibit multiple annoying frequencies. Sometimes three notches is perfect. Generally, the more notches you need to use, the narrower you need them to be.

When you're done, between the high/low frequency trims and the midrange notches, your amp sim should sound smoother, creamier, and more realistic. Enjoy your new tone!

______________________________________________

Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
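For readers who want to experiment offline, the EQ moves from this article (steep high/low trims plus a narrow notch at the annoying frequency) can be sketched with SciPy. The 7.9kHz center comes from the Eleven Rack example above; the filter orders, corner frequencies, and Q value are my assumptions, so tune them by ear exactly as the article describes.

```python
# Sketch of the article's EQ moves, under assumed settings:
# a 24dB/octave lowpass near 8kHz, a sharp highpass near 70Hz,
# and a narrow notch at the "annoying frequency."
import numpy as np
from scipy import signal

FS = 44100  # sample rate in Hz

# 4th-order Butterworth = 24dB/octave; tames high-end buzziness
lp_sos = signal.butter(4, 8000, btype="lowpass", fs=FS, output="sos")
# Sharp highpass around 70Hz trims low-end "mud"
hp_sos = signal.butter(4, 70, btype="highpass", fs=FS, output="sos")
# Narrow notch at the frequency found by sweeping a boosted peak
b_notch, a_notch = signal.iirnotch(7900, Q=12, fs=FS)

def defizz(audio):
    """Apply both trims and the fizz notch to a mono float array."""
    audio = signal.sosfilt(lp_sos, audio)
    audio = signal.sosfilt(hp_sos, audio)
    return signal.lfilter(b_notch, a_notch, audio)
```

A higher Q gives a narrower notch; per the article, use the narrowest notch that still kills the whistle so you don't dull the highs.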
9. Ableton Live Isn't Just about Being a DAW...as You'll Soon Find Out

by Craig Anderton

Many people use Live, Ableton's groove and digital recording software, as a musical instrument - even more so now that Ableton's Push controller has been released. But it also makes a fine host for signal processors, whether in the studio or for live performance. The only real caution is the same as with any computer-based setup: If you plan to feed real-time audio into a Live track, you need latency that's as low as possible. Multiple processor cores and plenty of RAM are key, as is an interface with well-written drivers. Once you have a low-latency system you can process the signal with Live's built-in effects, or other compatible effects (VST with Windows, Audio Units with Mac). What's more, it's easy to set up parallel processing chains.

We'll assume you have an audio interface and have patched an instrument like guitar, voice, hardware synth, etc. (and if needed, a preamp) into a spare audio interface input. We'll treat a track as your signal processing "rack" that holds the processors. You'll need to see the I/O section in order to set everything up properly. If the I/O section isn't currently visible, click on the I/O button (Fig. 1) toward the right-hand side of the Arrangement view or Session view.

Fig. 1

In the Audio From field for the selected track, choose Ext In (Fig. 2). In the field below that, choose the audio interface input to which the signal connects. In the screen shot, interface input 1 is being selected.

Fig. 2

Under Monitor, as the main goal here is to do real-time processing, select In (Fig. 3); this means that Live will listen only to the input, not the contents of a track. A small mic symbol appears above the Audio From field to indicate that Live is listening exclusively to the input.
(Note that if you also plan to record the signal feeding this input, Auto is usually best so you hear the audio input while recording, and the track out on playback.)

Fig. 3

If you set Monitor to In, you'll hear the input regardless of the record/playback status. But if you selected Auto for the monitor function because you also want to record what's coming in to Live, click on the track's Record button (Fig. 4) so you can hear the audio.

Fig. 4

Now it's time to assemble your "virtual rack." Drag the effects (Fig. 5) for your "rig" from the browser to the Track View Selector for the track doing the processing.

Fig. 5

To create additional parallel effects chains, repeat what you've done so far with a different track, and set them all to the same input. If you want to select one effects setup at any given time, for example because you want to edit some of the settings without being distracted by what's happening with the other tracks, click on the Solo button (Fig. 6) for the associated track.

Fig. 6

Another possibility is that you might want to set up different chains for different types of sounds and select a particular combination. In that case, you can ctrl-click on the Solo button to solo multiple tracks, and use the track faders to adjust the balance of each parallel chain. One more thing: If you have effects that sync to tempo but you're using Live by itself and there's no timing signal, you can always use the Tap Tempo button in the upper left of Live's screen.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
10. Use MIDI Controller Data to Add Expressiveness to Software Synths

by Craig Anderton

When Sonic Foundry's Acid made its debut in 1998, it was a breakthrough product: Prior to that time, you couldn't simply drop a digital audio clip in a digital audio workstation track and be able to "stretch" both tempo and pitch in real time. (Propellerhead Software had introduced the REX file format four years previously, which also allows for time and pitch stretching. However, it was a specialized file format, whereas Acid could work with any digital audio file and "Acidize" it - more or less successfully - for stretching.) Over the years other programs started to acquire similar capabilities, and as Sonic Foundry's fortunes declined, so did Acid's. However, Sony bought the Sonic Foundry family of programs in 2003 and started the rebuilding process. Acid's hard disk recording capabilities became on a par with those of other programs, and more recently, MIDI has been beefed up to the point where Acid can handle software synthesizers, MIDI automation, and external controllers with ease.

In this article, we'll show how to add MIDI controller messages to MIDI tracks (for clarity, MIDI note data isn't shown). Begin by selecting a MIDI track, then choosing "Automation Write (Touch)" from the Automation Settings drop-down menu. If you want to overwrite existing automation data rather than write new data, choose Latch (right below the Touch option). Latching creates envelope points when you change a control; if you stop moving the control, its current setting overwrites existing envelope points until you stop playback.

You'll see four control sliders toward the bottom of the MIDI track. If you don't see the controller you want, click on a controller's label; this reveals a pop-up menu with additional controller options, and you can then select the desired controller from this menu. In the screen shot, Modulation is replacing Aftertouch.
As with other programs (e.g., Cakewalk Sonar), it's not necessary to enter record mode to record automation data. Simply click on the Play button, then click and drag the appropriate controller slider to create an automation envelope in real time.

However, note that MIDI controllers can generate a lot of data. When computers were slower, this could sometimes cause problems because older processors couldn't keep up with the sheer amount of data. While this is less of an issue with today's fast machines, lots of tracks with controller data can "clog" the MIDI stream, particularly if you're driving external MIDI hardware rather than an internal software synthesizer. Acid has an option that lets you thin the amount of controller data. To do this, click on the Envelope button to the right of the controller's slider, then select "Thin Envelope Data" from the drop-down menu. What's more, Acid offers automatic smoothing/thinning of automation data. To set this up, go to Options > Preferences > External Control & Automation tab and check "Smooth and thin automation data after recording or drawing."

To add a point (what some other programs call a node) manually but still use the slider to set the value, choose the Pencil tool and click at the time where you want to add the point. Then move the slider to change the newly added point's value. To add a point manually that can be moved in any direction, place the cursor over the automation curve until it turns into a pointing hand, then double-click to create a point. Click and drag on the point to move it. In this example, a modulation value of 27 is being entered at measure 1, beat 2, 192 ticks. Another way to add an automation point is to right-click on the automation curve and select "Add Point." Click and drag on the point to move it. Note that this same pop-up menu also lets you change the shape of the curve between points. In this example, Fast Fade has been chosen.
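Acid's actual thinning algorithm isn't published, but the idea behind "Thin Envelope Data" can be sketched in a few lines. This hypothetical thin_cc function drops any controller point that a straight line between its neighbors already implies, which is one common way to reduce dense CC streams without audibly changing the envelope.

```python
# Hypothetical sketch of controller-data thinning (not Acid's actual
# algorithm): keep a point only if removing it would change the
# interpolated envelope by more than `tolerance` CC steps.
def thin_cc(points, tolerance=1):
    """points: list of (tick, value) pairs, sorted by tick."""
    if len(points) <= 2:
        return points[:]
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        (t0, v0) = kept[-1]          # last point we kept
        (t1, v1) = points[i]         # candidate point
        (t2, v2) = points[i + 1]     # next point
        # value a straight line from the last kept point to the
        # next point would give at t1
        predicted = v0 + (v2 - v0) * (t1 - t0) / (t2 - t0)
        if abs(v1 - predicted) > tolerance:
            kept.append(points[i])
    kept.append(points[-1])
    return kept
```

A perfectly linear ramp collapses to its two endpoints, while any genuine wiggle survives; raising the tolerance thins more aggressively.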
You can continue to add and edit automation until the automation "moves" are exactly as desired. So why bother? Because automation can add expressiveness to synthesizer parts by keeping sounds dynamic and moving, rather than static. The next step would be to add an external control surface, so you can create these changes manually using physical faders...but that's another story, for another time! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
11. If You're not Yet Conversant with Programming SFZ Files, It's Time to Learn

by Craig Anderton

The SFZ file format maps samples to virtual instruments, and is used primarily in virtual instruments made by Cakewalk, including DropZone, RXP, SFZ, Session Drummer 2, and LE versions of Dimension and Rapture. However, it's an open standard, and others (such as Garritan) are using it as well; furthermore, there's a free SFZ Player VST instrument, so you can create your own virtual instrument by creating an SFZ file, then loading it into the player. Overall, this is a protocol whose time has come, and we'll walk you through the basics.

THE SFZ FILE FORMAT

The SFZ file format is not unlike the concept of SoundFonts, where you can load a ready-to-go multisampled sound - not just the samples - as one file. Unlike SoundFonts, which are monolithic files, the SFZ file format has two components: a group of samples, and a text file that "points" to these samples and defines what to do with them. The text file describes, for example, a sample's root key and key range. But it can also define the velocity range over which the sample should play, filtering and envelope characteristics, whether notes should play based on particular controller values, looping, level, pan, effects, and many, many more parameters. However, note that not all SFZ-compatible instruments respond to all these commands; if you try to load an SFZ file with commands that an instrument doesn't recognize (possibly due to an SFZ version 2 definition file being loaded into an SFZ version 1-compatible instrument), the program will generate an error log in the form of a text message. Fortunately, nothing crashes, and the worst that can happen is that the file won't load until you eliminate (or fix, in the case of a typo or syntax error) the problematic command.

It's worth mentioning that the SFZ spec is license-free, even for commercial applications.
For example, if you want to sell a set of SFZ-compatible multisamples for use in the Cakewalk synths, you needn't pay any kind of fee or royalty.

WHY BOTHER LEARNING ABOUT SFZ FILES?

There are three main reasons:

1. If you like to create your own sounds, you can create far more sophisticated ones for SFZ-compatible instruments if you know how the SFZ format works. And the files you create will load into other SFZ-aware instruments (particularly if you limit yourself to using commands from the version 1.0 SFZ spec).

2. By editing SFZ files, you can overcome some of the limitations in the LE versions of Rapture and Dimension included in Sonar. It's been pointed out that you can't adjust tuning in the LE versions...and you can't, which can be a real problem if you recorded a piano track where the piano was in tune with itself but not tuned to concert pitch, and you want to add an overdub. However, you can edit the tuning of the SFZ file itself loaded into the instrument, and compensate for tuning that way.

3. SFZ files provide a way for cross-host collaboration. The SFZ Player (Fig. 1) included in Sonar, a VST plug-in that works in any VST-compatible host, is available as a free download.

Fig. 1: The SFZ Player is a free download that works in any VST-compatible host, not just Sonar.

As to why this is important, suppose you're using Sonar, a friend is using Ableton Live, and you want to collaborate on a part based on some samples you've grabbed. String those samples together into an SFZ file, have your friend download the player, send the SFZ file to your friend, and you can swap parts back and forth. You can even use really big samples, because the SFZ Player supports compressed Ogg Vorbis files. So you can create a compressed, "draft" version of the SFZ file, then substitute a full version with WAV files when it's mixdown time.

CREATING YOUR FIRST SFZ FILE

Creating an SFZ file is not unlike writing code, but don't panic: It's easier than writing music!
Despite the many commands, you don't need to learn all of them, and the syntax is pretty straightforward. Although you can "reverse-engineer" existing SFZ files to figure out the syntax, it's helpful to have a list of the available commands; you can find one at http://www.cakewalk.com/DevXchange/sfz.asp (Fig. 2), or check out Appendix A in the book "Cakewalk Synthesizers" by Simon Cann (published by Thomson Course Technology).

Fig. 2: All the opcodes (commands) for the 1.0 version of the SFZ spec are listed and described on Cakewalk's web site.

As an example of how the SFZ protocol can dress up a sample, suppose you've sampled a guitar power chord in D and extracted a wavetable from it: a short segment, with loop points added in an audio editor (we'll call the sample GuitWavetable_D1.WAV). It won't sound like much by itself, but let's create an SFZ file from it and load it into SFZ Player.

Arguably the two most crucial SFZ concepts are "region" and "group." Region defines a particular waveform's characteristics, while Group defines the characteristics of a group of regions. For example, a typical Region command would define a sample's key range, while a typical Group command might add an attack time to all samples in an SFZ multisample. Another important element is the Comment. You can add comments to the definition file simply by adding a couple of slashes in front of the comment, on the same line; the slashes tell SFZ to ignore the rest of what's on the line.

Here's a suggested procedure for getting started with SFZ files.

1. Create a folder for the samples you plan to use. In this case, I called mine "GuitarWavetables."

2. Drag the sample(s) you want to use into the folder you created. In this example, I used only one sample to avoid complications.

3. Open up a text editor, like Notepad (the simpler the better; you don't need formatting and other features that add extraneous characters to the underlying text file).
If you do use a word processor like Word, make sure you save the file as plain MS-DOS text.

4. Add some comments (putting // before text turns it into a comment) to identify the SFZ file, like so:

// SFZ Definition File
// Simple Guitar Wavetable File

5. Let's turn this wavetable into a region that spans the full range of the keyboard. To do this we need to add a line that specifies the root key and the key range, and tells the file where to find the sample. Here's the syntax:

<region> pitch_keycenter=D1 lokey=C0 hikey=C8 sample=GuitWavetable_D1.WAV

That's all pretty obvious: pitch_keycenter is the root key, lokey is the lowest key the sample should cover, hikey is the highest key the sample should cover, and sample defines the sample's name. As the definition file and sample are in the same folder, there's no need to specify the folder that holds the sample. If the definition file is "outside" of the folder, you'd change the sample= line to include the folder, like so:

sample=GuitarWavetables\GuitWavetable_D1.WAV

6. Save this text file under the file name you want to use (e.g., "GuitarPowerChordWave.sfz") in the GuitarWavetables folder. You could actually save it anywhere, but this way if you move the folder, the text definition file and samples move together. (Note that you can right-click on an SFZ file and "open with" Notepad; you don't have to change the suffix to TXT.)

7. Open up an SFZ-compatible instrument, like Dimension LE. Click in the Load Multisample window that says "Empty," then navigate to the desired SFZ file (Fig. 3). Double-click on it, and now you should hear it when you play Dimension. If you don't, there might be a typo in your text file; check any error message for clues as to what's wrong.

Fig. 3: Click in the Load Multisample field in Dimension or Rapture, and a Load Multisample browser will appear; navigate to what you want to load. The Garritan Pocket Orchestra samples for Dimension LE are a rich source of SFZ files.
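Putting the comments and the region together, the entire definition file at this stage is just three lines of plain text:

```
// SFZ Definition File
// Simple Guitar Wavetable File
<region> pitch_keycenter=D1 lokey=C0 hikey=C8 sample=GuitWavetable_D1.WAV
```

Save that as GuitarPowerChordWave.sfz next to the sample, and it's ready to load.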
TAKING IT FURTHER

Okay, we can play back a waveform...big deal. But let's make it more interesting by loading two versions of the same waveform, then detuning them slightly. This involves adding a tune= descriptor; we'll tune one down 5 cents, and the other up 5 cents. Here's how the file looks now:

<region> pitch_keycenter=D1 lokey=C0 hikey=C8 tune=-5 sample=GuitWavetable_D1.WAV
<region> pitch_keycenter=D1 lokey=C0 hikey=C8 tune=5 sample=GuitWavetable_D1.WAV

Now let's pan one waveform toward the right, and the other toward the left. This involves adding a pan= descriptor, where the value must be between -100 and 100. Next up, we'll add one more version of the waveform in the center of the stereo image, but dropped down an octave to give a big bass sound. We basically add a line like the ones above, but omit tune= and add a transpose=-12 command. Loading the SFZ file now loads all three waveforms, panned as desired, with the middle waveform dropped down an octave. But it sounds a little buzzy for a bass, so let's add some filtering, with a decay envelope. This is a good time for the <group> function, as we can apply the same filtering to all three oscillators with just one line. Here is that line, which should be placed at the top of the file:

<group> fil_type=lpf_2p cutoff=300 ampeg_decay=5 ampeg_sustain=0 fileg_decay=.5 fileg_sustain=0 fileg_depth=3600

Here's what each opcode means:

fil_type=lpf_2p: the filter type is a lowpass filter with 2 poles.
cutoff=300: filter cutoff in Hertz.
ampeg_decay=5: the amplitude envelope generator has a decay of 5 seconds.
ampeg_sustain=0: the amplitude envelope generator has a sustain of 0 percent.
fileg_decay=.5: the filter envelope generator has a decay of 0.5 seconds.
fileg_sustain=0: the filter envelope generator has a sustain of 0 percent.
fileg_depth=3600: the filter envelope generator depth is 3600 cents (3 octaves).
As you work with SFZ files, you'll find they're pretty tolerant. For example, the sample names can have spaces and include any special characters other than =, and you can insert blank lines between lines in the SFZ definition text file. But one inviolable rule is that there can't be a space on either side of the = sign.

OVERCOMING LE-MITATIONS

Rapture LE and Dimension LE are useful additions to Sonar, but as playback-oriented instruments, they have limitations compared to the full versions. For example, with Dimension LE you can edit two stages of DSP, a filter, and some global FX, but nothing else; tuning, transpose, envelope attack, and other important parameters are off-limits. However, if the sound you want to load into either of these LE versions is based on an SFZ file, you can modify it well beyond what you can do with the instruments themselves. (Note that these instruments often load simple WAV or other file types instead of the more complex SFZ types; in this case, editing becomes more difficult because you have to first turn the WAV file into an SFZ file, and if you're going to put that much effort into programming, you might want to upgrade to the full versions that have increased editability.)

Let's look at a Dimension patch, Hammond Jazz 3. This loads an SFZ file called Hammond Jazz.sfz, so it's ripe for editing. We'll take that Hammond sound and turn it into a pipe organ by creating two additional layers, one an octave above the original sound and one an octave lower. We'll pan the octave-higher and main layers right and left respectively, with the lower octave panned in the middle. Then we'll tweak attack and release times, as well as add some EQ. Here's how.

1. To find the SFZ file, go to C:\Program Files\Cakewalk\Dimension LE\Multisamples\Organs and open Hammond Jazz.sfz in Notepad.
Here's what it looks like:

<region> sample=Hammond Jazz\HBj1slC_2H-S.wav key=c3 hikey=f3
<region> sample=Hammond Jazz\HBj1slC_3H-S.wav key=c4 hikey=f4
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 hikey=f5
<region> sample=Hammond Jazz\HBj1slD_5H-S.wav key=d6 hikey=f6
<region> sample=Hammond Jazz\HBj1slC_6H-S.wav key=c7 hikey=f7
<region> sample=Hammond Jazz\HBj1slF#1H-S.wav key=f#2 hikey=b2 lokey=c1
<region> sample=Hammond Jazz\HBj1slF#2H-S.wav key=f#3 hikey=b3
<region> sample=Hammond Jazz\HBj1slF#3H-S.wav key=f#4 hikey=b4
<region> sample=Hammond Jazz\HBj1slF#4H-S.wav key=f#5 hikey=c#6
<region> sample=Hammond Jazz\HBj1slF#5H-S.wav key=f#6 hikey=b6

2. This shows that the SFZ definition file basically points to 10 samples, with root keys at various octaves of C or F#, and spreads them across the keyboard as a traditional multisample. Note that it doesn't use the pitch_keycenter= statement, for two reasons: First, Dimension LE doesn't recognize it; second, the key= statement sets the root key, low key, and high key to the same value. You can add modifiers such as lokey= and hikey= statements as needed.

3. Before this block of <region> statements, add a <group> statement as follows to modify all of these regions:

<group> ampeg_attack=0.2 ampeg_release=2 pan=-100 eq1_freq=4000 eq1_bw=2 eq1_gain=20

These parameters add an amplifier envelope generator attack of 0.2 seconds, an amplifier envelope generator release time of 2 seconds, a full-left pan, and one stage of EQ (with a frequency of 4kHz, a bandwidth of two octaves, and 20dB of gain).

4. Now we'll add another region an octave lower, and put a similar <group> statement before it. To minimize memory consumption, we'll simply reuse one of the existing samples; as this sample plays more of a supportive role, we'll just stretch it across the full keyboard range.
<group> ampeg_attack=0.2 ampeg_release=1 transpose=-12 eq1_freq=2000 eq1_bw=4 eq1_gain=20
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 lokey=c0 hikey=c8

The group statement is very similar to the previous one, except that the sample has been transposed down 12 semitones, the pan statement is omitted so the sample pans to center, and the EQ's center frequency is 2kHz instead of 4kHz. The sample's root key is C5, and it's stretched down to C0 and up to C8.

5. Next, we'll add the final new region, which is an octave higher. Again, we'll put a <group> statement in front of it:

<group> ampeg_attack=0.2 ampeg_release=1 transpose=12 pan=100
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 lokey=c0 hikey=c8

The group statement adds the familiar attack and release, but transposes up 12 semitones and pans full right. The region statement takes the same sample used for the octave-lower sound and stretches it across the full keyboard. I should add that although we've made a lot of changes to the SFZ file, it's still being processed by Dimension LE's Hammond Jazz 3 patch. As a result, if you take this SFZ file and load it into SFZ Player, it won't sound the same, because it won't be using the various Dimension parameters that are part of the Hammond Jazz 3 patch.

ARE WE THERE YET?

Explaining all this on paper may make the process of creating SFZ files seem complex, but it really isn't, as long as you have the list of SFZ opcodes in front of you. After a while, the whole process becomes second nature. For example, I found an SFZ bass patch that produced a cool, sort of clav-like sound when I transposed it up a couple octaves, but the attack sounded cartoonlike when transposed up that high. So I just used the offset= command to start playback of the samples past the attack. And while I was at it, I added a very short attack time to cover up the click caused by starting partway through the sample, and a decay time to give a more percussive envelope.
The editing took a couple of minutes at most; I saved the SFZ file so I could use this particular multisample again. Sure, creating SFZ files might not replace your favorite leisure-time activity, but it's a powerful protocol that's pretty easy to use. Modify some files to do your bidding, and you'll be hooked.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
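As a recap of the pipe-organ exercise in the article above, here is the complete edited Hammond Jazz.sfz with all three group/region blocks assembled (every line is taken from the walkthrough; sample paths are as in the original file):

```
<group> ampeg_attack=0.2 ampeg_release=2 pan=-100 eq1_freq=4000 eq1_bw=2 eq1_gain=20
<region> sample=Hammond Jazz\HBj1slC_2H-S.wav key=c3 hikey=f3
<region> sample=Hammond Jazz\HBj1slC_3H-S.wav key=c4 hikey=f4
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 hikey=f5
<region> sample=Hammond Jazz\HBj1slD_5H-S.wav key=d6 hikey=f6
<region> sample=Hammond Jazz\HBj1slC_6H-S.wav key=c7 hikey=f7
<region> sample=Hammond Jazz\HBj1slF#1H-S.wav key=f#2 hikey=b2 lokey=c1
<region> sample=Hammond Jazz\HBj1slF#2H-S.wav key=f#3 hikey=b3
<region> sample=Hammond Jazz\HBj1slF#3H-S.wav key=f#4 hikey=b4
<region> sample=Hammond Jazz\HBj1slF#4H-S.wav key=f#5 hikey=c#6
<region> sample=Hammond Jazz\HBj1slF#5H-S.wav key=f#6 hikey=b6

<group> ampeg_attack=0.2 ampeg_release=1 transpose=-12 eq1_freq=2000 eq1_bw=4 eq1_gain=20
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 lokey=c0 hikey=c8

<group> ampeg_attack=0.2 ampeg_release=1 transpose=12 pan=100
<region> sample=Hammond Jazz\HBj1slC_4H-S.wav key=c5 lokey=c0 hikey=c8
```

Remember that each <group> statement applies only to the <region> statements that follow it, which is why the original ten regions sit directly after the first group.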
12. Turn Standard Audio into Stretchable Apple Loops by Craig Anderton

You know the problem: You have the perfect loop, but it's the wrong tempo. There are several ways to make a loop stretch to different tempos - from DSP to converting to a particular file format, like the REX or Acidized WAV format - each with their own advantages and disadvantages. Apple's Apple Loops format adds metadata to WAV or AIF files that allows them to conform to arbitrary tempos and pitches. For example, a 100 BPM Apple Loop in the key of E could loop in a sequencer project in the key of C at 126 BPM. One advantage of creating an Apple Loop, as opposed to simply applying time-stretching DSP, is that an Apple Loop can follow along dynamically with tempo changes.

The Apple Loops Utility is included with Logic Express, Logic Pro, GarageBand, and Soundtrack Pro. Apple Loops are actually about more than just looping, and Apple provides an excellent document online that gives all the background you'll need. The most recent version you can download without buying the software listed above is version 1.3.1; click here, then scroll down to Apple_Loops_SDK_1.3.1.dmg. Only version 1.4 will work natively with Intel Macs, so you're better off using the utility included with your Apple program of choice. Apple developers can log in using their ID to get the most recent version.

Ready? Here are the basics of turning files into Apple Loops. Simple files with strong transients (like drum loops and bass parts) are the easiest to loop, so practice with those first. The hardest files to turn into Apple Loops are pads and other files with sustaining audio and no significant transients.

1. Open the file you want to convert to an Apple Loop.

2. Click on the Tags tab and enter as many attributes for the file as possible. Make sure Looping is selected.

3. Click on the Transients tab.
The object is to have a transient marker for each transient in the file, so slide the Sensitivity slider to the right until most, if not all, transients have a marker. If the file has a regular beat (like a 16th note high-hat pattern), you can often just select the desired rhythm in the Transient Division field (e.g., 16th notes) and then you don't have to do any editing with the markers - which makes life a lot easier!

4. To add a marker if there's a transient the Sensitivity slider didn't catch, click above the transient next to the other marker handles. (To remove extraneous markers, which can happen with high sensitivity settings, click on the marker handle and press the keyboard's Delete key.) Note that proper setting of markers is an art and a science; sometimes a file will stretch better at slower tempos if there's a marker at the end of a note to define its end point. The more precisely a marker lines up with a transient, the better. Zoom in or out on the waveform display by using the handles on the ends of the scroll bar below the waveform screen, then move any markers so they sit precisely at the start of a transient.

5. Test how the loop responds to tempo changes by varying the tempo slider and clicking on the Play button.

6. Test how the loop responds to key changes by clicking on the key field and selecting a different key.

7. If the loop doesn't stretch well rhythmically, experiment with transient marker placement. When you're satisfied with the results, click on Save.

Now you have a loop you can use with Apple's programs at a variety of tempos. You have to be realistic about this; don't expect a Drum 'n' Bass loop that started at 180 BPM to sound good when stretched down to a rap-friendly 89 BPM. The same goes for speeding up, although you can usually speed up loops with better results than slowing them down.
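To see why speeding up is more forgiving than slowing down, it helps to look at the stretch ratio - the stretched loop's length relative to the original. A quick sketch, using the tempos mentioned above:

```python
def stretch_ratio(original_bpm, target_bpm):
    """Length of the stretched loop relative to the original.
    Below 1.0 means material is removed; above 1.0 it must be synthesized."""
    return original_bpm / target_bpm

# Speeding up 100 -> 160 BPM removes a bit over a third of the audio.
faster = stretch_ratio(100, 160)   # 0.625
# Slowing 180 -> 89 BPM has to invent more than a whole loop's worth.
slower = stretch_ratio(180, 89)    # roughly 2.02
```

Removing audio intelligently is much easier than inventing it, which is why the ratio above 1.0 is where stretching starts to fall apart.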
For this reason, many loop developers create loops at slower tempos, like 100 BPM, as they can often speed up to 160 BPM (or more), or slow down to 90 BPM, and still sound good.

Craig Anderton is Editor Emeritus of Harmony Central.
  13. Sidechained Processors Can Give a Variety of Special Effects By Craig Anderton Sidechaining has been around for years; this is the process of using one signal to control another. A couple classic examples are using a kick drum to gate a bass part, or doing de-essing - isolating the high sibilant frequencies from a vocal, and using those to trigger compression so that the sibilants come down in volume. But in the digital age, we can do a lot more with sidechaining. One of the most popular applications is with dance music, where sidechaining can create the "heavy pumping" electronica drum sound used by artists like Eric Prydz and others. We'll describe how to do this with Cakewalk Sonar, although the same principle applies to other programs that allow for sidechaining. Sonar allows sidechaining for several effects, including compression, so that one instrument can control the compression characteristics of another instrument. This offers a variety of effects, including a "pumping" drum sound for multitracked drum parts; we'll do that by setting up the snare to control compression for all drum tracks. The first step is to create a drum submix bus, and send the drum tracks to it (Fig. 1). Fig. 1: You'll need a drum submix bus to create an overall drum sound. We need this submix so the entire drum track can be processed by the sidechained compressor. To create the submix bus, right-click in an empty space in the bus pane and select "Insert Stereo Bus." To create a send in track view, right-click in a blank space in the track title bar and select "Insert Send." From the menu that appears, select the send destination. Make sure you feed the bus pre-fader, and turn the individual drum channel faders down so that only the bus contributes the drum sound to the master. You'll also want to assign the Drum Submix output to your main stereo out (master) bus (Fig. 2). Fig. 2: Assign the Drum Submix out to your main stereo output. 
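To see what the compressor will be doing once it's wired up, here's a rough numerical sketch of sidechained gain reduction; the threshold and ratio are just illustrative "heavy squash" values:

```python
def sidechain_gain_db(key_level_db, threshold_db=-20.0, ratio=10.0):
    """Gain reduction (in dB) applied to the drum bus, driven by the
    level of the key (snare) signal rather than by the bus itself."""
    if key_level_db <= threshold_db:
        return 0.0  # key below threshold: drums pass untouched
    over_db = key_level_db - threshold_db
    return -(over_db - over_db / ratio)  # dB removed by the ratio

print(sidechain_gain_db(-5.0))   # loud snare hit: heavy ducking (-13.5 dB)
print(sidechain_gain_db(-30.0))  # no snare: no gain reduction (0.0)
```

In a real compressor, attack and release times smooth this gain change over time - which is exactly what creates the "pumping" feel.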
Next up, insert the Sonitus:Compressor (which allows for sidechaining, but any compressor with a sidechain option will work) in the Drum Submix bus's effects bin (Fig. 3). This will allow for processing the entire drum track. Fig. 3: The Sonitus:Compressor comes bundled with Sonar, and can do sidechaining.

Now it's time to program the compressor for really heavy compression - e.g., a threshold below -20 dB and a ratio higher than 10:1 (Fig. 4). Fig. 4: Use lots of compression! This can definitely add some "squash" as needed; we won't be adding it all the time, but only when the snare hits. Note that a softer knee compression curve often sounds better than a hard knee for this particular application.

We still need to generate a signal to drive the compressor's sidechain input - in this case the snare, so that the drums "pump" whenever the snare hits. The simplest way is to create another stereo bus, then assign its output to the sidechain input (Fig. 5). Fig. 5: Assigning a bus to the compressor's sidechain input. Now all we have to do is make sure the snare drum feeds this bus (Fig. 6). Create a second pre-fader send in the snare track, and assign its output to the bus feeding the sidechain input. Fig. 6: Use the snare signal to provide the output to the sidechain.

To adjust the compressor, start with the compression attack time set to 0 ms; the drum sound will essentially disappear when the snare hits, because the gain is being reduced so much. Gradually increase the attack time to let through more of the initial snare hit, and add a fair amount of release (250-500 ms) to increase the apparent amount of pumping (Fig. 7). Fig. 7: We're almost there - it's time to adjust the compressor.

And there you have it - the pumping drum sound. May it go over well on the dance floor!

Craig Anderton is Editor Emeritus of Harmony Central.
14. Speed Up Workflow when Recording Bass by Creating a Virtual Bass Rack Track Template By Craig Anderton

Certain processor/track setting combinations have become my “go to” starting point for recording bass with DAW software. It used to be necessary to create these settings from scratch for each project, but now most DAWs let you create and save Track Templates (also called Track Presets) that remember effects and control settings—like a “virtual effects rack.” One such program is Cakewalk Sonar; we’ll show how to create a “virtual bass rack” template in Sonar, although the same principle applies to other DAWs, such as Steinberg Cubase.

FIRST, GET IN TUNE

Insert a tuner as the first plug-in for your virtual rack - but note that some chromatic tuners are designed for guitar, and can’t deal with the bass’s low A and E strings. If so, play harmonics on those two strings, and (assuming your intonation is correct) tune to them. With Sonar, turn on “Input Echo” for the tuner (Fig. 1) or the signal won’t go to the tuner. Also, note that in Sonar, enabling the tuner mutes the track signal. Fig. 1: Sonar’s tuner plug-in works with both guitar and bass.

WHY YOUR TEMPLATE NEEDS TWO TRACKS

A Sonar Track Template can contain multiple tracks. This is important for bass because you almost always want to retain the low end; applying an effect like wa in series with the bass thins out the sound—but applying it in parallel “overlays” the wa effect on top of a solid bottom. So, the secondary track is used mostly to layer effects. When recording, record into both tracks simultaneously. If you’re processing an existing track, copy it into the second track so you have two identical, parallel audio tracks.

MULTIBAND COMPRESSION FOR BASS

On the main track, a Multiband Compressor follows the Tuner (Fig. 2) because it serves as both a compressor and, if you adjust the various bands’ levels, an equalizer.
I use lots of compression in the lowest band (under 200Hz or so), with light compression in the lower mids so that the bass doesn’t compete too much with more “midrangey” instruments like piano and guitar, and fairly heavy compression in the upper mids to bring out pick noise. (This allows more latitude when mixing the bass in relation to the kick, as pick transients make the bass “speak” better if the two instruments compete.) Fig. 2: The complete virtual bass rack in Sonar.

During mixdown, you can tweak the high and low ends easily by adjusting individual bands in the multiband compressor - you may not even need standard track EQ. Sonar’s multiband compressor includes a limiter function. Enable this under the “Common” tab to affect all bands; this will trap strong transients (great for slap bass), and you can bring up levels of individual bands to “push” the limiter for a more squashed sound - without having to vary the band’s compression controls.

THE FX TRACK

The second track contains several effects, but I rarely use them all. The first effect is a wa; it goes first because an envelope-followed wa wants to “see” a signal with maximum dynamics. Next is a compressor, which serves as an effect. While the multiband compressor in the other track provides more traditional, transparent dynamics control that preserves bass transients, this compressor can mix a heavily squashed signal in with the main track. This provides a ringing, sustained effect when used subtly. Distortion is good for “grit,” and Sonar’s TL64 Tube Leveler effect is a good choice. However, as this adds “crunch” more than heavy-duty distortion, I typically follow it with a lowpass EQ to trim the distortion’s high end. Native Instruments’ Guitar Rig 4, the final effect in the chain, serves as a sort of “universal” effect because no matter what I want to layer on the bass, odds are Guitar Rig can do it.

LET’S MIX!

The final advantage of this approach is the ability to mix the two tracks independently.
Use automation to bring in the crunch track during the big chorus, and pull it back for the verse...tempo-sync effects parameters to the host tempo for a tight rhythm section...you get the idea. Best of all, because you’re starting from a template, you’ll get to the mixing stage much faster.

Craig Anderton is Editor Emeritus of Harmony Central.
16. Turbocharge Your Loops by Re-Arranging Slices by Craig Anderton

Ableton Live offers a way to deconstruct an audio file into individual slices. You can then edit each slice with respect to placement in a loop, filtering, envelope characteristics, and much more. This is a great way to take existing beats, then mutate them into something completely different.

First, right-click on an audio clip (or Ctrl-click on the Mac) and select “Slice to New MIDI Track” (Fig. 1). This opens up a dialog box where you determine how you want to slice the audio. Fig. 1: Select “Slice to New MIDI Track.” For drum parts and other parts with a regular rhythm, it’s convenient to choose a specific beat, like eighth notes or sixteenth notes. You need to choose whichever encompasses all beats; for example, if there’s a 16th-note high hat part, you’ll want to choose 16th notes (Fig. 2). Fig. 2: After you choose a rhythmic slice value, click on “OK.”

This isn’t your only slicing option, as you can also slice based on warp markers, transients, other rhythmic values, etc. Also, Ableton provides various slicing presets, and you can create your own and save them. In general, a consistent rhythmic option or the transient option is what you’ll use most of the time, so we’ll keep things simple and concentrate on that for now. However, note that Live won’t allow more than 128 slices (for example, with a file that’s 32 beats long, you can do a maximum of 1/16th-note slices). If slicing would exceed this limit, set a lower slice resolution, or select a smaller region of the file for slicing.

After clicking on OK, Live creates a MIDI track that contains a MIDI clip and a Drum Rack. The clip contains one note for each slice, arranged chromatically; you can call up the MIDI note editor to see the series of notes (Fig. 3). Fig. 3: A sequence of MIDI notes triggers slices in succession. Each note triggers a chain in the Drum Rack, which contains a Simpler pre-loaded with the corresponding audio slice.
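Incidentally, the 128-slice ceiling mentioned above is easy to check before you slice; this little sketch shows the arithmetic behind the 32-beat example:

```python
def slice_count(beats, divisions_per_beat):
    """Slices produced by slicing a clip of `beats` beats at a given
    rhythmic resolution, e.g. 4 divisions per beat = 16th-note slices."""
    return beats * divisions_per_beat

# A 32-beat file at 16th-note resolution hits Live's limit exactly...
print(slice_count(32, 4))   # 128
# ...so 32nd-note slices would exceed it; pick a lower resolution instead.
print(slice_count(32, 8))   # 256
```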
Live also assigns important Simpler parameters to the Drum Rack’s Macro Controls (Fig. 4), such as basic envelope controls. Fig. 4: Live generates useful Macros for controlling Simpler. Now you can go into the MIDI note editor and edit the loop by changing the pitch and/or location of slices, as well as alter slice velocities. In this example (Fig. 5), several notes have had their start time and pitch changed, and the velocity is currently being edited for one of them. Fig. 5: Once slices are turned into MIDI notes, you can edit them as you would any other MIDI data. Next, click the newly-created MIDI track’s “unfold” button (Fig. 6) to reveal a mixer channel for each slice. Fig. 6: Show the slice mixer channels in the MIDI track by clicking the “unfold” button. This lets you mute or solo individual slices, change their levels, etc. Note that the controls affect specific, individual slices, regardless of whether you’ve changed their positions or not. Live also creates a device chain for the loop, which has macro controls (Fig. 7) for the Simplers used to play back the slices. As one example of editing fun, turn down Sustain, and vary Decay to create more percussive slices. Fig. 7: Use the Simpler macros to alter slice parameters. You can also show the Chain List (Fig. 8) to select individual slices. Fig. 8: The Chain List allows for additional editing. When displaying the chain list, use the Show/Hide Devices function (Fig. 9) to change characteristics of the selected slice (e.g., add filtering, LFO, envelope response, etc.). Fig. 9: Once you’ve selected a slice in the chain list, you can edit it further. That should be enough signal warping to get you into some really interesting loops. And of course, if you want to use it in other projects, you can always export it as audio. Craig Anderton is Editor Emeritus of Harmony Central. 
  16. Use ReCycle to Mangle Your REX Format Loops Beyond All Recognition by Craig Anderton The REX file format, invented by Propellerhead Software, "slices" a rhythmic file into several pieces (e.g., every 16th note, although you're not restricted to particular rhythmic intervals), which are triggered for playback. This allows for easy time-stretching, but as we'll see, REX files provide lots of other creative possibilities. REX FILE BASICS The individual slices of a REX file are triggered by MIDI. Slowing down the tempo causes these triggers to occur further apart, thus creating a slower rhythm. Speeding up the tempo moves the triggers closer together, which makes the rhythm faster. Some programs require a MIDI file to trigger the slices, although these are rarer than programs that can extract timing data automatically from the REX file. The REX2 file format supports stereo; the original version supported only mono files. Because little DSP is involved, with the right kind of percussive material there's virtually no sonic degradation (particularly with stretching to faster rather than slower tempos). REX files also track tempo changes automatically. There are limitations; sustained sounds don't lend themselves to the REX process, and slowing down loops generally gives inferior results to speeding them up. The only real tool for REX file creation is Propellerheads' ReCycle program. You bring in a WAV or AIF file, then create "slices" at each transient (Fig. 1). This is a semi-automated process, but generally requires some manual editing as well to optimize these slice points. Ideally, each slice should have a discrete sound: A kick drum, kick+snare combination sound, high-hat hit, etc. Fig. 1: This screen shot from ReCycle shows how a drum loop has been "sliced" into different sections that isolate each sound. Each slice has been "colorized" to make it easier to differentiate them - the actual program uses color much more tastefully! 
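The tempo-following behavior described above is simple to model: slice trigger positions are stored in musical time (beats), and only their conversion to seconds changes with the tempo. A sketch:

```python
def trigger_times_seconds(slice_positions_beats, bpm):
    """Convert slice trigger positions from beats to seconds at a tempo."""
    return [beats * 60.0 / bpm for beats in slice_positions_beats]

sixteenths = [0.0, 0.25, 0.5, 0.75]  # four 16th-note slices in one beat
fast = trigger_times_seconds(sixteenths, 120)  # triggers 0.125 s apart
slow = trigger_times_seconds(sixteenths, 60)   # twice as far apart
```

Halving the tempo doubles the gap between triggers, so the same slices simply play back farther apart - no resampling of the audio itself is needed.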
When you save the sliced REX file, it also bundles a MIDI file that triggers each slice. When you bring a REX file into a host program, either a MIDI track will be created automatically that contains the MIDI portion of the REX file, or you'll need to somehow drag or import the MIDI portion into a MIDI track. Note that because the triggers use MIDI, editing the MIDI notes will alter how the slices are played back. This offers a lot of creative options - you can even copy MIDI notes to trigger several slices at the same time, remove MIDI notes to "thin out" a loop, and the like. LET'S GET "WRONG" Although REX files are great for time stretching, it's also possible to create "wrong" variations on a loop to end up with new, and sometimes very cool (or at least perverse), alternatives. Normally you do this by re-arranging the MIDI data in your host program that drives the ReCycle audio slices, but the following method uses an "all-digital audio" approach. The key is to export each slice as its own WAV or AIFF file, then rearrange the order of these slices in your host program. (Note: This assumes you've already used ReCycle to "slice" your file properly in the REX format; also note that the tempo you set in ReCycle is irrelevant, because slices are saved so their durations fit the original loop file's native tempo.) The first step (Fig. 2) is to call up ReCycle's Process menu, and make sure that "Export as One Sample" is unchecked in the Process menu. Fig. 2: Unchecking "Export as One Sample" exports each individual slice. Next, select "Export" from the File menu (Fig. 3), navigate to the folder where you want to save each slice as a sample, choose the file format, then click on "Save." Fig. 3: Make sure you create a folder where you can save all these slices so they all end up in one place. You'll also need to choose the export settings (Fig. 4). 
Usually you'd check the option to create a MIDI file, but that's not necessary in this case, as we're working only with the audio itself. Fig. 4: Choose the desired sample rate and bit depth for the slice files, then click on "OK." Now all the files have been saved into the folder you specified previously (Fig. 5). The exported slices include a number that represents the order in the original file. For example, "filename" 001 is the first slice, "filename" 002 the second slice, and so on. Fig. 5: The individual exported slices.

Now open your host program, and drag the slices into a track (Fig. 6). (This assumes that your host supports drag-and-drop, and most do; otherwise, you'll need to import the slices individually.) Fig. 6: In this example, REX file slices are being dragged into Adobe Audition.

Now, let the games begin! Rearrange slices on the same track, or as shown here (Fig. 7), drag slices to a different track in whatever order you want. Repeat slices, overlap them, reverse them, whatever...you get the idea. Fig. 7: The slices are shown re-arranged within two tracks. Set a "snap to" grid in your host if you want the slices to line up rhythmically as you drag them in, and above all...have fun creating your totally new sounds. If you come up with something really great, consider saving it as a new loop you can use in other projects.

Craig Anderton is Editor Emeritus of Harmony Central.
17. Make Your Life Easier by Organizing Presets as Favorites By Craig Anderton

IK Multimedia's virtual instruments based on the SampleTank engine (SampleTank 2.5, Miroslav Philharmonik, Sonik Synth 2, SampleMoog, etc.) include so many presets that you may want to organize them differently than the default organization. For example, you can create "favorites" folders for sounds you use a lot, or just to weed out the really useful presets from the ones you'll probably never use. Or, you can make folders for presets used in particular projects, or for specific styles of music. It's also possible to rename presets as you organize them - you might prefer something descriptive, like "Chorused ambient guitar," instead of what the programmer called "Cosmic swirl."

The first thing you need to do is locate the Instruments folder associated with the particular IK instrument (the default location is \Program Files\IK Multimedia\\Instruments) and create a Favorites folder. Of course, this could be named anything you want. Note that if you want it to jump to the top of the list of folders in the instrument's browser, add a punctuation symbol like "!" at the beginning of the name, as this "sorts" before the alphabet letters. Fig. 1: A Favorites folder has been added in the SampleMoog instruments folder.

Next, create any sub-folders you want within the Favorites folder. Try to strike a balance between having enough to accommodate your needs, but not so many that you have to go through piles of folders to get where you want. Fig. 2: Five different sub-folders have been created within the Favorites folder.

With IK Multimedia programs, each preset is made up of three file types, each with its own extension (.sth, .sti, and .stw). If you want to copy a preset over to the Favorites folder or a sub-folder, make sure you drag over all three files or the instrument won't be able to "find" the preset.
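If you're moving a lot of presets around, a small script can guarantee all three files always travel together. This hypothetical Python sketch (the folder names and preset names are placeholders) copies, and optionally renames, a preset:

```python
import shutil
from pathlib import Path

EXTENSIONS = (".sth", ".sti", ".stw")  # the three files making up one preset

def copy_preset(src_dir, dst_dir, preset, new_name=None):
    """Copy all three files of a preset into dst_dir, optionally renaming.
    Checks that all three exist first, so a preset is never half-copied."""
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    sources = [src_dir / (preset + ext) for ext in EXTENSIONS]
    missing = [str(s) for s in sources if not s.exists()]
    if missing:
        raise FileNotFoundError("incomplete preset: " + ", ".join(missing))
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src, ext in zip(sources, EXTENSIONS):
        shutil.copy2(src, dst_dir / ((new_name or preset) + ext))
```

Because the rename is applied uniformly to all three extensions, the "identical name for all three file types" rule is enforced automatically.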
Also, if you rename the preset, you'll need to use the identical name for all three file types (except of course for the extensions, which should remain .sth, .sti, and .stw). Fig. 3: In this example, the 4-Osc Stereo Sweep patch is being copied from the Instruments folder to the Bass sub-folder located in the Favorites folder. Now you have to make sure that the instrument recognizes the changes you've made - this is like the "refresh" function in Windows. Open the instrument program (either as a plug-in within a host, or as a standalone program), and click on Prefs. When the Preferences window appears, set Relist on Startup to "On." This will cause the virtual instrument to scan its Instruments folder on startup, and recognize the new folder(s) you've added. As this does slow down the instrument loading time a bit, once the new folders are recognized, you can go back to Prefs and set Relist on Startup to "Off." Then, if you make any more changes, you can always turn this back on again. Fig. 4: You'll find the Relist option in the Preferences dialog box. You're almost done - close the program, or if you're using it as a plug-in, close the host program. Now re-open the program (or instantiate it as a plug-in), and after the re-listing process is complete, your new folder(s) and presets will show up in the browser. Fig. 5: Success! The Favorites folder not only shows up, but is at the top of the preset listing. I think you'll find that organizing presets can really help with workflow. In fact, whenever you find a patch you like or have used in a project, pop it into the Favorites folder and it will be much easier to find again if you need it in the future. Craig Anderton is Editor Emeritus of Harmony Central. 
18. Get the Most Out of Virtual "ROMplers" by Craig Anderton

When hardware still ruled the earth, the Korg M1 workstation created quite a stir with its combination of sounds, multi-timbrality, a MIDI sequencer, and effects. And when MIDI sequencing took off, workstations became essential elements of many a MIDI studio, because they could play back multiple sounds from a single, economical piece of hardware. Today's software workstations can not only do pretty much anything their hardware ancestors could do, but a lot more - so let's discuss how to maximize the potential of these virtual instruments, from initial songwriting inspiration to final mix.

READY TO ROCK (OR JAZZ, OR TECHNO, OR?)

Software workstations are exceptionally useful for songwriting, because with one instrument, you can create 8, 16, or even more tracks. As a result, you can simply keep loading instruments into MIDI channels, create new MIDI tracks, and lay down overdub after overdub. Fig. 1: Several techniques mentioned in this article are applied here to Native Instruments' Kontakt. Polyphony has been limited to save on CPU consumption (outlined in yellow; Electrik Guitar has been limited to 16 voices, Drawbar Organ to 32, and Fretless bass to 4). The blue outline shows that reverb has been added as a bus effect, to avoid putting a reverb in each channel. The red outlines show that each instrument has been sent to its own output channel.

However, as you add more instruments, CPU consumption will increase, sometimes dramatically. Many workstations allow adjusting polyphony for particular channels (Fig. 1), so take advantage of this to minimize the number of voices that need to sound at once. For example, with many bass lines, you probably won't need more than two voices. Sounds with long decays, such as pads, tend to "eat" polyphony, so restrict those as well - often the voices that are cut off are at a low enough volume, or masked by other notes, that you won't notice they're missing.
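Conceptually, a polyphony cap is just a fixed-size voice pool with voice stealing; this toy sketch (a simplification - real instruments use smarter stealing rules) shows why capping a bass part to a few voices is usually inaudible, since the voice that disappears is the oldest one:

```python
class VoicePool:
    """Toy model of a per-channel polyphony cap with voice stealing."""
    def __init__(self, max_voices):
        self.max_voices = max_voices
        self.active = []  # (note, velocity) pairs, oldest first

    def note_on(self, note, velocity):
        if len(self.active) >= self.max_voices:
            self.active.pop(0)  # steal the oldest voice (a common strategy)
        self.active.append((note, velocity))

bass = VoicePool(4)  # e.g., a fretless bass limited to 4 voices
for note in (36, 38, 40, 41, 43, 45):  # six notes in quick succession
    bass.note_on(note, 100)
print(len(bass.active))  # never exceeds 4
```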
Another way to reduce CPU consumption: Whenever multiple sounds use the same effect, use bus effects within the instrument (if present) instead of insert effects. For example, insert reverb on a bus and send signal to it from the instruments you want processed, rather than inserting reverb on all channels requiring reverb. THE MIDI ADVANTAGE A big advantage to using a MIDI-based software workstation is that MIDI data is so malleable. If during the songwriting process you decide to change the key or tempo, it's much easier to do with a bunch of MIDI tracks than with digital audio. (Having said that, though, many workstations can also stretch digital audio loops with respect to timing and possibly pitch.) A corollary MIDI advantage is that when it's time to mix, you can replace the sounds of individual tracks with individual instruments that may offer a better sound quality. For example, you can use the workstation to lay down a piano part, but then switch over to something like Ivory or another dedicated piano program to get the best piano sound possible. The same thing goes for drums, as you can use a program like FXpansion's BFD to replace the simpler drum sounds found in a workstation. Also note that hosts with MIDI plug-ins get along very well with workstations. When you're laying down tracks in quick succession, rather than deal with quantizing or tweaking as you record, you can often use MIDI plug-ins to temporarily do quantization, scale velocities, and the like. After you're finished laying down tracks, then you can get into the editing process and tweak the MIDI data. CONTENT MANAGEMENT There's a growing trend with workstations and multitimbral samplers to include ever-greater amounts of content. Generally, this defaults to being installed in the root drive where the program lives; use enough of these products, and your main drive can run out of space pretty fast. So, dedicate a big hard drive (250-500GB) just to content and samples. 
If your computer's motherboard has some unused connections for hard drives, you can mount the drive internally; if not, an external FireWire or USB drive will do the job. However, you will likely have to instruct the program where to look for the content; check any available preferences dialogs, as that's usually where you specify a path to the content. With some workstations, you can create an alias/shortcut for the target samples in the original folder on your root drive. For example, if the workstation has a folder named "sounds," you may be able to create an alias for the drive containing your samples and put it in the "sounds" folder. Other workstations recommend against moving factory content to a different location, as any updates might be written to the current location, thus making it difficult to keep file structures "in sync" - although you can create a path to your own custom content. The instrument's documentation should mention any considerations involved in moving content.

SAMPLE STREAMING VS. LOADING INTO RAM

Some workstations can stream long samples from hard disk, while others are restricted to what you can load into RAM. While using RAM is generally faster and smoother, you're not going to load a 30GB piano into a computer with 2GB of RAM. Streaming often needs to be enabled, sometimes for individual instruments within a multitimbral instrument, or sometimes for the instrument as a whole. Just remember that neither solution is without issues: Streaming lots of samples from a hard drive can limit the number of audio tracks you can stream from the same hard drive simultaneously (which is why a dedicated drive for content is helpful), while pushing RAM to the limit can cause instability problems, because that same RAM is shared with the operating system and your host program.

TO EFFECT OR NOT TO EFFECT?
I used to advise bypassing the included effects with virtual instruments, as you could likely do better by adding other plug-ins to the signal path hosting the instrument. But times have changed. With better computers, instruments can include far more CPU-intensive - and better-sounding - plug-ins (Fig. 2); some even include features like convolution reverb.

Fig. 2: SampleTank 2.5 comes with 33 effects, including convolution reverb. These can be different for each instrument in a multi-timbral setup, and can also be used as insert, send, or master effects.

However, the issue here isn't just about sound quality. Using effects included within the instrument makes projects more transportable and archivable: As long as you can load the instrument, you're loading the effects as well.

SEPARATE OUTS OR STEREO?

Workstations usually offer multiple outputs (Fig. 1), so you can take advantage of your host mixer's features to process individual sounds. Keeping in mind the above comments about effects, though, if you can do all your mixing and processing within the workstation, you again have a more ergonomic and transportable project. You can even save the workstation setup as a preset and import it into a different host, knowing that the sounds and mix will be as you intended.

TRACK FREEZING

If you really load up the channels of a multitimbral instrument, you may need to freeze tracks to free up CPU power. Freezing essentially disconnects the instrument from the CPU, replacing it temporarily with an audio track that makes far fewer demands on the CPU. However, freezing works a little differently with multitimbral instruments compared to single-channel instruments, and this varies from program to program. For example, you may be able to freeze one particular instrument within a multitimbral instrument, or you may only be able to freeze the entire instrument. Check your host program's documentation for details.
THE PLAYERS

Here are thumbnail descriptions of some common software-based workstations, listed alphabetically by manufacturer, along with screen shots for some of them.

Apple, EXS24. This is available only as part of Logic; while showing its age a bit, the EXS24 broke open the virtual sampler market.

Big Fish, Vir2 instruments. Based on Native Instruments' Kontakt Player engine, these offer effects, mixing, streaming from hard disk, and other features derived from NI's flagship sampler. The screen shot shows Mojo, their horn section-based virtual instrument.

Cakewalk, TTS-1. Available only within Sonar, this basic (and CPU-friendly) workstation is useful for blocking out parts while songwriting. Note the window that does basic editing for one of the 16 available parts.

EastWest, Play. EastWest's proprietary virtual instrument engine hosts many of their sample libraries, and is compatible with 64-bit operating systems.

IK Multimedia, SampleTank-based products (SampleTank, Miroslav Philharmonik, Sonik Synth, SampleMoog). These are all characterized by large sound libraries, and offer a comprehensive set of effects.

Korg, Digital Legacy Collection. This set of virtual instruments includes a software M1, but it's not your father's M1: The sound is much cleaner, and it comes with all the M1's expansion card sounds. This screen shot shows the "Easy" page, so-called because of the ease with which you can make common edits.

MOTU, MachFive 2. The latest version includes advanced features such as beat-slicing, REX file importation, convolution reverb, the ability to import samples in just about any format, and a 32GB sound library.

Native Instruments, Kontakt 4. This ambitious sampler includes features not found elsewhere, like MIDI scripting (think of it as MIDI plug-ins you can write yourself). It also has highly developed slice-oriented "beat machine" functions.

Propellerhead Software, Reason.
While not a workstation per se, the ability to ReWire it into any major host is compelling; and you can insert as many instances of the included synths and samplers (including the acclaimed NN-XT sampler, shown in the screen shot) as your computer can handle.

Sonivox, Muse. Based on GigaStudio technology, this isn't as editable as some of the "pure" samplers, but it has a ton of sounds and fulfills the concept of a hardware ROMpler brought to software.

Steinberg, HALion. HALion is a traditional sampler that can stream samples from hard disk and offers multi-timbral operation.

Ultimate Sound Bank, PlugSound Pro. This workstation also handles loops and beats well, with excellent time-stretching options. Optional-at-extra-cost "virtual sound cards" are available for expansion.

Also note that several companies produce "application-specific" workstations, such as MOTU's Ethno Instrument and EastWest's Ra (both dedicated to world and ethnic sounds); for orchestral work, there's IK Multimedia's Miroslav Philharmonik, Garritan's Personal Orchestra, and HALion String Edition, among many others.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
19. Beats Don't Have to Be Boring - and Making Them Shouldn't Be Either ($129) www.beatstation.com

by Craig Anderton

Toontrack has garnered a reputation for easy-to-use virtual drum software and a fine collection of sounds, but they also throw a little whimsy into the mix - just check out the company name. Beatstation is the company's latest program, and we were able to get our hands on a copy (don't worry, it's legitimate!) that will be released to the public on 6/16/10. It's a bit of a new direction in virtual drum software, as it bridges the gap between serious desktop tool and fun laptop beats creator...and does a fine job with both.

OVERVIEW

Beatstation works in stand-alone mode, or as a plug-in for VST- (Windows) and AU- (Mac) compatible hosts, including 64-bit operating systems. Unlike most drum software, it also includes bass and synth lead capabilities. So, you can flesh out your beats with other instruments, or design beats in a musical context. The design philosophy revolves around making Beatstation highly customizable and flexible, but also easy to use. The copy I received didn't have a manual other than a Quick Start document that was more like a description of features - but no matter, as I was able to figure out 95% just by playing around (a Toontrack representative made me aware of the other 5%). Note that Beatstation makes it easy to import your own material, but it comes with a 1.63GB core library out of the box, so you can get going immediately. Speaking of getting going immediately, Toontrack uses a new authorization/registration system that you can complete entirely from within the plug-in, as long as you're connected to the internet.

THE "BURGER KING EFFECT"

Burger King's slogan for the past few years has been "have it your way," and that slogan would work well for Beatstation, too. The program comes with 11 skins, and Toontrack will be creating a web site where you can create your own skins, then import them into Beatstation.
You can also set the browser's brightness and color, even inverting the colors, as well as modify other elements of the overall look (Fig. 1).

Fig. 1: Here's the "A-maze Me" skin. Note the dialog box for customizing colors even further.

Let's look at one more skin: This is the "Outer Space" skin, which is my current favorite.

Fig. 2: The "Outer Space" skin, with a blue planet-type motif.

But it doesn't stop with the skinning, as you can also move pads around. The pad setup in Fig. 3 is the same as in Fig. 2, but looks completely different. Oh, and I threw in another skin ("Camouflage") just to make it interesting. Note the little colored stripe toward the bottom of each pad; you can customize this too so that, for example, all the toms are the same color. The dialog box in the upper right has been opened to show how you edit the pad size and color.

Fig. 3: You can lay out the pads any way you want. Note the transparent look of the GTRFX-Hit; this means that no sound has been loaded into the pad yet.

As you might expect, if you come up with a pad setup you like, you can save it - as well as save the kit independently of a particular pad setup.

PAD PROPERTIES

Each pad is also highly editable (Fig. 4). You'll find two FX send controls for feeding the two send effects, mute/solo buttons, a volume control, the ability to assign a pad to a mute group (e.g., if you assign drums to a group, hitting one will cut off any that are ringing - this is commonly used for hi-hats), and an insert effect slot (more on this later).

Fig. 4: Editing pad samples is straightforward, but there's also lots of flexibility.

The lower Sounds tab is where you can load up to five samples - just drag and drop MP3, WAV, or AIF files from the desktop or the Beatstation browser. You can also mute individual samples, and adjust volume, pitch, pan, envelope, reverse, start time offset, and other parameters independently for each sample.
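Mute groups (also called "choke groups") are a standard drum machine trick, and the logic is simple: triggering any pad silences every other pad sharing its group. Here's a minimal sketch of that behavior; the Pad class and trigger function are our own illustration, not Beatstation's API.

```python
# Minimal sketch of mute-group ("choke") behavior -- illustration only,
# not Beatstation's actual API.
class Pad:
    def __init__(self, name, group=None):
        self.name = name
        self.group = group      # pads sharing a group choke each other
        self.ringing = False

def trigger(pad, all_pads):
    """Start a pad; cut off any ringing pad in the same mute group."""
    if pad.group is not None:
        for other in all_pads:
            if other is not pad and other.group == pad.group:
                other.ringing = False   # choke (e.g., closed hat cuts open hat)
    pad.ringing = True

open_hat = Pad("open hat", group="hats")
closed_hat = Pad("closed hat", group="hats")
kick = Pad("kick")                     # no group: never choked
pads = [open_hat, closed_hat, kick]

trigger(open_hat, pads)
trigger(kick, pads)
trigger(closed_hat, pads)              # chokes the still-ringing open hat
print(open_hat.ringing, closed_hat.ringing, kick.ringing)  # False True True
```

This is exactly why the technique is "commonly used for hi-hats": a closed-hat hit should cut off a ringing open hat, just as it would on a real kit.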
Note that if a sample has loop points, Beatstation will play back the loop within those points rather than looping the entire file.

THE BROWSER

The browser is a permanent fixture of the interface - there's no show/hide, as it's something you'll use frequently. It includes four main categories, corresponding to the different elements available in the core library: instruments, REX files, MIDI grooves, and sounds. You can restrict the view to a particular category (Fig. 5) - for example, if you're looking to import a MIDI file ("groove"), you can see just MIDI files - or see all content at once. Yes, have it your way.

Fig. 5: The browser is being filtered to show only sounds from the core library. As you can see, there are lots of available sounds.

Browser drag and drop is quite evolved. You can drag MIDI grooves from Beatstation into your host, and vice-versa; you can even drop REX files into hosts that support REX file import, and drag REX files into Beatstation from your desktop or host. If you drag a REX file to a pad, you can set it to play in different modes: standard (plays from beginning to end), sequential (plays one slice at a time each time you trigger the pad), and random (same as sequential, but chooses REX file slices randomly). And, you can even drag individual REX slices onto pads. Imagine what can happen if you take slices from five different REX files and load them into a single pad...you can make some pretty interesting sounds, to say the least. There's also a cool feature for REX or MIDI files: if you click on the little magnifying glass in the file window, you can see the filenames highlighted in the browser. However, note that if you bring in content from outside the core library (e.g., dragged from the desktop), this feature won't work for those files, as they "live" outside of the plug-in.

BASS AND LEAD

The little Bass and Lead keyboards are "special" pads that respond to MIDI notes below and above the drum notes, respectively.
As with the drum pads, you can drag and drop audio files in from the browser or desktop, and again, layer up to five samples if you want to build up complex sounds. Play them in real time from a controller, or drive them from the Standard MIDI File section in the lower left.

...AND HOW ABOUT EFFECTS?

Beatstation has plenty of effects (Fig. 6). Each pad accommodates an insert effect, but there are also two send effects and a master effect. There's quite a rich roster of signal processors:

3 bitcrusher/lo-fi effects
4 choruses
13 compressors
23 delays
4 distortions
13 equalizers
5 gates
Too many insert chains to count
6 master insert chains
13 time-based effects chains
20 reverbs

Fig. 6: If you like processing the living daylights out of your drums, Beatstation is glad to oblige.

But that's not all: There are also sidechainable compressor, gate, and master chains, and you can choose any pad to provide the sidechain signal. This is an extremely hip feature that is still somewhat rare in DAW-land, let alone in a plug-in.

MORE SOUNDS! AND SAMPLING!

Not only can you import sounds of your choosing, Beatstation is compatible with Toontrack's expansion packs (including SDX and EZX programs); if installed, they'll show up in the browser. With Beatstation, Toontrack also introduces the BTX expansion format, which defines complete Beatstation programs. And in stand-alone mode, you can actually sample new sounds thanks to the Sample Recorder window (Fig. 7).

Fig. 7: Use the Sample Recorder to record, trim, fade, and otherwise tweak samples you record, up to 10 seconds long.

This connects to your default audio input for recording, so it accepts whatever audio your computer can accept.
You can trim the sample's start and end point, do a fade in and fade out, zoom in and out on the waveform, normalize levels, set loop points, change gain, set a level for automatic triggering (i.e., recording starts when the signal exceeds a particular level), and when you're done, drag the resulting sound onto a Beatstation pad.

CONCLUSIONS

If you've read up to this point, then you've probably figured out I really like this program, and you'd be right. It has a really high "fun factor," not just because it's fun to play with, but because it sure seems to me that this program was designed by people who love what they do. I got a demo of a very early version from a Toontrack representative at the 2010 Winter NAMM, and every time he showed me a new feature his eyes would light up (and to be fair, there was also the occasional maniacal laugh). Then when you consider the price, well, that's something else altogether. It wasn't that long ago that a sample library of drum hits cost about the same as, if not more than, the entire Beatstation system.

Yes, there are some "pro" features that are missing, like individual audio outputs for the pads, ReWire support, and a MIDI learn mode for tying parameters to hardware control (although according to Toontrack, this is high on the list for a future update). Also, direct audio export is limited to either bouncing within the host and then exporting, or dragging MIDI as audio to the host or desktop (which is actually a very cool feature). So is this a problem? Not for me, because Beatstation is all about getting beats down fast, having fun, and ending up with groovy sounds - not agonizing over a billion different options. I have a lot of virtual drum programs, some of them pretty high end, and they all have their uses. But I have to say, I'll be calling on Beatstation a lot in the future. There's something about it that just makes you want to play with it, and that's a pretty strong recommendation right there.
20. Use Auto-Tune Evo's Retune Speed Tools For Transparent Pitch Correction

With all of the attention these past few years on the Auto-Tune Vocal Effect (the T-Pain/Cher style effect) and its ubiquity in pop culture, it's easy to forget that Auto-Tune was initially designed (and is still most commonly used) for extremely realistic, natural-sounding pitch correction. Each new generation of Auto-Tune improves audio quality and ease of use; by taking advantage of the following techniques, it can be virtually impossible to hear that pitch correction has been applied.

THE IMPORTANCE OF RETUNE SPEED

The single most important Auto-Tune parameter for natural pitch correction is Retune Speed. This is the speed at which the pitches of any out-of-tune notes in the audio are changed to the "right" notes. The goal is to set a Retune Speed that's fast enough to get out-of-tune notes in tune quickly, but not so fast that you can hear it happening in an unnatural way. This is made trickier by the fact that the optimum Retune Speed for any performance depends on such attributes as song tempo, note duration, and vocal style, and can often change from note to note. Luckily, Auto-Tune Evo includes unique tools in both Automatic and Graphical Modes to make it easy to set optimum Retune Speeds.

HUMANIZE

One situation that can be problematic in Automatic Mode is a performance that includes both very short notes and longer sustained notes. The problem is that in order to get the short notes in tune, you have to set a fast Retune Speed, which would then make any sustained notes sound unnaturally static. Fortunately, Auto-Tune Evo's Automatic Mode Humanize function easily solves this problem. The Humanize function differentiates between short and sustained notes and lets you apply a slower Retune Speed just to the sustained notes. Thus, the short notes are in tune, and the sustained notes still allow the natural variations of the original performance.
Here's how it works: Start by setting Humanize to 0 and adjusting the Retune Speed until the shortest problem notes in the performance are in tune. At this point, any sustained notes may sound unnaturally static. If so, start advancing the Humanize control. The higher the Humanize setting, the more the Retune Speed is slowed for sustained notes. The goal is to find the point where the sustained notes are also in tune, and just enough of the natural variation in the performance is present in the sustained notes to sound natural and realistic. (If you set Humanize too high, any problematic sustained notes may not be fully corrected.)

INDIVIDUAL CORRECTION OBJECT RETUNE SPEEDS

Prior to Auto-Tune Evo, it was necessary to select a single Graphical Mode Retune Speed that applied to all of your pitch corrections. Your choice was typically between picking a Retune Speed that was a "good enough" compromise for an entire track, or painstakingly automating the Retune Speed from phrase to phrase or even note to note (with the attendant cost in time and effort). With Auto-Tune Evo, Antares has introduced the ability to set independent Retune Speeds for every individual correction object, whether Line, Curve, or Note. Simply select one or more objects and set the Retune Speed that provides the most natural result. Getting exactly the desired effect for every note of a performance is a quick, simple, and intuitive process.

Of course, in practice you don't need to set an individual Retune Speed for every object. To streamline the process, start by selecting all your audio and setting a Retune Speed that works for the majority of the performance. Then listen to the result and note which notes or phrases could still use improvement. Select those notes or phrases, adjust their Retune Speeds for the most natural result, and you're done.
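Conceptually, Retune Speed behaves like a time constant: the correction amount glides toward the target note at a rate you choose, so a fast speed snaps short notes in tune while a slow speed preserves natural drift. Here's a rough Python sketch of that idea - an illustration of the concept only, not Antares' actual algorithm; all names are our own.

```python
import math

# Conceptual sketch of "retune speed" -- NOT Antares' algorithm.
# The correction amount glides toward (target - detected) with a time
# constant; the faster the retune speed, the quicker the glide.
def correct_pitch(detected, target, retune_ms, frame_ms=1.0):
    """Return a corrected pitch track (all values in cents)."""
    alpha = 1.0 - math.exp(-frame_ms / max(retune_ms, 1e-6))
    shift, out = 0.0, []
    for det, tgt in zip(detected, target):
        shift += alpha * ((tgt - det) - shift)  # glide the correction amount
        out.append(det + shift)                 # fast wobble (vibrato) rides on top
    return out

sharp = [30.0] * 200     # a note sung 30 cents sharp, 200 one-ms frames
in_tune = [0.0] * 200    # the nearest scale note
fast = correct_pitch(sharp, in_tune, retune_ms=5)    # snaps to pitch
slow = correct_pitch(sharp, in_tune, retune_ms=200)  # keeps much of the drift
```

Run with the fast setting, the note ends up essentially on pitch; with the slow setting, a good chunk of the original offset survives - which is exactly the trade-off Humanize manages per note.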
DEFAULT GRAPHICAL RETUNE SPEEDS

Along with independent Retune Speeds for each correction object, Auto-Tune Evo includes the time-saving ability to set custom default Retune Speeds for each of the three object types: Lines, Curves, and Notes. These are the initial Retune Speed values that are automatically assigned to each newly created object. To choose your own values, just pay attention to what values you most commonly use for the various objects and set those as defaults in Auto-Tune Evo's Option dialog. Update as necessary.

Of course, it's possible to make Auto-Tune Evo do the obvious type of "hard" correction effects you hear on so many hits these days. But don't forget that it's also a tool for doing transparent, non-obvious correction that can help save a session.

Copyright 2010 by Antares Audio Technologies and adapted for Harmony Central with the express written consent of the publisher. For more information on Antares and Auto-Tune Evo, visit www.antarestech.com
21. The biggest bang-for-the-buck computer upgrade is more RAM. Without enough RAM, your computer uses your (much slower) hard drive as virtual memory. You need at least a gigabyte of RAM for an audio-oriented computer, but if you're really into it, install the maximum your system can handle. Note that 32-bit operating systems are limited to 4GB, and typical systems access somewhat less than that due to some memory being mapped to video or other hardware resources. This limitation does not apply to 64-bit operating systems, which can access far more than 4GB of RAM (the exact amount varies, but the theoretical maximum is billions of gigabytes). If you don't know what kind of RAM your computer needs for expansion, there are several online memory configuration apps. For example, PNY has one at http://www.pny.com/configurator. Search for your machine, and you'll find out what type of memory you need to get. --Craig Anderton
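The 4GB figure falls straight out of pointer width - a 32-bit address can distinguish 2^32 bytes, period. A quick sanity check of the arithmetic:

```python
# 32-bit pointers give 2^32 distinct byte addresses; 64-bit gives 2^64.
gib = 2 ** 30                    # one gigabyte (GiB) in bytes
max_32bit = 2 ** 32 // gib       # the 4GB ceiling
max_64bit = 2 ** 64 // gib       # roughly 17 billion GiB, in theory
print(max_32bit, max_64bit)      # 4 17179869184
```

In practice, 64-bit operating systems cap usable RAM well below that theoretical number, but far above anything you'll install in a music computer.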
22. When You Want Pads that Can Last Forever, Try this Foolproof Looping Technique

By Craig Anderton

Pads can add beautiful atmospherics to a recording, but if you’ve ever tried to loop a pad, you’re probably aware that it’s not an easy task. Any kind of discontinuity as the loop jumps back to the beginning interrupts the pad’s flow, creating anything from a jarring effect to a massive click or pop. Although there are sample editing programs that can loop these complex sounds, you may not realize that the tools needed to create perfect loops are available in just about any DAW.

First, an assumption: The pad will have some sort of interesting attack that you want to retain. As a result, you’ll want the loop to occur sometime after that initial attack. As pads don’t have rhythmic components, it doesn’t really matter whether you repeat a two-, three-, or four-bar section of the pad following the attack (or even a longer loop, if you’re so inclined). For our example, we’ll take a four-bar pad and loop the last three bars.

1. Record a little more than four bars of the pad.

2. Enable snap on your DAW (a half or whole note snap works well).

Fig. 1: Note that the file has been split at measure 2; everything after the start of measure 5 has been discarded.

3. Split the pad audio clip at the start of measure 2. Now the first measure is a separate piece of audio. Also split the pad audio clip at the start of measure 5, and discard everything after the start of measure 5 (Fig. 1).

Next, we’ll need to crossfade measure 1 with the last measure of the pad (the one starting at measure 4). If your DAW offers automatic crossfading, you should simply be able to drag-copy measure 1 on top of the last measure. Make sure you use equal-power crossfading. If your DAW can do this, skip steps 4 and 5, then continue.

Fig. 2: A one-measure fade-out has been applied to the end of the file, and a one-measure fade-in to a copy of the first measure (placed temporarily after the end of the file).

4. If your DAW doesn’t do automatic crossfading, copy measure 1. Use a convex fade-in curve for the copied measure 1, and a convex fade-out curve that extends from the start of measure 4 to its end (Fig. 2).

Fig. 3: The two areas with the fades have been layered to apply a crossfade.

5. Next, layer the two sections together (Fig. 3) to create a crossfade.

At this point, you have several options. If you want to create a loop out of the last three measures, bounce measures 2, 3, and 4 to a separate clip. This will loop perfectly at the host’s tempo. If you want it to loop at other tempos, or in other keys, you can either apply time-stretching DSP, or create an “Acidized” file with metadata that tells the file how to stretch (Sony Acid or Cakewalk Sonar can do this). Note that trying to stretch pads by using ReCycle to create a REX format file won’t work very well; REX files work best for percussive loops.

If you want to use the attack too, it’s still available as the single measure we split off in step 3. Simply paste it in front of the loop, and you’ll hear the attack followed by the loop. Extend the loop for as long as you’d like.

Yet another option is to use the loop in a traditional sampler, either software or hardware. In this case, you want the audio (including the attack) to start playing when you play a key or trigger a note-on; then as the note sustains, you want it to loop. To do this:

1. Bounce all four measures to a single audio file.

2. Use the DAW’s time ruler to locate the precise start of measure 2, using either samples or milliseconds (whichever format your sampler uses).

3. Import the audio file into your sampler.

4. Set the loop end to the end of the file. Set the loop start to the location you determined in step 2.

5. Play the sampler.
You may need to jog the sample start or end point a bit to get a perfect loop, but you should be able to obtain a loop with no glitches or pops. Now you’ve transformed your pad into a loop you can “roll out” in a DAW track to provide a background, or load into a sampler. And it will loop perfectly!
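If you'd rather script the crossfade step than do it by hand, the technique can be sketched with NumPy. This is a minimal sketch under our own assumptions: a mono float array holding a bit more than four bars, a fixed bar length in samples, and equal-power sine/cosine fade curves; the function name is ours.

```python
import numpy as np

# Sketch of the crossfade-loop trick: fade out the last bar of the loop
# while fading in a copy of bar 1 over it, so the loop end flows into
# exactly the material that originally preceded the loop start.
def make_seamless_loop(pad, bar):
    """Return a 3-bar loop (bars 2-4) whose end crossfades into its start."""
    first = pad[:bar]                    # bar 1, to be faded in
    loop = pad[bar:4 * bar].copy()       # bars 2-4
    t = np.linspace(0.0, 1.0, bar, endpoint=False)
    fade_in = np.sin(0.5 * np.pi * t)    # equal-power pair:
    fade_out = np.cos(0.5 * np.pi * t)   # fade_in**2 + fade_out**2 == 1
    # layer the faded copy of bar 1 over the faded final bar
    loop[-bar:] = loop[-bar:] * fade_out + first * fade_in
    return loop

sr = 44100
bar = sr                                 # one-second "bars" for the example
pad = (np.random.randn(5 * bar) * 0.1).astype(np.float32)
loop = make_seamless_loop(pad, bar)
print(len(loop) == 3 * bar)              # True
```

By the end of the crossfaded bar, the audio is effectively bar 1's tail - which originally led into bar 2, so jumping back to the loop start is seamless.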
23. Psst, Want an EQ Curve? Used Only Once on a Hit?

By Craig Anderton

Bob Ludwig, Doug Sax, Bernie Grundman - they're masters of mastering. They produce hit after hit, with nothing at their disposal other than...well, experience, talent, great ears, the right gear, and superb acoustics. So maybe you're missing one or more of those elements, and wish that what came out of your studio sounded as good as what comes out of theirs. So, why not just analyze the spectral response curves of well-mastered recordings, and apply those responses to your own tunes?

Why not, indeed - but can you really steal someone's distinctive spectral balance and get that magic sound? The answer is no...and yes. No, because it's highly unlikely that EQ decisions made for one piece of music are going to work with another. So even if you do steal the response, it's not necessarily going to have the same effect. But the other answer is yes, because curve-stealing processors can really help you understand the way songs are mixed and mastered, and point the way toward improving the quality of your own tunes.

As to the tools that do this sort of thing, we'll look at Steinberg's FreeFilter (which was discontinued, but still appears in stores sometimes), Voxengo CurveEQ, and Har-Bal Harmonic Balancer. They're very similar - yet also very different.

HOW THEY WORK

FreeFilter and CurveEQ split the spectrum into multiple frequency bands in order to analyze a signal. They create a spectral response, as from a spectrum analyzer, while a song plays back. During playback, the program builds a curve that shows the average amount of energy at various frequencies. You can apply this analysis (reference) curve to a target file so that the target will have the same spectral response as the analyzed file, as well as edit and save the reference file. Har-Bal isn't curve-stealing software per se.
While optionally observing the response of a reference signal, you can open another file and see its curve superimposed upon the reference. You can edit the opened file's curve so it matches the reference signal more closely, but this is a manual, not automatic, process.

The manual vs. automatic aspect is in some ways a workflow issue. FreeFilter and CurveEQ start by creating the reference curve, but give you the tools to adjust it manually, because you'll probably want to make some changes. Har-Bal takes the reverse route: You start out manually, and if you want to, use the tools to create something that resembles the visual reference curve, which was generated automatically when you opened the file. Also remember that curve-stealing is only a part of these programs' talents; they're really sophisticated EQs.

Fig. 1: The black line is the spectral response for Madonna's Ray of Light; the red line represents a Fatboy Slim mix. Fatboy's has a lot more treble, while Ray of Light has a serious low-end peak.

So what do some typical curves look like? Check out Fig. 1. The black line is the spectral response for Madonna's "Ray of Light," while the red line represents a Fatboy Slim mix. Past about 1 kHz, Fatboy's curve shows enough high frequency energy to shatter glass. "Ray of Light" has a higher response below about 400 Hz, due mostly to a prominent kick. It has a more thud-heavy, disco kind of vibe, whereas Fatboy Slim leans more toward a techno style of mastering. Apply these curves to your own music, and they'll take on the characteristics of the reference tunes - but the results may not be what you expect, as we'll see.

THE SOFTWARE

Fig. 2: Steinberg's FreeFilter was an early curve-stealing/EQ program. Its sound quality is lacking by today's standards, but its functionality set the paradigm for this type of software.

The software needs to analyze two files: the reference and the target.
It compares the two, and raises or lowers the target curve's response to match that of the reference. Fig. 2 shows the spectral response graph for Steinberg's FreeFilter; it illustrates what happens after applying the source's curve to the destination. The green line displays the target curve, while the red line shows the result of applying the reference. The yellow line shows the response correction curve generated by FreeFilter to match the two curves. The 30 sliders are like those on a graphic EQ; they modify the curve represented by the yellow line.

It's crucial to be able to change the degree to which the reference influences the target. With FreeFilter's morph control at 0%, you hear the original destination sound. At 100%, the two curves match. You can even go past 100%, which exaggerates any changes. Generally, settings in the 20% - 50% range almost always give better results than 100%, because then the source curve influences, rather than dominates, the destination.

With Voxengo CurveEQ (http://www.voxengo.com), you can again see the filter's frequency response, input spectrum, and output spectrum. It also includes goodies not found in the other programs: The "GearMatch" feature includes impulse responses of pieces of classic gear you can apply to a tune, and additional limiting, saturation, and voicing can further color a piece of music.

Fig. 3: Voxengo's CurveEQ has analyzed the song in the rear window, matched it to the current song in the front window, and generated a compensating response curve so that the current song's spectrum matches the reference. This curve can be tweaked further.

When you want to capture and apply a curve, you can load a reference file, or play a file (in real time) into CurveEQ and capture its response. You then load the target file you want to process, and match the two. CurveEQ generates a filter response that matches the current file to the reference (Fig. 3), which you can then tweak by dragging on the small handles. With its vintage gear and dynamics processing options, CurveEQ is intended to be more of a complete mastering solution than FreeFilter or Har-Bal. Of course, if you're not careful you can overdo things, but a hint of saturation or vintage compression can indeed add some sparkle. And as it's a plug-in, CurveEQ can work with individual tracks as well as program material, although you need to be careful about delay compensation.

Har-Bal (http://www.har-bal.com) has several interesting aspects. First, it's stand-alone, not a plug-in, and runs under ASIO, WDM, or MME, as well as on Mac OS X.

Fig. 4: Har-Bal's spectral response display.

The interface is extremely easy to use and responsive in terms of drawing curves; you can adjust peaks, average, and a mean of the two separately (Fig. 4). For example, you would bring down excessive peaks on the peak line, and bring up "holes" in the average line. Har-Bal also has a volume compensation feature so that the equalized and bypassed sounds have the same apparent volume. This allows you to base your judgments solely on what the EQ contributes to the overall sound, rather than being influenced by level differences. Another talent is the ability to match average levels among tunes.

Because CurveEQ and Har-Bal seem superficially similar, they're often lumped together as similar programs. But they actually do things in very different ways, and have very different workflows and optimizations. For pure EQ curve adjustments to fix problems, Har-Bal gets the nod. But that's all it does. CurveEQ does a lot more, including automatic curve-stealing with the ability to "morph" curves like FreeFilter, but it's not as versatile in terms of having separate control over peak and average amounts.
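Volume compensation of the kind Har-Bal offers is easy to sketch: scale the processed signal so its RMS level matches the bypassed original, so an A/B comparison reflects tone rather than loudness. The function names below are our own illustration, not Har-Bal's internals.

```python
import numpy as np

# Sketch of volume compensation: match the RMS of the equalized signal
# to the bypassed original so level differences don't bias EQ judgments.
def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def level_compensate(equalized, original):
    """Scale `equalized` so its RMS matches `original`."""
    gain = rms(original) / max(rms(equalized), 1e-12)
    return equalized * gain

original = np.random.randn(44100) * 0.1
equalized = original * 2.5            # stand-in for an EQ that added energy
matched = level_compensate(equalized, original)
```

Louder almost always sounds "better" on first listen, which is exactly why this kind of compensation matters when judging an EQ curve.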
Frankly, you kind of need to have both if you want all the features. Fortunately, both have downloadable trial versions so you can determine for yourself which one better satisfies your particular needs.

SOUNDS GOOD, WHAT'S THE CATCH?

For EQ adjustments, these are extremely useful programs. But if you're into stealing curves, be forewarned - there's a fundamental flaw in the concept. For example, I grabbed an audio reference from a Spice Girls CD (yes, I'm not ashamed to admit it, so sue me) because it had a nice, overheated kind of pop mastering approach, and I was curious how it would affect some of my cuts. There's a serious treble boost on the girls' voices to make them airy; it sounds great with the Queens of Auto-Tune, but when applied to one of my tracks, the treble boost made the overdriven guitar screechy. However, reducing the influence of the reference tamed the treble boost, trimmed the bass, and did produce a more pop-sounding curve.

Then there are times when curve-stealing doesn't really make a difference. I had a dance tune and thought hey, "Ray of Light" was a big dance hit, let's see what happens when I apply it to my tune. So I did, and...nothing. Then I realized why: I had mastered my tune with virtually the same spectral response. So does that mean I had mastered my tune as well as the big-bucks experts who did "Ray of Light"? Well, no - my tune needed a bit more high end. So a curve can point you in the right direction, but don't count on it to complete the job.

SO WHAT DOES WORK?

Using your ears to compare your work to a well-mastered recording is a tried-and-true technique, but it shortens the learning process when you can actually compare curves visually and see what frequencies exhibit the greatest differences.
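That kind of visual comparison is easy to approximate in code: measure the spectrum of your mix and of a reference in log-spaced bands (like the 30 bands of a graphic EQ), and look at the per-band difference in dB. The sketch below is my own illustration — the band layout, function names, and the FreeFilter-style morph scaling are assumptions, not any plug-in's actual algorithm:

```python
import numpy as np

def band_levels(signal, sr=44100, n_bands=30):
    """Average spectral level (dB) in log-spaced bands, like a graphic EQ."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    edges = np.geomspace(20, sr / 2, n_bands + 1)  # 20 Hz up to Nyquist
    return np.array([
        20 * np.log10(spectrum[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

def matching_curve(reference, target, morph=0.35, sr=44100):
    """Per-band gain (dB) that moves `target`'s spectrum toward `reference`'s.
    morph=0 leaves the target alone; morph=1 matches the reference fully."""
    return morph * (band_levels(reference, sr) - band_levels(target, sr))
```

As the article notes, applying the full difference (morph=1.0) tends to force the reference's character onto the mix; something in the 0.2 - 0.5 range corrects broad tonal imbalances while leaving the target's identity intact.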
I've found a few reference comparison curves for Har-Bal that work well for certain types of music: Fatboy Slim for when dance mixes are too dull, "Ray of Light" for a house music-type low-end boost, Cirque du Soleil's "Alegria" for rock music, and Gloria Estefan's "Mi Tierra" for acoustic projects. On very rare occasions I use their curves, but when I do, they're more like "presets" because they end up getting tweaked a lot. Automatic curve-stealing just doesn't do it for me, but "save me 10 minutes by putting me in the ballpark" does.

But my main use for curve-analyzing software is for stealing from myself. After mastering a music project for a soundtrack, one tune sounded a little better than the others - everything fell together just right. So, as an experiment, I subtly applied its response to some of the other tunes. The entire collection ended up sounding more consistent, but the differences between tunes remained intact - just as I'd hoped.

Another good use was when German musician Dr. Walker remixed one of my tunes for a compilation CD, but used a loop for which he couldn't get legal clearance. Rather than give up, I created a similar loop that wasn't a copy, but had a similar "vibe." Yet it didn't really do the job - until I applied the illegal loop's response curve to my loop. Bingo! The timbral match was actually more important than the particular notes I played in terms of making the loop work with the rest of the tune. This does produce a weird paradox, though: I used a piece of curve-stealing software to avoid stealing a piece of copyrighted material. I guess it's all part of living in the 21st century.

[update: Since writing this article, I no longer use curve-stealing software because I get better results from A-B comparisons, with Waves' spectrum analysis plug-ins showing curves for the two files. I've also used this same technique to make one guitar sound more like a different guitar.]
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  24. I've been trying to close an account with BofA for almost a year now, with similar results. Those people are morons. The only known way to cancel an AOL account for sure is to set it up for automatic pay, then close the bank account that's making the payment. It also gets their attention. BTW, you are 100% right, B of A sucks - and not just for their customer service. They cut the credit card limit in half for their local branch manager. In fact, they've been cutting limits left and right, and I figured out why: if people pay off most or all of their bill, B of A might not have enough loose bucks floating around to front money to all those people for 30 days. That alone is a sobering thought.
  25. So, I need to start doing some freelance something or other, because I'm straight burned out on the workplace. I hear you, and Ohio is in pretty bad shape right now. But let me offer a little bit of (hopefully) inspiration... The last big recession I went through was in the mid 70s (the ones since then, except for the current one, have been more like bursting bubbles). At the time I was writing articles for Popular Electronics and making very good bucks - it was a high circulation magazine that paid really well. Then the Editor died in a car crash, and they brought in a new guy who decided he didn't want music-related articles any more. So, my one source of reliable income was gone with no notice. I was freaked, needless to say. Where else could I sell articles?!? I went to a newsstand and saw Guitar Player. I pitched them on an article on a DIY project, and it went over really well - with times being so tight, people were more interested in building than buying. This led to writing Electronic Projects for Musicians, which led to my Guitar Player column, which led to writing the Home Recording book, and those things really launched my career, even in a down economy. None of this would have happened if Popular Electronics had continued wanting my articles!! It really is true that as one door closes, another one opens. It sounds like you were working for devious people anyway. Keep looking for that door that's opening, and I would almost guarantee that a year from now, you'll look back and be very glad you lost your job. Based on the posts I read here you're obviously a bright guy. Oh, one more thing: Get your employer to write you a letter of recommendation. He owes you.