Everything posted by Anderton

1. Edit out acoustic guitar artifacts for a "perfect" recording

by Craig Anderton

Of course, this assumes you want perfection—sometimes it's the little noises, squeaks, and glitches that add a "human" quality to a performance. However, those glitches can step over the line and become problems that distract from a part. For example, there's little point in quantizing drums to a grid if nothing sounds wrong; but if there's a hit that's annoyingly off, sure, go ahead and fix it.

The following relates to a technique I use a lot on nylon string (classical) guitar recordings, but which also applies to steel string guitar recordings in some cases. Classical guitar is an interesting instrument to record—partly because it has a rich vocabulary of artifacts, from fret buzzes to slides to fingernails scraping on the lower strings' metal windings. In fact, many samplers include samples of these sounds, which can be brought in by (typically) hitting a key harder or using a controller, to help create a more realistic emulation. But for those times when the artifacts are a distraction rather than an enhancement, there wasn't much you could do before the age of digital audio editing. Now it's possible to surgically remove, or at least reduce, many of these types of artifacts.

I first used this technique when working on a classical guitar album by an artist who had some health problems at the time. Most of his playing was exceptional, but occasionally some notes sounded "tentative." I found that in those cases, there was some sort of sound preceding the note itself, and deleting that sound made the note ring through with authority. Since then, I've used this technique with other guitarists to reduce or remove artifacts that would spoil an otherwise perfect part. If you take this to an extreme, you can almost make a classical guitar sound like it was played by a robot with perfect technique—but I don't recommend this any more than I recommend using Auto-Tune or Beat Detective on everything!

GATHER YOUR TOOLS

Doing this kind of editing requires a spectral view of the waveform (Fig. 1) so you can easily recognize the difference between the artifacts and the notes themselves, and perform the digital audio equivalent of a "window splice" in the frequency spectrum. I use Adobe Audition for this, although Steinberg WaveLab also offers an editable spectral view. With Audition, call up the file and go View > Spectral Frequency Display. Adjust the resolution as desired, then look closely at the notes. Note attacks will have a sharp, vertical line that extends from low to high frequencies. Artifacts almost invariably appear just before the note attack. You can use any of Audition's selection tools to define the artifacts as a selection; when you do, a level control appears. You can then use this to dial in the exact amount of attenuation.

Fig. 1: Note the artifacts, outlined in gray, in the upper waveform. In the lower waveform, these have been removed completely by using editing tools in Adobe Audition's Spectral Frequency view (the vertical axis is frequency with the colors indicating amplitude, while the horizontal axis is time). For clarity, each selection has a white outline that indicates what was removed; note that only higher frequencies have been removed, and the decay of the note itself has been left intact.

Surprisingly, it's often possible to remove the area completely and not be able to hear that it was removed.
Sometimes, though, you'll need to reduce the gain by a few dB rather than remove the section entirely—either to retain a realistic sound, or to leave a bit of the artifact audible while making it less obvious.

PRACTICE MAKES PERFECT

It takes a while to recognize what's an artifact and what isn't, and to determine the degree to which you can reduce it. Life is often about compromises, and this is no different; you'll find problems you can't fix, and conversely, you'll be able to fix problems you thought were unfixable. In any event, if you're willing to take the time to do this kind of detailed editing, you can produce the most amazingly clean and clear nylon string guitar parts you've ever heard, and clean up steel string issues as well.
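For those who think in DSP terms, the "window splice" boils down to attenuating a time/frequency region of a short-time Fourier transform. Here's a minimal Python sketch of that idea, assuming a mono WAV file; the file name and artifact coordinates are hypothetical, and a real spectral editor does all of this interactively.

```python
# A minimal sketch: attenuate a time/frequency box in the STFT of a recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, audio = wavfile.read("classical_guitar.wav")  # hypothetical file
audio = audio.astype(np.float64)

f, t, spec = stft(audio, fs=rate, nperseg=2048)

# Suppose the spectral display shows an artifact from 1.20-1.25 s, above 2 kHz
# (made-up coordinates). Pull that region down by 18 dB, or multiply by 0 to
# remove it completely.
time_sel = (t >= 1.20) & (t <= 1.25)
freq_sel = f >= 2000
spec[np.ix_(freq_sel, time_sel)] *= 10 ** (-18 / 20)

_, cleaned = istft(spec, fs=rate, nperseg=2048)
wavfile.write("cleaned.wav", rate, np.clip(cleaned, -32768, 32767).astype(np.int16))
```

Conceptually, this box-select-and-attenuate is what the spectral editing tools are doing under the hood, with better windowing and smoother selection edges.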
2. Can't get your bass to fit right in the mix? Then follow these tips

By Craig Anderton

If there's one instrument that messes with people's minds while mixing, it's bass. Often the sound is either too tubby, too thin, interferes too much with other instruments, or isn't prominent enough . . . yet getting a bass to sit right in a mix is essential. So, here are ten tips on how to make your bass "play nice with others" during the mixing process.

1 CHECK YOUR ACOUSTICS
Small project studio rooms reveal their biggest weaknesses below a couple hundred Hz, because bass wavelengths can be longer than your room dimensions—which leads to bass cancellations and additions that don't tell the truth about the bass sound. Your first acoustic fix should be putting bass traps in the corners, but the better you can treat your room, the closer your speakers will be to telling the truth. If acoustic treatment isn't possible, then do a reality check with quality headphones.

2 MUCH OF THE SOUND IS IN THE FINGERS
Granted, by the time you start mixing, it's too late to fix the part—so as you record, listen to the part with mixing in mind. As just one example, fretted notes can give a tighter, more defined sound than open strings (which are often favored for live playing because they give a big bottom—but can overwhelm a recording). Also, the more a player can damp unused strings to keep them from vibrating, the "tighter" the part.

3 COMPRESSION IS YOUR FRIEND
Normally you don't want to compress the daylights out of everything, but bass is an exception, particularly if you're miking it. Mics, speakers, and rooms tend to have really uneven responses in the bass range—and all those anomalies add up. Universal Audio's LA-2A emulation is just one of many compressors that can help smooth out response issues in a bass setup. Compression can help even out the response, giving a smoother, rounder sound. Also, try using parallel compression—i.e., duplicate the bass track, but compress only one of the tracks. Squash one track with the compressor, then add in the dry signal for dynamics. Some compressors include a dry/wet control to make it easy to adjust a blend of dry and compressed sounds.

4 THE RIGHT EQ IS CRUCIAL
Accenting the pick/pluck sound can make the bass seem louder. Try boosting a bit around 1kHz, then work upward to about 2kHz to find the "magic" boost frequency for your particular bass and bassist. Also consider trimming the low end on either the kick or the bass, depending on which one you want to emphasize, so that they don't fight. Finally, many mixes have a lot of lower midrange buildup around 200-400Hz because so many instruments have energy in that part of the spectrum. It's usually safe to cut bass a bit in that range to leave space for the other instruments, thus providing a less muddy overall sound; sometimes cutting just below 1kHz, like around 750-900Hz, can also give more definition.

5 TUNING IS KEY
If the bass foundation is out of tune, the beat frequencies when the harmonics combine with other instruments are like audio kryptonite, weakening the entire mix. Beats within the bass itself are even worse. Tune, baby, tune! This can't be emphasized enough. If you get to mixdown and find the bass has notes that are out of tune, cheat: Many pitch correction tools intended for vocals will work with single-note bass lines.
6 PUT HIGHPASS FILTERS ON OTHER INSTRUMENTS
To make for a tighter, more defined low end overall, clean up subsonics and low frequencies on instruments that don't really have any significant low end (e.g., guitars, drums other than kick, etc.). The QuadCurve EQ in Cakewalk Sonar's ProChannel has a 48dB/octave highpass filter that's useful for cleaning up low frequencies in non-bass tracks. A low cut filter, as used for mics, is a good place to start; the steeper the slope, the better. By carving out more room on the low end, there will be more space for the bass to fit comfortably in the mix.

7 TWEAK THE BASS IN CONTEXT
Because bass is such an important element of a song, what sounds right when soloed may not mesh properly with the other tracks. Work on bass and drums as a pair—that's why they're called the "rhythm section"—so that you figure out the right relationship between kick and bass. But also have the other instruments up at some point to make sure the bass supports the mix as a whole.

8 BEWARE OF PHASE ISSUES
It's common to take a direct out along with a miked amp, then run them to separate tracks. Be careful, though: The signal going to the mic will arrive later than the direct out, because the sound has to travel through the air to get to the mic. If you use two bass tracks, bring up one track, monitor in mono (not stereo), then bring up the other track. If the volume dips, or the sound gets thinner, you have a phase issue. If you're recording into a DAW, simply slide the later track so it lines up with the earlier track. The timing difference will only be a few milliseconds (i.e., roughly one millisecond for every foot of distance from the speaker), so you'll probably need to zoom way in to align the tracks properly (see the sketch after this article).

9 RESPECT VINYL'S SPECIAL REQUIREMENTS
Vinyl represents a tiny amount of market share, but it's growing, and you never know when something you mix will be released on vinyl. So, if your project has even a slight chance of ending up on vinyl, pan bass to the precise center. Bass is one frequency range where there should be no stereo imaging.

10 DON'T FORGET ABOUT BASS AMP SIMS
You'll find some excellent bass amp sims in Native Instruments' Guitar Rig, Waves GTR, Line 6 POD Farm, and Peavey's ReValver, as well as the dedicated Ampeg SVX plug-in (from the AmpliTube family) offered by IK Multimedia. IK Multimedia's Ampeg SVX gives solid bass sounds in stand-alone mode, but when used as a plug-in, can also "re-amp" signals recorded direct; the screen shot shows the Cabinet page, where you set up your "virtual mic." These open up the option of recording direct, but then "re-amping" during the mix to get more of a live sound. You'll also have more control compared to using a "real" bass amp. Even if you don't want to use a bass sim as your primary bass sound, don't overlook the many ways they can enhance a physical bass sound.

Craig Anderton is Editor in Chief of Harmony Central and Executive Editor of Electronic Musician magazine. He has played on, mixed, or produced over 20 major label releases (as well as mastered hundreds of tracks), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
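A quick sketch of the arithmetic behind tip 8: converting mic distance into the time (and sample) offset you'd expect to see between the DI and mic tracks. The distance value is hypothetical, and the speed of sound is the usual room-temperature approximation.

```python
# Estimate the mic-to-DI delay for a given mic distance. Sound travels
# roughly 1,130 feet per second at room temperature, i.e. close to the
# "1 ms per foot" rule of thumb in the article.
SPEED_OF_SOUND_FT_PER_S = 1130.0

def mic_delay(distance_ft, sample_rate=44100):
    seconds = distance_ft / SPEED_OF_SOUND_FT_PER_S
    return seconds * 1000.0, round(seconds * sample_rate)

ms, samples = mic_delay(2.5)  # hypothetical: mic 2.5 feet from the cabinet
print(f"{ms:.2f} ms, about {samples} samples at 44.1kHz")
# Slide the mic track earlier by roughly this amount, then fine-tune by ear.
```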
3. Let there be light – for DJs, pedalboard fans, laptop jockeys, and band & orchestra

www.mightybright.com

by Craig Anderton

Sure, it's great fun to review some hot new guitar, amp, synthesizer, software program, or other tantalizing piece of gear. But sometimes, it's important to scope out the kind of modest accessories that range from surprisingly useful to gig-savers. At the recent Frankfurt Musikmesse, I stopped by the Mighty Bright booth to get the scoop on how they outmaneuvered some counterfeiters, but ended up checking out several of their new products. The proverbial light bulb went on over my head (which I suppose is pretty appropriate), and I thought "these would be cool to review for the next newsletter." So here we are with the Stage Light, USB Light, and Orchestra Light.

COMMONALITIES

All the products being reviewed use bright LEDs whose life is rated at "up to" 100,000 hours. This translates to over ten years of continuous use; in other words, if you play a two-hour gig every day of your life and use the lights for the entire gig, you'll be able to gig for 136 years. But let's assume "up to" is off by half the rated hours, or you do 4-hour DJ sets. You can still gig for 68 years straight, so I think you're covered. I guess I'll need to leave these to my heirs, unless I break them accidentally. But that might be somewhat difficult. The lights are made in China (ironic, given that the counterfeiters they busted in Frankfurt were also allegedly Chinese), but they neither look nor feel cheap, and have some cool "extras" we'll cover when describing each light. The models being reviewed here are black (except for the head of the USB light), so they pretty much blend in to a stage setup.

STAGE LIGHT $34.99 MSRP, $24.99 street

For DJs, this basically replaces the gooseneck light on older decks; the dual-head design has two LEDs per head. One head uses red LEDs, while the other uses white. For maximum illumination use the white; to avoid interfering with your night vision, use the red . . . or use both. This isn't really a new product; its genesis, the Duet 2 Music Light, is identical but has only white LEDs. This led (or maybe I should say, LED) to the Pedalboard Light, which is designed to illuminate a pedalboard or rack so you can make sure you're hitting the right button at the right time. It's actually identical to the Stage Light, but someone at Mighty Bright got the, uh, bright idea that the Pedalboard Light would be perfect for dual-deck DJ setups, so the company got another product without much effort beyond marketing to a totally different channel (they should just bite the bullet, and call it the DJ Light). So you can consider this review as relevant to all three products.

Mounting options are a spring-loaded clamp (with rubber pads on the clamp contact points—definitely a thoughtful touch), or you can use a Velcro® strip along the bottom, which is handy with pedalboards that use this kind of "hook and loop" mounting. There's also an optional-at-extra-cost ($4.99) cradle base with a magnet for mounting to metal, which also allows using the Stage Light as a free-standing light. Power comes from three AAA batteries (included), which are rated for about 20 hours under normal use. There's also an AC adapter jack, and Mighty Bright sells an accessory AC adapter.
It's a switching type that accommodates 100-240V and 50/60Hz, so even though Mighty Bright sells three versions with different plugs for the US, UK, and European markets, any one of them will work worldwide with the appropriate plug adapter. Of course you can also use rechargeable batteries if you don't want to be tied to the AC line and want to be more eco-friendly by not using disposable batteries, but they won't be recharged if they're inside the light and you hook up the AC adapter; you would need to recharge them with an external charger.

The heads mount to flexible goosenecks, which not only let you position them as desired, but can give them a cool "War of the Worlds" look. One of the things Mighty Bright does at trade show demos is to bang the heads against the nearest hard surface, and they haven't broken any that I've seen. The clamp pivots on a metal, not plastic, shaft; the switches on the heads are recessed, so you can't break them off. Overall, I don't see any reason why these wouldn't last a long time, as long as you make sure the batteries don't leak, and you remove them if the lights aren't going to be used for a while.

LED USB LIGHT $10.99 MSRP

When I first started using a laptop for DJ sets, backlit keyboards weren't yet in vogue, so I was delighted when I found a USB light at MacWorld. But it was heavy, used an incandescent bulb, had an on-off switch that was less than predictable, wasn't all that flexible, and I was always concerned that its weight was stressing the USB connector. It was better than nothing, but the LED USB Light is better in all respects: lighter, more flexible, LED-based, cuter, and with vastly superior light dispersion. It has an on-off slide switch on the top (I like the one on the Stage Light better, but I'll cope). In addition to being useful with laptops, I found two other uses for the LED USB Light. It's great to stick on the rear of your computer if it's under a desk so you can see what's going on with the rear-panel cables and connectors; and with more keyboards sporting USB ports, assuming the port is not in use, the LED USB Light can help illuminate the front panel.

ORCHESTRA LIGHT $74.99 MSRP, $69.90 street

This is a revamp of the original Orchestra Light, featuring several small improvements, including an on-off switch that allows switching banks of Orchestra Lights on and off, and a more secure connection to the included AC adapter. The Orchestra Light has nine white LEDs and runs off three AA batteries, or the AC adapter. The light is very uniform (Mighty Bright seems to have the lens diffusion thing down, with no shadowing or ghosting), and it can illuminate two pages of sheet music with a consistent, soft glow that's easy on the eyes. The switch allows for two different brightness positions. The dim mode provides about 24 hours of battery life under normal use, while full brightness gives about 16 hours.

Regarding the packaging, it's classy: The light comes in a small fabric zipper bag with internal straps to hold the light in place, and a pouch to hold the AC adapter. I've read reviews online commenting about the AC adapter jack not being a secure fit, but assuming the reviewers hadn't gotten a counterfeit unit by mistake (they look really similar; the quality isn't), that's been addressed in the latest incarnation. The original on-off switch (a push-on/push-off type) has been replaced by a latching type.
This means multiple units can have their AC adapters plugged into a barrier strip and be turned on and off simultaneously (e.g., the string section could "go dark" if needed, then come back up all at once). There's not much more to say; the AC adapter is global if you add an adapter to provide the right kind of wall plug, and the clamp mount includes pads on the clamp contact points (you could also hold the base in place with Velcro®). The three AA batteries are also heavy enough that, assuming the gooseneck is set up for proper balance, the light can work as a free-standing light. It's simple, it works, it's effective, and the price is right, given what you receive in return.

CONCLUSIONS

Let there be light . . . these are truly handy little suckers. Mighty Bright makes a bunch of other lights as well, but these are particularly well-suited to everyday musical applications. They may not be super-duper high-tech pieces of gear, but they're definitely way useful.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
4. Whether you're installing a new operating system or just repairing one, heed these tips

by Craig Anderton

What do the Mac's Snow Leopard and Microsoft's Windows 7 operating systems have in common? Answer: Both came out over three years ago. I'm not sure what that is in dog years, but it's about 60 in computer years. And whether you like it or not, at some point you'll have to upgrade if you want to run particular programs that are available only on the latest and (hopefully) greatest operating system.

But there's more to the story, because sometimes you'll need to re-install an existing operating system. You'll know you need to do it because one day, you might be unlucky enough to turn on your computer to boot up your DAW or play a soft synth, and get an error message that goes something like:

0234890 Bad F-Line Trap KernalSanders 0002:VX:RTFM:BURP:666

Translated into English, this means: You're screwed. This type of catastrophe tends to be more of a problem with Windows operating systems thanks to the legacy of DOS (acronym for "Dumb Operating System"), where the phenomenon is known as the "Blue Screen of Death." For the Mac, the equivalent is the multilanguage gray screen that tells you that you're hosed, you need to reset, and you probably lost some data. Without getting into a Windows vs. Mac debate (remember, I use and love/hate both), my experience is that my Mac OS X machine crashes/freezes more often than my Windows 7 machine, but Windows is more likely to die a slow death that renders a computer useless. (For those brave enough to have upgraded to Windows 8, we'll see how that holds up over time.) Most of my serious Mac problems have been hardware-related (although I've had to do the ever-popular "clean install" a couple times), but Windows can go psychotic from software glitches. This is because Mac developers operate under a fairly strict set of standards imposed by the Mac OS, which runs only on Mac hardware. With Windows, there are so many possible hardware/OS combinations that programs that work fine with one set of hardware and software can behave unpredictably with others.

TIME FOR A NEW OS

Someday you'll install a new OS or re-install an old one, for any one of several reasons:

• You want to upgrade to a new and improved OS, like moving from Windows 7 to Windows 8, or to the 64-bit world of OS X 10.8 (Mountain Lion).
• Your main drive is as screwed up as Darth Vader on Prozac, and your only hope is to re-format the hard drive and start all over again.
• You've installed and de-installed so many programs that your hard drive is loaded with junk, fragments of uninstalled programs, weird shared files, and so on, leading to unstable (although not necessarily fatal) conditions.

When you do change operating systems, it will be a major pain to re-create your previous environment – particularly if a long time has elapsed since you first installed the system, and a zillion updates and tweaks have occurred since then. But there are some ways to make the process go a lot more smoothly.

AN OUNCE OF PREVENTION...

The best time to prepare for installing a new OS is well before you need (or want) to do the install.

Use separate hard drives for programs and data. Okay, you knew this. But note that quite a few hard disk recording programs have a default location for storing files that leads back to the drive holding the program.
After transferring any necessary data to the data drive, check any Preferences or Options menus for audio storage, and point them to the folder on the data drive.

Organize downloaded files and updates. A new OS means re-installing programs. Doing this from a distribution CD is not difficult, but these days, it's likely that at least some of your programs were purchased online. And you've almost certainly downloaded updates and patches. Follow two steps toward organization. To organize distribution CDs and DVDs, buy a disc storage "briefcase." Store each disc in a paper disc envelope; write any serial numbers, version numbers, passwords to private user web sites, and other useful info on the back. To organize downloaded programs, create a folder on your data drive specifically for downloaded programs and updates. Create a sub-folder for each program, and use it to hold the original downloaded file (usually Zip format for Windows, StuffIt for Mac), plus readme files, updates, serial numbers, unlock codes, etc. (see the sketch later in this article).

Write down any system tweaks you've done. You're probably going to have to do them again.

Before you install the new OS (or replace the OS that went crazy from too many installations/deinstallations), re-format the hard drive. This erases everything, but then again, you backed up all your data already – right? Besides, this gives you a great opportunity to scan your hard disk for errors and bad sectors (you don't want those puppies showing up after you have the new OS in place), and partition the drive if desired. Even better, copy your hard drive to another hard drive, just in case you forgot to transfer a file or two.

Make sure you have all necessary installation discs, files, and product codes in hand before you start installing the new OS. You don't want to re-format the drive, then realize you can't install your OS over it because you lost some serial number.

Start the re-loading process. After installing the OS, immediately install a way to go online so you can check for any operating system updates. Yes, it's a drag to use a music computer for surfing the net, but these days, it's not really possible to keep on top of programs without downloadable updates and purchases.

REMOVABLE DRIVES AND DUAL-BOOT SYSTEMS

I strongly recommend hedging your bets, particularly if your existing setup is working, and you want to move over to a new OS. You'll often find that certain programs aren't compatible with the new OS for some reason, or need a fix that hasn't been released yet. There are two main options for dealing with this. With the Mac, you can just change the Startup disk and boot from that. Nice. With Windows, you can set up dual-boot systems. For example, you can install Windows 8 and start migrating programs over to that; if you run into snags, then boot into your existing OS instead. Dual-booting is a common procedure, and it's not hard to find programs designed for partitioning and dual-booting. Another option is to install a removable drive bay so you can just swap out hard drives as needed. If your previous OS and suite of programs were working okay, everything will remain on the disk, and you can just plug that disk into the bay should you need to access it again (always make sure power is off when swapping drives – they are not hot-swappable). You can even create different boot drives optimized for different work environments and situations.
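To make the downloaded-programs folder scheme concrete, here's a throwaway Python sketch that builds one sub-folder per program, each with a notes file for serial numbers and unlock codes. The drive path and program names are hypothetical; adapt to taste.

```python
# Build the per-program folder structure described above, with a notes.txt
# in each sub-folder for serial numbers, unlock codes, and vendor logins.
from pathlib import Path

DOWNLOADS_ROOT = Path("D:/Installers")  # hypothetical data-drive location
PROGRAMS = ["ExampleDAW", "ExampleSynth", "ExampleAmpSim"]

for name in PROGRAMS:
    folder = DOWNLOADS_ROOT / name
    folder.mkdir(parents=True, exist_ok=True)
    notes = folder / "notes.txt"
    if not notes.exists():
        notes.write_text("Serial number:\nUnlock code:\nVendor login URL:\n")
```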
DEALING WITH COPY PROTECTION

But there's one problem with multiple drives: copy protection routines that require "authorizing" a single hard drive. Because I use only legal software, during a recent OS flush I had a crash course on which types of copy protection were easiest to deal with. If you've been diligent about organizing your distribution discs and downloaded programs/updates, entering a serial number or inserting the original distribution disc is not a problem. And although some people really don't like hardware dongles, I've been won over – they can be a transparent solution that's neither computer- nor drive-specific, and allows you to back up your program. In some cases, you'll need to request a re-authorization from the manufacturer. Usually they err on the side of the customer, assuming you've registered your software and don't request re-authorizations too often.

SAFETY FIRST

Acronis and Norton offer programs that can image your main drive. Once you've installed your operating system, image the drive. After installing your main programs and verifying that everything works properly, image it again. In fact, it's a good idea to image your drive periodically when your computer is happy and working well, because then you can always return to that state.

And here's one final piece of advice: Allow plenty of time to re-create your system. This is another reason to keep your old system around, whether via removable drive or dual booting. That way you can get Real Work done as you debug your new OS. Good luck!

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
5. Thought you couldn't afford a Royer ribbon mic? They'd like to convince you otherwise

by Craig Anderton

Ribbon microphones, which had fallen out of favor due to cost and fragility, are coming on strong again because prices are lower—and today's models are far more robust than their ancestors. So what's the big deal with ribbon mics? For starters, they're a variation on dynamic mics, but use a different technology where a thin ribbon (typically aluminum) sits between the two poles of a permanent magnet. Sound pressure hitting the ribbon element makes it move within the magnetic field, inducing a small current that produces the output voltage. Until 2001, when Royer introduced the phantom-powered R-122, ribbon mics were passive devices; in fact, with older designs, applying phantom power was often a death sentence. Newer mics are nowhere near as delicate, and Royer has even designed tube ribbon mics like the R-122V.

A ribbon mic tends to have a detailed, smooth high end compared to dynamics and condensers because the thin ribbon element is extremely sensitive and has excellent transient response. This makes it well-suited to translating complex material with lots of harmonics (brass, guitar amps, etc.) as well as instruments with significant transients. Traditionally, ribbon mics have been the mic of choice for brass, piano, percussion, strings, acoustic guitar, and some vocals; lately, they're being used a lot for drum overheads and guitar amps. To learn more about ribbon mics in general, Royer has an informative description on their site. In fact, the Royer site has a boatload of tips on recording with ribbons, including comments from Bruce Swedien and coverage of topics like why combining ribbons with condensers in multi-miking situations can be a good thing. There are also plentiful audio examples, as well as videos. Although a ribbon's pickup pattern can be modified in a variety of ways, the "default," natural ribbon mic pattern is figure-8. This makes ribbons particularly suitable for Blumlein pair configurations (for an excellent article on Blumlein pair recording, see Phil O'Keefe's "Blumlein Pair Stereo Miking for Better Ambience and Imaging").

LITTLE BROTHER

Royer's R-121, introduced several years ago and typically selling for around $1,300, has become a de facto standard ribbon mic and is many engineers' "secret sauce" when recording guitar amps (you know you've hit the big time when amp sim software includes your mic as one of the mic model options). However, the price point puts it out of the reach of smaller studios, especially when you want a matched pair. So, Royer introduced the R-101 as the R-121's little brother. It typically sells for around $800, so when you get two of them, you're saving $1,000 compared to getting two R-121s.

But what are you giving up? The biggest difference is that while the R-101 uses the same basic ribbon assembly as the R-121, the body (which is somewhat larger and heavier than the R-121's) and the inside frame are made offshore, then shipped back to Royer for assembly with the other components. The package includes a shock mount, aluminum case (unlike the R-121's gorgeous wooden case), and protective mic sock. The shock mount does the job, but the R-101 is a side-address mic, so keep that in mind when doing mic positioning. Speaking of positioning, the figure-8 pattern has some interesting implications. The null point really is a null point, with extreme rejection.
While you can of course point the mic at the sound source, how you position the mic can also determine room sound pickup. For example, take two guitar amps and put them side by side, separated by about two feet. Place the Royer in the space between them, about two feet in front of the amps, and point the null between the two amps. You'll pick up the sound of the two amps, but you'll also get a lot of room sound in a smaller room with hard surfaces (like what I use for guitar amps). By rotating the mic to favor one amp or another you'll get a different mix of the two amps, and I found that reversing the speaker connections on one of the amps to change the phase gave an entirely different, fuller sound, because the phase then matched with respect to mic pickup from the rear (as described in the next paragraph).

The sonic variations you can get on a guitar amp simply by moving the mic around are substantial as well. Royer points out on their site that if you use the "wrong" side of the mic (i.e., opposite from the logo) within one meter or so of the sound source, you'll get a brighter sound. I tried this with acoustic guitar, which Royer recommends as a useful option, and it indeed picked up a variation on the ribbon sound—there was still that trademark transient response and smooth high end, but the overall timbre was somewhat thinner and brighter. The same was true when I tried this with vocals. You'll need to reverse the phase in a multi-miking situation because the sounds are hitting the back of the ribbon, but that's not a big deal.

Perhaps one reason why the R-101 works so well for amps is that it has a strong midrange. This isn't so much at the expense of other frequencies; and because bi-directional mics exhibit the proximity effect to a greater degree than omnidirectional types, you can "dial in" more or less bass with mic placement. I found that if I put the R-101 right up against an amp's speaker, the bass boost was noticeable (although somewhat less than with the R-121)...sometimes it sounded right for a song, but most times I EQ'ed out the low end a bit or moved the mic back somewhat. I did notice with acoustic guitar that the sound could get really boomy if the mic was too close to the sound hole. As always, mic placement is crucial, but perhaps even more so than with other mic types—if you haven't worked with ribbons before, it really pays to take your time, experiment, and learn the mic's character. I'd go so far as to apply the same rule as I do for software—never just open the package and expect to use it on a session! Do your homework. And don't forget the pop filter...

I didn't really push the amp to where it was butting up against the Royer's 135dB max SPL, for two reasons: With guitar amps, I think there's a "sweet spot" between too soft and too loud; while you need to turn up an amp a certain amount to "open up" the sound, if you turn it up too high, the amp's electronics sound more "splattered" than "focused." I also didn't want to blow out the ribbon, although based on what I've heard from other engineers, you really can hit Royer's ribbons pretty hard (and if you get totally out of control, in addition to a lifetime warranty on the mic itself, Royer will give you one "re-ribbon" for free). Much is made of the generally low output of ribbon mics, but with a +55dB mic pre, I was able to hit levels above -10dB with the R-101. +60dB would take levels to 0 with close-miking, and a +70dB preamp would work for recording low-level ambient sounds.
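Gain staging in dB is just addition, so the arithmetic behind that last paragraph is easy to sketch. The source level below is an assumed figure, chosen to be consistent with the numbers in the review:

```python
# Output level (dBFS) = source level at the converter + preamp gain.
# Assume a close-miked R-101 landing around -60 dBFS with no preamp gain
# (a made-up figure consistent with the levels quoted above).
source_dbfs = -60.0

for preamp_gain in (55, 60, 70):
    print(f"+{preamp_gain} dB preamp -> about {source_dbfs + preamp_gain:+.0f} dBFS")
# +55 dB -> about -5 dBFS (i.e., "above -10dB")
# +60 dB -> about 0 dBFS; +70 dB leaves gain in reserve for quieter,
# low-level ambient sources.
```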
CONCLUSIONS

I had done very little work with ribbon mics prior to checking out the R-101, so I'm not exactly an authority who can comment on sonic nuances compared to other ribbon mics. However, compared to dynamics and condensers, I do feel qualified to make some generalizations. The high end is of course far more detailed than a dynamic mic's; a good dynamic mic tends to have a "beefier" low end, but you can use the proximity effect to advantage with the R-101 if you want more of "that" sound. Compared to a condenser, the high end seems smoother, and a little less "brittle." I tried some narration with the R-101, and it sounded both smooth and rich—although a good condenser mic seems to "flatter" my voice somewhat, as it helps the vocals "cut" a little more. Interestingly, I could get a similar effect with the R-101 by processing it through the Waves Aural Exciter plug-in. For singing, though, the R-101 complemented the bassier nature of my voice, making it sound richer than what I get from condensers. If I ever sing country (doubtful, but never say never!), I think the R-101 would be a fine personal choice. This is truly a "sweet," smooth-sounding mic.

Nonetheless, the main area of interest to me was using the R-101 for guitar amps (I don't get a lot of trombone players or harpists going through my studio!). While there's zero question that its sound with guitar amps is fabulous (especially if you have the distortion cranked up), I was pleasantly surprised by just how much audio experimentation I could do with the mic placement—using the back side of the mic, rotating the mic to pick up more or less room sound, and the like. In a way, the R-101 was kind of like having mechanical EQ built in, along with a delay line that picked up more or less ambience. Overall, I'd say that if you have the bucks, an R-101 would be a fantastic addition to a mic locker for getting a different range and character than you can get with dynamics and condensers. Granted, the sound isn't quite up to the level of the R-121, but the differences are quantitative rather than qualitative; Royer's web site has lots of audio examples, as well as a comparison of the two, so you can decide for yourself. You're essentially getting 95% of an R-121 for 60% of the price—and that's a tough deal to beat when it comes to ribbon mics.

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
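A footnote on the figure-8 behavior discussed in this review: a pressure-gradient ribbon's pickup follows roughly a cosine law, which is why the side null is so deep and why small rotations shift the amp/room balance so dramatically. A quick numerical illustration:

```python
# Figure-8 pickup is approximately cos(angle): full level on-axis at the
# front, full but phase-reversed level at the rear, and a deep null at
# 90 degrees off-axis.
import math

for angle in (0, 30, 60, 85, 90, 120, 180):
    g = round(math.cos(math.radians(angle)), 10)
    level = 20 * math.log10(abs(g)) if g else float("-inf")
    tag = "in phase" if g > 0 else ("null" if g == 0 else "phase-reversed")
    print(f"{angle:3d} deg: {level:7.1f} dB ({tag})")
```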
6. Yes, there's more than one way to do guitar amp / cabinet simulation in Reason

by Craig Anderton

Because Reason doesn't accept external plug-ins, you can't use conventional guitar amp simulation software within Reason. However, Reason is sufficiently flexible that you can construct a guitar amp/cabinet simulator using only two of its available processing modules – here's how.

1. Go to the Create menu, and select Dr. Octo Rex Loop Player.
2. Click on the Dr. Octo Rex folder button (Browse Patch), then choose a dry guitar as a signal source for testing the amp sim we're about to create. A good choice is the ElGt_Faith_G_085.rx2 guitar loop. To find it, navigate to the Reason Factory Sound Bank, then go Dr. Rex Instrument Loops > Guitar Loops > Telecaster Rhythm 085 BPM.
3. Make sure Enable Loop Playback is on.
4. Go Create > Scream 4 Distortion, then go Create > MClass Equalizer.
5. Hit Tab, then verify the patching on the back: The Dr. Octo Rex outs go to Scream 4, and its outs go to the MClass Equalizer. The MClass Equalizer outs go to your mixer or output.
6. Hit Tab again to return to the front panel. Click on the Dr. Octo Rex "Run" button so you can hear the loop play.
7. Guitar cabinets don't put out much above 5kHz. Enable the MClass EQ high shelf, set Frequency to around 5kHz, and to add a little resonance, set Q around 1. Set Gain to minimum. This rolls off the highs and produces a little "bump" around 2kHz.
8. In Scream 4, enable "Body." Types A, B, and C are different guitar cabinet types; Scale chooses the size, with clockwise settings giving smaller cabs. For now, set Type = C, Reso and Auto = 0, and Scale between 100 and 127. Note that in the Body section, the Auto parameter adds an envelope follower effect. While it doesn't contribute to a more realistic guitar amp sound, it can provide some cool effects if you're not concerned about "authenticity."
9. In Scream 4, enable "Damage" and choose the type of distortion characteristics you want. The Damage Control parameter has a huge effect on the sound, so experiment; the settings shown in the screen shot give a strong overdrive sound, but also try the Distortion, Fuzz, and Tube algorithms—varying P1 and P2 to optimize—for more distorted effects. After choosing your distortion algorithm, re-visit step 8. Changing the Type, Scale, and Reso parameters lets you "customize" your cabinet for the chosen type of distortion.

And that's all there is to it – aside from tweaking it to optimize the sound to your liking. Enjoy your amp sim! (For a rough idea of what this chain is doing in signal-processing terms, see the sketch at the end of this article.)

Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
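If it helps to see the idea outside of Reason's rack, the chain above is essentially "nonlinear distortion into a cabinet-style high-frequency rolloff." Here's a conceptual Python sketch of that signal flow; it illustrates the concept, and is not a model of Scream 4 or the MClass EQ.

```python
# Conceptual amp/cab chain: soft-clip the signal ("Damage"), then roll off
# energy above ~5 kHz ("Body" plus the shelving EQ).
import numpy as np
from scipy.signal import butter, lfilter

def amp_sim(audio, sample_rate, drive=8.0, cutoff_hz=5000.0):
    distorted = np.tanh(drive * audio)                     # overdrive stage
    b, a = butter(4, cutoff_hz / (sample_rate / 2), btype="low")
    return lfilter(b, a, distorted)                        # crude "cabinet"

# Example: run a 440 Hz test tone through the chain.
sr = 44100
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
processed = amp_sim(tone, sr)
```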
7. A behind-the-scenes glimpse at what goes on inside your DAW

By Dan Goldstein

As home studio musicians, many of us know our gear intimately. With patience, we learn how to navigate our digital synthesizer's complex web of menus, and we've compiled our list of favorite go-to patches for any musical occasion. We know which reverb effects sound great and which ones sound tinny and fake. We have our MIDI and mixer channels sorted out just the way we like them. Maybe we've even opened up our guitars and wired in new pickups, or fixed a bad pot or solder connection in our vintage tube amps, or ventured inside an analog synth to re-seat a chip or replace a bad slider. When we know how our gear works and what makes it tick, we can make better music. It's that simple.

But what do we really know about our DAWs? What makes our favorite DAW tick? We can't open it up and re-wire it. There's no technical service manual available so we can repair it when it's broken. For most of us, our DAW is a black box, an icon on our screen that can produce satisfying musical results, but which sometimes can be very frustrating. Why does it crash? Why does it work perfectly in our friend's studio, but not in ours? Why can we click the same button 50 times and everything works fine, but on the 51st push the program freezes? As the lead designer and developer of Acoustica's Mixcraft recording software, I certainly can't explain the reason for every flaw with your DAW. But I can give you an inside look at the sorts of tools that are used to build a DAW, and the challenges that software programmers face when trying to design the perfect recording program.

TRACKING THE WILD BUG

Bugs are frustrating enough for end users, but they're incredibly frustrating for manufacturers as well. First we have to know about them, then we have to reproduce them, and then we have to fix them . . . and hope that any bugs stay fixed if, for example, there's an update to your operating system. I'm sure Sherlock Holmes would have related to this, as we have to collect as many clues as possible, and make logical deductions. One bug stands out in my mind as representative of the meanest, most resistant bugs we've ever encountered. This was no ordinary bug; it was extraordinary in that it was theoretically impossible. There was no way the bug could exist, nothing that Mixcraft could do to cause it, and nothing we could think of to fix it. Yet the bug persisted, and user complaints kept trickling in.

The problem was that Mixcraft users running Windows XP and loading certain plug-ins—specifically, plug-ins that used an iLok security dongle—were experiencing reboots and blue-screens when trying to load these plug-ins into Mixcraft 5. This was only happening for people using Windows XP; the same plug-ins loaded perfectly in Windows 7 and Vista. Complicating matters even further, the same plug-ins seemed to load fine in other DAWs on Windows XP. Somehow, Mixcraft was causing the problem. As you can imagine, that kind of thing doesn't make me happy. As we investigated the bug's cause, we discovered that the blue-screen problem occurred when Mixcraft asked Windows to load the plug-in into memory. It was that simple: We would request that the Windows XP operating system load the plug-in into memory, and then Windows would reboot. We had no control over this. There is, in fact, no other way to load a plug-in. It was simply out of our hands.
There was nothing we could be doing that would cause the operating system to fail catastrophically when loading the plug-in into memory, yet that's what was happening. Worse yet, other DAWs were loading the same plug-in on the same computer without any problems at all. But then it got even more confusing, because the same customers experiencing problems could load the same plug-ins perfectly into Mixcraft 4.5. That's the impossible bit. Something occurred between the end of Mixcraft 4.5 development and the beginning of Mixcraft 5.0 development that caused these plug-ins to fail, even though failure was, at least in theory, not possible.

IT'S A JUNGLE OUT THERE . . .

Perhaps the most difficult challenge when programming a DAW involves working seamlessly with audio drivers, plug-in effects, and virtual synths written by dozens of different companies, built by hundreds of different programmers with different skill levels and their own clever ideas about how to build software. There are, for example, strict specifications regarding how plug-ins should work. Steinberg, the developers of the VST specification, went to great lengths to document the VST specification to ensure that third-party plug-ins were built to uniform standards. Many programmers, naturally, view this as a challenge or perhaps an insult, and go out of their way to make sure that their plug-ins will not work (or, worse, will crash) if the rules are followed. Even Steinberg's own plug-ins sometimes violate the VST specification. As a result, for a DAW to support a wide array of VST plug-ins, programmers must go to great lengths to ensure that these plug-ins receive the information they want in the order they require it. Certain VST features must be handled very carefully because some plug-ins will not implement these features properly. And when a plug-in does crash, which can be due to something as simple as a bug in the plug-in, we have to go to special lengths to ensure that the crash doesn't take down the DAW. No one wants to lose their entire recording project simply because a plug-in stopped working.

The same is true for audio drivers. A music program that works perfectly with a thousand sound devices may fail when working with a new device whose drivers are not written properly, or which were successfully tested with a different DAW that does things in a slightly different order. When a driver or plug-in crashes and takes down the DAW, most users will immediately assume the crash occurred because of a flaw in the DAW, which makes the manufacturer look bad. So a great deal of effort goes into ensuring compatibility with the widest range of audio drivers and software plug-ins possible. Still, a flaw in a plug-in can corrupt system memory, return mathematically nonsensical results to the DAW, or even leave the computer's floating-point math processor in an unstable state. Any of these issues can cause the DAW to fail irrecoverably.

BEHIND THE SCENES: DIVIDE AND CONQUER

Mixcraft is written in the C++ programming language, and contains hundreds of thousands of lines of computer code, made up of millions of characters of text. In order to maintain and manage all of this code, the programming instructions are broken up and logically organized into hundreds of different files. For example, code that draws a button might be found in a file called Button.cpp. Code that resamples digital audio might be found in Resampler.cpp. This makes it much easier to maintain the code.
DAWs can involve hundreds of thousands of lines of code, all being worked on by multiple programmers. There are several programmers working on the code at any one time, and every new feature means making changes to potentially dozens of different files, and adding many new files in the process. And there's a twist: I do most of my programming from my home studio near Las Vegas, Nevada, while the rest of our Mixcraft developers are located in California. Given all of that, how are programmers in different geographical locations able to develop the same program at the same time, while making changes to the same files?

The answer is surprisingly easy: serious programming teams store their code in a repository. This is a server on the internet that acts as a central database for all of the files that make up a program. Think of it as a library. When I want to make changes, I "check out" the code, make my changes, then "check in" the changes. At any time I can "check out" the latest changes to the code to see what other improvements my team has "checked in." In the rare instance where multiple programmers have made changes to the same text in the same file, the code repository will detect a conflict, and the programmer will have to figure out what the correct code should be. Part of the beauty of this system is that it's not just a storage bin for code. The repository also acts as a revision control system for the code. That may sound complicated, but it's not—it simply means that the database remembers every change that was ever checked in to the repository. So at any time it's possible to travel back in time, so to speak, and see what the code looked like on any given date. That makes it much easier to find what went wrong, when, and why.

MISSION IMPOSSIBLE

In the case of this "impossible" bug, however, we found ourselves with a dilemma. A blue-screen error or system reboot simply cannot be caused by an ordinary application glitch. These problems occur deep at the driver level, and are caused by a communication failure between the hardware and the operating system. While it was evident that the blue-screen was happening because of the plug-in's use of the iLok hardware, we were still completely puzzled: how could Mixcraft's code possibly interfere with the plug-in's ability to talk to the iLok device? And what could possibly have changed between Mixcraft 4.5 and Mixcraft 5 that could cause this problem? Who would have thought that this, coupled with an obscure issue involving a 10-year-old operating system, could bring down a DAW?

We had a Windows XP computer and a copy of East West Symphonic Orchestra Silver Edition (a fantastic plug-in that uses the iLok system), which we could use to reproduce the problem. We knew that Mixcraft 4.5 did not have the bug. And we knew that we could use our code repository's revision control system to see what the code looked like on any given date. So we began looking for clues. I checked out the state of the code on May 1st, 2009, a date early in the development of Mixcraft 5. This code had changed very little from the final release of Mixcraft 4.5. When I compiled this code and ran it on the Windows XP machine, I discovered that it was able to load the plug-in without crashing. So, we had a good starting point. I then checked out the code as it existed on July 1st, 2009. At this point in the development, effects automation had been added. This code, it turned out, was still able to load the plug-in without fail. So, I skipped ahead to September 15th, 2009.
Mixcraft 5's mixer interface had been added, along with improvements to the audio pan control. When I tried to load the plug-in with this version of the code, the computer crashed. Aha! Something changed between July 1st and September 15th that caused the crash to occur—but what? I went back a month, to August 15th, 2009. Basic support for loading video files onto a video track had been added to the code at this point in time. I tested the East West plug-in—and it caused a system crash. Could the addition of video features somehow be causing the crash? It was certainly a possibility, though I still couldn't fathom how. The next step was setting the time machine for August 1st, 2009. Some preparation for the video track had been done, but there were no video features in Mixcraft yet. The plug-in still crashed, though. That meant that video could not be the culprit. I now knew that whatever was going wrong had been introduced between July 1st and August 1st, so the bug's introduction was now narrowed down to a one-month time period. I went back one week further, to July 25th, 2009. I compiled the code, and the plug-in was able to load without crashing the computer. That gave a one-week window, which made it much easier to focus in on the problem.

SQUASHING THE BUG

In the end, I looked at the changes that had been checked in to the repository between July 25th and August 1st. Most of these changes were minor, incremental changes, but one major feature was added that week: a convenient CPU meter, located in Mixcraft's lower right corner. The CPU meter works by periodically talking to the operating system to get measurements on the amount of work being done by the processors. This involves the software talking to Windows, and probably involves Windows talking to the processor or processors that control the computer. For some reason, the conflict between the iLok hardware and Windows XP occurred when performing a CPU check while loading an iLok-protected plug-in. I couldn't begin to tell you exactly why this happens—it's just some minor, hidden flaw in a decade-old version of Windows that we happened to expose. It shouldn't have been possible for Mixcraft to cause a blue-screen driver error, but our actions were in fact indirectly responsible for the problem. There was no way to fix the error other than to remove the CPU meter from Mixcraft for users running Windows XP, but once we did this, Mixcraft 5 suddenly became fully compatible with all of the iLok-hungry high-end plug-ins that our Windows XP customers were trying to use. No longer were they forced to use an old version of Mixcraft just to run their favorite plug-ins. Here's the mug shot of the iLok's accomplice: the little CPU meter.

It was a small victory, but at the same time, an important one for those customers that this bug inconvenienced. Without the use of our code repository's revision history, it would have been nearly impossible to discover what caused the problem. Mixcraft is designed as a professional DAW with hundreds of features; who would ever think that the little CPU meter could cause such a catastrophic system failure? We certainly didn't. When we've done our job right, and written our software to handle every audio driver and effect plug-in and virtual synthesizer and operating system and computer processor that you can throw at it, Mixcraft will just run. You'll make your music and have fun, and honestly, that's exactly the way it should be.
But it's nice, for a moment, for us to be able to tell you that all these tools—the compilers, the repositories, the test computers, the closet full of audio hardware and CDs full of drivers, the piles of plug-ins—are all here in our lab so that we can find and fix the little frustrating issues that can pop up in software. No software is perfect, of course, and new bugs will always appear—it's an unfortunate fact of life. And when these bugs are caused by poorly written plug-ins or badly designed drivers, we simply can't fix them—although we can alert companies to any problems we find. But sometimes, with some detective work and the right tools, even the most impossible bug can be caught and squished. I hope you enjoyed this inside look at the software development process, and may your system be free of bugs—especially the "impossible" kind!

Dan Goldstein is the lead developer of Acoustica's award-winning Mixcraft recording software for Windows. You can download a free trial of the software at www.acoustica.com/mixcraft
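The date-by-date hunt Dan describes is essentially a binary search over revision history; it's the same idea that modern tools such as git bisect automate. Here's a minimal Python sketch of the logic, with hypothetical dates and a stand-in for the "check out, compile, and test" step:

```python
# Binary-search dated snapshots to find when a bug appeared. build_and_test()
# stands in for "check out the code from that date, compile it, and try to
# load the plug-in"; the cutoff date here is made up.
from datetime import date, timedelta

def build_and_test(snapshot):
    return snapshot < date(2009, 7, 28)  # pretend the bug landed on this day

good, bad = date(2009, 7, 1), date(2009, 9, 15)  # known-good and known-bad
while (bad - good).days > 1:
    mid = good + timedelta(days=(bad - good).days // 2)
    if build_and_test(mid):
        good = mid   # still works: the bug came later
    else:
        bad = mid    # fails: the bug is on or before this date

print(f"Bug introduced between {good} and {bad}")
```

Each iteration halves the window, so even a months-long span narrows to a single day in a handful of test builds.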
8. The second generation G2M offers some major improvements

$129 MSRP, $99 street, www.sonuus.com, www.petersontuners.com (North American distribution)

By Craig Anderton

Last April, I reviewed the original G2M guitar-to-MIDI converter; I'd suggest reading that review before getting too far into this one, as it gives quite a bit of background. Normally I wouldn't review a second-generation update, but I find this to be quite an improvement. If you want to get into MIDI guitar at a reasonable price, the Version 2 G2M (which we'll call V2 G2M for short) is a step up from the original, and definitely deserves a look.

WHAT'S THE SAME

The V2 G2M remains a monophonic device—forget about converting chords to MIDI, as this accommodates single-note lines only. On the other hand, because of this, the guitar doesn't need any kind of special hex pickup. You plug your guitar into the V2 G2M's audio input, then patch its MIDI output (Fig. 1) to a hardware MIDI synth (or, if you want to drive software synthesizers, to a computer's MIDI interface). Also note that a Thru output is available so you can send your guitar to a tuner, amp, etc.

Fig. 1: The rear panel has a standard, 5-pin DIN MIDI out jack and a 1/4" guitar audio thru jack.

As with any guitar-to-MIDI device, you have to learn how to adapt your playing for the best tracking, and take the time to find out what settings on your guitar work best (generally a neck humbucker pickup with the tone control rolled back a bit gives the best results). You'll need to play articulately, and often slower than you might otherwise, so the V2 G2M is far more likely to find a home in the studio than playing live. Another issue is that the type of synthesizer patch you feed with the V2 G2M matters. Just changing the instrument's response from polyphonic to monophonic (e.g., like a hardware Minimoog) can improve tracking dramatically, as can switching in legato mode (if available). Physically, the second-generation version is almost identical to the first: The small plastic case includes built-in LED indicators for clipping, MIDI out, power, tuner, and low battery. However, the Boost switch is now the Chromatic switch (which we'll cover shortly).

WHAT'S DIFFERENT

Overall, the tracking is more accurate, faster, and less glitch-prone. Fig. 2 shows the raw MIDI capture from the V2 G2M in Cakewalk Sonar X1.

Fig. 2: The original, recorded part. The thin, vertical lines in the main piano roll represent glitches.

If you look at the Velocity strip, where the vertical lines represent velocity, you can see that the glitches are considerably lower in amplitude than the "real" notes. There are a few other issues, like a missed note or two, or a note that was bent enough to register as a different note, but overall this is quite clean. The strip on the bottom shows pitch bend. Fortunately, Sonar has a "deglitch" option (Fig. 3) where you can specify that notes below a particular velocity, shorter than a specified duration, or outside a particular pitch range are deleted. In this example, deglitch will nuke all notes with velocities under 10, or shorter than 100 ticks.

Fig. 3: Sonar's Deglitch dialog.

Fig. 4 shows what happens after applying deglitch; there's a lot less that needs to be cleaned up. Sonar isn't alone with this kind of editing option, but note that the implementation varies somewhat from DAW to DAW.

Fig. 4: The same part as Fig. 2 after "de-glitching."

Another change that helps create a cleaner part is the Chromatic option. This is a switch (Fig.
Another change that helps create a cleaner part is the Chromatic option. This is a switch (Fig. 5) that replaces the "Boost" switch found in the original model, which, frankly, I never used anyway. Chromatic switches off pitch bend information, which is desirable when playing instruments that don't bend pitch (piano, marimba, organ, glockenspiel, drums, etc.). Fig. 5: The Chromatic switch can clean up tracking with some instrument sounds. But the most noticeable change to me was that the new version seems to translate gestures, like pitch bending, with more accuracy. As a result, I was able to create what I feel were more expressive parts, because more of what was in my fingers made it into the tracks. THE PROOF IS IN THE LISTENING . . . It's one thing to say the V2 G2M is more responsive, but it's another thing to hear it in action. So, I threw together a little 12-bar blues sequence in Sonar, using Steven Slate drums for the backing track and playing the sax, synth, bass, and organ parts with the Sonuus. Let's listen to the audio example. The bass is a Minimoog-type part played through the Rapture LE synth included with Sonar X1. This particular patch is set for monophonic operation (you need the full version of Rapture to change from monophonic response to a particular number of voices, but several of the LE patches are monophonic, and highly suitable for MIDI guitar). The sax and synth parts are from Cakewalk's TTS-1 synth, which is basic but allows switching any sound from polyphonic to monophonic response. I included the sax to show how a "real" instrument responds to the improved tracking. Even though it doesn't sound like a human sax player, to my ears it sounds more expressive than what you might hear if the part were played from a keyboard. The synth part, on the other hand, was intended to sound like a guitar solo but with a very different timbre. To me, this is arguably the best use of the V2 G2M—not to recreate the sounds of physical instruments, but to play synthetic sounds that are more expressive because they're being played from a guitar. (Note that the V2 G2M doesn't handle long slides too well, but there's not much you can do about that.) The organ part is also from the TTS-1, but here I laid down several MIDI tracks to build up polyphony. It's awkward to create chords this way, but it works in a pinch. One of the most important aspects here is that editing was minimal. The sax and synth parts are almost exactly as recorded, except that I inserted a MIDI plug-in on each track to compress the velocity data, thus evening out the dynamics a bit. CONCLUSIONS As I said in the original review, "MIDI guitar isn't about replacing guitar, but supplementing it with new choices." The V2 G2M makes that process easier and more foolproof. (Also in that review, I said "I wouldn't be surprised if Sonuus's next product turns out to be G2M USB"—and now there's the i2M.) I always try not to present any illusions about MIDI guitar, because it's an extremely difficult task to make a guitar string look like a series of switches to a synthesizer. If you expect to just pick up a V2 G2M, plug in your guitar, and start sounding like a piano, you're going to be disappointed. However, the more I play with the V2 G2M, the more I feel this is the real deal—sure, it's not perfect, but with some practice and editing, guitarists can lay down monophonic synth lines with minimal bank account damage.
You really have to try one for yourself to gauge whether guitar synthesis is for you, but listen to the audio example—there's hardly any editing, and it's at least somewhat representative of the kind of results you can expect. Props to Sonuus for continuing to refine the guitar-to-MIDI conversion process; the V2 G2M is definitely a step forward compared to the original. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  9. Are you fighting technology, or flowing with it? By Craig Anderton Technology can be overwhelming. But does it have to be? Why can some people roll with any technological punch that's thrown their way, while others struggle to keep up? Some musicians and engineers feel that technology "gets in the way" of the recording or music-making process. Conversely, there's also no denying that technology enables music that simply couldn't exist before, and can even provide the means to streamline its production. If you feel there's some kind of dichotomy between technology and music, you're not imagining things: Your brain's "firmware" is hardwired to deal with artistic and technological tasks differently. In this article, we'll explore why this division exists, describe how your brain's firmware works, and provide some tips on how to stay focused on the art when you're up to your neck in tech. COOPERATION AND CONFLICT Technology and art cooperate in some areas, but conflict in others. Regarding cooperation, think of how technology has always pushed the instrument-making envelope (the piano was quite high-tech in its day). And recording defies time itself: We can not only enjoy music from decades ago, but also sing a harmony with ourselves — essentially, going backward in time to sing simultaneously with our original vocal. Cool. Then there's the love affair between music and mathematics. Frequencies, tempos, rhythms, SMPTE time code — they're all based on math. Music loves Math. When my daughter was getting into fractions, I created a sequence that included half notes, quarter notes, sixteenth notes, etc. She immediately "got" the concept upon hearing fractions expressed as rhythms. As to conflicts, first there's the dichotomy of how the brain processes information (as we'll discuss next); and second, there are a few societally-induced conflicts. For example, some people think that using technology is somehow cheating (e.g., lowering a sequence's tempo so you can play along more easily, then speeding it back up). Furthermore, the accelerated rate of technological change itself causes conflicts. Which gear should I buy? Which platform is better? And why do the skills I learned just a few years ago no longer matter? Let's look at how physiology influences our perceptions of both technology and art, as this will provide some clues on how best to reconcile the two. THE MAN WITH TWO BRAINS Our brain has two hemispheres; each one processes information differently. Consider the following quote from the essay "2044: One Hundred Years of Innovation," presented by William Roy Kesting (founder of Kesting Ventures) and Kathy Woods (VP and Principal of Woods Creative Services) at a 1994 meeting of the Commercial Development Association: "The right brain is the older of the two hemispheres and functions in an all-at-once mode to produce a complete picture. In contrast, the left hemisphere excels in sequential functions such as words, abstract thinking and numbers." Essentially, the right brain is the "Macintosh GUI" side that handles intuitive, emotional tasks — like being creative. The left brain is more like the "MS-DOS command line" side that works in a linear fashion and deals with sequential thought processes. Use Color to Your Advantage. The right brain parses color rapidly. Many programs let you customize color schemes, and hardware companies are becoming more aware of this too.
For example, the Alesis Ion synth changed the transpose LED's intensity when transposing by different octaves, making it easy to see the transposition range without having to read anything. And its programs were arranged in four banks by color rather than by letters or numbers. The "breakthrough" in understanding this difference between the hemispheres comes from the work of Drs. Roger W. Sperry, David H. Hubel, and Torsten N. Wiesel, who shared the 1981 Nobel Prize in Physiology or Medicine. Later studies have modified their findings a bit, but some comments in the Nobel award presentation speech by David Ottoson are well worth noting. "The left brain half is . . . superior to the right in abstract thinking, interpretation of symbolic relationships and in carrying out detailed analysis. It can speak, write, carry out mathematical calculations and in its general function is rather reminiscent of a computer. It is with this brain half that we communicate. The right cerebral hemisphere is mute. . . It cannot write, and can only read and understand the meaning of simple words in noun form. It almost entirely lacks the ability to count and can only carry out simple additions up to 20. However . . . is superior to the left in the perception of complex sounds and in the appreciation of music . . . it is, too, absolutely superior to the left hemisphere in perception of nondescript patterns. It is with the right hemisphere we recognize the face of an acquaintance, the topography of a town, or landscape earlier seen. "Pavlov . . . that mankind can be divided into thinkers and artists. Pavlov was perhaps not entirely wrong. Today we know from Sperry's work that the left hemisphere is cool and logical in its thinking, while the right hemisphere is the imaginative, artistically creative half of the brain." One way to explain the art/technology dichotomy, then, is that the hemispheres aren't necessarily in conflict; they're working at cross-purposes. Once "stuck" in a hemisphere's mode of thought, it's difficult to transition seamlessly into working in the other one, let alone integrate the two. The "Unified Interface" and the Brain. A "unified interface," which avoids opening multiple overlapping windows in favor of a single screen where elements can be shown or hidden as needed, speaks to both hemispheres. The right brain takes in the "big picture," while the left brain can focus on details if needed. Ableton Live has two unified interfaces — a "right brain" one optimized for live improvisation, and a "left brain" one optimized for "offline" editing. But if that's the case, why are so many good programmers musicians? And why have many mathematicians — going back as far as Pythagoras — been fascinated with music, and vice-versa? THE MUSICIAN'S "FIRMWARE" The NAMM campaign "music makes you smarter" is rooted in truth. Recent research shows that many musicians indeed use both halves of the brain to a greater extent than non-musicians. According to Prof. Dr. Lars Heslet (Professor of Intensive Care Medicine at Copenhagen State Hospital in Denmark, and a researcher into the effects of music on the body): "The right brain hemisphere is specialized in the perception of spatial musical elements, that is the sense of harmony and pitch, whereas the left hemisphere perceives the progress of the melody, which requires musical memory." In other words, both halves of the brain need to be in play to fully appreciate music.
This may explain why musicians, critics, and average listeners have seemingly different tastes in music: The critics listen with the analytical (left) side of their brain, the non-musicians react emotionally with their right brain, and the musicians use both hemispheres. Here's an interesting quote from Frederick Turner (Founders Professor of Arts and Humanities at the University of Texas at Dallas) and Ernst Pöppel, the distinguished German neuropsychologist: "Jerre Levy . . . characterizes the relationship between right and left as a complementarity of cognitive capacities. She has stated in a brilliant aphorism that the left brain maps spatial information into a temporal order, while the right brain maps temporal information onto a spatial order." Does that sound like a sequencer piano roll to you? Indeed, it uses both temporal and spatial placement. The same thing goes for hard disk recording, where you can "see" the waveforms. Even though some programs allow turning off waveform drawing, I'd bet very few musicians do: We want to see the relationship between spatial and temporal information. We Want Visual Feedback. Which track view do you like better — the one that shows MIDI and audio data, or the blank tracks? Odds are you prefer seeing the relationship between spatial and temporal information. Again, from Turner and Pöppel: "[That] experienced musicians use their left brain just as much as their right in listening to music shows that their higher understanding of music is the result of the collaboration of both 'brains,' the music having been translated first from temporal sequence to spatial pattern, and then 'read,' as it were, back into a temporal movement." HEMISPHERIC INTEGRATION: JUST DO IT! The ideal bridge between technology and art lies in "hemispheric integration" — the smooth flow of information between the two hemispheres, so that each processes information as appropriate. For example, the right brain may intuitively understand that something doesn't sound right, while the left brain knows which EQ settings will fix the problem. Or for a more musical example, a songwriter may experience a distinct emotional feeling in the right hemisphere, while the left hemisphere knows how to "map" this onto a melody or chord progression. Without hemispheric integration, the brain has to bounce back and forth between the two hemispheres, which (as noted earlier) is difficult. This is why integration may expedite the creative process. Here's another quote from William Roy Kesting and Kathy Woods: " . . . just as creative all-at-once activities like art need left-sided sequence, so science and logic depend on right-sided inspiration. Visionary physicists frequently report that their insights occur in a flash of intuition . . . Einstein said: 'Invention is not the product of logical thought, even though the final product is tied to a logical structure.'" Mozart noted the same phenomenon. He once stated that when his thoughts flowed best and most abundantly, the music became complete and finished in his mind, like a fine picture or a beautiful statue, with all parts visible simultaneously. He was seeing the whole, not just the individual elements. MEET THE INFORMATION SUPERHIGHWAY The physical connection between the two hemispheres is called the corpus callosum. As Dr. Lars Heslet notes, "To attain a complete musical perception, the connection and integration between the two brain hemispheres (via the corpus callosum) is necessary.
This interaction via the corpus callosum can be enhanced by music." Interestingly, according to the article "Music of the Hemispheres" (Discover, 15:15, March 1994), "The corpus callosum — that inter-hemisphere information highway — is 10-15% thicker in musicians who began their training while young than it is in non-musicians. Our brain structure is apparently strongly molded by early training." Bingo. Musical training forges connections between the left and right hemispheres, resulting in a measurable, physical change. And that also explains why some musicians are just as much at home reading about some advanced hardware technique in our articles library as they are listening to music: They have the firmware to handle it. THE RIGHT/LEFT BRAIN "GROOVE" Producer/engineer Michael Stewart (who produced Billy Joel's "Piano Man"), while studying interface design, noticed that someone involved in a mostly left- or right-brain activity often had difficulty switching between the two, and sometimes worked better when able to remain mostly in one hemisphere. (Some of his research was presented in an EQ magazine article called "Recording and the Conscious Mind.") For example, as a producer, he would often have singers who played guitar or keyboards do so while singing, even if he didn't record the instruments. He felt this kept the left brain occupied instead of letting it be too self-critical or analytical, thus allowing the right brain to take charge of the vocal. Another one of his more interesting findings was that you could sort of "restart" the right brain by looking at pictures — the right brain likes visual stimulation. Stewart was also the person who came up with the "feel factor" concept, quantifying the effects that small timing differences have on the brain's perception of music, particularly with respect to "grooves." This is a fine example of using left-brain thinking to quantify more intuitive, right-brain concepts. Quantization and Feel. Quantization can hinder or help a piece of music, depending on how you use it. For example, set any quantization "strength" parameter to less than 100% (e.g., 70%) to move a note closer to the rhythmic grid while retaining some of the original feel. Also, quantization "windows" can avoid quantizing notes that are already close to the beat, and "groove" quantizing (which quantizes parts to another part's rhythm, not a fixed rhythmic grid) can give a more realistic feel. Timing shifts for notes are also important. For example, if in rock music you shift the snare somewhat later than the kick, the sound will be "bigger." If you move the hi-hat a little bit ahead of the kick, the feel will "push" the beat more.
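To make the strength and window ideas concrete, here's a minimal sketch in Python—my own illustration of the general concept, not any particular DAW's algorithm (the tick values are assumptions):

```python
def quantize(start, grid=240, strength=0.7, window=40):
    """Pull a note's start time (in ticks) toward the nearest grid line.

    strength: fraction of the distance to move (1.0 = hard quantize)
    window:   notes already within this distance of the grid are left alone
    """
    nearest = round(start / grid) * grid
    offset = nearest - start
    if abs(offset) <= window:        # already close enough; keep the feel
        return start
    return start + int(strength * offset)

# Timing shifts work the same way in reverse: push a whole track off the grid.
snare_hits = [480, 1440, 2400, 3360]
bigger_snare = [hit + 15 for hit in snare_hits]  # a few ticks late = "bigger"
```

Groove quantizing simply swaps the fixed grid for a list of reference times extracted from another part.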
TECHNOLOGICAL TRAPS Technology has created a few traps that meddle with hemispheric integration. When the left hemisphere is processing information, it wants certainty and a logical order. Meanwhile, the right brain craves something else altogether. As mentioned earlier with the Michael Stewart examples, in situations where hemispheric integration isn't strong — or where you don't want to stress the brain by switching hemispheres — staying in one hemisphere is often the answer to a good performance or session. Quite a few people believe pre-computer age recordings had more "feel." But I think they may be looking in the wrong place for the answer as to why. Feel is not found in a particular type of tube preamp or mixer; I believe it was found in the recording process. When Buddy Holly was cutting his hits, he didn't have to worry about defragmenting hard drives. In his day, the engineer handled the left brain activities, the artist lived in the right brain, and the producer integrated the two. The artist didn't have to be concerned about technology, and could stay in that "right brain groove." Cycle Recording: Let the Computer Be Your Engineer. Cycle (or loop) recording repeats a portion of music over and over, adding a new track with each overdub. You can then sort through the overdubbed tracks and "splice" together the best parts. This lets you slip into a right-brain groove, then keep recording while you're in that groove without having to worry about arming new tracks, rewinding, etc. If you record by yourself, you've probably experienced a situation where you had some great musical idea and were just about to make it happen, but then you ran into a technical glitch (or a ringing phone, or whatever). So you switched back into left brain mode to work on the glitch or answer the phone. But when you tried to get back into that "right brain groove," you couldn't . . . it was lost. That's an example of the difficulty of switching back and forth between hemispheres. In fact, some people will lose that creative impulse just in the process of arming a track and getting it ready to record. Now, if you have an Einsteinian level of hemispheric integration, maybe you would see the glitch or phone call as merely a thread in the fabric of the creative process, and never leave that right-brain zone. We'll always be somewhat beholden to the differences between hemispheres, but at least we know one element of reprogramming our firmware: Get involved with music, early on, in several different facets, and keep fattening up that corpus callosum. And it's probably not a bad idea to exercise both halves of your brain. For example, given that the left hand controls the right brain and the right hand controls the left brain, try writing with the hand you normally don't use from time to time, and see if that stimulates the other hemisphere. JUST BECAUSE WE CAN . . . SHOULD WE? Technology allows us to do things that were never possible before. And maybe we were better off when they weren't possible! For example, technology makes it possible to be artist, engineer, and producer. But this goes against our physiology, as it forces constant switching between the hemispheres. Would some of our greatest songwriters have written such lasting songs if they'd engineered or produced themselves? Maybe, but then again, maybe not. And what about mixing with a mouse? Sure, it's possible to have a studio without a mixing console, but this reduces the mixing process to a linear, left-brain activity. A hardware mixing console (or control surface) allows seeing "the big picture," where all the channels, EQ, pans, etc. are mapped out in front of you. AVOIDING OPTION OVERLOAD Part of the fix for hemispheric integration is to use gear you know intimately, so you don't have to drag yourself into left brain mode every time you want to do something. When using gear becomes second nature, you can perform left-brain activities while staying in the right brain. As just one example, if you're a guitarist and want to play an E chord, when you were first learning you probably had to use your left brain to remember which fingers to place on which frets. Now you can do it instinctively, even while you stay in the right brain. The same principle holds true for using any gear, not just a guitar.
Ultimately, simplification is a powerful antidote to option overload. When you're writing in the studio, the point isn't to record the perfect part, but to get down ideas. Record fast before the inspiration goes away, and worry about fixing any mistakes later. Don't agonize over setting levels; just be conservative so you don't end up with distortion. Find a good "workstation" plug-in or synthesizer and master it, then use that one plug-in as a song takes shape. You can always substitute fine-tuned parts later. Also, maintain a small number of carefully selected presets for signal processors and instruments; you can always tweak them later. And if you're a plug-o-holic, remove the ones you don't use. How much time do you waste scrolling through long lists of plug-ins? Use placeholders for parts if needed, and don't edit as you go along — that's a left brain activity. With software, templates and shortcuts are powerful simplifying tools that let you stay in right brain mode. Templates mean you don't get bogged down in setup, and hitting computer keys (particularly function keys) is more precise than mouse movements. Efficiency keeps the creative process moving. MAKING MUSICAL INSTRUMENTS MAGICAL As Robert Pirsig's "Zen and the Art of Motorcycle Maintenance" says, "If the machine produces tranquility, it's right." Reviews and other opinions don't matter if something feels right to you. Which Type of Graphic Interface Works for You? The interface is crucial to making an instrument feel right. Compare the screen shot for one of the earliest software synths, Seer Systems' Reality, to that of G-Media's Oddity. Reality has more of a spreadsheet vibe, whereas the Oddity portrays the front panel of the instrument it emulates; this makes the signal flow more obvious. Companies can supply technology, but only you can supply the magic that makes technology come alive. No instrument includes soul; fortunately, you do. As we've seen, though, to let the soul and inspiration come through, you need to give the creative right hemisphere full rein, while the left brain makes its seamless contribution toward making everything run smoothly. Part of mastering the world of technology is knowing when not to use it. Remember, all that matters in your music is the emotional impact on the listener. Listeners don't want perfection; they want an emotionally satisfying experience. Be very careful when identifying "mistakes" — they can actually add character to your recording. And finally, remember that no amount of editing can fix a bad musical part . . . yet almost nothing can obscure a good one. The bottom line is that you need to master the technology you use so that operating it becomes automatic, then set up a workflow that makes it easier to put your left brain on autopilot. That frees up the right brain to help you keep the "art" in the state of the art. We'll leave the last word on why you want to do this to Rolf Jensen, director of the Copenhagen Institute for Futures Studies: "We are in the twilight of a society based on data. As information and intelligence become the domain of computers, society will place a new value on the one human ability that can't be automated: Emotion." Craig Anderton is Executive Editor of Electronic Musician magazine.
He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  10. Get more out of this popular virtual studio By Craig Anderton Reason is a great program, but it's more versatile than many people realize. These tips and tricks cover some lesser-known ways to make Reason do your bidding. BETTER SAMPLE PLAYBACK Some people erroneously believe that Reason's sound quality doesn't equal that of dedicated hardware—they probably have the "low bandwidth" option enabled in SubTractor or the NN-19 (a holdover from the days when computers had much less power), or didn't check "High Quality Interpolation" in the NN-XT and NN-19 samplers. These simple steps can make a major improvement in sound quality. COMPRESSOR SIDECHAINING While DAWs are just starting to implement sidechaining, Reason's MClass Compressor already includes it—hit Tab to flip the rack around and reveal the Sidechain In jacks for the left and right channels. Bonus: The compressor's Gain Reduction signal is available as a control voltage. BETTER MIXER EQ On the back of the ReMix mixer, a switch in the lower left chooses between "Compatible EQ" and "Improved EQ." Use Compatible for projects created in older versions of Reason; for new projects, use Improved. The CPU hit isn't much, and the quality is better. MIDI-TO-CV CONVERTER The RPG-8 can serve as a MIDI-to-CV converter when the Arpeggiator is set to Off. For example, make the RPG-8's track active in the Sequencer, and send the RPG-8 Gate and CV outs to a SubTractor synthesizer. But instead of routing the RPG-8 Mod Wheel out to the SubTractor's Mod Wheel in (the default), you can send it to Pitch, Filter 2 Freq (as shown in the screen shot), or Amp Level, which do not have corresponding amount controls in SubTractor's Mod Wheel section. COMPLEX LFO PATTERNS Reason can produce sample-and-hold control effects, as well as more randomized modulation, by feeding multiple LFO outputs into a Spider CV Merger, then sending the merged output to the parameter you want to control. For example, for a synced sample-and-hold filter effect, use square LFO waveforms (try setting sync on one to 1/4 and the other to 1/8T) and send the merged output to something like Thor's Filter 1 Freq input. REASON'S BONUS EQUALIZER In addition to the MClass equalizer, PEQ-2 parametric EQ, and ReMix channel EQs, the BV512 vocoder has an equalizer mode with up to 32 bands—select Equalizer instead of Vocoder with the switch to the left of the display. You can even do primitive "room tuning" with this if you insert the vocoder as the last processor in the signal chain. EFFECT MONO/STEREO IN/OUT Some of Reason's effects sum the inputs to mono before creating a stereo output, some are stereo in/stereo out, some do mono in/stereo out but not mono in/mono out, etc. To see how a particular effect handles signal flow, Tab to the back of the rack, and check the small graphics toward the left side of a device. For an explanation of what the Signal Flow graphs mean, go to page 683 in the Version 5 PDF Operation Manual or page 334 in the Version 4 manual (manuals are located in the program folder's Documentation folder). MORE HEADROOM The MClass Equalizer has a low cut switch that reduces response below 30Hz at 12dB/octave. Insert the EQ at the output to remove any subsonics. Patching it before a compressor will also allow the compressor to work more efficiently, as its control mechanism won't be influenced by subsonic signals. BETTER DISTORTION (AND OVERALL SOUND) Reason will run at 96kHz (go Edit > Preferences > Audio, and choose the desired sample rate from the drop-down menu). Don't think 96kHz makes a difference? Choose a really distorted patch in Scream, and you'll hear a definite smoothness in the high frequencies that you just don't get at 44.1kHz. THE TRUTH ABOUT CLIPPING Reason uses 32-bit floating point math for internal calculations, yielding virtually unlimited internal dynamic range. As a result, clipping that occurs within Reason itself (e.g., an effect meter goes "in the red") won't produce audible distortion. However, clipping at the hardware interface will. It's a good idea to leave the Hardware Device unfolded, and check the Audio Output meters from time to time to make sure they're not distorting.
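To see why float "clipping" is harmless until the signal hits the hardware, here's a minimal numpy sketch (illustrative values only, not Reason's actual code):

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
x = 4.0 * np.sin(2 * np.pi * 440 * t)  # peaks at 4.0: meters go "in the red"

# A 32-bit float engine keeps values above 1.0 intact, so turning the level
# back down later recovers the waveform with no distortion.
recovered = 0.25 * x                   # peaks at 1.0; still a pure sine

# The hardware interface, however, hard-limits the signal, and that IS audible.
clipped = np.clip(x, -1.0, 1.0)        # flat-topped waveform = distortion
```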
IF YOU DON'T LIKE PATCH CABLES . . . Type "L" (not Ctrl-L or Command-L, as erroneously stated in the Operation Manual) to show/hide the patch cords on the back. Jacks that are in use have a colored "hole" with the same color as the patch cord that normally connects to it, and you can see where a jack connects by placing the cursor over it—a "tooltip" style of text appears describing the connection. MORE EFFECTIVE VOCODING You'll get the most noticeable vocoding effects if the carrier sound has lots of harmonics (e.g., a synth sound with an open filter and sawtooth waves). Not enough harmonics? Patch a Scream 4 between the carrier source and the vocoder carrier input. COPY RIGHT To copy a Reason device quickly, Ctrl-click (Windows) or Option-click (Mac) on a device's "rack ears" and drag it into an empty space in the rack. THE MIDI GUITAR CONNECTION Reason makes a great sound module for MIDI guitar because it's easy to load up six devices, put the MIDI guitar in mono mode (a separate MIDI output for each string over its own channel), and assign each device to a string. MIDI IMPLEMENTATION There's some confusion about how to control Reason parameters via MIDI, as it's possible to use custom control mappings. But for most applications, especially when rewiring into a host, it's best to use the default MIDI controller numbers for the various parameters. A comprehensive MIDI implementation chart is located in the Documentation folder (in the Reason directory). Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  11. Whether playing live or re-amping in the studio, get the best of both physical and virtual worlds by Craig Anderton You love the sound of tubes, but you don't like being limited to specific cabinets. Or you really like amp sims, but you're not totally sold on the preamp sounds. Or you love your guitar amp, but you wish it could do more...like split off into other cabinets, or do stereo imaging. You can resolve all these issues—and more—by combining the best of the physical world (in the form of tubes) with amp sim cabinets. Virtually all amp sims let you bypass the preamps and power amps, leaving you with only the cabinet emulations. So what do you feed the cabinets? Glad you asked! I have an Orange TH30 combo amp here on loan, with a tube-driven effects loop that features a post-preamp send jack (the entire signal path is all-tube). So, it's a simple matter to pull out the post-preamp/post-EQ signal (Fig. 1), feed it into an audio interface, and run it through an amp sim cab. Fig. 1: Like many other amps, the Orange TH30 has a post-preamp effects send jack that you can use to send a tube preamp signal into an amp sim's cabinet. As with most effects loops, plugging into the send doesn't interrupt the signal flow; that happens only if you also plug into the loop receive jack. Therefore, you can mic the TH30 cabinet at the same time you're feeding the amp sim. Just note that there may be some subtle timing issues, where the miked sound is slightly delayed compared to the direct sound because the mic is a finite distance from the speaker. Nudging the sim sound a bit later in time can solve this; delay it in tiny increments until it sounds "right."
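If you'd rather start from a calculated value than pure trial and error, the expected offset is simply the mic distance divided by the speed of sound. Here's a quick sketch (a rough starting point only; converter latency and room reflections will shift the ideal value):

```python
SPEED_OF_SOUND_M_S = 343.0  # meters per second, at room temperature

def mic_delay_ms(distance_meters):
    """Approximate extra delay of a mic placed this far from the speaker."""
    return distance_meters / SPEED_OF_SOUND_M_S * 1000.0

print(round(mic_delay_ms(0.05), 2))  # close mic 2" (5 cm) away: ~0.15 ms
print(round(mic_delay_ms(0.30), 2))  # mic about a foot back:    ~0.87 ms
```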
The sound of the miked TH30 cabinet: an on-axis Shure SM58 dynamic mic about 2" back from the grille cloth. This is the sound of the preamp send signal by itself, without any cabinet frequency-shaping. Now let's take that signal, feed it through a bunch of cabinet simulations, and listen to the results. If you compare them to the original miked sound, it's obvious the sim versions would be great to layer with the "real" sound, either to beef up the overall timbre, or to provide options (like cool stereo imaging) you couldn't get simply by miking a cabinet. So yes—now I can hear the sound of a TH30 through its 12" Celestion speaker, as well as through a virtual 4 x 12"...or 1 x 10"...or... IK Multimedia AmpliTube 3 Let's start off with IK Multimedia's AmpliTube 3 (Fig. 2). AT3 has quite evolved virtual miking options, as each cabinet can have two mics, two room mics (with variable width), panning, mixing, and the like. You can move the mics closer in or further away to change the sound. Also note that you can put two signal chains in parallel, each with its own cab and miking. Fig. 2: With AmpliTube 3, you can bypass the preamp/power amp and listen to only the cab. This is the sound of going through AmpliTube's 4 x 12 Modern M3 cabinet. This one uses the Fender Pro Junior cabinet. Notice how different the sounds are. Waves GTR 3 Now let's turn our attention to Waves' GTR (Fig. 3). Among other amp options it has a stereo module with two independent amps/cabs, and again, you can bypass the amps to hear only the cab sound. In addition to pan and level controls, each amp also has a delay control and a phase switch. Note that the "HD" switch is enabled in the upper right to give a smoother tone, with the tradeoff being more CPU drain. Fig. 3: GTR combines two different amps and cabs in a single module, along with miking options. This example combines a 12" open-back and a 12" closed-back cabinet. This audio example uses a 2 x 10" cab for a brighter sound, together with a darker 4 x 12" cab. Native Instruments Guitar Rig 4 Pro Native Instruments' Guitar Rig 4 (Fig. 4) has several different cabinet options. If you load an amp, it appears along with a matched cabinet. But you can also load a separate cabinet module if you don't want to use the matched cabinet, or want more flexibility in miking. The matched cabinet offers tone, pan, and "air" controls. Finally, there's a Control Room module that features fewer cabinets, but adds extensive virtual miking options. Fig. 4: In this Guitar Rig 4 patch, the signal from the TH30 has been split into two paths, with each one feeding a separate cabinet. Click on this audio example to hear the sound of the split cab configuration. This example uses the Control Room option to create space from a single cabinet. Line 6 POD Farm 2 POD Farm 2 (Fig. 5) from Line 6 doesn't let you separate the power amp from the cabinet, but if you choose cabinet only, the power amp is a clean one that has no significant effect on the sound. POD Farm 2 has a dual mode that allows two independent signal chains. Each amp can have its own miking and amount of "air." Fig. 5: Here's an example of Dual Mode in POD Farm 2, using two different amps. Dual sound using a 2 x 12" Line 6 cab in parallel with a 1 x 12" Class A-30 amp. With this example, there's a single blackface cab. Studio Devil Virtual Guitar Amp Plus Studio Devil's Virtual Guitar Amp Plus (Fig. 6) has a variety of cabinets. Although you can't bypass the preamp section, setting it to Classic Clean essentially gives you the cab sound only. Fig. 6: Virtual Guitar Amp Plus is set to the Green 4 x 12" cabinet. This clip plays back through the VGA+ Green 4 x 12" cabinet. Peavey ReValver Mk III And last but certainly not least, we have Peavey's ReValver Mk III (Fig. 7). This sim is unique in that it offers two different cabinet emulation technologies: Speaker Construction Set, and convolution. Fig. 7: Speaker Construction Set literally lets you create your own speaker cabinet - choose the size, type, and number of speakers, the mic type, and so on. I really like the Speaker Construction Set approach, as it's virtually unlimited. It's a little annoying that you can't hear tweaks in real time; you have to make the tweak, apply, listen, tweak, apply, listen, and so on. But when you find something good, just save it for next time and you won't have to go through those tweaks again. Click here to listen to the sound of the Speaker Construction Set in action. The convolution-based speaker cabinet gives highly realistic sounds, and you can load your own impulses if you want to go totally nuts. While not as flexible as the Speaker Construction Set in some ways, it's easy to load up different speaker impulses so you can decide which one you like best. Here's the sound of Peavey's 4 x 12" "American" speaker convolution effect. CONCLUSIONS This technique is a great way to combine tubes and code, and of course, for live performance it's a lot easier to carry a small tube amp and a laptop to handle your various cabinet sounds. In the studio, this technique lets you multiply one amp sound into many...very cool. So which cabinet emulations are best?
Well, that's a tough call, because each one has standout models along with others I don't like as much. But if I change pickups or guitars, sometimes the ones I don't like come into their own. Also, making one small change to the miking can turn a loser into a winner – or vice-versa. In any event, this is a fun approach to getting a lot more options – check it out! Also, for additional techniques, check out the article How to Make Amp Sims Sound More "Analog"; it can help create a creamy, warm amp sim tone, and it applies to using cabinets alone as well as complete amp sims. Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  12. USB memory sticks give huge performance gains with Ableton Live By Craig Anderton Many musicians use Ableton Live with a laptop for live performance, but this involves a compromise. Laptops often have a single, fairly slow (5400 RPM) disk drive, and a limited amount of RAM compared to desktop computers. Live gives you the choice of storing clips in RAM or on hard disk, but you have to choose carefully. If you assign too many clips to disk, eventually the disk won't be able to stream all of them successfully, and there will be audio gaps and dropouts. But if you assign too many clips to RAM, there won't be enough memory left for your operating system and startup programs. Fortunately, there's a simple solution to all these problems: Store your Ableton projects on USB 2.0 memory sticks. That way, you can assign all the clips to stream from the solid-state "disk," so Ableton thinks they're disk clips. But they have all the advantages of solid-state storage—there are no problems with seek times or a hard disk's mechanical limitations. Best of all, the clips place no demands on your laptop's hard drive or RAM, leaving them free for other uses. Here's how to convert your project to one that works with USB memory sticks. 1. Plug your USB 2.0 memory stick into your computer's USB port. 2. Call up the Live project you want to save on your memory stick. 3. If the project hasn't been saved before, select "Save" or "Save As" and name the project to create a project folder. Fig. 1: The "Collect All and Save" option makes sure that everything used in the project, including samples from external media, is saved with the project. 4. Go File > Collect All and Save (Fig. 1), then click on "OK" when asked if you are sure. Fig. 2: This is where you specify what you want to save as part of the project. 5. When you're asked to specify which samples to copy into the project, select "Yes" for all options, and then click OK (Fig. 2). Note that if you're using many instruments with multisamples, this can require a lot of memory! But if you're mostly using audio loops, most projects will fit comfortably on a 1GB stick. 6. Copy the project folder containing the collected files to your USB memory stick. 7. From the folder on the USB memory stick, open up the main .ALS Live project file. 8. Select all audio clips by drawing a rectangle around them, typing Ctrl-A, or Ctrl-clicking (Windows) on the clips. Fig. 3: All clips have been selected. Under "Samples," click on RAM until it's disabled (i.e., the block is gray). 9. Select Live's Clip View, and under Samples, uncheck "RAM" (Fig. 3). This converts all the audio clips to "disk" clips that stream from your USB stick. Now when you play your Live project, all your clips will play from the USB stick's solid-state memory, and your laptop's hard disk and RAM can take a nice vacation. This technique really works—try it! Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  13. I meant can it process each string individually with the on-board processors - so for example, could you load a smart harmony patch in the on-board processors that shifts each string independently for a polyphonic harmony shift that outputs from the standard guitar out? (Basically what the Variax does) Oh, okay...no, the current guitar doesn't do hex processing onboard. Don't know if that's planned for the future or not, but for now, you need a computer-based setup to use the hex outs. I'm assuming this has to do with the amount of DSP that's required; six instances of Guitar Rig 4 can be a challenge even for a computer. However, as to pitch changes per string, that's what the Robot Tuning does. It doesn't change the string pitch electronically but mechanically, by actually tuning the string to a different pitch.
  14. Hi, Craig! Long time listener, first time caller. Welcome! Don't be a stranger. My two-part question is, have you heard specifically what Gibson has done to improve the guitar, especially under the hood...and can you share that information here? Well, just from a sound design standpoint, the single delay went to a dual delay. This has been great for programming, as for some patches I'm using a short, static delay mixed at a fairly low level to add "fatness." Also, as the delays can be modulated (I pushed for that, and bless 'em, they said "sure"), if I modulate both delays and the chorus, it's possible to get a huge chorus sound. The looper delay is longer, too. You can do some truly wild stuff. There have also been some functionality changes on the togpots. For example, the distortion togpot rotation has changed from dry/wet to distortion drive. Similar results, but more like what people expect. A REALLY cool addition is that before, the pickup selection was "baked into" patches - in other words, I'd design a patch to use a specific pickup or pickup combination. It still works that way, but by moving the piezo togpot to the up position, the knife switch acts like a standard pickup selector instead of a patch selector. So you can do something like select a patch, but if you decide you'd rather have a bridge pickup sound than a neck pickup sound, you can make that change. Another change is that they added an octave divider. It works about the same as any other I've tried - not perfect, but useable - and I've been able to create some pretty cool bass patches, as well as bass+distorted guitar. But really, those are relatively minor changes compared to what else is going on with the pedals, interface, and editor. That's all happening in Nashville, Southern California, and Germany, and frankly a lot of it's above my head...all I know is that every now and then someone says "Hey Craig, you need new firmware," and I download new firmware into the guitar. Supposedly I'm going to need a new audio engine board soon in order to communicate with the peripherals, so I guess they were serious about the audio engine being easily user-replaceable. I also know they're optimizing various aspects of the circuitry to extend battery life, and making a few UI changes on the "GearShift" knob that make it easier to see where you are in a bank. There are also many other technical changes I'm not concerned with personally, because they don't impact designing sounds, but apparently some of them are quite profound. As to me...at the moment I've pretty much completed five banks of five patches each: acoustic, rock/blues, country, metal, and hip-hop. I have to say, I'm really happy with the sounds; I'd use them on a record any time. I guess I'm patting myself on the back somewhat here, but really, I wouldn't be able to do the patches I've done without the toolset Gibson came up with (I did have input into the effects, but it tended to be more about details - like insisting that the delays be modulatable, and that there be a variable low-pass filter in the delay feedback loop). The one thing that keeps surprising me every time I pick up the guitar is how absurdly quiet it is, even with high-gain, distorted patches. I've been told this has a lot to do with the effects being built in, but whatever...it's nice to leave "white noise generators" behind. I remain very psyched about this guitar personally, because I feel like I've really had a chance to learn my way around the gazillion parameters, so the sounds just keep getting more refined.
It's still too expensive for most people, and some will continue to diss it for whatever reason, but as a glimpse into the future this is a significant instrument. 10-20 years from now, I think having DSP in guitars will be common, simply because once people are exposed to it, more and more of them will get into it and want it. I really see guitars splitting into two families, in the same way that keyboards split into traditional pianos and electronic keyboards.
  15. so if I understand what this can do, you could load in software to process each string independently - like the VG-99 or Variax? But more of an open platform (i.e., you can load 3rd party software). That seems pretty cool as long as the processor has some oomph. Yes, you can, but you could with Dark Fire and Dusk Tiger too, and even with the Gibson Digital Guitar. The six outs come from the piezo pickup, and are multiplexed down a standard cable that terminates in a FireWire interface that comes with the guitar (it's designed by Echo Electronics). The outputs show up in your computer as six separate inputs (one for each string). With Dark Fire and Dusk Tiger - and I presume this will be the case for Firebird X - you get two additional outputs: one for the magnetic pickups, and one for the overall piezo output (not individual strings; this is the same output used for the acoustic sounds). I posted a review of the Digital Guitar when it first came out, and you can hear a bunch of audio examples if you want to hear some of the things you can do with hex outs. My favorite application is running the bottom three strings through octave dividers, the top three through chorus, and the magnetic pickups through distortion. It's the setup I use with EV2, a two-piece band project with Brian Hardgroove (Public Enemy) on drums. FYI, Firebird X comes with Guitar Rig 4 (the full edition, not LE) and Ableton Live 8 Lite, and the patches I'll be designing for it once I have the interface here will be downloadable from the Gibson site.
  16. One of the videos showed some serious lag between switching settings, is this true? It was the case, but it's improving with each software iteration. I don't know the details, but my experience with beta-testing software like DAWs is that it runs slowly due to the debugging code. Once that's removed, the program runs much faster. Possibly that's what's happening here. How easy will it be to upgrade the unit later? Regarding "upgrade" in the sense of more patches or re-arranged patches, there's an editor for it, so you'll be able to alter the internal sounds and tunings as desired. I have a folder of patches I've done that probably won't make the cut for the onboard presets, because they're either too "out there" or represent variations on the onboard presets, and I've suggested they be offered as free downloads. As to how the software gets upgraded (new effects or whatever), I believe it's going to be done through Bluetooth rather than something like USB or an EPROM change. The audio engine itself is on a small user-replaceable board, so upgrading that would simply be a matter of pulling it out and putting in another one. If you look at the Fender guitar that is coming out on several gaming systems for Rock Band 3, you see that the controller is a regular guitar that is able to transmit a signal to your gaming system. Why would you not create a regular controller and have it transmit what you are doing through either wireless or a cable, and have an effects unit process that information? This way the effects unit could be updated, you would have a platform for people to extend, etc.... This somehow makes more sense to me than putting everything in the guitar. One issue is that the guitar produces actual audio, and that takes a lot of bandwidth. I don't know if current consumer technology (like Bluetooth or whatever) can handle real-time audio at high fidelity, especially something like hex outputs, with enough reliability for live use (see the back-of-envelope math at the end of this post). The idea of having all the guitar's controls send control signals to an outboard effects unit is a good one, and I would be surprised if Gibson didn't consider it. However, I suspect that part of the goal with Firebird X was to have something with an output jack you could just plug into an amp. Also, the pedals and footswitches are set up as you describe, communicating wirelessly with the guitar, so at least part of what you're talking about is being done externally. Another issue (and again, I don't know the technical nuts and bolts) is that the togpots and controls are all analog, so I presume they're going into A/D converters. The resolution seems high - I don't hear "stair-stepping" when moving the togpots - so being able to generate multiple high-resolution control signals simultaneously without delays might be an issue. Nonetheless, your points are good ones, and the kind of discussion I was hoping to see in this thread!
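To put rough numbers on the bandwidth point (my own back-of-envelope arithmetic, not Gibson's figures):

```python
channels = 6          # hex output: one audio stream per string
sample_rate = 44100   # Hz, CD-quality
bit_depth = 24        # bits per sample

mbps = channels * sample_rate * bit_depth / 1e6
print(mbps)  # ~6.35 Mbps, before any protocol overhead or error correction
```

That's several times the roughly 2 Mbps of real-world throughput you get from Bluetooth 2.x, so dropouts would be a real risk on stage.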
  17. I like tech stuff. But, what rational musician would spend the money this costs and only have two hours of battery/play life? Seriously. The two hours I quoted was for the sound design work, which really exercises the guitar. When I was just playing it, the battery life was more like three hours. But also, from what I understand there are various power-saving options that haven't been implemented yet, like turning off the piezo pickup electronics if it's not used in a patch. Again, let me emphasize that I'm just doing the sound design, but I get the sense that battery life will be extended in production models. That said, I can change the battery literally in seconds. As long as I have a charged one to swap in, I can go for at least four hours, and I rarely play sets that run much over three. So personally, it's not a big problem.
  18. Thanks for this, Craig, I'm glad someone is giving it a chance. I'm more old school, but I'm not averse to innovation. Firebird X is a different type of instrument - more electronic guitar than electric guitar. There's nothing wrong with old school at all. If you look at the pattern over the years with Gibson, the high-tech instruments run on a parallel track with their conventional guitars. I don't know facts and finances, but I'd bet just about anything it's the electric guitars that keep the lights on at Gibson, not the electronic ones. But I have to say, what the high-tech guitars can do is fascinating. I feel like I'm in at the ground floor of something that will continue to evolve and be refined in the years ahead. For example, having electronics inside the guitar offers potential that Firebird X doesn't tap yet, but that I'd like to see tapped in the future: what you play on the guitar, and how you play it, could tie into effects control so that the effects become a partner in your playing rather than just "processors." I definitely realize this type of instrument isn't for everyone. But I also feel there are people who could really get behind it if a) they knew what it does, and b) they could afford it. Price is definitely a barrier, although I've never seen high-tech devices go up in price over time. Either the price goes down, or the price stays the same but the capabilities increase.
  19. Any questions? Yeah - how does it feel to have something that ugly around your neck? ... sorry... couldn't resist... will go back to my cave now. Well, it feels pretty light! I don't find the looks a problem. To me it looks more like a car from the 50s...chrome and tail fins, sort of a retro vibe. Then again, I like guitars that are out of the ordinary - if you've seen any of my Frankfurt Musikmesse coverage, a lot of my guitar coverage consists of axes that make Firebird X look downright conservative. Maybe it comes from lusting after Burns guitars when I was a kid. My 15-year-old daughter hated the way Dusk Tiger looked, but thinks FBX is the coolest-looking guitar I have. I will say it looks different in person than in the photos, where it appears kind of orange. My guitar is a lot darker, so the controls tend to disappear into it more. But I also hear they've changed the finish, although it's still red-based. I haven't seen it, though, so I don't know what to expect. Overall I'm not that concerned with the looks one way or the other. With a guitar like this, the ergonomics and sound matter most to me. All those controls look weird on a guitar, but they're placed so that they're easy to use.
  20. Now, here's what I really like. The Robot Tuning feature is so convenient. Alternate tunings are usable, and when recording, it's so easy to touch up tuning in seconds. The "togpots" that let you alter sounds in real time are fantastic. I thought the idea of having to take your hands off the strings to make an adjustment was dead in the water - I mean, we're guitarists, we need to use our hands! But in practice, I don't play every single millisecond, and it seems that most of my togpot tweaking happens when something like a chord is sustaining, to add a dramatic element. Being able to bring in chorus, delay, distortion, etc. in real time is very cool, because you can "morph" sounds - it's not an on/off situation like a footswitch, nor do you have to bend down to a footpedal, or reach around to a rack, to tweak a control. This was one of the major surprises of the guitar between theory and practice - in theory, I thought it wouldn't work, but in practice, it does. I LOVE the sound of the pickups. There are a zillion different combinations you can get - humbucker, parallel, single coil, in phase, out of phase, reversed coils, etc. Some of these almost sound like clavinet or FM synth sounds, without any electronic processing at all. And because of the onboard electronics, you can take pickup sounds that would normally be excessively low in volume (e.g., out-of-phase sounds) and apply gain to bring them up to a suitable level. The sound is super-clean and clear because the pickups are matched perfectly to the on-board electronics - there are no impedance or level-matching issues. Gibson claims a dynamic range of greater than 100dB, and I believe it. Even high-gain sounds have very little noise and hiss, and the onboard noise gate takes care of any residual hiss. BUT - the important thing here is that due to the low noise, the gate can sit at a really low level, like -75dB. As I've always said, noise reduction works best on signals that don't have much noise, and that's the case here. The noise gate takes the sound from quiet to dead quiet, not from noisy to dead quiet. The onboard DSP is great. I'm getting sounds that I really like. Of course, processors like distortion are very subjective, but when I hear the tracks coming back to me from my DAW, I'm very, very happy with the sound. The convenience is another factor I didn't really consider until I had the guitar in my hands. When playing, it's just so easy to dial in the sound I want - turn a knob, flick the knife switch, done - all from the guitar itself. Granted, calling up a preset from an amp sim or a POD HD isn't exactly labor-intensive, but even being able to work 15-20% faster is a big deal to me, and I'd estimate that's the workflow improvement I'm getting from Firebird X, what with the Robot Tuning and quick sound switching. Finally, the guitar is really comfortable to play, either standing up or sitting down. FWIW, the original body design was even smaller, which I liked a lot, but it didn't go over well with the initial round of pro guitarists who participated in the beta tests...they felt it looked too diminutive. Oh well. It's still very comfortable. After working with this guitar in a relatively stable form for several weeks, I have no doubt it will become my go-to guitar for the reasons mentioned above. Oh, one other thing about batteries: They charge in less time than they take to fully discharge. So during the sound design process, which is hard on batteries, when a battery dies I put it in the charger and swap in a fresh one.
When the next battery runs out of power, the one in the charger is ready to go. Any questions?
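Here's that noise gate sketch, for the technically inclined: with a clean source, the threshold can sit far below the playing level, so the gate only ever removes near-silence. This is a crude, minimal illustration in Python - the -75 dB figure comes from the post above, but the code is just my assumption of how a simple hard gate works, not anything from the Firebird X firmware:

    import numpy as np

    def hard_gate(samples, threshold_db=-75.0):
        # samples: float array normalized to +/-1.0 full scale.
        threshold = 10.0 ** (threshold_db / 20.0)  # convert dBFS to linear amplitude
        gated = samples.copy()
        gated[np.abs(gated) < threshold] = 0.0     # mute anything below the threshold
        return gated

At -75 dBFS the linear threshold is about 0.00018, so even the quiet decay tail of a note (say -40 dBFS, or 0.01) passes untouched; only residual hiss below the threshold gets muted. A real gate would use an envelope follower with attack and release times rather than acting per sample, but the principle is the same: quiet to dead quiet.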
  21. My opinions are not all positive. Here are some limitations.

The guitar will not function without the internal battery. Unlike Dark Fire and Dusk Tiger, you can't get any output when the power is gone, because the pickups are active. Fortunately, the battery is an inexpensive, standard camcorder type and is user-replaceable, so swapping out power isn't bad, and the battery lasts at least two hours. Still, no juice = no sound.

I don't see any way that it's possible to add something like a Bigsby vibrato tailpiece, due to the way the bridge is constructed: the strings have to be electrically insulated from each other for the Robot Tuning function to work.

Any touring pro will likely want a backup guitar, and that really ups the cost. I will say that the previous high-tech guitars from Gibson have been 100% reliable; I've never experienced a failure. But that doesn't mean one shouldn't be prepared, and simply having a replacement DSP engine card may not be enough if there's a problem with the wiring harness, or an electro-mechanical part like a switch. Of course, synth players are familiar with the concept of playing an instrument where electronic failure is possible, but for guitarists who use a standard, passive guitar - where almost nothing can go wrong short of dropping it - the idea of having all this technology inside a guitar can be scary.

The onboard faders are short-throw, and while some of them are useful for changing effects when playing live, some require really precise adjustments to get the setting you want. I see them more as something for studio work when you want to tweak a sound than as a tool that's really viable for live performance, although to be fair, I have seen people use the faders really well... it may just be a question of practice.
  22. There's been a huge amount of bashing of this guitar online, and that's fine (after all, this is the internet!), but I thought some of you might have actual questions about, or interest in, the guitar - hence this thread. I have a prototype version here, so I know it as well as anyone else. However, there are a few things you need to know.

1. I am being paid to do sound design for the presets. However, I don't like the guitar because I am being paid to work on it; I am being paid to work on it because I like what this guitar is all about. Furthermore, I've lowered my standard rate in return for getting to keep the guitar. I don't need the work, but I love sound design, and doing a guitar is a particularly fascinating challenge.

2. I am biased in favor of the whole high-tech guitar concept (which is why I started this forum). I like automatic tuning, polyphonic outputs, onboard effects, etc. This doesn't invalidate my other guitars - I see it as the difference between synthesizer and piano. They are different instruments, and just because you like synth doesn't prevent you from liking piano, and vice-versa.

3. All the opinions stated in this thread are mine alone. I am not an employee or representative of Gibson, and they have neither approved nor discouraged my doing this thread. I am doing this thread solely in my capacity as Editor in Chief of Harmony Central, where my job is to create content of interest to musicians for this site.

Now, a few things about the guitar. Yes, it's expensive - too expensive for most people by a long shot. However, I don't feel that's a reason to trash it. High tech always costs a lot when introduced, due to R&D and development, and prices come down over time as those costs are amortized. I do not know if Gibson has any specific plans for the technology in Firebird X to "trickle down" to future products, but common sense would indicate they wouldn't spend a huge amount of $$ on this technology only to walk away from it.

Second, I really like this guitar. Like most people, when I first heard "on-board effects," my reaction was not positive. I remember those guitars with cheap fuzztones built in, and as someone who resists even having a battery in a guitar, I was skeptical. But the process of doing sound design has turned me around 180 degrees. The onboard DSP engine is extremely flexible, and the sound quality of the effects is equal to or better than the plug-ins in my computer. What's more, compared to plug-ins, there's no latency - I can dial up the sound and record it, just as if I were using external effects or an amp.

I also like that not only is the software updatable, but the internal DSP hardware comes on a user-replaceable card. This is good for servicing - I'd rather carry a spare card than a spare guitar should something go wrong - but it also means it's possible to update the current hardware with more powerful hardware at some point in the future. Whether Gibson has plans to do that is something I don't know, but at least the option is there.

Finally, the playability is great. The guitar is light, the neck is fast, and in a way, the feel reminds me more of a "super-Tele" than anything else.

So... I'd be glad to answer any questions you might have about the guitar or the technology, to the best of my knowledge. I do feel this sort of technology will eventually work its way into more and more guitars, not just from Gibson, and it's interesting to peer into the future and speculate as to where this type of guitar is going.
I'll also be posting audio examples of some of the sounds I'm developing.
  23. Take any music and match its tempo with Acid – or vice-versa

by Craig Anderton

Although beat-oriented programs like Acid and Ableton Live usually use relatively short loops to create projects, you're not limited to that option: it's possible to "beatmap" long, unlooped files so that they can match up with the project tempo. This works by importing the file you want to beatmap (even files with slight tempo variations will work), then adding markers so that the file plays back in sync with the rest of the project, or so the project tempo syncs to the file.

So how does Acid know what's a "long file" instead of a loop? By default, Acid assumes files over 30 seconds are not loops, but you can change that threshold under Preferences > Audio tab, in the top right field. Here's how to do it.

Fig. 1: Choose the file you want to beatmap.

First, drag the file you want to beatmap into an Acid track. Acid supports drag-and-drop from the desktop if you like to roll that way.

Fig. 2: The Beatmapper wizard appears.

Upon loading the file, Acid's Beatmapper wizard appears. When it does, click Yes, then click Next.

Fig. 3: Identify the file's downbeat so Acid knows where the music begins.

Next, move the downbeat marker to the file's first beat, then click Next. Note that the Play and Stop buttons let you audition the file shown in the Beatmapper, so you can make sure you've chosen the right place in the file for the downbeat.

Fig. 4: Tell Acid the length of a measure.

The Beatmapper wizard now estimates the length of one measure. Adjust the end marker to define the exact end of the measure. Zoom in if needed with the (+) button to position the measure end as precisely as possible, as Acid needs this data to map tempo properly. Note that you can enable a metronome sound as a rhythmic reference if you find that helpful.

Fig. 5: Verify that you've created a loop that loops seamlessly.

Click the Play button to verify that the highlighted region loops correctly, then click Next. If the loop isn't right, re-adjust the markers that define the loop.

Fig. 6: Apply a similar process to identify the other measures in the file.

Next, click the Play button again, then click the > button (or click within a measure) to step through each measure and verify that its start and end points are set correctly. When they are, click Next.

Fig. 7: Success! The file is beatmapped.

The file is now beatmapped, and the wizard shows three options: Change Project Tempo to Match Beatmapped Track (if unchecked, the beatmapped track follows the existing project tempo), Preserve Beatmapped Track Pitch When Tempo Changes, and Save Beatmapper Information with File. Check the desired options, then click Finish.

After the clip is beatmapped, you'll want to save these changes. Click on the clip, then choose View > Clip Properties; next, click the Save button to embed the Acidization information in the file. At this point you can also click the Stretch tab for beatmap editing without having to invoke the Beatmapper wizard.
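As an aside, the arithmetic behind the measure step is straightforward: the measure you mark off implies the file's native tempo, and the gap between that and the project tempo implies a stretch ratio. A minimal sketch, assuming 4/4 time (the function names are mine for illustration, not Acid's):

    def tempo_from_measure(measure_seconds, beats_per_measure=4):
        # A measure of N beats lasting T seconds implies N * 60 / T beats per minute.
        return beats_per_measure * 60.0 / measure_seconds

    def stretch_ratio(source_tempo, project_tempo):
        # Factor by which playback must speed up (>1) or slow down (<1)
        # for the file to match the project tempo.
        return project_tempo / source_tempo

    print(tempo_from_measure(2.0))      # a 2-second measure implies 120 BPM
    print(stretch_ratio(120.0, 126.0))  # matching a 126 BPM project needs 1.05x

The more precisely you set the measure end in Fig. 4, the more accurate that implied tempo is, which is why the zoom step matters.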
  24. Expand Logic Pro with ReWire-compatible synthesizers and programs

by Craig Anderton

ReWire is a software protocol that allows a ReWire client to feed audio into a ReWire host's mixer, all within the same computer. (There are other aspects as well, such as transport sync between the two applications; you can find more information on Propellerhead Software's ReWire pages, as well as in tutorials on using ReWire with various programs.) For example, Propellerhead's Reason makes an excellent ReWire client that can provide a suite of virtual synths for a ReWire host like Logic, Sonar, Cubase, Live, Pro Tools, and the like. Furthermore, some programs (like Ableton Live) can act as either a ReWire client or host. So if you want to use Ableton Live's groove-oriented features as a client with a more linear-oriented ReWire host such as Pro Tools, that's possible as well.

Apple's Logic Pro traditionally had a reputation of being temperamental with ReWire, but that changed with Logic 8, which greatly simplified the process. To illustrate ReWiring with Logic Pro, we'll show how to ReWire Reason's mixed output into Logic, then how to assign Reason instruments to individual tracks.

Note that the order in which programs are opened usually matters with ReWire. With Logic Pro, the ReWire-compatible program has to be opened after Logic Pro, and closed before you close Logic Pro.

Start off with an open Logic project; click on Track, then select New (Fig. 1).

Fig. 1: Create a new track in Logic.

When the New Tracks dialog appears, select 1 track for Number, External MIDI for Type, and check Open Library (Fig. 2). Then click on Create.

Fig. 2: Specify the desired parameters for the new track.

Under the Library tab, a list of available ReWire devices appears (Fig. 3). Double-click on (in this example) Reason, or whichever ReWire device you want to use. The ReWire device will launch.

Fig. 3: Locate the ReWire client in the list, then launch it.

You now need to create an auxiliary input to accept Reason's output. So, click on the Mixer tab, and under Options, choose Create New Auxiliary Channel Strips (Fig. 4).

Fig. 4: Create an auxiliary input that Reason can feed.

Now you need to define the auxiliary input's characteristics (Fig. 5). Specify the Number (1), Format (stereo), Input, and Output. Under Input, select Reason and RW:Mix L/R to pick up Reason's mixed output; then click on Create.

Fig. 5: Set up parameters for the auxiliary input.

Assuming the Mixer tab is still selected, you'll see Reason's main, mixed stereo output appear as a track (Fig. 6).

Fig. 6: Reason's output appears as a track.

However, you're not limited to feeding in only Reason's mixed output; it's also possible to bring individual Reason instrument outputs into tracks. Create another auxiliary input and define it (as shown previously in Figs. 4 and 5), then click on Create. In Fig. 7, Reason output channels 3+4 are being selected, because Dr. REX has been patched into these outputs in Reason itself.

Fig. 7: Reason output channels 3 and 4 are being fed into Logic Pro.

After clicking on Create, you'll see a new track with the individual instrument you selected (Fig. 8). You can keep following a similar procedure to assign additional instruments.

Fig. 8: A new track appears with the individual instrument.

Finally, note that this is also a good time to rename your tracks so you can keep your project organized. Assuming Track Name is selected under View, double-click on the track name and enter text into the text field.
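That open-after/close-before rule is easier to remember if you think of the two applications as nested scopes: the host's session wraps the client's. A toy sketch of just that ordering (plain Python context managers standing in for the apps - not the actual ReWire API):

    class App:
        # Illustrative stand-in for launching and quitting an application.
        def __init__(self, name):
            self.name = name
        def __enter__(self):
            print("launch", self.name)
            return self
        def __exit__(self, *exc):
            print("quit", self.name)

    # The host opens first and closes last; the client nests inside it.
    with App("Logic Pro (host)"):
        with App("Reason (client)"):
            print("ReWire session active")

Run it and the launch/quit messages come out in exactly the order Logic requires.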
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  25. Create new arrangements simply and easily

Cubase lets you create an Arrangement Track, which allows you to define specific parts of the project (e.g., verse, chorus, fill, etc.). You can then assemble these parts into a Playlist and play them back in a different order. For example, you might want to insert an additional fill, or change where a verse occurs. This lets you experiment with the order of different parts of the project to create new arrangements. While this article presents the basic procedure, the Arranger Editor has many powerful features, such as setting the number of repeats for a part; check the Cubase Operation Manual for details.

As you'll likely be marking off parts of the song with measure boundaries, select Grid for the Snap parameter, and set the value to Bar. (However, you might want a grid with finer resolution if you need sections that are smaller than a measure, like part of a fill.)

Go Project > Add Track > Arranger. The Arranger Track appears; move it above the other tracks if desired.

Click on the Pencil tool, then click and drag in the Arranger Track to mark off song segments. These can overlap (in this example, the D part includes the last four bars of the B part and the first four bars of the C part).

To create the Playlist, choose the Arrow tool, then double-click on the segments in the desired order.

To rename a segment, click on the Show Event Infoline button if the Infoline is not already showing. Then click on the segment to select it, and rename it in the Infoline's Name field.

To alter the arrangement, click on the Open Arranger Editor button; the Arranger Editor window appears. Click on a part in the Current Arranger Chain and move it up or down in the chain. Note that you can also change the length of an Arrangement part, as well as move it, by using the Arrow tool and applying standard resizing/moving techniques.

To play back the arrangement, click on the Activate Arranger Mode button, then click Play. Experiment with as many arrangements as you like!
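Conceptually, an Arranger chain is just an ordered list of named (and possibly overlapping) bar ranges, each with a repeat count. A minimal sketch of that data model - the names and structure are mine for illustration, not Cubase's internals:

    from dataclasses import dataclass

    @dataclass
    class Part:
        name: str
        start_bar: int  # inclusive
        end_bar: int    # exclusive; parts may overlap, like the D part above

    def chain_length(chain):
        # Total arrangement length in bars: each part's length times its repeats.
        return sum((p.end_bar - p.start_bar) * repeats for p, repeats in chain)

    verse = Part("A", 1, 9)    # an 8-bar verse
    chorus = Part("B", 9, 17)  # an 8-bar chorus
    playlist = [(verse, 1), (chorus, 2), (verse, 1)]
    print(chain_length(playlist))  # 32 bars

This is essentially what you build by double-clicking segments into the Playlist; the Arranger Editor's repeat setting corresponds to the repeat counts here.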