Everything posted by Anderton

  1. by Craig Anderton We all want a good mix where the instruments stick together like glue, with drama and clarity. Toward that end, it would be great to be able to say "add this amount of compression, this type of EQ on these instruments, and you're done!" But if it were that easy, every recording would sound great. Instead, we'll have to be more general. It's also important to remember that tips are not rules. For example, most producers say that mixes should have space, and I agree. But then there's the Stones' Exile on Main Street, whose cluttered, chaotic mixes are a thing of beauty. Which brings us to tip #1: 1 Let the music tell you what it wants. This is something engineer Bruce Swedien (Quincy Jones, Michael Jackson, too many others to list!) emphasizes in his master classes. The music will tell you what it wants, but you have to listen. Rather than sound like something else, bring out what's unique in what you have. The fewer preconceived notions you bring about how the music should sound, the better the odds of coming up with something innovative. 2 Pay attention to the details. Listen to every track, in isolation (and preferably on headphones), before you start mixing. With hard disk recording/editing, you can massage each track to eliminate any little pops, clicks, hisses, etc. Cut the spaces between phrases to eliminate any residual hiss or noise, add a fade-in to over-enthusiastic breath inhales on vocals, run the bass through Melodyne if there are tuning problems...all these little improvements will add up to make a big difference in the overall sound. 3 Always consider the context. A common mistake among newbie recordists is to solo a track and add EQ and effects to make it sound fantastic. Then they solo the next track and do the same thing. But there's only so much bandwidth and dynamic range: Mixing all these "rich" sounds together can result in a mess. Each track is a piece of the puzzle, and needs to fit with the other tracks. 4 Differentiate instruments with EQ, not just panning. I always start mixing with all tracks panned to center, then use EQ to carve out frequencies so tracks don't "step on" each other (Fig. 1). For example, in a dance mix where the kick should hit hard, I'll shave some low end off the bass while emphasizing its pick or filter attack. But with something that's more old school R&B, I'll keep the bass full, and instead accent the kick drum's mid and beater. Once you can clearly differentiate all the instruments in mono, then bring on the panning. (A sketch at the end of this article shows what this kind of EQ carving looks like in code.) Fig. 1: In this screen shot from PreSonus Studio One 4, the bass (left fader) has a 2.4 dB shelf to fill out the low end. The drums (right fader) have a 4 dB boost (with a fairly sharp Q) at 160 Hz to bring out the kick's lower-mid sound. This lets the bass have more low-end prominence, but the kick drum is still very present. 5 Be brutal when you edit. I'm ruthless about cutting out whole sections of songs if they don't work. Keep the pace moving, while of course respecting the dynamic flow. Recommended listening: "Shhh/Peaceful" from In a Silent Way, by Miles Davis. It was edited down from far more material to create a beautiful, concise listening experience. And don't fall in love with parts; if a part doesn't support the music as a whole, that's why the "delete" key was invented. 6 Automatable EQ is your friend. Drop some of the piano midrange during the vocals so the two don't compete. 
Increase the upper mids a bit on the acoustic guitar when it plays a featured line so it "cuts" through the mix, then drop them back when the part reverts to rhythm. Even changes of one or two dB affect the overall sound, and most hosts allow EQ automation (Fig. 2). Fig. 2: Here's how to automate EQ in Cakewalk by BandLab. The acoustic rhythm guitar track is about to open an automation lane for the High Mid Frequency EQ from the four-band QuadCurve parametric EQ. You can then draw an envelope, vary controls with automation write selected, or create automation "moves" using a control surface. 7 Remember dynamics - ride the faders. When recording, there's a tendency to use the maximum available headroom. You can restore a sense of dynamics by playing the faders as you mix - subtle changes in dynamics can make a mix "breathe." And while mixing with a mouse is great for editing and touching up, it's lousy for performing. Spring the bucks for a hardware controller (Fig. 3) to add some human feel. Fig. 3: The FaderPort 8 from PreSonus is a cost-effective, ergonomic, Mackie Control-compatible fader box for adding real-time control to a mix. 8 Always be in "record automation" mode. As soon as you start mixing, enable automation recording. Sometimes your gut hears music better than your head, and your initial emotional reaction toward a song might be what the music wants. 9 Don't try to master while you mix. A lot of people will slap a multiband compressor across the final output bus and go "okay, it's mastered now!" Wrong. A good mastering engineer can make a good mix sound great, and a great mix sound transcendent. Although I'll switch in some compression on occasion to get a rough idea of how mastering will influence the sound, when it's time for the final rendering to stereo or surround, compression is outta there. Although not everyone agrees - and there can be valid reasons for mastering while you mix - to me, mastering is a different discipline than mixing. 10 Optimize your room acoustics. This is the foundation of a good mix: Mixing great music in a room with poor acoustics is like trying to make a great dinner in a cockroach-infested kitchen with a miscalibrated food thermometer and mislabeled measuring cups. If your mixes sound great in your studio and not-so-great everywhere else, you definitely need an acoustics makeover. -HC- ___________________________________________
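As a footnote to tip 4, here's roughly what complementary EQ carving looks like as DSP. This is a minimal sketch, not anyone's official implementation: it assumes numpy/scipy, uses the standard RBJ "Audio EQ Cookbook" peaking biquad, and borrows the 160 Hz, sharp-Q kick boost from Fig. 1. The random-noise "tracks" are just stand-ins.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """(b, a) coefficients for an RBJ 'Audio EQ Cookbook' peaking biquad."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
drums = np.random.randn(fs)              # stand-in for a drum track
bass = np.random.randn(fs)               # stand-in for a bass track
b, a = peaking_eq(fs, 160.0, 4.0, 4.0)   # +4 dB, fairly sharp Q (as in Fig. 1)
drums_eq = lfilter(b, a, drums)
b, a = peaking_eq(fs, 160.0, -2.0, 1.0)  # complementary broad cut on the bass
bass_eq = lfilter(b, a, bass)
```

The point of the complementary cut is the same as in the article: give each instrument its own slot so the kick stays present without the bass fighting it for the low mids.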
  2. by Craig Anderton A multiband dynamics processor isn't a processor shared by several different bands, but a device that combines elements of both EQ and dynamics control. If used properly, multiband dynamics (also called "multiband compression" because that's the most popular application) can give more transparent and effective dynamics control than traditional, single-band compressors. Plug-in mania has given a big push to multiband compression, because hardware units required a costly collection of knobs and switches, while software versions are not subject to those limitations (Fig. 1). Fig. 1: Two multiband dynamics processors: Waves' C4 Multiband Parametric Processor (right), and Cakewalk LP-64 Linear Phase Multiband (left). Formerly used almost exclusively for mastering, multiband dynamics processing is common and inexpensive enough to be used for all types of dynamics control. They're particularly useful for instruments with wide frequency and dynamic ranges, such as piano and drums. DYNAMICS CONTROL BASICS A compressor is a special-purpose amplifier. This amplifier turns down its gain whenever the input signal exceeds a certain user-definable threshold, thus lowering the level of peaks. As the peaks are lower and no longer use up the maximum available headroom, it's possible to turn up the level of the overall compressed signal, thus making it sound louder. A limiter clamps the output so that within reason, it won’t exceed a particular threshold, regardless of the input level. An expander does the inverse of a compressor: as a signal falls below a threshold, the amplifier turns down the signal so it loses level at a faster rate than normal. MULTIBAND COMPRESSION BASICS A multiband compressor splits an incoming signal into several bands (typically 3 to 5), like a graphic equalizer or spectrum analyzer. A compressor follows each band, so that each compressor affects only a specific band of frequencies. The compressors can also usually serve as limiters or expanders. The rationale behind band-splitting is simple. With a traditional compressor, a strong signal that exceeds the threshold brings down the gain, but this affects all frequencies. So, if a strong kick drum hits, it will bring down the level of the cymbals and other high-frequency sounds as well as the kick. With a multiband compressor, you would dial in the frequency range of one compressor to that of the kick, another to lower midrange, another to upper midrange, and another to treble. Thus, when the kick hits, it's compressed but this doesn't affect the other bands, which are compressed according to their own needs. TWEAKING MULTIBAND COMPRESSOR PARAMETERS A lot of musicians have a hard time getting a grip on how to use single-band compressors. Often, they set the compression so high that there are "pumping" and "breathing" artifacts, or set the attack so fast that all percussive transients are lost. Tweaking a multiband compressor is far more complex, because now several bands have to be set just right. Here are some general guidelines on setting up multiband compression. 1. Listen to the sound before you touch any knobs, and analyze what needs to be done. If you just want a hotter, louder sound, that's the simplest application: just split the signal into bands that divide up the spectrum, and set the compressor parameters similarly. In this case, the multiband compressor acts like a standard compressor, but gives a more transparent sound because of the multiple bands. 
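Here's the band-split-then-compress topology reduced to code — a bare-bones sketch assuming numpy/scipy, with arbitrary crossover frequencies, thresholds, and ratios. Note that real multiband processors use carefully phase-matched crossovers (e.g., Linkwitz-Riley) so the bands sum flat; the plain Butterworth filters here only approximate that.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def compress(x, fs, thresh_db, ratio, attack=0.005, release=0.1):
    """Static compression above thresh_db, via a one-pole envelope follower."""
    g_a = np.exp(-1.0 / (fs * attack))
    g_r = np.exp(-1.0 / (fs * release))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        coeff = g_a if mag > env else g_r        # rise fast, fall slowly
        env = coeff * env + (1 - coeff) * mag
        level_db = 20 * np.log10(max(env, 1e-9))
        over_db = max(level_db - thresh_db, 0.0)
        gain_db = over_db / ratio - over_db      # negative = gain reduction
        out[i] = s * 10 ** (gain_db / 20)
    return out

fs = 44100
x = np.random.randn(fs * 2) * 0.3                # stand-in for program material
bands = [sosfilt(butter(4, 200, 'lowpass', fs=fs, output='sos'), x),
         sosfilt(butter(4, [200, 3000], 'bandpass', fs=fs, output='sos'), x),
         sosfilt(butter(4, 3000, 'highpass', fs=fs, output='sos'), x)]
settings = [(-20, 2.0), (-25, 1.5), (-30, 3.0)]  # (threshold dB, ratio) per band
y = sum(compress(band, fs, t_db, r) for band, (t_db, r) in zip(bands, settings))
```

Each band gets its own threshold and ratio, so a strong kick in the low band pulls down only the low band's gain — exactly the rationale described above.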
Using a multiband compressor to fix response problems is more complex. For example, suppose you're mastering a tune, and the bass end sounds kind of muddy because the kick and low toms ring too long, the high end is shrill, and the vocals in the midrange sound buried in the mix. You don't want to compress the low end and make the mud louder, nor do you want to emphasize the high end. But you do want to compress the midrange to bring the vocals more to the front. Once you've assessed what you need to do, it's time to... 2. Set the frequency ranges. Most software multiband compressors let you edit the frequency range, from narrow to broad, covered by each compressor. Generally, you can also solo individual bands to hear what they contribute to the overall sound. Set the compressors for a 1:1 ratio and high threshold so they don't affect the signal, then solo a band and listen. In our example above with the muddy low end, you might omit compression entirely and add some expansion to reduce ringing. Tweak the band's range so that it covers the muddy bass area but nothing else. As most multiband compressors have level controls for each band, I find it handy to first treat the device like a graphic EQ. If turning up a band improves the sound, that may indicate that a little compression is in order. If turning a band down helps, then I generally don't compress it, or use subtle expansion to emphasize peaks more while de-emphasizing lower-level signals. 3. Start editing the compression settings. This is the trickiest part, because anything you do in one band affects how you perceive the other bands. For example, if you compress the midrange, the treble and bass will appear weaker. You don't want to get into a situation where you bring up the midrange, which makes you then want to bring up the bass and treble, and then you have to compensate for that. Avoid going over 1.5:1 or 2:1 compression ratio at first, and keep the threshold relatively high, such as -3 to -9 dB. This will tame the highest peaks, without affecting too much else of the signal. Listen after each change, and give your ears a chance to get acclimated to the sound before making additional changes. If your multiband compressor lets you save presets, save them periodically as temp 1, temp 2, temp 3, etc. That way you can go back to a previous, less radical setting if you start to lose your way. In general, it seems best to work on any "problem bands" first. Get them sounding right, then tweak the other bands to work well with them. For example, suppose you compress the upper midrange to better articulate voices, or bring out melodic keyboard lines. Once that's set, adjust the bass to support (but not overwhelm) the midrange, then tweak the treble to suit. 4. The final step: it's an equalizer. After the dynamics are under control, my final tweak is usually adjusting each band's output level for the best overall balance. In fact, one of the nice things about a multiband compressor is that some bands can compress while others expand, and still others just do nothing—if set for zero compression, they act like bands in a graphic EQ. MORE THAN PROGRAM MATERIAL In addition to mastering, multiband dynamics can also dress up individual instruments. Bass is one of my favorites; compress the high end to bring up finger pops and slaps, compress the very low end for a smooth, sustained sound, but leave the lower midrange alone. Piano is also a great candidate for multiband compression. 
Leave the low end alone except for very light limiting, so that a good hit in the bass range has strong, prominent dynamics—don't squash the low end too much, or the notes will lose drama. But compressing the upper midrange a bit can help melody lines cut through a mix better, and boosting—but not compressing—the very highest frequencies (around 8kHz and above) adds "air." Drums work well with multiband compression, because each drum tends to have its own slot of the frequency spectrum. A multiband compressor can almost remix a drum part by compressing or boosting certain drums, and using expansion to reduce "ringing" if you need to tighten up a drum sound. So once again, let's hear it for plug-ins...the world of software has turned a formerly esoteric piece of hardware into something cost-effective enough for just about any studio. To me, serious progress. -HC- ___________________________________________
  3. by Craig Anderton Is noise really a problem? After all, analog tape hiss is virtually extinct. We have 24-bit converters with signal-to-noise ratios that far exceed that of any delivery medium. Low-level hum? Today's signals have plenty of level to overcome what the cables might pick up, and besides, cables are better shielded too. Yes, it's a beautiful, noise-free world... Except for that faint air conditioning rumble, hash that comes in through your guitar pickups, noisy electronics in a vintage electric piano, your monitor's 15kHz oscillator frequency, that tube preamp you like so much, and...you get the idea. Noise can still be a problem, and we still need to deal with it. Here's how. ANDERTON'S LAW OF NOISE REDUCTION Here it is: "Noise reduction works best on signals that need it the least." You can get rid of a teeny bit of hiss fairly easily, but you can't remove lots of noise without removing (or at least modifying) part of the signal itself; nothing can save a horribly noisy signal. And this law has a corollary: "Noise is reduced by lots of small steps, not one big one." Minimize noise at every opportunity - cut a dB here, another dB there, and eventually it adds up. You'll probably end up using a combination of the following seven techniques. 1 LEVEL OPTIMIZATION How it works: Noise is all about the ratio of signal to noise, so if you weren't asleep in 3rd grade, you know that more signal results in less noise. The object of level optimization is to feed the most signal possible (short of distortion) into your system, while leaving a few dB for headroom. How you do it: Start with the simple stuff - make sure a mic's pad switch (if present) isn't engaged unless needed. Also, turn instrument volume controls up all the way if possible (with digital instruments, this may give better resolution too). If you're a fan of close miking, high bass levels may require turning down the mic pre input gain. Check your mic for a low cut filter, and engage it if it doesn't affect the sound adversely so you can get more level into your mic pre. Similarly, a pop filter can reduce low frequency "plosives" that would cause similar problems. Also, good mic technique that provides a consistent level lets you turn up the input gain as much as possible. Finally, "know thine metering." A mixer channel's overload LED may turn on when a signal exceeds zero, or it may be conservative and turn on when the level hits about -6dB. Check your DAW's waveforms to see if clipping truly occurs when the overload LED kicks in. Nasty stuff: You can increase level by only so much until distortion occurs. Mitigating factors: You don't need any special gear for this. 2 FILTERING How it works: This "brute force" technique uses a steep low-pass filter to remove very high frequencies, where hiss energy tends to hang out. Some instruments don't have much high frequency energy, so you can remove some of the hiss without destroying much of the signal, if any (Fig. 1). Fig. 1: Sonar X3's Quad Curve EQ has a dedicated lowpass band whose slope can go up to 48dB/octave. The controls and resulting response are outlined in red for clarity. Furthermore, a steep low cut filter can reduce low frequency rumble. The filter in your mic or channel strip might not be steep enough; you want at least 24dB/octave. And for hum, a steep notch at 60 or 50Hz (depending on where you live) can attenuate the hum's fundamental frequency. How you do it: Insert a filter after the signal's output. 
A slightly more advanced variation is to record with the high frequencies purposely "hot" - for example, use a mic with a big high frequency bump. When you pull back the highs a bit to reduce hiss, the signal returns to a more balanced sound. Nasty stuff: Removing highs may also remove some of a sound's "sparkle," and reducing lows can thin the sound. Mitigating factors: You probably have an equalizer that can provide decent filtering, so no extra investment is required. Also, there are variations on this theme, such as adding envelope control to filtering. This is somewhat like gating (see next), as the amount of processing depends on the input signal. With a high input level, the filter kicks open and lets through high frequencies. With low input levels, the filter frequency closes down, removing high frequencies and presumably, some hiss along with it. 3 DSP How it works: Companies like iZotope, Waves, Sony, Steinberg, and others make software-based solutions (stand-alone and/or plug-in) that use sophisticated algorithms to analyze, and remove, noise. This can include not just hiss but crackles, pops, vinyl scratches, hum, and rumble. How you do it: Open the program, load the file you need to process, tweak the parameter values for best results, and save the cleaned version (Fig. 2). Or, insert a plug-in, and render the cleaned file. Fig. 2: The Noise Reduction in Sony Sound Forge takes a "sample" of only the noise signal (this requires having a section with only noise, though), then subtracts only that sound from the file. Nasty stuff: Cost. Although some digital audio editing programs (e.g., Sony Sound Forge) include noise reduction, separate versions can cost up to several hundred dollars. Also, extreme cleaning can produce audible artifacts. Mitigating factors: When these programs work, the results can be miraculous. 4 EXPANSION How it works: Expansion is the reverse of compression. Below a certain threshold, amplifier response becomes non-linear so that a small decrease in input level results in a large decrease in output level. For noise, the result is similar to gating, as small amounts of hiss are "pushed down" toward the bottom of the dynamic range. How you do it: Expansion is usually part of a dynamics control processor that also does compression. To reduce hiss, you set an expansion threshold level just above the hiss, then add a really steep ratio, like 10:1 or higher. This causes the output level to drop dramatically for relatively small input level decreases (Fig. 3). Fig. 3: iZotope's Alloy 2 is set up here so that its multiband dynamics processor has its highest band set for downward expansion. Signals below about -60dB are expanded downward with a 10.1:1 ratio, as shown by the yellow line on the dynamic response graph. Nasty stuff: If the noise level is fairly high, expansion gives problems similar to gating. Mitigating factors: Expansion typically includes attack and decay controls, which can provide a more natural effect. 5 COMPANSION How it works: Nowadays, this is pretty much a hardware-oriented technique, not something for plug-ins. But during the days of analog tape, compansion (compression/expansion) was the Holy Grail of noise reduction. It worked by compressing the signal going into a noisy signal path (like tape). At the tape output, the signal would be expanded to restore the original dynamic range. However, any hiss added by the tape would be expanded downward (see "Expansion" above), thus reducing the hiss. 
For example, dbx noise reduction added 2:1 compression and a high frequency treble boost to the incoming signal. At the output, there was 1:2 expansion and a high frequency treble cut equal and opposite to the original treble boost. How you do it: Patch the compressor at the input, and the expander at the output. You also need to calibrate levels carefully, otherwise the compression and expansion might not be exactly complementary. Nasty stuff: Sometimes you can hear "modulation noise" riding along with the signal, and with poor level calibration, the sound will tend to "flutter" or waver. Also, the compression and expansion have to be exactly complementary. Mitigating factors: Although digital recording pretty much eliminated the need for compansion-based noise reduction, the principle lives on in some guitar effects and other noisy analog systems. 6 AUTOMATION How it works: This DAW-specific way to reduce noise works similarly to gating. But unlike gating, which is an automatic process, automation lets you "customize" when to mute and unmute (as well as fade in/fade out) noisy passages, and can give very good results (Fig. 4). Fig. 4: Here's a volume automation envelope being added in MOTU's Digital Performer. Note the slight fade times to ease the transition from silence to signal. How you do it: You visually inspect a waveform, and create automation moves. When the "signal" ends and the "noise" begins, you draw the automation curve so that it fades out into silence, remains silent during the noisy part, then fades back in again when the signal reappears. Nasty stuff: If you have lots of tracks that need to have noisy passages turned into silence, it can take a long time to do this manually. Mitigating factors: Some programs automate the process, essentially by adding noise gate-like options where signals below a certain level are converted to silence. But this has the same limitations as using a noise gate. Another time-saving option is to automate mutes in real time as a track plays, but this won't be as precise as drawing in curves. 7 GATING How it works: With this old school technique, you set a threshold level just above the hiss level. For any signal below this threshold, the "gate" closes and doesn't let any signal, including hiss, through to the output. Once the input signal exceeds this threshold, the gate opens and lets the signal through. The noise is still there, but with sufficiently high signal levels, the desired signal will likely mask the hiss. Some noise gates (Fig. 5) include bells and whistles like frequency selectivity, "lookahead" option so that the gating occurs just before a transient, and a "hold" parameter to set a guaranteed gate open time. Fig. 5: The Sonitus fx:gate is a pretty old processor, but can gate a signal based on amplitude or on amplitude within a specific frequency range. It also has a lookahead option. How you do it: Place a noise gate at the output of the signal you want to clean up. If your signal source feeds something like a high gain preamp or compressor, you might also consider putting a gate before the high-gain stage so that any crud doesn't get amplified. Nasty stuff: As the signal transitions between the gate open/closed conditions, there can be an abrupt, noticeable change in sound. If the signal criss-crosses over the threshold, there can even be "chattering" between the gate's two states. 
Mitigating factors: Adding an attack time smooths the transition from off to on, and a decay time does the same when going from on to off (and also discourages chattering). Another option is not to close the gate fully, but apply perhaps 10dB of reduction. The gate then transitions from the existing noise to a smaller amount of noise, which is not as blatant a change. Which of these seven techniques (or combination thereof) will work best for a given situation comes down to trial and error. But persevere, and you may be very surprised at how much you can clean up a signal by using the right tools. -HC- ___________________________________________
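As a postscript, the core of the gating/expansion logic in techniques 4 and 7 is small enough to sketch in full. This toy version (numpy assumed; all thresholds and time constants arbitrary) applies both "mitigating factors" above: it closes down to a -10 dB floor rather than full silence, and it opens faster than it closes to discourage chattering.

```python
import numpy as np

def soft_gate(x, fs, thresh_db=-50.0, floor_db=-10.0, attack=0.002, release=0.05):
    thresh = 10 ** (thresh_db / 20)
    floor = 10 ** (floor_db / 20)           # "closed" means -10 dB, not silence
    g_a = np.exp(-1.0 / (fs * attack))
    g_r = np.exp(-1.0 / (fs * release))
    env, gain = 0.0, 1.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), g_r * env)        # quick-rise, slow-fall envelope
        target = 1.0 if env > thresh else floor
        coeff = g_a if target > gain else g_r   # open fast, close slowly
        gain = coeff * gain + (1 - coeff) * target
        out[i] = s * gain
    return out
```

Lowering floor_db toward silence turns this back into a traditional hard gate; raising thresh_db gates more aggressively, with all the trade-offs described in technique 7.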
  4. Sound Effects with Guitar Think sound effects are solely the domain of keyboards? Think again by Craig Anderton Samplers and keyboards make it easy to come up with FX: load a file, punch up a preset, and hit a key. Yet electric guitar, in conjunction with a good multi effects processor or amp sim, can make sounds that are more organic and complex than what you can obtain from a bunch of canned samples. No, you can’t generate car crashes and door slams—but for ethereal pads, suspense music, industrial noises, alien backgrounds, and much more, consider using guitar as your instrument of choice. Why let keyboard players do all the cool sound effects? Here are my Top 10 tips for creating truly weird guitar sounds. Just remember Rule #1: extreme effects settings produce extreme sounds. Generally, you’re looking for the boundaries of what an effect can do; all those +99 and -99 settings you’ve been avoiding are fair game for producing truly novel effects. 1. Is everything in order? If you’re using hardware instead of amp sims, it’s essential to be able to change the order of effects by repatching individual effects boxes or using a multi effects with customizable algorithms. For example, a compressor generally goes early in the chain, with chorusing added later on so that the effect processes the compressed signal. However, suppose the chorus has a ton of resonance to create some really metallic sounds. This could produce such drastic peaks with some notes that in order to tame them, you would need the compressor later in the chain. 2. Industrial reverb For a really rude sound, play a power chord through a reverb set for a fairly long time delay, then add distortion after the reverb (Fig. 1). The resulting sound has the added bonus of being able to rid you of any unwanted house guests. Fig. 1: Following Guitar Rig’s Reflektor reverb with distortion produces a dreamy sound—assuming your dreams tend toward the nightmarish. 3. Wet is good It’s usually best to set the effects mix for wet sound only. Having any straight guitar sound can blow your cover because a guitar attack is such a distinctive sound. 4. Attack of the pedal pushers Add a pedal before your effects, not after (Fig. 2). You can cut off the guitar attack by fading in the pedal at the note’s beginning; with effects like long delays and reverbs, you can fade out the source signal while the “tail” continues on. Fig. 2: Choosing when effects will receive input can have a huge effect on the sound, especially with long delays and reverb. 5. Found sounds The guitar itself can generate noises other than those created by plucking strings—here are a few options. Hold a smartphone, calculator, or other portable microprocessor-controlled device up next to the pickups, and you’ll hear a bunch of science fiction sounds worthy of the bridge of the Enterprise. Feed a high-gain effect (such as compression or distortion) and tap the back of the neck with your fingertips. While your high-gain effect is set up, drag the edge of a metallic object (like a screwdriver or butter knife) along wound strings. Use extreme amounts of whammy, and transpose the strings down as low as they’ll go. Tap the guitar body smartly with your knuckles to create percussive effects. These will sound even more interesting through looooong reverb. 6. Turn up the heet The Heet Sound EBow (Fig. 3) is a very cool sustaining device for individual strings. Fig. 3: For many guitarists, the EBow is their “secret weapon” for sustaining single-note lines. 
This hand-held device picks up vibrations from the string, amplifies them, then drives the string with those vibrations to create a feedback loop. The EBow rests on the strings adjacent to the string being “e-bowed”; moving the EBow further away from, or closer to, the string can create all kinds of interesting harmonic effects. If you want to approximate that famous blissed-out “Frippertronics” tape loop sound, use the EBow to drive a delay set for long echoes (greater than 500 ms) with lots of feedback (more than 80%). 7. Shifty pitches Pitch shifters are a treasure trove of weird sounds. With hardware pitch shifters, add a mixer at the input, then split the pitch shifter’s output so one split feeds into the mixer through a delay (Fig. 4 shows how to patch stand-alone boxes to do this; with a multieffects, a pitch shifter will often include pre-delay and feedback parameters, which accomplish the same result). Fig. 4: How to patch a pitch shifter hardware effect for bizarre “bell tree” effects. Suppose there’s a 100ms delay and pitch shift is set to -1 semitone. The first time the input reaches the output, it comes out 1 semitone lower. It then travels back through the delay, hits the shifter input 100 ms later, and comes out transposed down another semitone. This then goes through the delay again, gets transposed down another semitone, etc. So, the sound spirals down in pitch (of course, with an upward transposition, it spirals up). With short delays, the pitch change sounds more or less continuous while with longer delays, there’s more of a stepped effect. The delay’s level control sets the amount of feedback; more feedback allows the spiraling to go on longer. However, if the delay level produces gain, then you could get nasty oscillations (which come to think of it, have their own uses). A sketch after these tips works through the spiral’s arithmetic. 8. Lord of the ring modulators Don’t have a ring modulator? If a tremolo or autopan rate extends into the audio range, the audio modulation “slices” the signal in a way similar to a ring modulator. 9. Fun with flangers Like pitch shifters, chorus/flangers are extremely versatile if you test their limits (Fig. 5). Fig. 5: Waves’ MetaFlanger is set up as described for a strange, morphing effect. Start off with the slowest possible LFO rate short of it being stopped, so that any pitch modulation is extremely slow. Then set the depth to a relatively low setting so there’s not a huge amount of modulation, and feedback to the maximum possible, short of distortion. Edit the output for wet signal only, and try a relatively long initial delay time (at least 20ms). You’ll get metallic, morphing sounds that sound like, for lack of a better description, ghost robots—an unearthly, mechanical effect. If I were doing effects for a movie and building tension for the part where the psycho killer is stalking his next victim, this sound would get first crack at the scene. 10. Parallel universe Some advanced multieffects let you put effects in parallel. One example of how to use this is to create ultra-resonant sounds. Most guitarists know that you can take a flanger, boost the resonance to max, turn the LFO speed to zero, and end up with a very metallic, zingy sound. But you can go one step further with parallel effects: patch a stereo delay in parallel with the flanger, set each channel for a short (but different) delay (e.g., 3 and 7ms), feedback for each channel to as high as possible short of uncontrolled feedback, and output to (of course!) wet only. You’ll now have three resonant peaks going on at the same time. 
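To make tip 7 concrete: the spiral is just "one semitone per trip around the loop." A quick back-of-the-envelope calculation (plain Python, no audio processing involved) shows where an A440 input lands on each pass through the 100 ms delay and -1 semitone shifter:

```python
# Each pass through the feedback loop adds another 100 ms of delay and
# another -1 semitone of shift, so pitch falls geometrically over time.
delay_s, semitones_per_pass, f0 = 0.100, -1, 440.0
for n in range(8):
    freq = f0 * 2 ** (n * semitones_per_pass / 12)
    print(f"pass {n}: t = {n * delay_s:.1f} s, pitch = {freq:6.1f} Hz")
```

With the delay stretched to, say, 500 ms, the same numbers produce the stepped "bell tree" effect; at very short delays the steps blur into a continuous downward smear.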
And there you have the 10 tips. Until next time, may your computers never crash and your strings never break. ___________________________________________ Craig Anderton is a Senior Contributing Editor at Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  5. Mixing for Better Mastering What you do during the mix can make a big difference ... by Craig Anderton Mastering your own recording projects at home has become a hot topic, and I’m often asked at seminars whether doing a great mix eliminates the need for mastering. Theoretically this is possible, but the analogy I’d use is putting dressing on a salad. You could put a certain amount of dressing on each piece of lettuce, tomato, etc., so that when combined, you’d have the same results as putting dressing on the entire salad. This would be like optimizing every track, and assuming that when put together, something would sound “mastered.” But in my experience, I’ve never heard a mix—no matter how good—that couldn’t benefit in some way from a little judicious mastering. Nonetheless, there are techniques you can use while mixing to make the mastering process go more smoothly—here are some of my favorites. MATCHING TIMBRES If you use loops in your music, be aware of loops whose characteristics are wildly different from other loops. For example, suppose most of the loops were taken from a drum machine you use, but you also inserted a few commercially-available drum loops. It’s likely that the latter were already “pre-mastered,” perhaps with some compression or treble-boosting. As a result, they might sound brighter than the loops you created. If you decide to boost the track’s overall brightness while mastering, the commercial loops will now seem “over the top” in terms of treble. I had this happen once when re-mastering a stereo track where everything needed a little extra brightness except for a high-hat loop. It took forever to use notch filtering to find just the high-hat frequencies and reduce those, while boosting everything else. This kind of inconsistency can also happen if you use a lot of analog synths, which tend to have a darker sound, mixed with a few digital synths, which tend to be brighter. This will also give problems when mastering, because if you bring down the highs to tame the digital synths, the analog synths will sound much duller; if you bring up the highs, the digital synths may screech. The solution is simple: to ensure that changes made during mastering will affect all sounds pretty much equally, before mixing, bring “minority” tracks into timbral alignment with the majority of the tracks’ timbres. However, don’t go overboard with this; some differences between tracks need to be respected (e.g., you might want a track to sound brighter or duller than others, regardless of any equalization done while mastering). BRINGING PEAKS INTO PLACE Another issue involves peak vs. average levels. A lot of engineers use mastering to increase a tune’s average level, thereby making it seem louder (regrettably, some engineers and artists take this to an extreme, essentially wiping out all of a song’s dynamics). To understand the difference between peak and average levels, consider a drum hit. There’s an initial huge burst of energy (the peak) followed by a quick decay and reduction in amplitude. You will need to set the recording level fairly low to make sure the peak doesn’t cause an overload, resulting in a relatively low average energy. On the other hand, a sustained organ chord has a high average energy. There’s not much of a peak, so you can set the record level such that the sustain uses up the maximum available headroom. Entire tunes also have moments of high peaks, and moments of high average energy. 
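If you want to put numbers on the distinction, peak and average (RMS) levels are easy to measure. A minimal sketch, assuming numpy and signals normalized so full scale is 1.0; the synthetic "drum hit" and "organ chord" are stand-ins for real tracks:

```python
import numpy as np

def peak_and_rms_db(x):
    peak = 20 * np.log10(np.max(np.abs(x)) + 1e-12)
    rms = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    return peak, rms

fs = 44100
t = np.arange(fs) / fs
drum_hit = np.exp(-t * 30) * np.random.randn(fs)   # big transient, fast decay
drum_hit /= np.max(np.abs(drum_hit))               # normalize peak to 0 dBFS
organ = 0.5 * np.sin(2 * np.pi * 220 * t)          # steady sustain
for name, sig in (("drum hit", drum_hit), ("organ chord", organ)):
    p, r = peak_and_rms_db(sig)
    print(f"{name}: peak {p:6.1f} dBFS, average (RMS) {r:6.1f} dBFS")
```

The drum hit shows a large gap between peak and RMS; the organ chord shows a small one, which is exactly why the two need such different record levels.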
Suppose you’re using a hard disk recorder, and playing back a bunch of tracks. Of course, the stereo output meters will fluctuate, but you may notice that at some points, the meters briefly register much higher than for the rest of the tune. This can happen if, for example, several instruments with loud peaks hit at the same time, or if you’re using lots of filter resonance on a synth, and a note falls within that resonant peak. If you set levels to accommodate these peaks, then the rest of the song may sound too soft. You can compensate for this while mastering by using limiting or compression, which brings the peaks down and raises the softer parts. However, if you instead reduce these peaks during the mixing process, you’ll end up with a more natural sound because you won’t need to use as much dynamics processing while mastering. The easiest way to do this while mixing is to play through the song until you find a place where the meters peak at a significantly higher level than the rest of the tune. Loop the area around that peak, then one by one, mute individual tracks until you find the one that contributes the most signal. For example, suppose a section peaks at 0 dB. You mute one track, and the peak goes to -2 dB. You mute another track, and the section peaks at -1 dB. You now mute a track and the peak hits -7 dB. Aha! That’s the track that’s putting out the most energy. If you have “rubber band” mix automation, dive into waveform view for that track, and insert a small dip to bring the peak down by a few dB. Now play that section again, make sure it still sounds okay, and check the meters. In our example above, that 0 dB peak may now hit at, say, -3 dB. Proceed with this technique through the rest of the tune to bring down the biggest peaks. If peaks that were previously pushing the tune to 0 dB are brought down to -3 dB, you can now raise the tune’s overall level by 3 dB and still not go over 0. This creates a tune with an average level that’s 3 dB hotter, without having to use any kind of compression or limiting. GETTING RID OF SUBSONICS Unlike most analog recording, digital can—and sometimes does—produce energy well below 20 Hz. This subsonic energy has two main sources: downward transposition/pitch-shifting, and extensive DSP operations that allow control signals, such as fades, to superimpose their spectra onto the audio spectrum. I ran into this problem recently when doing a remix of a soundtrack tune. The client wanted a really loud, "squashed" mix, so I added a substantial amount of limiting to the finished mix. Yet in some sections, the level went way down—as if some hugely powerful signal was overloading the limiter’s control signal—but I couldn’t hear anything out of the ordinary. Looking at the two-track mix showed something interesting: a massive DC offset (Fig. 1). Fig. 1: The top graphic shows the original file prior to filtering out the subsonics. The lower graphic shows what it looked like after adding a sharp low-frequency cutoff at 30 Hz. Normalizing it could raise the level significantly higher. After a bit of research, I noticed that these dips (outlined in red for clarity) corresponded to places in the song where there was a long, rising tone. I had transposed the tone down by several octaves so it sounded like it was coming up from nowhere, but that transposition had moved it down so far into the subsonic region it created a DC offset. That’s the signal to which the limiter was responding. 
So, I inserted EQ on this one track to create a super-sharp cutoff starting at 30 Hz. When I redid the mix, the DC offset was gone. Now that my curiosity was piqued, I called up a spectrum analyzer window and started looking at some of the files that had been subjected to multiple DSP operations. Sure enough, in a few cases there was significant energy below 20 Hz. After a while this can add up, robbing available headroom and possibly causing intermodulation problems with audible frequencies. Since then, I’ve started using batch processing functions to run all files used in a project through a steep low-cut filter whose frequency is set just below the audible range. In some tunes this doesn’t make too much difference, but in others, I’ve noticed a definite, obvious improvement in headroom and overall clarity. You can also use a sharp low-cut filter with already mastered material to cut out subsonic frequencies, but it’s much better to do this type of processing before the files are mixed together, as this can lead to a cleaner mix. Okay, now your tune is prepped for mastering. Hopefully, as a result of these techniques, any processing required for mastering can be more subtle, so you’ll end up with a clearer, more natural sound – but one that still packs plenty of punch. -HC- ___________________________________________ Craig Anderton is a Senior Contributing Editor at Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
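One way to script the batch low-cut pass described in the article above — a minimal sketch, assuming the scipy and soundfile libraries; the file names are hypothetical, and the 8th-order Butterworth at 30 Hz is just one reasonable choice of "steep filter just below the audible range":

```python
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, fs = sf.read("rising_tone.wav")      # hypothetical source file
sos = butter(8, 30, 'highpass', fs=fs, output='sos')
cleaned = sosfiltfilt(sos, audio, axis=0)   # zero-phase; axis=0 handles stereo
sf.write("rising_tone_nodc.wav", cleaned, fs)
```

Wrapping this in a loop over a project folder gives the batch behavior described; sosfiltfilt runs the filter forward and backward so the cut adds no phase shift of its own.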
  6. Roger Linn Design LinnStrument 128 This just may be the droid you’re looking for… by Craig Anderton The LinnStrument MIDI controller crosses over the line into a new musical instrument, because it proposes a new playing technique as well as the technology that makes this technique possible. The goal is to liberate electronic music instruments (hardware and software) from the conventional “on-off switch” limitations of traditional keyboards. To be fair, these switches have been augmented with velocity, aftertouch, and in some cases, polyphonic aftertouch and the extremely rare release velocity—as well as modulation and pitch bend wheels. However, these seldom translate the immediacy of acoustic or electric instruments, where (for example) how you hold a guitar pick influences the sound of an electric guitar. The LinnStrument really needs multiple reviews: the note layout, the technology it uses, the instruments with which it’s compatible, and the musical impact. But is it compelling enough to take the time to learn a new instrument? Let’s find out. THE NOTE LAYOUT For those not familiar with Roger Linn, he’s contributed to our world of musical electronics as much as other pioneers like Bob Moog and Dave Smith. However, he’s too modest to tell you that, which leaves it up to people like me to let you know that when Roger Linn invents something, it’s worth paying attention. It just might be the next sampled drum machine (his LinnDrum powered the synth-pop genre), the next MPC-style beat machine (they’ve become universal fixtures in dance, rap, and hip-hop), or the next tempo-synced guitar effect (like the AdrenaLinn). The main interface is an 8 x 16 matrix of 128 pads; each pad represents a musical note (the LinnStrument 128 is a smaller version of the LinnStrument, which has 200 pads, covers five octaves, and costs 50% more). The pads respond to velocity, pressure, side-to-side motion, front-to-back motion, release velocity, and sliding (e.g., like sliding up and down a guitar string—try that with a conventional keyboard). They’re laid out sort of like the notes on a guitar neck, except the default interval between rows is fourths; the row offset can be set (in semitones) to 3, 4, 5, 6, 7, or 12. However, you can enter anything from -Guitar (backwards guitar tuning) and -16 (high pitches toward the front) through zero to +16. I stuck with the default, although it’s good to know options are available. The layout is significant. Back in the 80s, I wrote up a project called “the matrix keyboard,” which used an identical keyboard layout based on Chomerics membrane switches. It was indeed just on-off switches, but with my first instrument being guitar, it made sense because I could think in shapes, and those shapes were the same in any key. I found I could play wicked fast solos spanning note ranges that would be impossible to play with a conventional keyboard, and the LinnStrument layout has the same attributes. It will remind many people of a Chapman Stick. Although you can play the LinnStrument standing up like a guitar (there are included guitar strap pins), I found it friendlier to treat it as a tabletop device and lay it on a surface. Then again I never really got along with playing strap-on keyboards, so I guess that’s not too surprising. Playing with one hand works for solos, but two-handed technique is definitely a better way to exploit what the LinnStrument can do. Make no mistake: this requires new muscle memory. 
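The fourths layout boils down to simple arithmetic: with the default +5-semitone row offset, any pad's note is just an offset from the lowest pad. A sketch to illustrate (the base note of 30 is an arbitrary assumption for the example, not the LinnStrument's documented default):

```python
BASE_NOTE = 30    # hypothetical MIDI note for row 0, column 0
ROW_OFFSET = 5    # default: rows tuned in fourths, like the low strings of a guitar

def pad_to_midi_note(row, col):
    """Map a pad on the 8-row x 16-column grid to a MIDI note number."""
    assert 0 <= row < 8 and 0 <= col < 16
    return BASE_NOTE + row * ROW_OFFSET + col

# A chord "shape" is a set of (row, col) offsets; shifting the anchor pad
# transposes it, which is why the same shape works in any key:
shape = [(0, 0), (1, 2), (2, 1)]
chord_in_c = [pad_to_midi_note(r, c) for r, c in shape]
chord_up_2 = [pad_to_midi_note(r, c + 2) for r, c in shape]  # same shape, whole step up
```

That shape-transposition property is exactly the guitar-like advantage described above.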
Although laid out like a guitar, guitar technique won’t do you much good unless you’re into tapping; keyboardists need to think in terms of shapes and intervals, like a guitarist. Physically, the LinnStrument is easy to play. Mentally, it’s a new instrument and it takes time to develop the kind of unique physical dexterity needed by any musical instrument. I don’t want to make it sound tougher than it is, but I don’t want to make it sound easier, either. THE INTERFACE The user interface itself works by holding down control keys (momentary if pressed for > 0.5 sec, toggle if pressed for < 0.5 sec), then tapping pads to make your selection. Most of this involves set-and-forget functions (velocity curve, setups for splits, pressure sensitivity, row offset, footswitch assignment for the dual footswitch jack, and the like). This is fortunate because the labels aren’t exactly readable under stage lights; however, the most important functions are laid out in a vertical strip of eight switches along the left side. You’ll be able to make adjustments on the fly after a period of familiarization. Note that all of the user-editable functions are available from the front panel—you don’t need a computer editor to alter parameters, although that could be a welcome addition. (Side note: Not needing a computer editor also means you won’t end up in the same kind of situation as M-Audio Venom owners, who are reliant on computer-based editing software that may or may not ever be updated to deal with newer operating systems. What’s more, the LinnStrument software is open source. If someone wants to write an editor, they can.) There are two 5-pin DIN MIDI jacks, a USB port, and a footswitch jack that can accommodate single or dual footswitches. In addition to controlling expected functions like sustain and tap tempo, the footswitch(es) can also control the arpeggiator, jump octaves, send control change messages, and the like. MUSICAL IMPACT Let’s skip ahead to the bottom line. The LinnStrument is one of a handful of electronic controllers, like the ROLI Seaboard and Haken Continuum, whose expressiveness goes way beyond a standard keyboard. Sure, I can make very expressive synth sounds—if I add controller data manually, after playing the notes, using a process that Quincy Jones likened to “painting a 747 with a Q-Tip.” The LinnStrument places that kind of control under your fingers, in real time. This allows for a far more spontaneous musical experience because, to alter a phrase from the Department of Homeland Security, “If you feel something, play something.” At some point, you’ll be familiar enough with the LinnStrument—and the synthesizer it controls—to make the kind of sounds you want to make. Interestingly, I didn’t sense a conventional learning curve; it’s more like everything (except the muscle memory) falls into place over a short period of time…although it did take some floundering around to get to the point where I could make that transition. The LinnStrument web site is loaded with helpful information, so if you’re going to learn the LinnStrument—bookmark it. ’Nuff said. THE INSTRUMENTS AND TECHNOLOGY The biggest hurdle with the LinnStrument isn’t the controller itself, but the instrument it drives. The LinnStrument speaks MPE (MIDI Polyphonic Expression), and there aren’t very many instruments that respond well to polyphonic aftertouch, let alone allow each individual note to receive its own data—the main goal of MPE. 
It’s kind of like having a Testarossa, but only a couple of highways where you can really open it up. However, it’s a misconception that you need an MPE synth, because of how the LinnStrument implements one-channel MIDI—you can do polyphonic pressure, 3D-Expressive solos, and performed chord vibrato all on one channel. The main perceived limitation is that polyphonic pitch slides will be automatically quantized; unless you need polyphonic pitch slides, MPE’s benefits aren’t all that noticeable. The web site explains the one-channel MIDI implementation, which is quite clever. The LinnStrument adds expressiveness to any synth that can respond to controllers. There’s a set of instruments available for Logic Pro (and MainStage), so I took a leave of absence from my Windows workhorse and booted up my MacBook Pro to check them out. They give a good taste of what you can do with the LinnStrument, and before too long I was sliding around the upright bass, playing intervals more associated with bass than keyboards, and adding hand-controlled—not LFO-controlled—vibrato. But while sampled acoustic instruments make a fine match for the LinnStrument, it’s the synths where you get the most visceral experience. I normally don’t associate touchy-feely control with synth sounds, yet that’s precisely what the LinnStrument delivers. You also need a DAW that’s up to the task, although those requirements aren’t that difficult; for MPE, you just need to be able to record multiple different channels in the same MIDI track. I tested the LinnStrument with Cakewalk SONAR, which worked fine. Handling MPE splits was more of a challenge—SONAR can record all MIDI channels into a track, or one MIDI channel. Ideally, you’d want something that can restrict a track to, for example, channels 1-8 with another track handling channels 9-16. To deal with this, I just put two sets of eight tracks, each responding to a single channel, in two track folders. That said, while few instruments can make full use of what the LinnStrument can do, the expressiveness you can add to any instrument is noteworthy. For example, I have a lovely feedback guitar patch where pressing on a LinnStrument pad brings in the feedback, while side-to-side motion creates vibrato. I should add that the pressure response is not the “afterswitch” you find on many keyboard controllers. Compare the screen shots below with the controller data many keyboard controllers generate; it’s extremely consistent. If you apply what you think is the same amount of pressure to another key, you’ll get the same results. Here's what aftertouch looks like, with me trying to apply pressure as evenly as possible. The backward/forward motion produces CC#74. Again, note the consistency. The pitch bend is smoother than some synths with hardware wheels. Something that really appeals to me is how you can do pitch bend wiggles, like on guitar... ...and here's what happened when I tried to strike a pad with ever-increasing force until I hit maximum velocity. I particularly like how the feel from no pressure to full pressure is linear and consistent. This is one area where, if the LinnStrument had gotten it wrong, that would have been a deal-breaker. Fortunately, that’s not the case. You feel like you’re interacting directly with the instrument parameters, not changing something that changes something else on the way to changing the parameter. 
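For the curious, the per-note-channel idea at the heart of MPE is easy to see in a few lines of MIDI. A rough illustration — not the LinnStrument's exact implementation — assuming the Python mido library; the port name is hypothetical:

```python
import mido

out = mido.open_output("LinnStrument MIDI")   # hypothetical port name

# Two notes, each on its own channel (mido channels are 0-based):
out.send(mido.Message('note_on', channel=1, note=48, velocity=100))
out.send(mido.Message('note_on', channel=2, note=55, velocity=100))

# Bend the first note while leaving the second untouched -- the whole
# point of giving each note its own channel:
out.send(mido.Message('pitchwheel', channel=1, pitch=4096))

# Channel pressure applied to the second note only:
out.send(mido.Message('aftertouch', channel=2, value=64))
```

On a single shared channel, that pitchwheel message would drag both notes with it, which is why conventional keyboards can't do independent per-note slides.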
The bottom line is that while I hope new virtual instruments will take full advantage of the LinnStrument—and some already do—it’s nonetheless an excellent controller for whatever synthesizers or samplers you already use. If you take the time to “LinnStrumentify” your patches to take advantage of the added expressiveness, you won’t regret it. SO WHAT? Roger Linn has often said that electronic instruments have more or less eliminated the concept of the instrumental solo in electronically-generated tracks. While you can debate that, there really hasn’t been much progress since Jan Hammer got guitar envy with his Minimoog. When synths have been used for solos, they tend to be more along the lines of single-note instruments like sax, because for any kind of expressiveness, you needed to dedicate a hand to the wheels or levers, while the other hand played the notes; now both hands can play and add expressiveness. In fact, it took me a while to get used to using both hands, rather than using my right hand to play notes, and the left hand to work mod wheel and pitch bend. I didn’t have to move the pitch wheel back, hit a note, and then rotate the pitch wheel forward; I just hit a pad a couple semitones below the target note, and slid to the right along the row of pads—assuming, of course, that I’d set the pitch bend range to +/- an octave. BUT WAIT...THERE’S MORE! After all, this is an electronic controller...so there’s no need to limit it just to playing notes. There’s an arpeggiator you can influence expressively, which (in conjunction with swing) makes for a more organic and playable experience. There’s also a step sequencer that’s unlike anything you’ve ever played, because you can make each step expressive—as just one example, imagine step sequencing where you can alter the velocity and pitch bend on each step. There’s also an ergonomic nod to those of you (you know who you are) who dedicate a keyboard’s top or bottom octave to MIDI control. The lowest row of pads can be assigned to multiple functions—for example, a modulation-like ribbon controller, sustain pedal, and more. One really wild feature that pushes the MPE envelope is being able to split the pads into groups. This not only allows playing two different sounds—not that novel a concept—but you can finger notes on one split, then “strum” them on the other split. A split can also provide a “control surface” for real-time parameter control of sounds being made on the other split. SO IS IT WORTH LEARNING A NEW INSTRUMENT? This is not a toy, or a “let’s push the buttons and make sounds!” kind of controller. It’s a real instrument, with real capabilities. As such, it’s quite easy to find your way around initially and the barrier to entry is low (e.g., you don’t have to build up calluses like a guitar). And the $999 price is certainly reasonable, given the LinnStrument’s custom and precise nature. However, like any instrument, becoming a true virtuoso takes effort. If you play virtual instruments, then those efforts will be rewarded if the synths themselves are up to the task. For example, some synths that respond to “polyphonic aftertouch” do indeed respond to it, but convert it into something more like channel aftertouch. Very few instruments have release velocity, which the LinnStrument can generate predictably. That said, even today’s “standard” instruments can benefit from the five modes of expression, although you may need to dig into how to assign parameters to controllers. 
After playing with the LinnStrument for an extended period, I have no doubt that I could become very good over time at playing it, and I also have no doubt that my music would benefit. Yes, I can “overdub” expressiveness, but is that really expressiveness compared to real-time playing that reacts to the music? And even if it is—which I doubt—with the LinnStrument, that expressiveness is spontaneous. I suspect that as useful as the LinnStrument is in the studio, it has a bright future ahead in live performance. So the bottom line is that Roger Linn has done it again: come up with something musically relevant and novel that opens up new musical paths. Will the LinnStrument power the same kind of electronic music revolution that his sampled drum machine did in the 1980s? Time will tell...but I hope it does, because it allows inserting an element of emotion so often lacking in today’s synthesis. _________________________________________ Craig Anderton is a Senior Contributing Editor at Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  7. 5 Mastering Tips Getting into mastering? Then heed these five tips by Craig Anderton Save all of a song’s plug-in processor settings as presets. After listening to the mastered version for a while, if you decide to make “just one more” slight tweak—and the odds are you will—it will be a lot easier if you can return to where you left off (Fig. 1). For analog processors, take a photo of the panel knob positions. After all, that's why smartphones were invented. Fig. 1: Steinberg WaveLab has multiple ways to manage presets. If you use loudness maximizers, don’t set the maximum level to 0 dB. Some CD pressing plants will reject CDs if they consistently hit 0 dB for more than a certain number of consecutive samples, as it’s assumed that indicates clipping. Furthermore, any additional editing—even just crossfading the song with another during the assembly process—could increase the level above 0 dB. Don’t go above -0.1 dB; -0.3 dB is safer (Fig. 2). (A sketch after these tips shows one way to check for runs of full-scale samples.) Fig. 2: Waves' L3 Multimaximizer has its output ceiling set to -0.3 dB. Halve that change. Even small changes can have a major impact—add one dB of boost to a stereo mix, and you’ve effectively added one dB of boost to every single track in that mix. If you’re fairly new to mastering, after making a change that sounds right, cut it in half. For example, if you boost 3 dB at 5 kHz, change it to 1.5 dB. Live with the setting for a while to determine if you actually need more. Bass management for the vinyl revival. With vinyl, low frequencies must be centered and mono. iZotope Ozone has a multiband image widener, but pulling the bass range width fully negative collapses it to mono (Fig. 3). Another option is to use a crossover to split off the bass range, convert it to mono, then mix it back with the other split. Fig. 3: Ozone's image widener can also narrow signals to mono with negative settings for a band. The “magic” EQ frequencies. While there are no rules, problems involving the following frequencies crop up fairly regularly. Below 25 Hz: Cut it—subsonics live there, and virtually no consumer playback system can reproduce those frequencies anyway. 300-500 Hz: So many instruments have energy in this range that there can be a build-up; a slight, broad cut helps reduce potential “muddiness.” 3-5 kHz: A subtle lift increases definition and intelligibility. Be sparing, as the ear is very sensitive in this range. 15-18 kHz: A steep cut above these frequencies can impart a warmer, less “brittle” sound to digital recordings. -HC- ___________________________________________ Craig Anderton is a Senior Contributing Editor at Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
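The "consecutive samples at full scale" rejection criterion from tip 2 is simple to check yourself. A minimal sketch, assuming numpy and a float signal normalized to +/-1.0; the run length of 3 is a made-up stand-in for whatever threshold a given plant actually uses:

```python
import numpy as np

def longest_full_scale_run(x, eps=1e-4):
    """Return the longest run of consecutive samples at (or above) 0 dBFS."""
    hot = np.abs(x) >= 1.0 - eps
    longest = run = 0
    for h in hot:
        run = run + 1 if h else 0
        longest = max(longest, run)
    return longest

MAX_RUN = 3   # hypothetical plant limit
run = longest_full_scale_run(np.array([0.5, 1.0, 1.0, 1.0, 1.0, 0.7]))
print(run, run > MAX_RUN)   # 4 True -- this master would be suspect
```

Keeping the maximizer ceiling at -0.3 dB, as the tip suggests, means this check can never fire in the first place.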
  8. Freakazoid DAW Signal Processing through Automation Make your signal processing come alive with automation by Craig Anderton What makes an acoustic instrument special? Nuances. The way the strings in a piano interact, the angle at which a pick or bow hits the strings, and the distance from the rim at which a drummer strikes a drum all make a difference in the final sound. Automation allows adding nuanced expression to synthesized/sampled music. The most basic automation, for mixing and muting, has been with us for decades. Now we have automation for soft synths and signal processors, and using these in strategic places can definitely augment your music’s emotional impact. We’ll start with a brief overview of automation options, then segue into some typical applications for your own music. AUTOMATION TYPES Here are some of the most common ways to automate effects. Record control motion as you edit on-screen controls. To do this, call up the signal processor plug-in, and manipulate the knob(s) while recording these changes. On playback, the controls will usually reproduce the motions you made (Fig. 1). Fig. 1: The red "W" toward the right indicates that the DAW (Cakewalk SONAR) is recording knob movements from the Tone 2 biFilter2 processor. Previously recorded automation from this knob is in green, above the processor's user interface. This type of automation accommodates the human touch. You can push changes to the beat and work intuitively – as well as go back and edit your moves if touch-ups are needed. You’re limited to moving one parameter at a time if you’re using a mouse, but there are ways around this (described later). How this automation is recorded varies from program to program. The data might be in the form of MIDI controllers, Non-Registered Parameter Numbers (NRPNs), System Exclusive data, etc. Envelope control. Here you draw envelopes (using lines and breakpoint "nodes") to control signal processor parameters (Fig. 2). Note that in most sequencers, moving controls creates envelopes, so these methods are somewhat interchangeable. If I wanted to add a wa-wa effect, I’d go for recording control motion; but the envelope approach works better to add automation changes that need to be synched precisely to the beat (like an effect that decays over one measure, then starts over again on the next measure). Rhythmic effects are particularly easy to do if the envelope “nodes” (breakpoints) can snap to a timing grid. Fig. 2: The Automation Lane allows drawing in automation curves, or editing existing automation curves, by drawing nodes that define the automation. External control and recording MIDI data. Some processors accept MIDI data sent from an external hardware controller (e.g., assignable knobs on a keyboard, or a dedicated control surface like the Mackie Control). This data edits the processor’s parameters, but is also recorded in a track as MIDI controller data (Fig. 3). On playback, any on-screen knobs usually respond to the changes. While this approach is similar to moving knobs on screen (except you’re using real hardware), the big advantage is that you can edit multiple parameters simultaneously. Fig. 3: The Wah module in Native Instruments' Guitar Rig has been set to MIDI learn, which means you can assign it to respond to a particular MIDI continuous controller number, as generated by an external hardware MIDI controller like a footpedal. The controller generated the automation waveform shown on the lower left. A plug-in might implement one of these methods, all of them, or none.
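Since the controller data described above is ordinary MIDI, you can also generate automation outside the DAW. Here's a minimal Python sketch using the mido library (an assumption; any MIDI file library would do) that writes a one-bar sawtooth ramp as CC messages. CC 74 is just an example controller number for whatever parameter you've MIDI-learned.

```python
import mido

PPQ = 480                                  # ticks per quarter note
STEPS = 64                                 # control points across one 4/4 bar

mid = mido.MidiFile(ticks_per_beat=PPQ)
track = mido.MidiTrack()
mid.tracks.append(track)

ticks_per_step = (4 * PPQ) // STEPS        # evenly spaced steps across the bar
for i in range(STEPS):
    value = int(127 * i / STEPS)           # ramp from 0 up toward 127
    track.append(mido.Message('control_change', control=74, value=value,
                              time=0 if i == 0 else ticks_per_step))
mid.save('saw_automation.mid')             # import or drag into a DAW track
```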
It pays to do a little experimentation to find out how your plug-ins (and DAW software) deal with automation. Note that early VST format plug-ins were limited to 16 automatable parameters, but that is no longer the case...and that was long enough ago that those plug-ins have probably been updated. FUN WITH AUTOMATION Okay, now that you can automate, here are some of my favorite plug-in automation tweaks. Better chorusing, phase shifting, and flanging. I’m not a fan of the whoosh-whoosh-whoosh of LFO-driven choruses. Even when tempo-synched, the repetition can get more boring than AM radio. There are three simple workarounds. One is to vary the LFO rate control so that it’s constantly in motion rather than locked into one tempo. Another is to set the LFO to a very slow rate, or turn off LFO modulation entirely, and automate the Initial Delay or phaser frequency parameter (Fig. 4). Play with the delay so the effect rises and falls in a musically appropriate way; some programs let you draw periodic waveforms. The third is to overdub a second control track with automated feedback (regeneration). Fig. 4: Automation is providing a periodic modulation waveform (the violet automation curve) for the 8-Stage Phaser frequency. However, the “gotcha” is that some parameters crackle or glitch when automated, and this is particularly true of delay times—so you may need to control the LFO rate regardless. Crunch time. With distortion plug-ins, usually the input level sets the degree of crunchiness. For those times when you want to kick up the intensity without causing a massive volume increase, turn up the plug-in’s “Drive” control or equivalent to add crunch. As the signal is already clipping, turning it up more will create a more crunched sound but without an excessive level increase. Crunch time with non-automatable distortion plug-ins. Think nothing can be automated if a plug-in lacks automatable parameters? Try this trick. Insert a distortion effect into an aux bus rather than as a track insert. It’s highly likely that your DAW program can automate the effects send going to the bus, which allows automating the input level to the distortion plug-in, thereby altering the “crunch factor.” Of course you need to give up an aux bus, but it’s worth it. Delay parameters. This was the application that sold me on the concept of effects automation, and it remains one of my favorites. I often use synchronized echo effects on solos, and heighten the intensity at the solo’s peaks by increasing the amount of delay feedback. This creates a sort of “sea of echoes” effect. Sometimes, I also bump up the delay mix a bit so there’s more delay and less straight signal. Predatohm’s feedback control. The Predatohm plug-in (by Ohm Force) has been around for a long time but remains one of my all-time favorite plug-ins. Its combination of multiband compression and hardcore distortion is just the thing to create some really powerful, industrial-type effects. But its controls, particularly the Feedback Amount parameter, can be touchy. Using automation to bring this up and create the feedback effect, then reduce the feedback before it becomes overbearing, works really well. I also like to automate the Feedback Frequency, especially making it rise or fall slowly over the length of a loop. Tasty! The parametric wa-wa. You know the basic idea—turn up the resonance, and vary the parametric’s center frequency to create a wa-wa effect. But ever notice this doesn’t really sound like a wa-wa?
That’s because a parametric has a flat response, with the peak poking out of it. A real wa-wa rejects frequencies around the resonant peak so you don’t hear anything except the peak. To create a similar effect, use a filter that includes highpass and lowpass filter stages, preferably with a variable Q. Set their frequencies so there’s a midrange boost, adjust the Q to suit, and use automation to vary the frequency of both filters simultaneously. Cool! Instant authentic-sounding wa-wa, and without the crackle from a worn-out pot. While we’re in vintage-land, you can use the same concept to create great sample-and-hold effects. A sample-and-hold synth module would sample a control voltage from a waveform, apply that to a filter’s center frequency, and hold it for a particular duration (e.g., an eighth note). It would then take another sample, and hold the filter at the new frequency. The effect was a series of stepped filter changes—sort of like a quantized wa-wa pedal. Note that creating a sample-and-hold “stairstep”-type automation control signal pretty much requires drawing an envelope, as you can’t move a control fast enough to create instant filter frequency changes (see the sketch at the end of this article). However, some programs let you draw random and other periodic waveforms—see this article on how to do tempo sync when you can't do tempo sync. Envelope-based tremolo. Amplitude changes are fun, but a standard tremolo is pretty limited. Instead, automate amplitude changes in time with the music (you probably don’t even need a plug-in here; you can just automate the channel fader). For example, with a sustained chord, draw a series of “sawtooth wave” envelopes, each of which lasts one beat. This creates a pulsing, rhythmic effect. And then there’s… Plug-in parameter automation merits experimentation. Some parameters might glitch in weirdly useful ways, and some parameters that you might not have thought of automating can produce great effects. We’ve been given tools that let us be really creative; let’s take advantage of them. -HC- ______________________________________________ Craig Anderton is Senior Contributing Editor to Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
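As promised above, here's a minimal Python sketch of generating that sample-and-hold "stairstep" as envelope breakpoints (times in seconds, values normalized from 0 to 1). The tempo and step length are arbitrary assumptions; the trick is writing two nodes per step so each value holds flat until the next jump. How you import the breakpoints varies by DAW.

```python
import random

TEMPO = 120.0
STEP = 60.0 / TEMPO / 2        # one eighth note at 120 BPM = 0.25 seconds
STEPS = 16                     # two bars of 4/4

breakpoints = []
for i in range(STEPS):
    t = i * STEP
    v = random.uniform(0.0, 1.0)          # the sampled "control voltage"
    breakpoints.append((t, v))            # jump to the new value...
    breakpoints.append((t + STEP, v))     # ...and hold it flat for the full step

for t, v in breakpoints:
    print(f"{t:5.2f} s -> {v:.3f}")
```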
  9. BlueGreene - I like the pottery story a lot. I think the crucial part was talking about how those involved in quantity were learning from their mistakes and refining the process, not just the object. Come to think of it, my creative process is sort of a "split the difference" situation: I'll come up with a bunch of ideas, but one seems to stand out, and I zero in on that one.
  10. You got that right. I've seen a lot of visually spectacular movies that had nothing going on inside. Yes, I appreciated the spectacular visuals, but that would be a movie I would watch once and forget...kind of empty calories. OTOH I found "Blade Runner 2049" both visually stunning and with an emotional underpinning of what was human vs. what was replicated, which actually has a lot to do with this thread. In a way that explains how I work. I emphasize getting the music down fast, while the muse is hanging out. Instrumental parts, vocals, and lyrics pretty much tumble out. But after that's done, I get super-detailed in terms of making tons of little changes. I ran across an older version I'd done of a song that was finished, and although the essence was there, the finished version was just so much more complete. I think there's plenty of room for technology and, frankly, for obsession with detail. But if it happens during the right-brain/creative process zone, then it can be a distraction. If it's like turning a book over to a copy editor, then all it does is make a stronger result.
  11. I was actually thinking of "composer" in a more general sense, e.g., I compose the songs on my YouTube channel, I don't play them live with a band. Or someone doing beats is composing rather than playing. Would my music be better if it was played by a band? I don't know. It would be different, that's for sure, with one reason being that several elements of what I do couldn't be played live (well, I guess they could if you, for example, sampled the backwards guitar parts and played them from a sampler, but that's not the way most people think of a band). The songs, the lyrics, and the vocals would all be the same. I would prefer to play the music live with a band, but that's not an option for now, so I have the studio. And the reality is I do play the parts, except for the drum loops...but even those are manipulated, cut, pasted, combined with individual drum hits, etc. So a lot of work goes into those. The appeal of having a band play together is the bond that happens among players. Well, isn't there a bond within me for the different parts of me that are expressing themselves? I don't have to look myself in the eye; I am the eye. I feel what's lost with the do-it-all approach is feedback from others that could make for a better piece of music, but I'm not sure the parts themselves suffer...especially because I often re-cut parts to reflect changes in the music as it develops.
  12. I think another angle is players vs. composers. Back in the day, it was players who went into the studio and tried to capture the magic of playing live. Composers had to "speak" through someone else. With today's tools, it's possible for composers to make the music they want to make without having to hire a band or orchestra to do so. Whether this is good or bad is probably a matter of personal opinion, but it's a significant factor in how music is made today.
  13. Studio Commercial Potential? Or, at least some extra income... by Craig Anderton There’s nothing quite like generating some extra income from your studio—it not only pays for cool toys but may qualify you for some tax deductions if your studio is a legitimate business concern. Now, before you go “Clearly he’s not talking about me, I don’t know how to do commercials,” hear me out. There are lots of ways to make money in music, but they sometimes aren’t all that obvious. Over the years I’ve done mastering, soundtracks, and narration for industrial videos, radio commercials, sonic logos, web music adaptations, music for trade show kiosks, manufacturer demo tapes, tutorials (Fig. 1), and other projects. These may not deliver killer income, but they help maintain a steady income stream. Fig. 1: A tutorial video project, with a video track, three lanes of recorded narration, and three tracks of music loops. And, there’s an easy way to get started in commercials. Listen to some local ads. At some point, you’ll hear one that’s okay but not great. Go to the advertiser, and offer to do a free commercial for them. Say they’re under no obligation to like it or use it, but you’re hoping they will like it and come back for more. Very few companies will say no because they have nothing to lose...but you have a lot to gain. If they use it, then you have the start of a “reel” you can take to other advertisers; and then you can start charging for your work. However, I advise not going to a company whose commercials are really bad. If they don’t care enough to run decent commercials, they probably don’t care enough to hire you. So, let’s look at what’s involved in doing a radio spot because, in a lot of smaller markets, that’s low-hanging fruit. Or as the old saying goes, “In the land of the blind, the one-eyed man is king.” CHOOSING YOUR TOOLS The best program I’ve found for doing radio commercials is something that handles loops really well. Ableton Live, Cakewalk SONAR, Logic Pro X, and Sony Acid are some of my favorites. I know they’re designed for creating loop-based music, but they’re great for commercials because: Using loops is a fast way to get a music bed happening. You can easily “stretch” or “shrink” the tempo to fit timings to the available length. They all have video windows if you graduate to TV commercials. Multitrack recording lets you do composite narration. Pitch transposition comes in handy for processing voice. Of course, other hard disk recording programs will work too. FITTING COPY TO MUSIC Discussing copy may not seem very musical, but I feel the most effective commercials have the copy and music flow together as naturally as possible. Most longer radio commercials are divided into sections. For example, for a radio spot promoting a web site's contest, there was an introduction announcing the contest, a middle section promoting the site itself, a final call to action, and a “boilerplate” tag. I believe that when people hear 4/4 music, they anticipate hearing changes every four measures. So, I wanted to arrange the copy so that the sections fell into natural musical divisions. There are several ways to tweak copy to fit in with the music. Edit the copy (with the client’s permission of course) to fit better. Alter tempo within the music. For example, if some copy runs just a little bit long for a section, slow down the tempo (see the sketch at the end of this article). If the copy runs a little bit short, don’t worry too much, as you may be able to fit in some cool sonic effect.
For example, in one part, there was a little space left over, so I inserted a sampled voice saying “yeah!” If you can’t get sections to fall within exact 4-measure boundaries, try using transposition after, for example, 2 measures. This also alerts the listener that a change is happening. Change instrumentation where you want to emphasize a transition. Alter drum patterns, morph piano into organ, double an acoustic guitar...you get the idea. Before actually starting the commercial, I collected a bunch of loops that seemed like they would be useful. I prefer having everything together before starting the narration; it interrupts the flow too much to go looking for sounds in the middle of recording the voice. RECORDING THE NARRATION Composite recording, where you record a bunch of takes and assemble the best bits from each one, is ideal for recording narration. The basic idea behind composite recording is: Record-enable a track and do a pass of narration. That track mutes automatically, and then you can record another track. Keep recording new tracks; I typically do about a half dozen tracks before stopping. It’s very important to cut all these tracks at the same time, with the same mic, and the same mixer/processor settings. When you cut and paste the best parts of the various tracks to end up with the ultimate composite track, you want each track to be identical in terms of tone and level. If you have to go back a week later and make fixes, you may find yourself having to re-record from scratch if you can’t get everything to match. EDITING THE NARRATION Hopefully, somewhere in that half-dozen or so narration tracks, you have what you need for a complete take. Here’s how to create that take manually, although note that some DAWs have a streamlined comping process that simplifies matters considerably. Set loop locators (if available) around the first phrase of the narration. Make this loop fairly short – comparing sentences is much more difficult than comparing phrases. Compare two tracks at a time. Pick one “winner” and one “loser.” Once you decide which is the loser, cut at the loop locator boundaries, and erase it. Now compare two more tracks and pick the best one. Continue this “round robin” process until you’ve chosen the best phrase. Now move the loop locators on to the next phrase, and start picking the best version. At this point, you have a bunch of little segments of audio. I prefer to bounce these down to a single track so that if it’s necessary to add compression or other processing, it affects the entire track. When bouncing, mute all other tracks, and make sure the levels for all the segments are balanced. You may need to tweak a few levels prior to bouncing. MODIFYING THE VOICE When it comes to vocal talent, I’m no James Earl Jones (then again...who is?). But I make do with what I have, and a few vocal tricks can really help. This is where digital signal processing really shines, as you can transpose a loop or track downward in pitch while retaining the same duration (in other words, transposing down doesn’t lengthen the audio, and transposing up doesn’t shorten it). Of course, the intention is to use this with loop-based music to match loops that are in different keys, but shifting my voice down one or two semitones gives a deep, “FM-DJ-late-at-night” vocal quality. This technique is so effective that with one particular commercial, many people didn’t realize it was me.
I started off by shifting the entire track down 1 semitone, but there were some phrases I really wanted to accent. I cut these into separate segments and transposed them down 2 semitones. The effect was very cool. Here are some other vocal tricks that work for me: Delay. I copied the main track to a second track, dropped the second track’s volume, and shifted it about a 16th note later. The echo gave the effect you would hear from an announcer talking over a PA in a medium size venue. (After hearing the commercial, one friend commented that if my current gig tanked, I could always become a monster truck pull announcer. I think that was a compliment. Maybe.) Compression. Compression is great for cutting off the tips of peaks, allowing a higher average level. Limiting. This makes the level just a little bit hotter and increases intelligibility. MIXING, MASTERING, AND CODA From here, the rest of the path is straightforward. With most commercials I've done, mixing simply involved putting the music bed well behind the voice (the narration had virtually no space between words, so there was no point in trying to use “ducking” to keep the music bed up during narrative pauses). Because the music is often so limited—just rhythm guitar, bass, drums, and a few little fills—mixing is a lot easier compared to mixing a song with lots of tracks. The final step is creating a 2-track master WAV file for the client to approve, typically with a little limiting so the commercial “pops” over the radio. At that point, all that's left is providing the commercial on the desired playback medium...and of course, cashing the check. -HC- ______________________________________________ Craig Anderton is a contributing Senior Editor for Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
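The tempo tip from the "Fitting Copy to Music" section above is simple arithmetic, and it's worth seeing the formula. Here's a minimal Python sketch (the function name and the 16-bar example are just illustrations):

```python
def fitted_tempo(bars, beats_per_bar, target_seconds):
    """Return the tempo (BPM) at which a section lasts exactly target_seconds."""
    total_beats = bars * beats_per_bar
    return total_beats * 60.0 / target_seconds

# A 16-bar section of 4/4 lasts 32 seconds at 120 BPM.
# If the narration needs 34 seconds, slow down to about 113 BPM:
print(round(fitted_tempo(16, 4, 34.0), 1))   # 112.9
```

In other words, two extra seconds of copy only costs about 7 BPM over 16 bars, slow enough to fit the narration but rarely enough to hurt the groove.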
  14. I think it depends on the artist, the producer, the engineer, and the ability to resist fads. Speaking only for myself, the music I've done past the mid-90s lets the emotional impact shine through more than what I did before. BUT - I also don't have to please anyone other than myself, and I've made a conscious effort to find more spontaneous methods to facilitate songwriting and recording. It's taken me since the mid-90s to become as proficient with a computer as I am with a guitar (yes, that's a helluva learning curve). These days I feel the computer is more of a songwriting partner than just a method of transcribing. I've finally understood exactly what "non-linear" recording means, and it's really made a difference in terms of being able to capture ideas when they hit. Even compared to 5 years ago, when I first started doing Neo-, the music I'm doing now feels more fluid and capable of capturing the moment better. I gotta love people like Jack White who keep the old school traditions alive, because those traditions work too, and worked well for decades. I used them too. But there's nothing magic about them; the magic in any technology is what the people put into it. There's no reason why you can't boot up Pro Tools and have a band play with gobos while looking each other in the eye. Eventually, people who were not able to resist fads will realize they CAN resist fads.
  15. I also think there was a fascination at first with being able to create "perfect" music. But do that long enough, and it becomes clear that "perfect" music isn't perfect.
  16. The people don't take it out of the music...it's taken out before it reaches the people! But editing doesn't have to take out the human element; it can bring it to the fore...as it has with my vocals. Since I started paying more attention to the levels of phrases, making them more consistent brings the "human" part of the vocal more to the front, and reduces the need to add compression (which I think can remove some of that emotional impact). Is my finished, edited vocal less "human" (i.e., realistic) than my raw vocal? Absolutely. But I really believe the editing amplifies the human element instead of compromising it. I'll even put in a good word for pitch correction. Since having the freedom to add pitch correction if needed, I sing in a much more free, relaxed, and "daring" way, because I know if I do something cool except for a couple of notes, I can fix the notes and retain the good stuff. That too takes away the human element - those couple of clams - but in return, being more free while singing amplifies the human element.
  17. Stompbox Envelope Filters in the...Studio? Time for another installment in the “Stompboxes Reloaded” series by Craig Anderton The envelope filter is a variation on the wa-wa pedal. However, instead of moving a physical pedal, the envelope filter measures the guitar signal’s level, and uses this to control a bandpass filter’s frequency—higher levels are like pushing the pedal down, while lower levels are like raising the pedal (for a minimal sketch of this level-to-frequency mapping, see the end of this article). Envelope filters were popular for guitar, and sometimes bass and keyboards, primarily in funk and R&B music. Larry Graham, of Graham Central Station and later bassist for Prince, used the Funk Machine envelope filter (Fig. 1), which I designed. Steve Cropper also used it. Fig. 1: The Funk Machine is an extremely rare envelope follower from the mid-1970s. And here’s some interesting trivia: Martha Davis, who later moved to Los Angeles and founded the Motels (a popular 80s band with multiple gold records), did final quality control on the first generation of Funk Machines. The most popular envelope filter was the Musitronics Mutron-III, introduced in 1972. In the 90s Electro-Harmonix asked inventor Mike Beigel to re-create that sound, which resulted in the Q-Tron pedal series (introduced in 1996). The very first envelope filter effects were created with Moog modular synthesizers, and today, Moog Music’s Moogerfooger (Fig. 2) packages that sound in a stompbox. Fig. 2: The Moogerfooger is an envelope filter based on the Moog filter. This is a lowpass filter type, which is different from the usual bandpass response used in wah pedals. The main envelope follower control is for sensitivity, so the envelope can track the proper range of your playing. A compressor between the guitar and envelope filter can help reduce the dynamic range if the filter variations are excessive. Optional controls typically change the filter resonance, filter range, type (e.g., bandpass or lowpass), envelope direction (i.e., higher signals make the filter frequency go lower), and attack and/or decay times. Increasing the attack and decay gives smoother filter changes, but the filter won’t follow dynamics as accurately. (The Funk Machine used a photo-resistor as the filter control element, which had an inherent attack and decay time, so these controls weren’t needed.) STUDIO APPLICATIONS Vocals. One trick with vocals is to set the frequency as high as possible, preferably in the 2-3 kHz range, and sweep over a narrow range. Mix this in parallel with the vocal, but at a level where the effect is just barely noticeable—or better yet, at a level where you can’t really tell it’s there, but you can hear the difference if it’s bypassed. This can add a dynamic effect that gives the voice more animation, interest, and definition (people never believe me until they try it!). Bass. Although bass was one of the original uses, like any wa effect, filtering thins the bass. For a fuller sound, split the bass into a dry path and a parallel path that includes the envelope filter. With a DAW, patch the bass as a channel input, then use an aux bus to send some of this signal to a spare audio interface output that you patch into the envelope filter input. Connect the envelope filter output to a spare audio interface input, and bring it back into the DAW to provide the processed sound. Drums. Envelope filters can give a trashy, funky sound with drums. As with bass, this often sounds best when you put the envelope filter in parallel with the dry drum sound.
However, note that while this is useful for processing a mixed drum track, envelope filtering can also sound good on individual drum tracks, like snare, toms, and kick, giving an almost “Simmons” type of drum sound. Electric piano and keyboards. Because the filter frequency depends on an instrument’s dynamics, anything with a percussive envelope like electric piano will work well with envelope filters. A sustaining sound, like organ, will simply “switch” the filter between high and low settings. This can sometimes produce interesting results, but in general, you’ll want dynamics. EMULATING ENVELOPE FILTERS WITH MODERN GEAR Many modern guitar multieffects include envelope filter effects, and the sound will be similar to older units. Also, amp sims usually include envelope filters (Fig. 3). Fig. 3: Virtual envelope followers, clockwise from top: NI Guitar Rig AutoFilter, Line 6 POD Farm Clean Sweep, Waves G|T|R Wah Wah, and IK Multimedia Ampeg SVX SCP-ENV for bass. Another option is to use synthesizers with external audio input jacks. These usually don’t track the input signal dynamics, but some of them can trigger an internal envelope generator that controls the filter. The triggering occurs when the input signal exceeds a certain level. An added benefit is that these filters will usually be multi-mode types. -HC- [Note: For a related topic, see the article Stompbox Compressors in the...Studio?] ______________________________________________ Craig Anderton is a Senior Contributing Editor of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
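To make the level-to-frequency mapping described at the top of this article concrete, here's a minimal Python sketch of an envelope follower, assuming NumPy and input samples roughly in the -1 to +1 range. The attack/release times and frequency range are arbitrary, and it models no particular pedal; the output of cutoff_hz() would drive whatever bandpass or lowpass filter you like.

```python
import numpy as np

def envelope(x, sr, attack_ms=5.0, release_ms=50.0):
    """Rectify and smooth the input: fast attack, slower release."""
    a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, level = np.zeros_like(x), 0.0
    for i, s in enumerate(np.abs(x)):        # full-wave rectification
        coeff = a if s > level else r        # rising? use the faster attack time
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def cutoff_hz(env, lo=300.0, hi=2500.0):
    # Higher level = higher frequency, like pushing a wa-wa pedal down
    return lo + (hi - lo) * np.clip(env, 0.0, 1.0)
```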
  18. Using Color in the Studio Use color to improve your studio workflow by Craig Anderton In seminars, I’ve often mentioned the importance of staying in your “right” brain (the hemisphere that processes intuitive and artistic thinking) while recording. When your “left,” analytical brain gets involved, it diverts attention away from the creative process, and it’s hard to return to "right brain" mode. Ideally, you wouldn’t have to think at all while recording. It used to be this way: You had an engineer and producer to take care of the analytic tasks. But if you’re producing or engineering yourself, the best way to stay in creative mode is to make your workflow as smooth and intuitive as possible. WHY COLOR MATTERS Your right brain parses non-verbal media (such as music and color) well. When dealing with words, your brain has to recognize the symbols first, then process the information. Color is like a “direct memory access” process that has a more direct pipeline into your “personal CPU.” Stoplights use colors rather than signs that say “Stop,” “Go,” and “Caution” because you react instantly to that red light. Here’s one example of using color: Check out a modern TV or DVR remote, and you’ll see that several of the buttons have different colors. Once you know what the colors mean, it’s a lot easier to see “red” or “blue” than parse the different labels on the keys. If you have remotes that don’t have colors, adding self-adhesive removable labels to buttons will make it a lot easier to pick out important buttons. VARIOUS APPLICATIONS Here are some tips about using color in the studio. For patch cords, buy a selection of enamel paints (model and craft supply shops are a good source) and put a dab of the same color on each end of a patch cord. Ideally, each cord would have a different color. This simplifies tracing a cable’s patching. If you use a hardware mixer, you likely have a “scribble strip” to write down which instruments are on which channels. But try taking this one step further; use some small, round or square colored labels to color-code certain types of tracks. For example, use red for all the drum channels, orange for percussion, etc. This “visual grouping” helps you locate instruments faster. The Mac makes it easy to color-code files by letting you tag them with colored highlights (Fig. 1). For example, with a sample library you can highlight different types of instruments or sounds (as well as favorites) with different colors, or assign different colors to different project folders. Fig. 1: The tags at the bottom of the context menu let you highlight file and folder names with various colors. SOFTWARE COLOR CUSTOMIZING Today’s software programs often let you tweak the UI colors. There are two, sometimes conflicting, goals: choosing colors that minimize eyestrain, yet provide enough contrast to emphasize a program’s most important aspects. One issue is readability—yellow type on a black background is considered highly readable. But a black background can be less restful than muted gray or dark blue. As a result, consider using yellow-on-black for important graphic elements that don’t involve lots of background area. For program elements that are less important than others, choose a typeface color that doesn’t contrast as much with the background. Your eye will be drawn first to the important parameters, which have greater contrast. A PRACTICAL EXAMPLE SONAR allows significant color customization, and the Platinum version includes a Theme Editor for extensive customization.
The default "Mercury" and "Tungsten" color themes tend toward the “restful on the eyes” philosophy, which makes sense for the greatest number of users. However, different work methods suggest different colorization. I tend to use the Console View for final mixing and fader automation, and the Track View for recording and editing. As a result, I need to see parameters fast and unambiguously in Track View. With the Console View, I’m more interested in something that I can stare at for hours on end. The upper half of Fig. 2 shows that the Track View name text was changed to yellow for tracks, while retaining blue for folders. Fig. 2: Cakewalk SONAR has had a couple color tweaks to make the interface better-suited to my preferences. This makes it very easy to see the track names and differentiate them from folders . The lower half shows colors in the console channel strings, but the meters have also been modified to more of a lime green to make them stand out, with a white instead of orange "you're about to hit red" zone. FUN WITH SATURATION I often have multiple tracks of the same instrument like lead and background vocals, harmony voices, lead and rhythm guitars, and the like. I not only color these the same, but will increase the saturation on the track that's the current focus of my attention. This makes it easy to pick out a specific track from a group of tracks. COLOR MY WORLD Once you become aware of color’s importance, try using it to improve your workflow. It will make a difference! -HC- ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  19. Build Your Own Useful Adapter Boxes The Only Way You'll Get These Boxes: Build 'Em Yourself! by Craig Anderton You won't find any of these adapter boxes at Radio Shack or your local music store, but they're incredibly useful. The only catch: You're going to have to build them yourself. But don't panic! The process is simple, and doesn't take much time (or effort) at all. You do need to know how to solder; but if you don't know how, commune a bit with Google and the internet, and all will be revealed—there's some good info at the Circuit Technology Center site, but there are plenty of other options. You'll need some tools, but you may have some of them already. Variable speed electric drill with a selection of bits (1/16", 1/8", 1/4", and 3/8" are particularly important). When drilling a metal box, you'll need a center punch to create a small indentation prior to drilling. With a plastic box, unless you can find some bits for drilling plastics, begin with a 1/16" hole, then enlarge slowly by using ever-larger drill bits. Vise grips, crescent wrench, and/or nut driver for tightening nuts on jacks and switches. Small needlenose pliers for bending and working with wire and component leads. Diagonal cutters for cutting wire. Wire stripper for removing insulation from hookup wire. An assortment of screwdrivers, including Phillips head and jeweler's types as well as regular flat types. A small vise to hold parts for soldering. 60 watt, small-tipped soldering pencil or (for those who like to go first class) a temperature-controlled soldering station. Wear eye protection while soldering; sometimes the rosin can spit out. Also, solder in a well-ventilated area. 60/40 rosin-core "multicore" solder intended specifically for electronics work. Never use acid core solder! It's for plumbing. Hookup wire; #22 or #24 gauge stranded works fine. A small metal or plastic box in which you can mount the parts. Hey, some people use coffee cans...whatever works. So now that you have your tools, what can you build? Here are a few quick examples. A-B SWITCH BOX Let's start with a box that switches a main input and output between two different effects or other devices (Fig. 1). Fig. 1: This photo shows the front and back of the A-B switch box. Now take a look at the schematic (Fig. 2). Fig. 2: A-B Box Schematic. All you need are six jacks, and one switch; S1 is called a "double pole, double throw" switch, because it has two "poles" that can switch between two different positions. For example, this box is ideal for switching between two mono effects boxes. However, if you ignore the labels, you can do some other tricks as well, like switch a power amp between two different sets of speakers. Patch the "Main Input" jack to your amp's left output, and the "Main Output" jack to your amp's right output. Connect the left channel from one set of speakers to "A Input," and the right channel from the same set of speakers to "A Output." Similarly, connect the left channel from the other set of speakers to "B Input," and the right channel from the same set of speakers to "B Output." Now you can switch between the two speaker systems. You can also use the A-B Switch Box to add true bypass to an effect—just patch a standard cable from "B Input" to "B Output," as this provides a bypass path whenever you switch to the "B" position. STEREO/MONO BREAKOUT BOX The stereo/mono breakout box (Fig. 3) provides an easy way to get mono gear to relate to gear with stereo insert jacks, or break out a stereo input or output to two mono connections. Fig. 3: Break out a stereo jack to two separate connections with this breakout box. Wire one mono jack hot lead to the stereo jack tip connection, the other mono jack hot lead to the stereo jack ring connection, then connect all the grounds together. To use the box with stereo (TRS) insert jacks, patch a stereo cord between a device's insert jack and the breakout box's stereo jack, then patch the mono jacks to your signal processor's in and out connections. If you don't get the tip and ring connections right the first time, reverse them and you should get signal. THE UNIVERSAL ADAPTER BOX Fig. 4 shows another real simple, but invaluable, box. Fig. 4: Do you have two cables with plugs that don't get along? Here's the answer. The Universal Adapter Box simply has a bunch of jacks wired together: Stereo phone jack, two RCA phono jacks, stereo minijack, and two 1/4" phone jacks. As one example of an application, this is really useful for laptops. Run a cable with two mini plugs between the computer's audio out and the stereo minijack, and you now have a breakout box: Use the stereo jack for headphones, or use the two RCA or 1/4" phone jacks to feed a stereo system or mixer. These projects can take less than an hour if you don't care too much about looks. And if winter is approaching, don’t forget that a warm soldering iron will help heat up your room! -HC- ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  20. How to Manage VST Plug-Ins with Windows The way you install your plug-ins can help determine how easy they are to use by Craig Anderton You’ve blown your budget on some kick-ass plug-ins, both processors and instruments, and downloaded scores of public domain goodies from the interwebz. Face it: you have a bad case of plug-initis. But as the number of plug-ins increases, watch out. Some plug-ins are incompatible with certain programs or systems. Or, a program that used to work fine might hang whenever it tries to load a new plug-in—so you can’t open the program to tell it to ignore the problematic plug. And for some reason, that tempo-synched delay simply refuses to sync . . . Before matters get out of hand, let’s look at how to raise well-behaved plug-ins. We’ll concentrate on the VST format, but also give a nod to DirectX plug-ins. I call DirectX a “zombie format” because new development ceased a long time ago after Microsoft lost interest, but the plug-ins remain relevant and are supported by Sony, Cakewalk, Acoustica, and others. CALLING ALL PLUG-INS VST plug-ins usually reside in a folder called (surprise!) “Vstplugins.” Ideally, you should keep all your VST plug-ins in one folder. If you have multiple programs that support VST plug-ins, point them to this one folder. Usually you can specify a folder where a program should install its plug-ins during the installation process, or via a Preferences or Options page where you can specify the folder’s path. Because some programs’ plug-ins will work with other programs, you might want to specify C:\Program Files\Common Files as an appropriate location to install common shared executable content like plug-ins. For example, C:\Program Files\Common Files\VST3 can be a suitable location for VST3 files; some vendors (including Celemony) put plug-ins in C:\Program Files\Common Files\VST2. Some programs will instead insist on installing their own plug-ins folder, and referencing that. You may be able to simply drag a plug-in from that folder to your main folder, but that doesn’t always work. Move the plug-in, tell your program not to scan the original folder, and see if the program recognizes the plug-in in your main folder. If so, you should be okay, but there may also be presets and other elements scattered around. Often, it’s best just to leave any installed folders in place (Fig. 1), and include them in the scan path. (Also, do a search on “vstplugins”—you might be surprised at how many folders you unearth; the short script at the end of this article automates the hunt.) Fig. 1: In Live’s Preferences menu, as in most other hosts, you can specify a path to the VST plug-in folder. Clicking on the browse button can change the path from the default folder. DirectX plugs can live anywhere, because installing them registers them in the scary Windows Registry. However, you may not know exactly where they are, which can be a hassle if you want to uninstall a troublesome plug-in and there’s no obvious uninstall routine. If you’re a Cakewalk SONAR user, the Plug-In Manager lists your DX plug-ins and shows their file paths, file names, and registry keys; it can also import, export, and manage (delete, rename) plug-in presets. WHEN GOOD PLUGS GO BAD Many programs that host plug-ins scan and initialize them when launched. Often there will be a “status line” that shows each plug’s name as it’s being scanned. Sometimes, though, the program will “freeze” at a particular plug-in. Don’t panic! Like other software, first check for updates on the web, especially if you’ve changed operating systems.
Sometimes that alone fixes the problem. If it doesn’t, with a VST host, look at the status line. If it’s stuck on a plug-in name, 99% of the time that will be the plug-in causing the problem. Reboot, go to the Vstplugins folder, then drag the plug-in out of the folder. Now re-launch the program. This time, it won’t hang at the plug-in because the plug-in isn’t there. Hopefully, the program will load completely, but if it hangs on another plug-in, remove it too. After the program loads successfully, quit the program, then drag one of the plug-ins back into the Vstplugins folder and re-launch the program. Often, the plug-in will now be recognized. Repeat for all the plug-ins you removed. If the program still hangs at a particular plug-in, try re-booting and re-launching. If a problem remains after a few reboots/re-launches, the plug-in may simply be incompatible. If a particular program is allergic to certain plug-ins, you can: Create a separate plug-ins folder, drag over copies of all the plug-ins known to work with the program (or re-install if needed), and point the program to this folder. Disable problematic plug-ins from within the program. For example, Steinberg WaveLab has an Organize Master Section Plug-Ins function under the Options menu where you can “turn off” plug-ins (Fig. 2). Fig. 2: WaveLab’s plug-in organizer can enable and disable specific plug-ins; it also provides some information about them. This problem seems less common with DirectX, but if a DX host program hangs on launch while scanning plug-ins, close it and launch again. Almost every time this has happened to me, launching the program a couple of times has eventually gotten all the plug-ins recognized. With some programs, you can create different plug-in layouts so that, for example, you could create a set that excludes all DirectX plug-ins (Fig. 3). Fig. 3: SONAR’s plug-in manager lets you enable or exclude plug-ins. It also furnishes path and registry information, allows exchanging plug-in presets with other SONAR users, and lets you create “layouts” for particular plug-ins—for example, you can create one with only 64-bit plug-ins, and another with particular collections of plug-ins. 32-BIT PLUG-INS IN A 64-BIT WORLD People often ask if their favorite 32-bit plug-ins will work with 64-bit programs. The answer is a definitive...maybe. There are two popular “wrappers,” BitBridge and JBridge, which allow 32-bit plug-ins to work in 64-bit systems. It’s amazing they work at all, but do note that some plug-ins simply will not play nice. Also, some users report one program being able to wrap some plug-ins while the other one can’t, so it is somewhat hit-or-miss. It’s highly recommended that if you’re running a 64-bit DAW with a 64-bit OS, you use true 64-bit plug-ins; but that said, these utilities can extend the life of your 32-bit plug-ins while you wait for 64-bit versions to arrive...assuming they ever do, of course. Bear in mind that plug-in bridging can make some systems less stable. This isn’t really the fault of the bridge software, but the result of trying to do something that stretches the boundaries of compatibility. LEAN AND FRESH If you don’t use a plug-in, remove it from your system (for VST, uninstall, or if that’s not possible, drag it to a different folder or delete it; for DirectX, use the uninstall routine). Although plug-ins tend to be pretty solid, fewer plug-ins mean less scanning time when a program launches, and a tidier selection process. Okay!
Now go weed out the clutter and refresh your plug-ins—your system will thank you for it. -HC- ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
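Here's the script promised above: a minimal Python sketch that walks a drive hunting for VST-style folders and counts the plug-ins inside. The root path and extensions are assumptions, so point it wherever your plug-ins might be hiding.

```python
import os

ROOT = r"C:\Program Files"                  # adjust to taste; scanning more broadly works too
PLUGIN_EXTS = {".dll", ".vst3"}

found = {}
for dirpath, dirnames, filenames in os.walk(ROOT):
    name = os.path.basename(dirpath).lower()
    if "vstplugin" in name or name == "vst3":
        plugs = [f for f in filenames
                 if os.path.splitext(f)[1].lower() in PLUGIN_EXTS]
        if plugs:
            found[dirpath] = plugs

for folder in sorted(found):
    print(f"{folder}: {len(found[folder])} plug-in(s)")
```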
  21. Craig's List – Go 5 Rounds with Digital Multieffects vs. Analog Stompboxes! Stomp hard on that footswitch—then let’s get ready to rumble! by Craig Anderton Round 1: Hissy fits. Consider the most popular environmental sounds— ocean waves and rain. And what do dentists squirt into your ears to reduce pain? Yup, white noise. Case closed: Unlike grainy, pompous digital noise, analog circuits soothe you with sweet, tranquilizing hiss that can put you to sleep even faster than watching a Transformers movie! Round 1 goes to analog effects. Round 2: State-of-the-art name weirdness. Only in the bizarro world of analog boutique stompboxes would products be christened “Mold Spore,” “Atomic Dump,” “Way Huge Swollen Pickle,” or “Attack Goat.” (I swear I didn’t make those up.) As to stompbox companies, “Dwarfcraft” sets the definitive standard for names inspired by excessive inhalation of solder fumes. No contest, and analog gets a knockdown. Round 3: Knobs vs. keypads. Knobs can have little pointy arrows, be different colors, include shiny metal inserts, exhibit retro qualities, and best of all, they’re round (and at least to geeks, vaguely erotic). Keypads belong on deadeningly dull devices like ATMs, TV remotes, microwave ovens, and other appliances that have nothing to do with music. Well, except for the little victory “ding” microwave ovens do when they’ve finished mugging your food…and the affinity they have for CDs*. Round 4: Art museum readiness. Do we even need to stage this round of sick artwork silkscreened on metal vs. numbers and letters on dark gray plastic? Freakishly sick artwork gets a solid win for this round, because dark gray plastic was invented by malevolent aliens plotting to demoralize earth’s population prior to a full-scale invasion. Why do you think they’re called “grays”? Round 5: Fight the power. Analog effects use batteries. An alkaline battery is its own mini power plant, fueled by a titanic struggle among manganese dioxide, zinc powder, and potassium hydroxide electrolytes as they give birth to armies of electrons coursing relentlessly through the circuit board pathways that form your effect’s arteries. Plugging something derisively called a “wall wart” into an AC outlet doesn’t have anywhere near the same panache. Unless, of course, you plug a 115V adapter into 230V…then it gets interesting! -HC- * Warning: CD cooking should be attempted only by licensed professionals using RIAA-certified microwave ovens. ______________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
  22. How to Avoid "Hidden Distortion" in Amp Sims Not all distortion is created equal... by Craig Anderton Amp sims have slowly but surely gained acceptance over the years. Although some guitarists will always prefer using tubes, it’s also true that amp sims can often provide sounds that are difficult, if not impossible, to obtain in the physical world, and that has inspired many guitarists to start using them. But another factor is that amp sims are not just “plug and play.” It takes a certain amount of effort to get them to sound good—which can sometimes involve adding EQ before and/or after the sim, avoiding particular amp/cabinet combinations (or choosing “golden” combinations), and so on. But one of the most important elements is gain-staging, and here’s why. Many guitarists experience bad amp sim tone because they don’t realize there’s the potential for two types of distortion within modules like amp and cabinet emulators: the “good” amp distortion we know and love, and the “nasty” digital distortion that results from not setting levels correctly inside the sim. KNOW YOUR DISTORTION With analog technology, if you overload an amp input you just get more distortion. Because it’s analog distortion, it sounds fine—just more distorted. But if you overload a digital amp’s input, remember that digital technology has a fixed, and unforgiving, amount of headroom. If you don’t exceed that headroom, the amp sim will sound as the designers intended. But if your signal crosses that threshold, the result is ugly, non-harmonic distortion (for a quick illustration, see the sketch at the end of this article). Never go “into the red” with digital audio—unless you’re scoring a Mad Max sequel, and want to conjure up visions of a post-apocalyptic society where the music totally sucks. SETTING INTERFACE INPUT LEVELS To avoid digital distortion, it’s important to optimize levels as you work your way from input to output. The most important gain setting is the audio interface’s input gain control, which will often be complemented by a front panel clipping LED. Adjust this so that the guitar isn’t overloading your audio interface, which will likely have a small mixer application with metering so you can verify levels (just note that the application’s fader isn’t what’s controlling the input level—it’s the interface’s hardware level control). If distortion happens this early in the chain, then it will only get worse as it moves downstream. Set the audio interface preamp gain so the guitar never goes into the red (Fig. 1), no matter how hard you hit the strings. Be conservative, as changing pickups or playing with the controls might change levels. You can always increase the gain at the sim’s input. Fig. 1: The metering for TASCAM's US-366 interface shows that the guitar input (Analog 1) level control is set so the input levels are avoiding overload. AMP SIM INPUT TRIM Your sim will likely have an input meter and level control; adjust this so that the signal never hits the red. Going one step further, Peavey’s ReValver includes an input “Learn” function (Fig. 2). Click on Learn, then play your guitar with maximum force. Fig. 2: ReValver’s Learn function automatically prevents the input and/or output from being overloaded. Learn analyzes your signal, then automatically sets levels so that the peaks of your playing don’t exceed the available input headroom. Beautiful. TRIMMING LEVELS WITHIN THE AMP Like their real-world equivalents, amp sims can be high gain devices—high enough to overload their headroom internally.
This is where many guitarists take the wrong turn toward bad sound by turning up the master volume too high. The cabinets in Native Instruments’ Guitar Rig include a volume control with Learn function (Fig. 3); for sims without a Learn function, like IK’s AmpliTube, you’ll find a meter—adjust the module’s volume control so there’s no overload. Fig. 3: Guitar Rig has a Learn function for optimizing internal amp levels. SETTING OUTPUT STAGE LEVELS The final stage where level matters is the output. AmpliTube has an additional level control and meter to help you keep things under control, while Guitar Rig has a special “Preset Volume” output module with a Learn function that matches levels among patches, but also prevents distortion. ReValver offers an additional output Learn function. If you set gains properly through the signal chain from interface input to final output, you’ll avoid the kind of bad distortion that ruins what the good distortion brings to the party. -HC- _____________________________________________________ Craig Anderton is Editorial Director of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.
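As promised in the "Know Your Distortion" section, here's a minimal NumPy sketch contrasting the two types of clipping. It's purely an illustration (not any sim's actual math): the same overdriven sine, once hard-clipped the way fixed digital headroom does it, and once run through a tanh curve as a stand-in for analog-style saturation.

```python
import numpy as np

sr = 48000
t = np.arange(sr // 100) / sr                  # 10 ms of time
x = 2.0 * np.sin(2 * np.pi * 440 * t)          # a sine pushed 6 dB past full scale

hard = np.clip(x, -1.0, 1.0)                   # digital: flat tops, harsh upper harmonics
soft = np.tanh(x)                              # "analog-ish": rounded, gentler spectrum

print("hard peak:", hard.max(), "| soft peak:", round(float(soft.max()), 3))
```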
  23. 10 Questions with Super Producer Michael Wagener The maven of metal talks tech by Craig Anderton Michael Wagener is the premier mixer/engineer/producer for metal, with a resumé of work that includes Metallica, Poison, Dokken, Alice Cooper, Ozzy Osbourne, Accept, Motley Crue, Great White, Plasmatics, X, Extreme, Megadeth, and many others—but his versatility also extends to artists like Janet Jackson and Muriel Anderson. We spent a relaxed afternoon in his WireWorld Studios, and after drooling over his recording gear and fabulous guitar collection, asked a few questions. Why are you selling some of your amps? I use the Kemper Profiler for almost everything. I have so many profiles…there are amps I haven’t turned on in over two years. I just don’t need them anymore. The main control room at WireWorld studios Do you get nostalgic for tape? Roger Nichols and I sat down once and recorded a kick with different adjustments for bias, azimuth, different tape types, and different tape machines. Nothing ever came back with the same punch as digital. Some people say digital stresses the body more, but I don’t know if that’s true. I wouldn't want to go back to tape. Do you think 96 kHz makes a difference on playback? Once I compared a variety of digital systems, and actually thought that 48 kHz sounded best for a sampling rate for the type of music I normally record. But that could have been due to converter design or some other factor. For playback, I don’t know anyone who can hear the difference consistently between 44.1 and 96 kHz. What about DSD? DSD really does sound better to me or let’s say: it feels better to me. There’s something special about it, but the problem is you can’t do any kind of editing—as soon as you want to edit, it has to go back to PCM. How has production changed over the years? Producers used to do just about everything—sometimes even figure out transportation and accommodations for a group, not just musical considerations. Now it can mean anything. Someone creating beats on a laptop by himself can call himself a producer. Oh, and record companies don't give advances to producers any more [laughs]. Because many of the acts Michael produces are international, he maintains an incredible collection of guitars so artists needn't have their instruments suffer at the hands of the airlines What’s your DAW of choice? Yamaha’s Nuendo. It feels right and makes sense to me. I use both Windows and Mac, but my recording is all done with a custom Windows machine running Windows 7. Why haven’t you upgraded to Windows 10? Everything’s working! My SSL AWS 900+ SE mixer talks to the computer, which talks to Nuendo, and everything talks to a bunch of devices in my patch bay. It’s all working, so I don’t see any need to change it. I suppose applying the security patches might be a good idea, but I connect only to sites I know, and only when needed. Has your background in electronics come in handy? Yes, I can do a lot of the maintenance on my analog gear. I really can’t do anything with digital, though. I also don't understand some decisions companies make, like soldering in batteries for memory backup [laughs]. Michael with a member of the Finnish metal band Lordi Are you a “leave your gear on” kinda guy? I leave the computer on so I don’t have to waste time booting up, and leave on the SSL and preamps but turn off most of the outboard equipment. Where do you see the record business going? Sessions in big studios continue to decrease. 
I think of all those schools turning out engineers…the jobs just won’t be there for most of them. Then again, there’s also a need for ongoing education for people who are already engineers, or just getting started in recording—that’s why I still do workshops, for beginners and experts, at my studio or theirs. That way people can benefit from what I’ve learned without having to take the time to discover it for themselves. -HC-
24. PSP Audioware L’otary Rotating Speaker Plug-In

The music really does go ’round and ’round

by Craig Anderton

Hey, I love my readers...you’re the ones who support me. So I’m going to save you some time: if you’ve been searching for a plug-in that sounds just like the rotating speaker "whose name dare not be mentioned," your search has ended. This is it, and it does more than expected - check out the demo. However, if you want to know why I came to that conclusion, feel free to keep reading.

THE ROTATING SPEAKER

Mechanical signal processors are really hard to emulate. Modeling a guitar pedal's circuit or a digital synthesizer is one thing, but mechanical objects—plate reverbs, talk boxes, rotating speakers, and the like—are very challenging because of the large number of variables. With rotating speakers, it’s not just about the speakers and cabinets; distortion comes into play, as well as the acoustic by-products of a mechanical system and room ambience. L’otary nails that sound, but doesn’t stop there. It reminds me of the design philosophy behind IK Multimedia’s SampleTron, which emulates the Mellotron: it captured the funkiness of a real Mellotron, but gave you the option to dial the funkiness back for a “perfect” Mellotron—e.g., no tape hiss, no flutter, etc.

WHAT YOU NEED TO KNOW

Supported formats are VST 2.4, AU, AAX, and RTAS, for Windows XP or later or Mac OS X 10.8 or later; L'otary uses file-based copy protection. See the web site for detailed system requirements.

L’otary models the rotating speaker elements independently—the drum, horn, distortion, and ambience. If you want the typical rotating speaker sound, it’s here.

The drum and horn rotation speeds are independent. Although they open to correct defaults (I’ve researched rotating speaker speeds), being able to make them faster or slower independently opens up new possibilities.

The Tremolo/Chorale lever function also nails what you’d expect, and makes the correct transition from slow to fast. You can alter the horn and drum inertia, which governs how quickly each rotor glides between speeds (see the sketch after this review).

Separate mic models for the horn and drum include distortion and filtering (highpass for the horn, lowpass for the drum).

The Amplifier section is particularly worthy of note because it provides the distortion options possible with “the real thing.”

L’otary simulates the mechanical noises associated with rotating speakers, but you can turn this all the way off if desired.

A Setup parameter chooses the direct amp sound or one of five mic positions. This is important, because how an engineer decided to mic a rotary speaker cabinet could make a major difference in the sound.

There are global controls for Width, Mix (I always did want some direct sound in with the rotating speaker!), Mechanical Ratio to accentuate clicks while de-emphasizing motor and rotation noises, bass reflex port signal amount, and crosstalk.

There’s a low-CPU mode; the much higher CPU consumption of the full-quality mode buys only a slight sonic advantage, so choose low-CPU mode until the final mix (or render the track with the effect).

The MIDI implementation is comprehensive. In addition to MIDI Learn/Forget, you can store a MIDI profile (not just program settings) for later use, specify the channel over which CC messages will be received, and identify incoming MIDI messages.

There’s a nifty rotating speaker speed display. It won’t help you write a better song, but it’s great eye candy. Then again, that may help you write a better song.

LIMITATIONS

You can’t control multiple parameters with a single MIDI controller.
Chiropractors will see a drop in business due to fewer people hauling around rotating speaker cabinets.

CONCLUSIONS

PSP Audioware is one of the software industry’s better-kept secrets, but they’ve been earning a solid reputation among plug-in connoisseurs since the days of products like the Vintage Warmer. Although there are many rotating speaker emulations on the market, including versions in amp sims, I’ve never used anything that’s both as accurate and as versatile as L’otary. Is it worth $99? Download the 14-day, fully functional demo and decide for yourself. The bottom line is that L’otary is a premium-quality plug-in that delivers a difficult-to-deliver sound. -HC-

Resources

L’otary landing page
Downloadable demo
Buy direct for $99
Demo video (with a very convincing section starting at 1:13)
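About the inertia behavior mentioned in the review (the sketch promised above): a rotor doesn't jump between chorale and tremolo speeds, it glides, because the motor has to fight the rotor's mass. Here's a minimal Python sketch of one common way to model that, a one-pole smoother on the target speed driving an LFO. It's a sketch of the general technique, not PSP's algorithm; the speeds and time constant are ballpark assumptions.

```python
import numpy as np

SR = 48000                           # sample rate
CHORALE_HZ, TREMOLO_HZ = 0.7, 6.7    # ballpark horn speeds; real cabinets vary

def rotary_lfo(lever: np.ndarray, inertia_s: float = 1.0) -> np.ndarray:
    """Generate the horn's modulation signal for a Tremolo/Chorale lever signal.

    lever     -- per-sample array of 0.0 (chorale) or 1.0 (tremolo)
    inertia_s -- rough time constant of the speed ramp, in seconds

    The rotor's speed chases the lever's target speed through a one-pole
    smoother, so flipping the lever glides between speeds instead of jumping.
    """
    coeff = np.exp(-1.0 / (inertia_s * SR))   # one-pole smoothing coefficient
    target = CHORALE_HZ + lever * (TREMOLO_HZ - CHORALE_HZ)
    speed = np.empty_like(target)
    s = target[0]
    for i, t in enumerate(target):
        s = coeff * s + (1.0 - coeff) * t     # speed glides toward the target
        speed[i] = s
    phase = np.cumsum(speed) / SR             # integrate speed into rotation
    return np.sin(2 * np.pi * phase)          # e.g., use as an AM/pan modulator

# Flip from chorale to tremolo halfway through four seconds:
lever = np.concatenate([np.zeros(2 * SR), np.ones(2 * SR)])
mod = rotary_lfo(lever, inertia_s=1.5)
```

In a fuller model the drum would get a longer time constant than the horn, since the heavier bass rotor spins up and coasts down more slowly; that physical difference is exactly why it's useful that L'otary exposes horn and drum inertia independently.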
25. Craig’s List - Five Artist Contract Lines Explained

Whooo-hooo! You got a record contract! And to celebrate, here are five translations of key contract lines—thanks to having, uh, “borrowed” a lawyer’s secret decoder ring.

by Craig Anderton

1. “Subsequent to completion of the Recording, Company may assign its existing rights and obligations hereunder without the consent of Artist.”

This is actually for your benefit - after all, didn’t you always want your music featured in a laxative commercial? Or a KKK recruitment video? Or the music bed behind the cable access TV spot for Honest Frankie’s Quality Used Yugo dealership in Ho-Ho-Kus, NJ? Exciting exposure opportunities await you when a record company president is highly motivated to pay off his gambling debts! Especially in New Jersey.

2. “In perpetuity and throughout the entire universe.”

A bunch of lawyers were stinking drunk one night. “How about ‘throughout the world?’” “Nah, let’s do ‘throughout the solar system.’” [much laughter] “The galaxy!” [hearty guffaws] “The ENTIRE EFFING UNIVERSE!!” The lawyers all dissolved in gales of laughter and wrote “universe” into a contract as a lark—and the term stuck. (Although to be fair, some believe lawyers are spawned from the evil ice planet Blarf, so “universe” might actually be relevant.)

3. “Right of inspection of books with prior written notice of no less than seven (7) days.”

Even accountants who move slower than Jabba the Hutt can swap the funny-money books for the real ones in less than seven days. And if you do inspect the books, expect to be locked in a small cubicle with a man who keeps referring to himself as “Thee Avenger,” has a really big teardrop tattoo, and plays absent-mindedly with a knife he calls “my Precious.” Yessiree—you’re “livin’ the dream!”

4. “The recitals contained at the beginning of this agreement are incorporated herein by this reference.”

No one has any idea what this means. No one ever has. No one ever will. In a brilliant move—given that lawyers bill by the hour—this line is inserted specifically so lawyers can argue about it for hours and hours. And hours. Even days and weeks, if needed. Ka-ching!

5. “Covenant of Good Faith and Fair Dealing: Company and Artist agree to perform their obligations under this Agreement, in every respect and at all times, in good faith.”

Although contracts are allegedly nonfiction documents, a hallowed legal tradition is that every contract include at least one line that’s totally bogus. This replaces the clause used in older contracts, which was “Company and artist shall slay dragons, turn lead into gold, and cast magikal spells in the company of elves and fairies.” Spoiler alert: That didn’t happen either.