
Zak Claxton Vocals - A Case Study



Zak recently posted a rough mix of one of his songs. It's called "Come Around", and you can check it out right here.

 

A few people asked questions about the vocals (what processing we used, and so on), so with Zak's permission, I thought I'd detail the process.

 

First of all, we tracked them with a Soundelux ELUX 251 running into a Neve 8801 channel strip. Both are great products. I used a bit of compression on the Neve, but only a few dB on the loudest peaks. I didn't use any EQ while tracking. I used a Stedman pop filter placed about 3-4" from the mic, and had Zak a few inches in front of that. Zak tends to "work the mic", moving closer for soft sections and back a bit when he's belting.

 

I like to adjust my tracking methodology to suit the preferences of the singers I'm working with. Some people like to go "old school" and track, then come in and listen, and then decide what they want to punch in on to improve; and others prefer to go until they feel they made a mistake, then, using a bit of pre-roll, continue on from there. Other singers prefer to do several uninterrupted passes, then comp. That tends to be my default method. I like the "performance" aspect of it, and the uninterrupted flow often works well for the singer, but again, whatever makes them comfortable is what we usually end up going with. I personally feel that comfortable and happy musicians tend to make for better sounding performances and tracks, so I always ask them if they have a preference, ask to make sure they're happy with the headphone mix, etc. I don't want them distracted by anything - I want them to be able to focus in on the performance. I'll go with whatever it takes to get the best I can out of the artist; I just think that making them as comfortable and relaxed as possible and just letting them "perform", with occasional praise, encouragement and suggestions, usually tends to do it.

 

Each "pass" is done on a single track, but I use the playlists feature of Pro Tools and put each take onto a separate playlist of that track. In Pro Tools 8, you can then "fan" those playlists out so that they're all visible simultaneously. Highlight an area of the song, and use the solo button on each playlist to audition what's on each one in that section of the song. Once you decide which one you like, just highlight it and hit the arrow button and it is automatically placed into the main track. I create a new, empty playlist called "Ld Vocal Comp" and then compile to that. It's a very fast and efficient way of comping.
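For anyone who doesn't use Pro Tools, here's a toy sketch of what that playlist comp amounts to (the names and data layout are hypothetical, not anything from Pro Tools itself): each take is a list of samples, and the comp is a list of spans copied from the chosen takes into one destination track.

```python
# Hypothetical model of comping: takes are equal-length lists (one per
# playlist/pass), and the comp is a list of (take_index, start, end)
# selections copied into a single "Ld Vocal Comp" destination.

def build_comp(takes, selections):
    """Assemble a comp track from per-take sample lists.

    takes      -- list of equal-length lists, one per playlist/pass
    selections -- list of (take_index, start, end) spans, end exclusive
    """
    length = len(takes[0])
    comp = [None] * length  # the empty comp playlist
    for take_index, start, end in selections:
        comp[start:end] = takes[take_index][start:end]
    return comp

# Three passes over a 10-sample "song"; verses from take 0, chorus from
# take 2, outro from take 1.
takes = [[f"t{t}s{s}" for s in range(10)] for t in range(3)]
comp = build_comp(takes, [(0, 0, 4), (2, 4, 8), (1, 8, 10)])
print(comp[3], comp[4])  # the boundary between take 0 and take 2
```

The colored regions in the screenshots below correspond to which take each span was pulled from.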

 

Here's a screenshot of the lead vocal playlists for "Come Around".

 

LdVocalCompPlaylists.jpg

 

Notice how some playlists are "full takes", while others only have certain sections of the song recorded. As we're doing the passes, I'm making mental notes about any potential "problem areas" - spots in the song where the singer may be struggling, or where I feel the performance is consistently not quite as strong as the rest of the track. I might make comments and suggestions between passes, and if there are still some areas where I feel we can get a stronger performance, we might go back in and concentrate on just those lines or phrases. Zak is a very solid singer, but he was a bit under the weather that day, so there are a few spots where we went back in and just grabbed a phrase or two until I was sure I had what I needed.

 

Once we're done tracking, my next step is creating the comp. I'm listening for various things when doing that. Phrasing and pitch are important considerations, but with Autotune and Elastic Audio, those can be adjusted these days, so they're not the only things I consider. I also want to hear how it builds, what the vibe and emotion are like, and how it is going to work with the phrases or sections that precede and follow it - with no unpleasant, weird sounding formant or timbral shifts to give away the edit. Any noises, such as mouth smacks and pops, fidgeting noises and other anomalies are also spotted at this point, and if they can't be edited out, or if I feel they're bad enough to be unrepairable or noticeable in the final mix, they might be enough to make me pick another take for that section or phrase. If something is so far out of tune that I feel Autotuning it is going to be perceptible, that might also make me use an alternate take.

 

Here's a screenshot of the final comp for the lead vocals. Notice how there are several different colored sections to the waveforms - that indicates which pass (playlist) they came from.

 

LdVocalComp.jpg

 

After I have the comp done, I go back in and do my crossfades manually. I could just select the whole track and hit Ctrl+F to apply fades automatically, but I prefer doing it manually so that I can make sure I have the region start and end points exactly where I want them. This allows me to make certain I don't have a fade right in between the breaths from two different recordings - if you do, you might wind up with an unnatural sounding "double breath". It also gives me the opportunity to make certain I'm not accidentally cutting off the beginning or end of any words. I'll also edit out any unneeded sections of the song where the performer isn't singing at this point. What I don't edit out is all of the breaths. I like to leave those in, because to me, it can sound really unnatural without any breathing from the singer. If the breaths are too loud, I'll use volume automation to pull them back a bit, but again, I never eliminate them entirely.
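Pro Tools handles the fade math internally, of course, but as a sketch of what a crossfade actually does to the audio, here's a toy equal-power crossfade between two regions (assumed sample lists, not real Pro Tools data):

```python
import math

def equal_power_crossfade(a, b, fade_len):
    """Crossfade the tail of region `a` into the head of region `b`.

    Equal-power curves (cos/sin) keep perceived loudness roughly
    constant through the fade; a plain linear fade can sound like a dip.
    """
    assert len(a) >= fade_len and len(b) >= fade_len
    out = a[:-fade_len]
    for i in range(fade_len):
        t = i / max(fade_len - 1, 1)
        gain_out = math.cos(t * math.pi / 2)   # region a fades out
        gain_in = math.sin(t * math.pi / 2)    # region b fades in
        out.append(a[len(a) - fade_len + i] * gain_out + b[i] * gain_in)
    out.extend(b[fade_len:])
    return out

# Two constant-level "regions": the joined result is 8 + 8 - 4 samples.
mixed = equal_power_crossfade([1.0] * 8, [1.0] * 8, 4)
print(len(mixed))  # 12
```

The "double breath" problem described above isn't a math problem at all - it's about *where* the fade sits, which is exactly why doing it by hand pays off.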

 

My next step is to do any pitch correction I feel we need. I know how to muck up vocals with Autotune, but normally what I'm shooting for is imperceptible, natural sounding correction. I almost always use Graphic Mode in Autotune; I prefer the greater degree of control, and the fact that you can make it sound much more transparent. It's also much better for performances with lots of glissandos and pitch bends.

 

I insert Autotune as an insert on the comped track, and use a pre-fader bus to route it to a new track, which I normally name "Ld Vox AT RTN". I ALWAYS name tracks before recording anything, so I avoid having unnamed audio files in the audio folder. I then mute the original source track and select input monitoring on the Ld Vox AT RTN track so I can monitor what I'm doing as I'm adjusting pitch. Then I highlight a phrase or two at a time and "track pitch" in Autotune. Once I've done that, I'll use the "Make Curve" function to create curves that I can then manually adjust. Sometimes I'll use the "Make Auto" button to quickly get things "close", then adjust it manually. Once I'm satisfied with what I'm hearing (and again, since I'm going for "unprocessed sounding", it's not unusual for me to solo out the vocal tracks to make sure I can't hear it - if *I* can't hear it when it's soloed out, chances are no one else will notice it in the mix either), then I record it to the new track.

 

Here's a screenshot showing Autotune and the original vocal track, as well as the aux send and "Ld Vox AT RTN" track that I record the processed audio to. Please ignore the Ultra Channel plugin below the Autotune insert point - normally there would be nothing on the track active other than Autotune; otherwise, that processing would also be added to the new (bused) track, and at this stage, I don't want that. I just forgot to disable it when I went back in and re-enabled Autotune for the picture. :o

 

LdVocalAutotune-1.jpg

 

Once I'm done with any Autotuning I feel is needed, then I do any Elastic Audio timing corrections needed to fine tune phrasing and timing. I always do this after any Autotuning, because I feel Autotune, at least version 4, tends to track better and give less noticeable correction results if it's working on a track that has not already been manipulated with Elastic Audio. In the case of Come Around, no Elastic Audio was used at all; that's all Jeff's phrasing and my comping, so I don't have any screenshots and won't go into detail about how I typically use it - I might write that up some other time. But when needed, Elastic Audio is an incredible tool - although I have found cases where I've run into its limitations.

 

Fader moves are then frequently used to duck the relative levels of breaths, bring out certain phrases or words, etc. I use compression quite frequently, but volume automation is a very important tool for me too.


Before getting into the automation and effects, here are a couple more screenshots to illustrate what I mean by "double breaths". First, a picture of a poorly placed crossfade; notice the bump, or rise, in two spots on either side of the "X" crossfade point - it definitely sounds unnatural this way...

 

DoubleBreathCrossfade.jpg

 

...but by nudging that crossfade over a bit, you only hear the single breath, and the edit point comes across much more naturally.

 

GoodCrossfade.jpg

 

Making your edits sound natural and imperceptible is very important if you want to maintain the illusion of a single, "live" performance. You don't want to blow the illusion (after all, all good audio engineers are, at least in part, audio illusionists ;) ) by drawing attention away from the vocal with a noticeable edit.

 

In the effort to make the comping edits inaudible, I'll put the edit points wherever they need to go to give me the overall "performance" I want to hear, while still coming across as unedited and natural sounding. One edit / comp point in the song that illustrates this is the line "but before the day" - it is sourced from three different takes; the screenshot below shows how it breaks down.

 

ButBeforeTheDayComp.jpg

 

That line occurs in three spots in the song, and I only did the mid-syllable comp edit on one of them. No, I'm not going to tell you which of the three I did it on. You'll have to listen carefully and / or just guess. :p:D Yes, I could have copied and flown one of the other two "but before the day" lines over to that spot in the song, but the edit worked, and since the song was recorded without a click, I also avoided having to apply Elastic Time to make a "flown" part sit right in the new section of the song... but if that's what it would have taken, of course I would have done it...

 

Here's a shot of the Edit window on the line right before the first chorus - it's the line "I hear my basic fundamental ringing out of tune, and almost out of time". I've got the volume automation displayed, as well as the aux send fader for the stereo effects. Notice that the aux fader level (far left-hand side) is all the way down on the verses - some of the vocal effects only kick in on the choruses.

 

VolAutomationIHearMyBasictoTime.jpg

 

I have one plugin inserted on the vocal track - an Eventide Ultra-Channel. It's a pretty CPU hungry, TDM-only plugin, but it packs a lot of functions into a single plugin, and like all the Eventide Anthology II plugins, it sounds fantastic. I'm using it for EQ, delay, a bit of harmonizer, and also for compression via the Omnipressor. The EQ has a high-pass filter set to get rid of any subsonic gunk, and I'm also giving Zak a bit of a bump in the low mids for "weight", as well as a bit of a presence peak boost to bring out the articulation and help him cut through the guitars a bit. The harmonizer is just barely mixed in; it's more for gloss and shimmer than any perceptible "harmonized / doubled" effect. The delay is also mixed fairly low (~-18 dB), with a delay time of ~124 ms and the repeats set to about 20%. This plugin is active throughout the song - whenever Zak is singing, you're hearing the Ultra Channel plugin.
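For the curious, a generic feedback delay with roughly those settings can be sketched like this (this is NOT Eventide's algorithm, just a toy model of a ~124 ms delay at 20% feedback, mixed in around -18 dB):

```python
# Toy feedback delay: delayed copies recirculate at 20% of the previous
# repeat's level, and the wet signal is blended in at about -18 dB.

def feedback_delay(dry, sample_rate=48000, delay_ms=124.0,
                   feedback=0.20, wet_db=-18.0):
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    wet_gain = 10 ** (wet_db / 20.0)        # dB -> linear (~0.126)
    buf = [0.0] * delay_samples             # circular delay line
    out = []
    for i, x in enumerate(dry):
        idx = i % delay_samples
        delayed = buf[idx]
        buf[idx] = x + delayed * feedback   # each repeat at 20% of the last
        out.append(x + delayed * wet_gain)  # dry + quiet wet blend
    return out

# A single unit impulse: the first echo arrives one delay time later.
sr = 1000                                   # tiny rate keeps the demo short
dry = [1.0] + [0.0] * 300
wet = feedback_delay(dry, sample_rate=sr)
print(wet[124])  # first repeat at 124 ms, about 0.126 (-18 dB)
```

At 20% feedback the second repeat is already down another ~14 dB, which is why a setting like this reads as "thickening" rather than an audible echo.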

 

LdVocalUltraChannel.jpg

 

I used a stereo aux send (#21/22) to feed the effects inserted on an aux return channel; again, I'm using automation on that aux send to kick it on only for the choruses. The lyrics for the area displayed here are "time... I'm not too old to play the game"; it's the end of the first verse, and the transition into the first chorus. The pink area shows the aux fader level automation where it turns on for the chorus. Notice the difference in the aux fader (again, on the left side of the picture) - the level is now up as we hit the chorus.

 

AuxSendAutomationChoruses.jpg

 

What's on that aux return? Two things: a Line 6 Echo Farm and an Eventide H3000 Factory plugin. The Echo Farm is using the Deluxe Memory Man algorithm, with the delay time set to 40 ms, the repeats at minimum, the mix at 100% wet, and the modulation turned off. It's basically acting as a predelay for the Doubler preset on the Eventide. While I could have easily edited the Doubler preset and done a predelay there, I liked the sound the Echo Farm was adding. Again, this only kicks in on the choruses; I wanted it to give Zak a bit bigger and wider sound, and to help (along with some additional guitar parts that kick in at the same point) add some sonic variation on the choruses. Making everything a bit bigger on the choruses helps kick things up for those important sections.
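As a toy model of that routing (again, not the Echo Farm or H3000 code - just a sketch of an automated aux send feeding a 100%-wet 40 ms predelay, with the doubler stage omitted):

```python
# Sketch of a chorus-only effect path: the send automation gates what
# reaches the aux, and the predelay is 100% wet (no dry blend at all).

def predelay_40ms(signal, sample_rate):
    """100% wet: the output is only the delayed copy of the input."""
    n = int(sample_rate * 0.040)
    return [0.0] * n + signal[:-n]

def apply_send(signal, send_gains):
    """Per-sample aux send automation: 0.0 on verses, 1.0 on choruses."""
    return [s * g for s, g in zip(signal, send_gains)]

sr = 1000                          # toy sample rate to keep numbers small
dry = [1.0] * 200                  # a steady "vocal"
sends = [0.0] * 100 + [1.0] * 100  # send comes up halfway through
wet = predelay_40ms(apply_send(dry, sends), sr)
print(wet[100], wet[140])  # silent until the delayed chorus signal arrives
```

Because the predelay is 100% wet, none of the dry vocal leaks through the aux path; the dry signal only ever comes from the vocal track itself.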

 

LdVocalEfxRTN.jpg

 

I did pull the aux sends down just as we come into the instrumental "hard stops" on the lines "you'll see me come around", and faded it back in once the band started up again on the second syllable of the word "around".

 

The only other effects on the vocal are from two of the processors in my Yamaha digital board. Again, I absolutely love that thing. Add a bit of the early reflection preset, and it gives anything you run through it a nice sense of space without getting washy the way too much reverb can. A second effects processor in the Yamaha is adding the slightest touch of plate verb to the overall vocal sound, but again, it's really light. A lot of the time I like using small amounts of "effect" or processing from a few processors instead of a lot from a single processor. That doesn't only apply to effects like delay and reverb - I'll often use multiple compressors in a similar way, hitting each with just a bit of gain reduction instead of slamming one processor or plugin extra hard. On Come Around, I used a small amount of compression "on the way in" (from the Neve while tracking), then the compressor / Omnipressor in the Ultra Channel plugin, and finally a little bit more from the vocal channel's compressor on the Yamaha for the mix. Of course, if "going big" with just one processor gives me the sound I'm after, I won't hesitate to do that either.
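The "several gentle stages" idea can be illustrated with a toy static compressor (the thresholds and ratios here are made up for the example, not the actual settings used on the song):

```python
# Toy static downward compressor, levels in dB; no attack/release modeled.

def compress_db(level_db, threshold_db, ratio):
    """Output level for a peak that is `level_db` at the compressor input."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

peak = 0.0  # dBFS peak of a loud phrase

# Three gentle 2:1 stages in series (e.g. preamp, plugin, console):
staged = peak
for _ in range(3):
    staged = compress_db(staged, threshold_db=-10.0, ratio=2.0)

# One hard 8:1 stage:
slammed = compress_db(peak, threshold_db=-10.0, ratio=8.0)

print(staged, slammed)  # -8.75 -8.75: same total reduction on this peak
```

Both paths tame the peak by the same amount, but each serial stage only works ~2:1, so each one imposes far less of its own character than a single stage slammed at 8:1 would.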

 

I normally use a "hybrid" approach for mixes instead of mixing entirely in the box, or entirely on the board. I like the automation in Pro Tools - you can automate anything with that program, and so I tend to use that, along with my control surfaces for almost all my automation, but I send single channels and "stems" (submixes of multiple tracks) to the Yamaha. I use the aux sends in the Yamaha to route to external hardware effects processors and / or to its four internal effects processors. Everything is summed in the Yamaha. I route the stereo output from the Yamaha to either my Otari half track analog deck, or to a second computer via S/PDIF digital for tracking the stereo mix.


Since the handclaps were mentioned in one of the other threads that links here, I'll mention a bit about them too. There are four tracks of handclaps, and the handclaps were performed by six people on each track, with all six of us standing in a circle around the ELUX 251, with the mic set to omni - Zak Claxton, Kat Claxton, Phil O'Keefe, Sandy O'Keefe, Bunny Knutson, and Ken Lee. I used my Frontier Design Tranzport so I could control Pro Tools from out in the studio while joining in on the fun... so what you're hearing is 24 pairs of hands clapping on each "hit".

 

Not quite Roy Thomas Baker-esque on the track / performer count, but it's headed in that direction. ;):lol:


  • Members

Holy crap! I'm glad I removed my engineer's hat on this one and just tried to sing and strum some guitar.

 

Nice and very informative piece Phil! I had no idea about most of your techniques here. Since I'm the guy singing on this, most people would probably think that I'd already be aware of this stuff, but as I said above, the reason I'm working with Phil is so I don't have to think about this and can concentrate on being the performer.

 

(Oh, and yes, I am Zak Claxton when I make music, in case you're confused, my fellow forum folk.)


  • Members

Vocals can be a lot of work. I do a lot of what Phil does there except for the AutoTune part (I don't have AutoTune).

 

Thank you very much for that, Phil!! I have a feeling a lot of people are going to be printing that out.

 

I do have one question. You write:

 

Notice how there are several different colored sections to the waveforms - that indicates which pass (playlist) they came from.

 

How does that work exactly? Maybe it's a playlist thing. When I move audio from one track to another, it changes color automatically. Is this only a playlist thing? Something that differs in PT 8? A setting that can be altered? Just curious.

 

Isn't this great? All these techniques, and I'm asking about the color of the waveforms. :D


  • Moderators

Great posts! I love the detail you've provided. Nice job, with the recording and with this tutorial of sorts. Very cool.

 

So Phil, you mention you sometimes use EA for pocketing vocals? Me too, but I find I get a sort of popping sound at times. Have you experienced this and what's your workaround if there is one?


  • Members

Wow cool. :phil:

 

There are four tracks of handclaps, and the handclaps were performed by six people on each track, with all six of us standing in a circle around the ELUX 251, with the mic set to omni - Zak Claxton, Kat Claxton, Phil O'Keefe, Sandy O'Keefe, Bunny Knutson, and Ken Lee.

 

The one and only Ken Lee?

[YOUTUBE]_RgL2MKfWTo[/YOUTUBE]

 

:idea::eek:


  • Members

How's Pro Tools 8 working out for you, Phil? I'm dying to upgrade from 7.3. I don't even have Elastic Audio, since that came with 7.4. I keep hearing about a lot of problems with PT 8, and that a lot of systems aren't fast enough to run it. I'm running a first-generation MacBook Pro. I'm thinking of upgrading by getting the Mbox USB for $250 vs. the $150 upgrade.


  • Members

Hola!

 

I saw the announcement of this article on Twitter :) Great tool when used correctly, huh?

 

Great article also, as always. Thank you very much, I am printing it for sure.


  • Members

OK, I have a question - you said that Zak Claxton is a good singer, but a little under the weather during this session. Reading through this, it also seems as though using autotune was a foregone conclusion, as you didn't preface that section with anything along the lines of "I heard a couple of wonky notes" or "this is where I always use autotune" . So, my question is - Do you tend to use autotune on every vox track, even those with imperceptible pitch deviations, or only on tracks that have noticeable areas that need to be corrected?

 

My personal temptation would be to leave well enough alone, and only use the autotune on noticeable parts. But, of course, I have much less experience than most people here - I'm dying to know!


  • Members

 

My personal temptation would be to leave well enough alone, and only use the autotune on noticeable parts. But, of course, I have much less experience than most people here - I'm dying to know!

 

 

I agree with you. People have different "philosophies" regarding the use of Autotune and it's really up to you. Personally I don't like it when an engineer thinks of it as a foregone conclusion, nor do I mind a note here or there that is a little out of tune if the feel is good. That's how most of my favorite records were made, that's how ALL records were made before the existence of Autotune (when I started engineering), and when my own band records, our attitude is that if it seems like Autotune is going to be necessary, it's time to do another take. As an engineer, I simply don't work with people who require that degree of manipulation.

 

Of course, if you run a pro studio and you depend on it completely to pay the bills, there are going to be a lot of times when you have to work with singers who need, and expect, you to use Autotune. And certainly, I'd rather hear a singer who is focused on getting the right feel and has had a few notes tuned than one who's so focused on staying on pitch that they lose the feel. In that case, so long as you use manual correction on only a few notes, it can still sound natural and retain the character of the performance.

 

What I'm saying is that many engineers will tell you that you must use Autotune and it's a foregone conclusion, but you don't need to listen to them if you don't like the way that sounds (I don't). You don't have to use it unless a client requests it, if that's your aesthetic. I can count on one hand the number of times I've used pitch correction, and it wasn't on vocals.

 

There isn't a right or wrong way to work - there's your aesthetic and how you choose to work, and there are a lot of choices. Don't feel obligated to jump on bandwagons and do the latest thing that "everybody" is doing. Do what works for you and sounds good to you and does the best job of capturing the emotion in the performance.


  • Members

 


My personal temptation would be to leave well enough alone, and only use the autotune on noticeable parts. But, of course, I have much less experience than most people here - I'm dying to know!

 

 

If I had AutoTune, that'd be my inclination as well. But as Lee points out, there are different aesthetics, reasons, preferences, etc.


How's Pro Tools 8 working out for you, Phil? I'm dying to upgrade from 7.3. I don't even have Elastic Audio, since that came with 7.4. I keep hearing about a lot of problems with PT 8, and that a lot of systems aren't fast enough to run it. I'm running a first-generation MacBook Pro. I'm thinking of upgrading by getting the Mbox USB for $250 vs. the $150 upgrade.

 

First of all, PT 8 is a gigantic upgrade; possibly the biggest one yet from Digidesign. There's a ton of new features, and overall, it's a vast improvement over PT 7.x. The new and expanded plugin bundle alone is, IMO, worth the cost of admission. :cool:

 

Will it run on your first-gen MacBook Pro? It should. Digidesign still lists that as a "compatible" machine on their website, and IIRC, the slowest MBP they ever released has a 1.83 GHz Intel Core Duo in it (yours may have a slightly faster CPU, depending on what you opted for at purchase), so I imagine you'd be just fine with that as a PT platform.

 

I just got a refurbished White MacBook (2.1 GHz Core 2 Duo, 1 GB RAM) from Apple to replace my aged TiBook. It only came with 1 GB of RAM, but I have 4 GB on order; I'm going to take the 1 GB that's in here and put it into my wife's Mac Mini (1.66 GHz Core 2 Duo, 512 MB RAM). I'm thinking about either getting a Mbox Micro or MINI for "on the go" use with the new laptop; mostly for editing and so forth.

 

My main DAW is not the latest / greatest / fastest computer - it's a DIY built Athlon 64 4200 dual core. It was plenty fast for running 7.3, but I did notice issues where it would choke when using too many Elastic Audio tracks simultaneously with PT 7.4; upgrading the RAM from 2GB to 4GB pretty much eliminated the issues. I honestly did not notice any significant decrease in "horsepower" when I switched over to PT 8. However, that's usually one of the first things everyone tests and looks into over on the DUC whenever Digidesign releases a new version, so you might want to inquire over there.

 

If you only have 1 GB of RAM in your computer, I'd definitely recommend adding more to it if you want to upgrade to PT 8.

 

While you're over on the DUC, you might want to look around and see what "issues" others have reported when they made the switch to PT 8. There have been some problems for some people, but remember - not everyone will experience those problems. It's a "support" forum, so you'll tend to see more "issues" than "kudos" on a site like that. Overall, I've personally seen a slight decrease in overall stability, but nothing major. There are a few bugs in 8, but considering the magnitude of the upgrade, that doesn't really surprise me, and nothing has been objectionable enough to make me want to revert to PT 7.4.


OK, I have a question - you said that Zak Claxton is a good singer, but a little under the weather during this session. Reading through this, it also seems as though using autotune was a foregone conclusion, as you didn't preface that section with anything along the lines of "I heard a couple of wonky notes" or "this is where I always use autotune" . So, my question is - Do you tend to use autotune on every vox track, even those with imperceptible pitch deviations, or only on tracks that have noticeable areas that need to be corrected?


My personal temptation would be to leave well enough alone, and only use the autotune on noticeable parts. But, of course, I have much less experience than most people here - I'm dying to know!

 

Every case is unique, and no, I don't automatically reach for autotune for every artist, nor do I use it on every single note. I do analyze everything, but that doesn't mean I apply correction to everything.

 

As far as "perceptible pitch deviations" go, I usually catch just about everything, even before opening up Autotune. Perfect pitch does have certain advantages. ;) Sometimes I don't even use Autotune, and instead will just use a standard pitch shift plugin and adjust a word by X amount of cents... but that usually only works if the pitch is constant throughout the note, just uniformly sharp or flat. If it deviates by different amounts over the course of the word or phrase, then that method won't work.
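For anyone wondering what "adjust a word by X cents" means numerically: a fixed pitch shift is just a frequency ratio of 2 ** (cents / 1200), since there are 1200 cents per octave. A quick sketch:

```python
import math

def cents_to_ratio(cents):
    """Frequency ratio for a pitch shift of `cents` (1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

def cents_between(f1, f2):
    """How many cents f2 is above f1 (negative if below)."""
    return 1200.0 * math.log2(f2 / f1)

# A note sung 30 cents flat of A440:
flat = 440.0 * cents_to_ratio(-30)
print(round(flat, 2))                        # 432.44 Hz
print(round(cents_between(flat, 440.0), 1))  # 30.0 cents of correction needed
```

This is why a static pitch shift only works on a note that's uniformly sharp or flat: the plugin applies one fixed ratio across the whole selection.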

 

To me, perfection in pitch isn't what I'm shooting for. I'm more interested in feel and vibe... but if the feel and vibe are great while the pitch is off enough to distract from that vibe, and my only alternative is a take where the pitch is solid but the vibe and feel aren't as good, then I have no qualms about reaching for the pitch correction tools.

 

With Zak, we work really fast, all things considered. There's little in the way of pre-production, and really nothing in the way of group rehearsals. We listen to a vocal + acoustic guitar demo on our own in advance of the session date, and we might all rehearse a bit on our own, but other than the most general of descriptions about where he wants to "go" with it, we really don't work things out in advance.

 

We're not really shooting for a highly polished and refined vibe or feel. We really don't have much time together to do multiple attempts at different variations of things - we normally wax two to three songs over the course of an 8-10 hour day, with just about everything for those songs being tracked in that period of time... so I tend to do a bit more editing and comping later, after everyone has gone home. Instead of trying to work it all out to perfection and then hitting "record", I hit record and grab everything I can in the limited amount of time that I have the guys out here. They're all coming from a ways away - 80 miles or so in the case of Zak - so due to the way we're doing things, the approach is a bit different than I might take with someone else. But so far, it seems to be working; at least Zak seems to be happy, and with this project, that's my primary concern.


Vocals can be a lot of work. I do a lot of what Phil does there except for the AutoTune part (I don't have AutoTune).


Thank you very much for that, Phil!! I have a feeling a lot of people are going to be printing that out.


I do have one question. You write:


Notice how there are several different colored sections to the waveforms - that indicates which pass (playlist) they came from.


How does that work exactly? Maybe it's a playlist thing. When I move audio from one track to another, it changes color automatically. Is this only a playlist thing? Something that differs in PT 8? A setting that can be altered? Just curious.


Isn't this great? All these techniques, and I'm asking about the color of the waveforms.
:D

 

Yes, that's new in Pro Tools 8, and I believe it only works that way - with the different waveform colors - when you're comping from multiple takes on different playlists on a single track.

 

Take a look at these two images:

 

DoubleBreathCrossfade.jpg

 

GoodCrossfade.jpg

 

Both of those images show the same location in the song. The first one is immediately after I did the comp. The reason the second one is a different color is because I used the "duplicate playlist" command to create a new copy of the playlist right after I finished doing the comp (before starting to do my Autotuning); when you do that, all of the regions become the same color again.

 

I nearly always create a new copy of a playlist after each major step in the process. That way, I can revert to the previous version if the need arises. The way that I name the playlists allows me to see at a glance what has been done up to that point. First I start with Lead Vocal; each new playlist for alternate takes gets a new name automatically - Lead Vocal.01, Lead Vocal.02, etc. Then when I create the comp, I create a new blank playlist and name it something like Lead Vocal.07 COMP, and compile all the "best bits" to that. Then the next playlist is another "blank" playlist, and I name it something like "Lead Vocal COMP AT", then I cut the material from the track that I bussed everything to when I was doing the Autotune and paste it back on to that new playlist on the original track. If I was going to do any elastic audio manipulation, I would then duplicate the "Lead Vocal COMP AT" playlist and rename it "Lead Vocal COMP AT EA", and apply my Elastic Audio to that playlist.

 

Again, this allows me to see what has been done on each playlist.


  • Members

P.S. When I'm doing a vocal comp, I try to find a balance between smoothing out the edits and letting some of it show through. Lots of volume automation, and I now almost always run the vocal tracks through a separate bus to get some glue; it makes it a lot easier to mix the balance between the vocals and the jams.

 

My final step in a mix with a comp is to do a final pass of volume automation on the vocal bus.

 

I really find volume automation to be one of the best digital tricks there is. Plosives, sibilance, rogue transients, and all sorts of other ills can be cured with simple volume automation. I'll reach for volume automation before compression/sidechaining/etc to correct an issue.
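That breakpoint style of volume automation is easy to model (the breakpoint times and the 6 dB duck here are hypothetical values, just to show the mechanics):

```python
# Breakpoint volume automation: gain in dB is linearly interpolated
# between (time, gain_db) points and applied per sample.

def automation_gain(breakpoints, t):
    """Linear interpolation between (time, gain_db) breakpoints."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, g0), (t1, g1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return g0 + (g1 - g0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]

def apply_automation(samples, breakpoints, sample_rate):
    """Convert the dB envelope to linear gain and apply it per sample."""
    out = []
    for i, x in enumerate(samples):
        db = automation_gain(breakpoints, i / sample_rate)
        out.append(x * 10 ** (db / 20.0))
    return out

# Duck 0.10-0.20 s (say, a loud breath) by 6 dB, with short ramps:
bp = [(0.00, 0.0), (0.08, 0.0), (0.10, -6.0), (0.20, -6.0), (0.22, 0.0)]
print(round(automation_gain(bp, 0.15), 1))  # -6.0 dB mid-breath
```

Unlike a compressor or sidechain, this touches only the exact moment you draw it on - which is exactly why it's so good for one-off plosives and sibilance.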

 

Cool thread, like I said. The screenshots are great, even though I'm not using PT. The lessons and ideas can be easily reapplied using any DAW.


  • 3 weeks later...
  • Members

Great thread Phil,

 

I found this one on HCFX when you first posted it, but just re-found it now that I'm looking into getting an upgrade from 7.4 LE. I've only got 2 gigs of RAM on my trusty HP laptop - think that'll be enough? I think I'm going to go for it!


  • Members

Great post(s) Phil.

 

I work in Cubase 5.

 

One thing that surprised me is that you create a separate comp track for the chosen segments and then manually do the crossfades.

 

FWIW, in Cubase I mute all the virtual track segments/takes, then unmute one by one till I'm happy with the result.

Many times a crossfade isn't even required.

If one is, I simply select the two virtual segments and pull down a crossfade menu. Most times this gives a good result. If an issue arises, I can easily undo, go back, and change the boundaries of the segments to fix the issue. Once the segments are selected and crossfaded for that track, Cubase will "flatten" the track via one pull-down menu operation. It can even delete the unused audio file segments if you like.

 

Does PT provide you this workflow option?

 

If I correctly understand the manual crossfade process you mentioned, I think it requires more steps. It also seems more complicated if you find you need to change a segment boundary - you'd need to go back to the original source track to do that, and re-do the paste to the comp track using the new segment boundaries.

 

As for the yammy effects... I thought I might want to keep my AW4416 online when I went ITB. However, my few outboard effects can be accessed through my DAW/audio interface, and the plugins I'm using blow away the AW effects.

My lonely yammy is in a box in the basement.

It's nearly worthless.

I just can't bring myself to sell such a capable box for under 400 bucks.


  • 1 month later...

Archived

This topic is now archived and is closed to further replies.
